Q: Proper approach to listening for Solidity events in NestJS I am developing a dApp. I am using NestJS and I have written the contracts in Solidity. There are a few events emitted from Solidity, and I need to listen to those events in NestJS. My code to listen to the Solidity events is contract.on('EventName', async(val, event)=>{ // do something }) What is the appropriate way to listen to events in NestJS? Right now I am placing the event-listening code in onModuleInit, but I wanted to confirm whether there is a better approach.
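A minimal sketch of the lifecycle-hook approach described above, assuming ethers.js and assuming the contract instance is built elsewhere and handed to the service (the service name and the way the contract is provided are assumptions, not part of the original question):

import { Injectable, OnModuleInit, OnModuleDestroy } from '@nestjs/common';
import { Contract } from 'ethers';

@Injectable()
export class EventListenerService implements OnModuleInit, OnModuleDestroy {
  // Assumed: a custom provider supplies the already-constructed ethers Contract.
  constructor(private readonly contract: Contract) {}

  onModuleInit() {
    // Register the listener once the module has finished initializing.
    this.contract.on('EventName', async (val, event) => {
      // do something with the event
    });
  }

  onModuleDestroy() {
    // Remove listeners on shutdown so the provider connection can close cleanly.
    this.contract.removeAllListeners('EventName');
  }
}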
{ "language": "en", "url": "https://stackoverflow.com/questions/73017467", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Passing ant command line options to an exec'd ant process? I'm using ant to build a mixture of Java and C++ (JNI) code that makes up a client project here. I've recently switched the C++ part of the build to using ant with cpptasks to build the C++ code instead of having ant invoke the various versions of Visual Studio that are necessary to build the code. In order to get this to work, it is necessary to use ant's exec task to spawn off a shell in which a shell script or a batch file executes to set up the compiler environment before triggering another ant that executes the cpptasks-based C++ build. Essentially, the C++-related build tasks in the main ant build file look like this for Windows: <target name="blah"> <exec executable="cmd" failonerror="true"> <arg value="/C"/> <arg line="&quot;${cpp.compiler.path}/vsvars32.bat&quot; &amp;&amp; %ANT_HOME%/bin/ant -f cpp-build.xml make-cpp-stuff" /> </exec> </target> There is no way to get rid of the vsvars32.bat invocation as the code has to be built with multiple VS versions and on build machines where none of the Visual Studio setup can be part of the build user's environment. The above works, but the issue I'm running into is that I would like to pass certain command line options (like -verbose, -quiet, -emacs) through to the child ant if they have been passed to the parent ant. Is it possible to get access at the command line options given to the parent ant at all? Please note I'm not talking about the usual property definitions, but the ant-internal options. A: <target name="blah"> <property environment="env"/> <exec executable="cmd" failonerror="true"> <arg value="/C"/> <arg value="${cpp.compiler.path}/vsvars32.bat"/> <arg value="&amp;&amp;"/> <arg value="${env.ANT_HOME}/bin/ant.bat"/> <arg value="-f" /> <arg value="cpp-build.xml" /> <arg value="make-cpp-stuff" /> </exec> </target> Addition You can create an external batch file that will run the vsvars and the ant, and then you will have only one process to create. I believe the && is not working as you expect it to: run-ant-vs.bat: ....\vsvars32.bat %ANT_HOME\bin\ant.bat -f cpp-build.xml make-cpp-stuff A: I'm not sure if this could help you. Is Java client execution using a parameter you pass when you execute ant build, you can try to adapt this example (exec is more generic than java task, but is a similar concept) Ant task example: <target name="run"> <java classname="my.package.Client" fork="true" failonerror="true"> <arg line="-file ${specific.file}"/> </java> </target> Invocation example: ant run -Dspecific.file=/tmp/foo.txt
{ "language": "en", "url": "https://stackoverflow.com/questions/1454208", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Google Apps Script Function generating an extra blank row I have an Apps Script function that manipulates rows of data in a sheet by first repositioning them and then transposing column data into rows. Everything works well except that I end up with an extra blank row between each block of 48 rows of data.see sheet 2 "Raw Data" Here is the portion of the function code where something is going wrong. // find the initial last row in the 'Raw Data' sheet var sheet2LastRow = sheet2.getLastRow(); Logger.log(sheet2LastRow); // 47 var numRows = (sheet2LastRow - 1); Logger.log(numRows); // 46 // find the Current Row to process and move to the destination row for (var CR = 1 ; CR < sheet2LastRow; CR++) { Logger.log(CR); // 1 > 46 sheet2.getRange((CR + 1), 1, 1, 59).moveTo(sheet2.getRange((CR * 49), 1)); // transpose data from rows to columns for Data Studio sheet2.getRange(1, 12, 1, 48).copyTo(sheet2.getRange((CR * 49), 10), SpreadsheetApp.CopyPasteType.PASTE_VALUES, true); // Dimension sheet2.getRange((CR * 49), 12, 1, 48).copyTo(sheet2.getRange((CR * 49), 11), SpreadsheetApp.CopyPasteType.PASTE_VALUES, true); // Opportunity sheet2.getRange((CR * 49), 1, 1, 9).copyTo(sheet2.getRange((CR * 49), 1, 48, 9), SpreadsheetApp.CopyPasteType.PASTE_VALUES); // Respondent Info A: I think that the reason of your issue is this line sheet2.getRange(row, 1, 1, 9).copyTo(sheet2.getRange(row, 1, 48, 9), SpreadsheetApp.CopyPasteType.PASTE_VALUES). In this line, the value is copied to 48 rows. But at the next loop, the value is copied to the row of (48 + 1). In order to remove this issue, how about the following modification? Pattern 1: In this modification pattern, the for loop is modified by adjusting the row number. From: for (var CR = 1 ; CR < sheet2LastRow; CR++) { Logger.log(CR); // 1 > 46 sheet2.getRange((CR + 1), 1, 1, 59).moveTo(sheet2.getRange((CR * 49), 1)); // transpose data from rows to columns for Data Studio sheet2.getRange(1, 12, 1, 48).copyTo(sheet2.getRange((CR * 49), 10), SpreadsheetApp.CopyPasteType.PASTE_VALUES, true); // Dimension sheet2.getRange((CR * 49), 12, 1, 48).copyTo(sheet2.getRange((CR * 49), 11), SpreadsheetApp.CopyPasteType.PASTE_VALUES, true); // Opportunity sheet2.getRange((CR * 49), 1, 1, 9).copyTo(sheet2.getRange((CR * 49), 1, 48, 9), SpreadsheetApp.CopyPasteType.PASTE_VALUES); // Respondent Info } To: for (var CR = 1 ; CR < sheet2LastRow; CR++) { var row = (CR * 49) - (CR - 1); sheet2.getRange((CR + 1), 1, 1, 59).moveTo(sheet2.getRange(row, 1)); sheet2.getRange(1, 12, 1, 48).copyTo(sheet2.getRange(row, 10), SpreadsheetApp.CopyPasteType.PASTE_VALUES, true); sheet2.getRange(row, 12, 1, 48).copyTo(sheet2.getRange(row, 11), SpreadsheetApp.CopyPasteType.PASTE_VALUES, true); sheet2.getRange(row, 1, 1, 9).copyTo(sheet2.getRange(row, 1, 48, 9), SpreadsheetApp.CopyPasteType.PASTE_VALUES); } Pattern 2: In this modification pattern, the for loop is not modified. After all values were copied, the empty rows are deleted. From: sheet2.deleteRows(2, 47); var sheet2NewLastRow = sheet2.getLastRow(); To: sheet2.deleteRows(2, 47); var values = sheet2.getRange("A2:DL").getValues(); for (var i = values.length - 1; i >= 0; i--) { if (values[i].every(e => !e.toString())) sheet2.deleteRow(i + 2); } var sheet2NewLastRow = sheet2.getLastRow();
{ "language": "en", "url": "https://stackoverflow.com/questions/63124005", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Predict probablilities from glm models after purrr::map with a new dataset I am replacing r for loop with purrr::map, and predicting probabilities with a new dataset. Using for-loop, I have been able to obtain predicted probabilities for different subgroups with a new dataset. I am trying to reproduce the same analysis with purrr::map as a new R user, but just not sure where to find the relevant instructions. library(tidyverse) data("mtcars") newdata <- expand.grid(mpg = 10:34) output <- setNames(data.frame(matrix(ncol = 3, nrow = 0)), c("mpg", "am", "pr_1")) for (i in c(0, 1)) { md_1 <- glm(vs ~ mpg, data = filter(mtcars, am == i), family ="binomial") pr_1 <- predict(md_1, newdata, type = "response") output_1 <- data.frame(newdata, am = i, pr_1) output <- bind_rows(output_1, output) } # Try purrr::map my_predict<-mtcars %>% split(.$am) %>% map(~glm(vs~mpg, family = "binomial", data = .x)) # then? predict(my_predict, newdata, type="response") not working I expect a new dataset with predicted probabilities for different subgroups just like the for-loop above. A: We could use new group_split to split the dataframe based on groups (am) and then use map_df to create a new model for each group and get the prediction values based on that. library(tidyverse) mtcars %>% group_split(am) %>% map_df(~{ model <- glm(vs~mpg, family = "binomial", data = .) data.frame(newdata,am = .$am[1], pr_1 = predict(model,newdata, type = "response")) }) # mpg am pr_1 #1 10 0 0.0000831661 #2 11 0 0.0002519053 #3 12 0 0.0007627457 #4 13 0 0.0023071316 #5 14 0 0.0069567757 #6 15 0 0.0207818241 #7 16 0 0.0604097519 #8 17 0 0.1630222293 #9 18 0 0.3710934960 #10 19 0 0.6412638468 #.....
{ "language": "en", "url": "https://stackoverflow.com/questions/56896309", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Linq Retrieve Entities using a Navigational Property Linq and EF. I'm pretty new, so I'm having trouble retrieving entities using a navigational property (CmsContents). I can return them as a List but not as an IEnumerable. * *Could you tell me what is wrong in my code? *Also, do you know a better approach to retrieving entities using navigational properties? Please provide a code example, thanks! public IEnumerable<CmsGroupsType> GetMostPopularContents() { using (var context = new CmsConnectionStringEntityDataModel()) { context.CmsGroupsTypes.MergeOption = MergeOption.NoTracking; var contents = context.CmsGroupsTypes.Single(g => g.GroupTypeId == 1).CmsContents; return contents.ToList(); } } Error 1 Cannot implicitly convert type 'System.Collections.Generic.List<WebProject.DataAccess.DatabaseModels.CmsContent>' to 'System.Collections.Generic.IEnumerable<WebProject.DataAccess.DatabaseModels.CmsGroupsType>'. An explicit conversion exists (are you missing a cast?) A: The generic types don't match: your .ToList() is of CmsContent, but your return type is an IEnumerable of CmsGroupsType. I'm not sure if that was intentional, but changing the return type to IEnumerable<CmsContent> will make everything work. A: Change your return type from CmsGroupsType to WebProject.DataAccess.DatabaseModels.CmsContent
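For illustration, a minimal sketch of the corrected signature the first answer suggests (class and member names are taken from the question; only the return type changes):

public IEnumerable<CmsContent> GetMostPopularContents()
{
    using (var context = new CmsConnectionStringEntityDataModel())
    {
        context.CmsGroupsTypes.MergeOption = MergeOption.NoTracking;
        // The CmsContents navigation property yields CmsContent items,
        // so the method's return type has to match that element type.
        var contents = context.CmsGroupsTypes.Single(g => g.GroupTypeId == 1).CmsContents;
        return contents.ToList();
    }
}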
{ "language": "en", "url": "https://stackoverflow.com/questions/6914562", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Modify working AddHandler to match files only in the CURRENT directory, NOT child directories The following works fine for allowing PHP to be executed on two XML files: <FilesMatch ^(opensearch|sitemap)\.xml$> AddHandler application/x-httpd-php5 .xml </FilesMatch> However unfortunately this rule would allow this to happen in any child directory as well. * */opensearch.xml, working/desired match */henchman24/opensearch.xml, working/NOT desired match How do we force Apache to only match the files in the current directory and not child directories? I'd really like to: * *Avoid adding a child .htaccess file in every possible child directory. *Avoid using an absolute server path. A: If directive can be used to provide a condition for the handler to be added only for files matching the pattern in the current folder. The following example will add the handler for only files in the document root, such as /sitemap.xml and /opensearch.xml but not for /folder/sitemap.xml and /folder/opensearch.xml <FilesMatch ^(opensearch|sitemap)\.xml$> <If "%{REQUEST_URI} =~ m#^\/(opensearch|sitemap)\.xml$#"> AddHandler application/x-httpd-php .xml </If> </FilesMatch> In the above example, the condition is checking that the REQUEST_URI matches the regex pattern delimited in m# #. The ~= comparison operator checks that a string match a regular expression. The pattern ^\/(opensearch|sitemap)\.xml$ matches REQUEST_URI variable (the path component of the requested URI) such as /opensearch.xml or /sitemap.xml ^ # startwith \/ # escaped forward-slash (opensearch|sitemap) # "opensearch" or "sitemap" \. # . xml # xml $ # endwith A: Have you tried H= in a RewriteRule? RewriteEngine On RewriteRule ^(opensearch|sitemap)\.xml$ . [H=application/x-httpd-php5] Rewrite in htaccess has the built-in property that those filenames in any subdir will have the subdir present in the string the regex tests against so the anchored regex will not match in subdirs.
{ "language": "en", "url": "https://stackoverflow.com/questions/61097306", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Angular FormBuild email validation fails on the init (the value is an object) In my Angular v5.1.2 project I'm using FormBuilder to create simple form: this.userDetailsForm = this.formBuilder.group({ id: [{value: this.user.userid || '', disabled: true}, [Validators.required]], email: [{value: this.user.email || ''}, [Validators.required, Validators.email]], firstName: [{value: this.user.firstName || ''}, [Validators.required]], lastName: [{value: this.user.lastName || ''}, [Validators.required]] }); And I have an issue with email field. It becomes invalid after the Form has been initialized. If I switch the initialization of the email field value from object notation to email: [this.user.email, [Validators.required, Validators.email]] the things going work properly. I made a little investigation and with the help of custom validator I realized that the Form field value is an object and it remains the object until any manual change has been done. For example, email: [{value: this.user.email || ''}, [Validators.required, this.myCustomValidator]], where private myCustomValidator(control) { console.log(control.value); } gives following: * *control.value is equal to { value: '[email protected]' } on init *control.value is equal to '[email protected]' after edit That's why I didn't see problems with other fileds: they have only required validator and !!({ value: ... }) is always true. So, is it possible to use object notation for FormBuilder fields initialization and what could be wrong in my case? I'd like to have a possibility to set up options like {value: 'myValue', disabled: true} but currently I can't use it due to validators issue. A: Angular distinguishes so-called boxed and unboxed values. Boxed value is a value satisfying the following condition: _isBoxedValue(formState: any): boolean { return typeof formState === 'object' && formState !== null && Object.keys(formState).length === 2 && 'value' in formState && 'disabled' in formState; } https://github.com/angular/angular/blob/7c414fc7463dc90fe189db5ecbf3e5befcde6ff4/packages/forms/src/model.ts#L621-L624 As we can see we should provide both value and disabled properties for boxed value. If you pass unboxed value then angular will treat it like a value. More information could be found here * *https://github.com/angular/angular/blob/master/packages/forms/test/form_control_spec.ts#L50-L77
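A minimal sketch of the form initialization implied by the answer — supplying both value and disabled so each initial state is a proper boxed value (field names taken from the question):

this.userDetailsForm = this.formBuilder.group({
  id: [{value: this.user.userid || '', disabled: true}, [Validators.required]],
  // With both `value` and `disabled` present, Angular unboxes the state,
  // so the validators receive the plain string instead of the wrapper object.
  email: [{value: this.user.email || '', disabled: false}, [Validators.required, Validators.email]],
  firstName: [{value: this.user.firstName || '', disabled: false}, [Validators.required]],
  lastName: [{value: this.user.lastName || '', disabled: false}, [Validators.required]]
});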
{ "language": "en", "url": "https://stackoverflow.com/questions/48252534", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How to import the google map timeline Json data into a DataFrame I got my commuting report from google map timeline and it is in json format. I use these codes: with open('Location History.json', encoding='utf-8') as data_file: data = json.loads(data_file.read()) pd.DataFrame(data) the dataframe has only one 'location' column. {"locations" : [ { "timestampMs" : "1501812184856", "latitudeE7" : 390632197, "longitudeE7" : -771227158, "accuracy" : 10, "velocity" : 1, "heading" : 226, "altitude" : 146, "verticalAccuracy" : 12 }, { "timestampMs" : "1501813902831", "latitudeE7" : 390624516, "longitudeE7" : -771212199, "accuracy" : 10, "velocity" : 5, "heading" : 316, "altitude" : 126, "verticalAccuracy" : 16 }, any advice how I can read the file into multiple columns and one row for each member of dict. A: extract 'location' from initial json, and then convert to DataFrame with open('Location History.json', encoding='utf-8') as data_file: data = json.loads(data_file.read()) pd.DataFrame(data['locations'])
{ "language": "en", "url": "https://stackoverflow.com/questions/45517568", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: FileNotFound Exception MediaPlayer android I have a1.ogg in the res/raw folder try { mp = MediaPlayer.create(this, R.raw.a1); } catch (Exception e) { Log.e("msg",e.getMessage()); } This gives me java.io.FileNotFoundException. The same file in wav format works. A: Try this: MediaPlayer mp = new MediaPlayer(); mp.setDataSource("/data/test.ogg"); // replace with correct location mp.prepare(); mp.start();
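If the file must stay in res/raw, a commonly used alternative is to open it through a raw-resource file descriptor; a minimal sketch (not part of the original answer, meant to run inside an Activity and within the existing try/catch, since these calls throw IOException):

AssetFileDescriptor afd = getResources().openRawResourceFd(R.raw.a1);
MediaPlayer mp = new MediaPlayer();
// Pass the descriptor plus offset/length because the resource sits inside the APK.
mp.setDataSource(afd.getFileDescriptor(), afd.getStartOffset(), afd.getLength());
afd.close();
mp.prepare();
mp.start();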
{ "language": "en", "url": "https://stackoverflow.com/questions/6161415", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: EMC VNXe3200 LUN cli command I'm using the command below to view the LUN configuration on a VNXe1600: /stor/prov/luns/lun show -detail. I'm trying the same command on a VNXe3200, but nothing is displayed. /metrics/metric show This command lists all performance commands supported by the VNXe. Is there a Unisphere command to list all configuration commands supported by the VNXe? Thanks in advance.
{ "language": "en", "url": "https://stackoverflow.com/questions/43225245", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Spring Boot: Prevent Jackson from "reformatting" XMLGregorianCalendar upon JSON serialization My Spring boot application queries an external SOAP service. Having generated the classes from its WSDL/XSDs this is what a class looks like. As you can see, the dateOfBirth is of type XMLGregorianCalendar. The SOAP response contains the date of birth in the following format: 1991-11-08+01:00. public class Applicant { // other properties @XmlElement(required = true) @XmlSchemaType(name = "date") protected XMLGregorianCalendar dateOfBirth; // getters & setters } The Spring application receives the response body from the SOAP service and returns it "as is" to its calling client as JSON serialized by Jackson. The problem is, it seems as if Jackson serializes the dateOfBirth into another format. This is the format the cilent finally receives: { "dateOfBirth": "1976-11-12T23:00:00.000+00:00" } Is there some configuration or custom implementation I could use so that Jackson won't reformat this date? In the worst case I could write a class which maps the SOAP response but this sounds quite tedious. A: Try configuring your mapper like so: mapper.setDateFormat(new SimpleDateFormat("dd-MM-yyyy+hh:mm")); that should work but if you want more control you can use @JsonFormat annotation: public class Applicant { @XmlElement(required = true) @XmlSchemaType(name = "date") @JsonFormat( shape = JsonFormat.Shape.STRING, pattern ="dd-MM-yyyy+hh:mm") protected XMLGregorianCalendar dateOfBirth; } for even more control : https://www.baeldung.com/jackson-annotations source : https://github.com/FasterXML/jackson-docs
{ "language": "en", "url": "https://stackoverflow.com/questions/65330525", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Android Listview Item Remove I want to remove the item in setOnItemLongClickListener, deletion isn't working. Can anyone see what the problem is in the code ? Adapter public abstract class myArrayAdapter<T> extends ArrayAdapter<T> { protected List<T> items = new ArrayList<>(); protected int resource; protected LayoutInflater layoutInflater; public myArrayAdapter(Context context, int resource) { super(context, resource); this.resource = resource; this.layoutInflater = LayoutInflater.from(context); } @Override public View getView(int position, View convertView, ViewGroup parent) { View view = layoutInflater.inflate(resource, null, false); getView(position, getItem(position), view); return view; } public abstract void getView(int position, T model, View view); public void setItems(List<T> items) { this.items = items; notifyDataSetChanged(); } @Override public T getItem(int position) { return items.get(position); } @Override public int getCount() { return items.size(); } public List<T> getItems() { return items; } @Override public int getPosition(T item) { return items.indexOf(item); } } my activity public class QuoteDetailActivity extends Activity { @Inject QuoteDetailViewModel viewModel; @BindView(R.id.toolbar) Toolbar toolbar; @BindView(R.id.price_text) TextView priceTextView; @BindView(R.id.list_view_materials) ListView materialsListView; private int quoteId; myArrayAdapter<LinkedTreeMap<String, Object>> adapter; public static void start(Context context, int quoteId) { Intent starter = new Intent(context, QuoteDetailActivity.class); starter.putExtra("QUOTE_ID", quoteId); context.startActivity(starter); } @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_quote_detail); ButterKnife.bind(this); setSupportActionBar(toolbar); getSupportActionBar().setDisplayHomeAsUpEnabled(true); toolbar.setNavigationOnClickListener(v -> finish()); quoteId = getIntent().getIntExtra("QUOTE_ID", 0); initMaterialList(); bindToViewModel(); } private void bindToViewModel() { viewModel.quoteModel() .compose(bindToLifecycle()) .subscribe(quoteModel -> { getSupportActionBar().setTitle(String.valueOf(quoteModel.get("QuoteName"))); priceTextView.setText(String.valueOf(quoteModel.get("TotalCost")) + " + KDV"); }); viewModel.quoteMaterialModel() .compose(bindToLifecycle()) .subscribe(materialsModel -> { adapter.setItems(materialsModel); }); //teklif silme viewModel.materialDelete() .compose(bindToLifecycle()) .subscribe(aBoolean -> { if (aBoolean) { finish(); } }); attachToViewModel(viewModel); } @Override protected void onResume() { super.onResume(); viewModel.getQuoteDetail(quoteId); } private void initMaterialList(){ adapter = new myArrayAdapter<LinkedTreeMap<String, Object>>(this, R.layout.layout_listview_item_quote_material) { @Override public void getView(int position, LinkedTreeMap<String, Object> model, View view) { TextView materialNameTextView = (TextView) view.findViewById(R.id.material_name); TextView priceAndAmountNameTextView = (TextView) view.findViewById(R.id.price_and_amount); TextView totalCostNameTextView = (TextView) view.findViewById(R.id.total_cost); materialNameTextView.setText(String.valueOf(model.get("MaterialName"))); priceAndAmountNameTextView.setText("Fiy. 
x Mik : " + String.valueOf(model.get("Cost")) + " x " + String.valueOf(model.get("MaterialCount"))); totalCostNameTextView.setText(String.valueOf(model.get("TotalCost"))); } }; materialsListView.setAdapter(adapter); materialsListView.setOnItemLongClickListener(new AdapterView.OnItemLongClickListener() { @Override public boolean onItemLongClick(AdapterView<?> parent, View view, int position, long id) { Resources r = getResources(); int px = (int) TypedValue.applyDimension(TypedValue.COMPLEX_UNIT_DIP, 20, r.getDisplayMetrics()); int pxTop = (int) TypedValue.applyDimension(TypedValue.COMPLEX_UNIT_DIP, 6, r.getDisplayMetrics()); AlertDialog.Builder alertDialog = new AlertDialog.Builder(QuoteDetailActivity.this); alertDialog.setTitle("Delete."); alertDialog.setPositiveButton("Yes", (dialog, which) -> { LinkedTreeMap<String, Object> selectedItem = adapter.getItem(position); int QuoteMaterialId = ((Double) selectedItem.get("QuoteMaterialId")).intValue(); viewModel.deleteMaterial(quoteId,QuoteMaterialId); adapter.remove(adapter.getItem(position)); adapter.notifyDataSetChanged(); }); alertDialog.setNegativeButton("No", (dialog, which) -> { dialog.dismiss(); }); alertDialog.show(); return true; } }); } @Override public void setupComponent(ActivityComponent activityComponent) { DaggerQuoteComponent.builder() .activityComponent(activityComponent) .build() .inject(this); } } Deletion is not happening. Where am I making mistakes? Thanks. A: You have to work with the data set, not with the adapter. e.g: If you fill a ListView with a ArrayList<T> object, if you want to delete a row in the list you have to delete it from the ArrayList and then call the notifyDataSetChanged(). // ArrayList<T> items filled with data // delete the item that you want items.remove(position); // so, communicate to the adapter that the dataset is changed adapter.notifyDataSetChanged(); In your specific case, the item from materialsModel, then notufy it to the adapter, something like follwing: // remove the item // I don't know which method you must call, hope you do ;) materialsModel.remove(position) // then notify the adapter that the dataset is changed adapter.notifyDataSetChanged(); A: try the following code: materialsListView.setOnItemLongClickListener(new AdapterView.OnItemLongClickListener() { @Override public boolean onItemLongClick(AdapterView<?> parent, View view, int position, long id) { Resources r = getResources(); int px = (int) TypedValue.applyDimension(TypedValue.COMPLEX_UNIT_DIP, 20, r.getDisplayMetrics()); int pxTop = (int) TypedValue.applyDimension(TypedValue.COMPLEX_UNIT_DIP, 6, r.getDisplayMetrics()); AlertDialog.Builder alertDialog = new AlertDialog.Builder(QuoteDetailActivity.this); alertDialog.setTitle("Delete."); alertDialog.setPositiveButton("Yes", (dialog, which) -> { LinkedTreeMap<String, Object> selectedItem = adapter.getItem(position); int QuoteMaterialId = ((Double) selectedItem.get("QuoteMaterialId")).intValue(); viewModel.deleteMaterial(quoteId,QuoteMaterialId); adapter.remove(selectedItem); adapter.notifyDataSetChanged(); //adapter.remove(adapter.getItem(position)); //adapter.notifyDataSetChanged(); }); alertDialog.setNegativeButton("No", (dialog, which) -> { dialog.dismiss(); }); alertDialog.show(); return true; } }); A: Either use a different constructor with list items in MyArrayAdapter public MyArrayAdapter(Context context, int resource, List<T> objects) { super(context, resource, objects); } or override the remove method of the adapter and manually delete the items @Override public void 
remove(T object) { items.remove(object); }
{ "language": "en", "url": "https://stackoverflow.com/questions/41429016", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Inheritance - Sharing info between child and parent controllers Context I have a custom Event Entity which has several child Entities: Problem and Maintenance (and few others but those two should be enough to describe the problem) entity classes inherit from Event entity class. The addAction(), seeAction() and modifyAction() of ProblemController and MaintenanceController are (obviously) very similar but with some differences. I want to have a button to display the see view of an Event, no matter if it is a Problem or a Maintenance. Same for modify. For the add action it is a bit different: the user has to say (by clicking on child-specific button) what kind of child he want to add. How I handle this so far In my seeAction() and modifyAction(), I just forward the "call" depending on the type of the child: public function seeAction(Event $event) { if($event instanceof \Acme\EventBundle\Entity\Problem){ return $this->forward('AcmeEventBundle:Problem:see', array('event_id' => $event->getId())); } elseif($event instanceof \Acme\EventBundle\Entity\Maintenance){ return $this->forward('AcmeEventBundle:Maintenance:see', array('maintenance_id' => $event->getId())); } } I have no Event::addAction() but I have a Event::addCommon() which gathers the common parts of the addAction of Problem and Maintenance. Then I call this Event::addCommon() with Controller inheritance. class ProblemController extends EventController { public function addAction(MeasurementSite $measurementSite) { $problem = new Problem(); $problem->setMeasurementSite($measurementSite); $form = $this->createForm(new ProblemType($measurementSite), $problem); $response = parent::addCommon($problem, $form); return $response; } Problem All this looks pretty ugly to me. If I want to share common things between Problem::seeAction() and Maintenance::seeAction(), I will have to call an Event function, but Event already forwarded something!! Information jumps from Parent to Child and vice versa... I would like to know what is the proper way to manage this problem? I looked a bit at setting Controller as a service, using PHP Traits, Routing inheritance but I couldn't extract anything clear and clean from this research... A: I can see how you might end up chasing your tail on this sort of problem. Instead of multiple controllers, consider have one EventController for all the routes along with individual ProblemHelper and MaintainenceHelper objects. The helper objects would have your add/see/modify methods and could extend a CommonHelper class. Your controller would check the entity type, instantiate the helper and pass control over to it.
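A rough PHP sketch of the layout the answer describes — a single controller that picks a per-type helper, with the shared logic in a common base class (all class and method names here are hypothetical, not taken from the original code):

abstract class CommonHelper
{
    // Shared parts of add/see/modify live here instead of in the controllers.
    protected function addCommon($event, $form) { /* shared persistence and form handling */ }

    abstract public function see($event);
    abstract public function modify($event);
}

class ProblemHelper extends CommonHelper
{
    public function see($event) { /* Problem-specific view logic */ }
    public function modify($event) { /* Problem-specific edit flow */ }
}

class MaintenanceHelper extends CommonHelper
{
    public function see($event) { /* Maintenance-specific view logic */ }
    public function modify($event) { /* Maintenance-specific edit flow */ }
}

class EventController
{
    public function seeAction($event)
    {
        // Check the entity type once, instantiate the matching helper, and hand control over to it.
        $helper = $event instanceof Problem ? new ProblemHelper() : new MaintenanceHelper();
        return $helper->see($event);
    }
}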
{ "language": "en", "url": "https://stackoverflow.com/questions/21991678", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Datepicker change event fires multiple times on manual entry I have a datepicker, that has onSelect property that will execute a "change" function on the datepicker input field. The "Change" event runs a function to fill a select list. This works so long as I choose the datepicker, but if I manually enter in a date in the input field, the change event will execute twice and the select list will have duplicated values. $("#txtStartDate").datepicker({ defaultDate: +1, dateFormat: "mm/dd/yy", //yy format represents a (4) digit year minDate: new Date($("#ctl00_ContentPlaceHolder2_hidfldJunDate").val()), maxDate: new Date($("#ctl00_ContentPlaceHolder2_hidfldMayDate").val()), showOtherMonths: true, changeMonth: true, changeYear: true, beforeShowDay: $.datepicker.noWeekends, firstDay: 1, showButtonPanel: true, showOn: "button", onSelect: function () { $(this).change(); }, buttonImage: "../Scripts/JQuery/ui/calendar.gif", buttonImageOnly: true }).mask("99/99/9999"); $("#txtStartDate").change(function () { if (($("#txtStartDate").val() != "") && ($("#txtStartDate").val() != "__/__/____")) { var DoDate = CheckDate($("#txtStartDate").val()); if (DoDate != "Good") { alert("Please enter a valid Start Date"); $("#txtStartDate").val(""); $("#txtStartDate").focus(); } else { SemesterList($("#txtStartDate").val(), 0); } } }); So what do I need to do to prevent the multiple firings of this change event so I don't get duplicates in the select list. I have even tried selectlist.remove thinking this would remove all elements in the list, but it still multiplies. Any help is appreciated. A: Use this syntax to remove the original binding by the Datepicker: $("#txtStartDate").unbind('change').change(function () { // your code });
{ "language": "en", "url": "https://stackoverflow.com/questions/9926809", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Detect mouse speed using jQuery I want to detect mouse speed, for example the event will be fired only if the detected number is greater than 5 I found one example which I think is something similar: http://www.loganfranken.com/blog/49/capturing-cursor-speed/ but i am not able to make it see the number which is inside div and fire the event based on the resulting number, this is my attempt: <!doctype html> <html> <head> <meta charset="UTF-8"> <title>Untitled Document</title> <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.10.2/jquery.min.js"></script> <script src="jquery.cursometer.1.0.0.min.js"></script> <script> $(function() { var $speedometer = $('#speedometer'); $('#test-area').cursometer({ onUpdateSpeed: function(speed) { $speedometer.text(speed); }, updateSpeedRate: 20 }); $("#test-area").mouseover(function(){ if(this.value > 5) { console.log("asdasd"); } else { console.log("bbb"); } }) }); </script> <style> #test-area { background-color: #CCC; height: 300px; } </style> </head> <body> <div id="test-area"></div> <div id="speedometer"></div> </body> </html> A: (I'm the author of the plug-in that's causing the trouble here) The example isn't working because this.value isn't referring to the speed (it's undefined). Here's an updated version of your example: http://jsfiddle.net/eY6Z9/ It would probably be more efficient to store the speed value in a variable rather than within the text value of an HTML element. Here’s an updated version with that enhancement: http://jsfiddle.net/eY6Z9/2/ Hope that helps!
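Following the plug-in author's suggestion to keep the speed in a plain variable rather than reading it back out of the div, a minimal sketch (the threshold of 5 comes from the question; using mousemove instead of mouseover is my assumption, so the check runs continuously while the cursor travels over the area):

var currentSpeed = 0;

$('#test-area').cursometer({
  onUpdateSpeed: function (speed) {
    currentSpeed = speed; // store the latest speed instead of writing it into the DOM
  },
  updateSpeedRate: 20
});

$('#test-area').on('mousemove', function () {
  if (currentSpeed > 5) {
    console.log('fast enough');
  } else {
    console.log('too slow');
  }
});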
{ "language": "en", "url": "https://stackoverflow.com/questions/19416948", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Best practice to authenticate synchronized realm Trying to find any guidance on how to best authenticate to a synchronized realm and making sure not to use any reference to it before. Let's assume there is no need for a user to login, but e.g. a tableview that is being populated by binding it to a realm.objects query. If I authenticate to the remote realm in e.g. viewDidLoad() that is too late, applicationDidFinishLaunching() also too late. I could, of course, show an empty results list first or an empty local realm, but to me that all doesn't look clean. Any suggestions? A: I'd recommend you to not to use Realm before you have an authenticated user, you can show some login view to handle authentication and show your other view controller after user is authenticated. // LogInViewController ... func logIn() { SyncUser.authenticate(with: credential, server: serverURL) { user, error in if let user = user { Realm.Configuration.defaultConfiguration = Realm.Configuration( syncConfiguration: (user, syncURL) ) // Show your table view controller or use `try! Realm()` } else { // Present error } } } Please check also RealmTasks example here: https://github.com/realm/RealmTasks
{ "language": "en", "url": "https://stackoverflow.com/questions/39914191", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How can I use a list to create a table in html? I am writing a django app with bootstrap and I have a table with a head in one of my pages like the one below: <table class="table table-striped table-sm table-bordered table-fixed table"> <thead class="thead-inverse table_head"> <tr class="table_head_row"> <th class="table_head_item">header 1</th> <th class="table_head_item">header 2</th> <th class="table_head_item">header 3</th> <th class="table_head_item">header 4</th> <th class="table_head_item">header 5</th> <th class="table_head_item">header 6</th> <th class="table_head_item">header 7</th> <th class="table_head_item">header 8</th> <th class="table_head_item">header 9</th> <th class="table_head_item">header 10</th> </tr> </thead> </table> In the above example each header has its own line of code, 10 headers, 10 lines. There is a lot of repetition. I am looking for a way using HTML and not django, to not type almost the same line of code every time. Using django I could pass the list with the headers from the page's view, like so: <table class="table table-striped table-sm table-bordered table-fixed table"> <thead class="thead-inverse table_head"> <tr class="table_head_row"> {% for header in table_headers %} <th class="table_head_item">header</th> {% endfor %} </tr> </thead> </table> Below is a mock up in HTML of what it could be like: table_headers = ["header_1", "header_2", ....] <table class="table table-striped table-sm table-bordered table-fixed table"> <thead class="thead-inverse table_head"> <tr class="table_head_row"> for header in table_headers <th class="table_head_item">header</th> </tr> </thead> </table> Can I do something like this just in html and in django ?
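For reference, a minimal sketch of the Django version sketched above, with the loop variable actually printed through template syntax (the view name and template file name are assumptions):

# views.py
from django.shortcuts import render

def table_view(request):
    table_headers = ["header 1", "header 2", "header 3"]  # and so on up to header 10
    return render(request, "table.html", {"table_headers": table_headers})

<!-- table.html -->
<tr class="table_head_row">
  {% for header in table_headers %}
    <th class="table_head_item">{{ header }}</th>
  {% endfor %}
</tr>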
{ "language": "en", "url": "https://stackoverflow.com/questions/48684573", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to validate that the character length equals 5 using express-validator? I am planning to make the partnerid field exactly 5 characters long, which means the user will get an error message if they enter fewer or more than 5 characters, alphanumeric characters included. How can we do this using express-validator? I tried the code below but it didn't work. Thanks req.checkBody('partnerid', 'Partnerid field must be 5 characters long').len(5); A: You can use the isLength() option of express-validator to check for a max and min length of 5: req.checkBody('partnerid', 'Partnerid field must be 5 characters long').isLength({ min: 5, max: 5 }); A: You can use the matches option of express-validator to check that the partnerid field only contains alphanumeric characters and has a length of 5: req.checkBody('partnerid', 'Partnerid field must be 5 characters long').matches(/^[a-zA-Z0-9]{5}$/, "i"); A: .len(5) is not supported in express-validator; you can use .isLength() to check for a max and min length of 5: req.checkBody('partnerid', 'Partnerid field must be 5 characters long').isLength({ min: 5, max: 5 });
{ "language": "en", "url": "https://stackoverflow.com/questions/50405183", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Added new website to IIS8 and still goes to Default Website Setting up a new server - 2012 with IIS8. I added a new web site to IIS8. I set the IP address and the hostname. Yet when I browse to the site, it goes to the Default Website instead. I stopped the Default Web Site and I get a 404. I tried restarting IIS as well. I read a few pages on how to set up a new website and I seem to be in compliance. The Default Website is just a fallback as I understand it. With the hostname and/or the IP address set, IIS should direct traffic to the new web site, right? I made sure the folder had permissions for the app pool (though Test Settings still fails the Authorization test) and the IUSR. I temporarily used the Administrator account instead of pass-through so I could pass the Authorization test. I enabled ASP.NET impersonation. I verified the default document file name. I tried a simple .html to get around any ASP.NET issues. I deleted it and started from scratch, using the DefaultAppPool this time. What am I missing here? Thanks, Brad
{ "language": "en", "url": "https://stackoverflow.com/questions/18708659", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Higher validation accuracy, than training accurracy using Tensorflow and Keras I'm trying to use deep learning to predict income from 15 self reported attributes from a dating site. We're getting rather odd results, where our validation data is getting better accuracy and lower loss, than our training data. And this is consistent across different sizes of hidden layers. This is our model: for hl1 in [250, 200, 150, 100, 75, 50, 25, 15, 10, 7]: def baseline_model(): model = Sequential() model.add(Dense(hl1, input_dim=299, kernel_initializer='normal', activation='relu', kernel_regularizer=regularizers.l1_l2(0.001))) model.add(Dropout(0.5, seed=seed)) model.add(Dense(3, kernel_initializer='normal', activation='sigmoid')) model.compile(loss='categorical_crossentropy', optimizer='adamax', metrics=['accuracy']) return model history_logs = LossHistory() model = baseline_model() history = model.fit(X, Y, validation_split=0.3, shuffle=False, epochs=50, batch_size=10, verbose=2, callbacks=[history_logs]) And this is an example of the accuracy and losses: and . We've tried to remove regularization and dropout, which, as expected, ended in overfitting (training acc: ~85%). We've even tried to decrease the learning rate drastically, with similiar results. Has anyone seen similar results? A: You can check the Keras FAQ and especially the section "Why is the training loss much higher than the testing loss?". I would also suggest you to take some time and read this very good article regarding some "sanity checks" you should always take into consideration when building a NN. In addition, whenever possible, check if your results make sense. For example, in case of a n-class classification with categorical cross entropy the loss on the first epoch should be -ln(1/n). Apart your specific case, I believe that apart from the Dropout the dataset split may sometimes result in this situation. Especially if the dataset split is not random (in case where temporal or spatial patterns exist) the validation set may be fundamentally different, i.e less noise or less variance, from the train and thus easier to to predict leading to higher accuracy on the validation set than on training. Moreover, if the validation set is very small compared to the training then by random the model fits better the validation set than the training.] A: There are a number of reasons this can happen.You do not shown any information on the size of the data for training, validation and test. If the validation set is to small it does not adequately represent the probability distribution of the data. If your training set is small there is not enough data to adequately train the model. Also your model is very basic and may not be adequate to cover the complexity of the data. A drop out of 50% is high for such a limited model. Try using an established model like MobileNet version 1. It will be more than adequate for even very complex data relationships. Once that works then you can be confident in the data and build your own model if you wish. Fact is validation loss and accuracy do not have real meaning until your training accuracy gets reasonably high say 85%. A: I solved this by simply increasing the number of epochs A: This indicates the presence of high bias in your dataset. It is underfitting. The solutions to issue are:- * *Probably the network is struggling to fit the training data. Hence, try a little bit bigger network. *Try a different Deep Neural Network. I mean to say change the architecture a bit. *Train for longer time. 
*Try using advanced optimization algorithms. A: This is actually a fairly common situation. When there is not much variance in your dataset you can see behaviour like this. Here you could find an explanation of why this might happen. A: This happens when you use Dropout, since the behaviour during training and testing is different. When training, a percentage of the features are set to zero (50% in your case since you are using Dropout(0.5)). When testing, all features are used (and are scaled appropriately). So the model at test time is more robust - and can lead to higher testing accuracies. A: Adding dropout to your model gives it more generalization, but it doesn't have to be the cause. It could be because your data is unbalanced (has bias), and that's what I think. A: I don't think that it is a dropout layer problem. I think that it is more related to the number of images in your dataset. The point here is that you are working with a large training set and too small a validation/test set, so the latter is far too easy to predict. Try data augmentation and other techniques to get your dataset bigger! A: I agree with @Anas' answer; the situation might be solved after you increase the number of epochs. Everything is ok, but sometimes it is just a coincidence that the initialized model exhibits better performance on the validation/test dataset compared to the training dataset.
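To make the Dropout point concrete, a small self-contained sketch (synthetic data, not the poster's dataset) comparing the loss reported by fit() — computed batch-by-batch with dropout active — against evaluate() on the same training rows, where dropout is switched off:

import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Synthetic stand-in data: 299 features, 3 classes, purely for illustration.
X = np.random.rand(1000, 299)
Y = keras.utils.to_categorical(np.random.randint(0, 3, 1000), 3)

model = keras.Sequential([
    layers.Dense(100, activation="relu", input_shape=(299,)),
    layers.Dropout(0.5),
    layers.Dense(3, activation="softmax"),
])
model.compile(loss="categorical_crossentropy", optimizer="adamax", metrics=["accuracy"])

history = model.fit(X, Y, validation_split=0.3, epochs=5, batch_size=10, verbose=0)

# fit() averages the loss over batches with dropout ON; evaluate() runs the
# same rows in inference mode with dropout OFF, so it usually comes out lower.
train_loss, train_acc = model.evaluate(X[:700], Y[:700], verbose=0)
print("loss reported during training:", history.history["loss"][-1])
print("loss on the same rows in inference mode:", train_loss)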
{ "language": "en", "url": "https://stackoverflow.com/questions/43979449", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "88" }
Q: Data returns array instead of an object Its probably some stupid mistake but tell me what am i doing wrong? I made it in 2 ways. * *In the first solution i send the data from my VsCode to the database and then retrieve it. *In the second solution i just send parsed json file as a resposne directly from VsCode folder. I want the data from the first solution return me an object (like in the 2nd solution) not an array. 1 solution: //IMPORTING TO DATABASE const questions = JSON.parse( fs.readFileSync(`${__dirname}/questions.json`, "utf-8") ); const importData = async () => { try { await Questions.create(questions); console.log("Data successfully loaded!"); } catch (err) { console.log(err); } process.exit(); }; //MODEL const mongoose = require("mongoose"); const questionsSchema = new mongoose.Schema({ beginner: [ { questionText: String, answerOptions: Array, }, ], intermediate: [ { questionText: String, answerOptions: Array, }, ], advanced: [ { questionText: String, answerOptions: Array, }, ], }); // RESPONSE const getAllQuestions = async (req, res, next) => { try { const data = await Questions.find(); res.status(200).json({ status: "success", data, }); } catch (err) { res.status(500).json({ status: "fail", message: err, }); } next(); }; my json structure: { "beginner": [...], "intermediate": [...], "advanced": [...], } 2nd solution: When i just simply read the json file and send it as a response it works correctly: const fs = require("fs"); const dataPath = require("path").resolve(__dirname, "../data/questions.json"); const data = JSON.parse(fs.readFileSync(dataPath)); exports.getAllQuestions = async (req, res, next) => { try { res.status(200).json({ status: "success", data, }); } catch (err) { res.status(500).json({ status: "fail", message: err, }); } next(); }; A: To add further to @eol's response, if you want to query on the entry db collection, you need to pass an empty object to the Question.find({}) method. If you want to return one of the different difficulties within the doc, then I believe you treat the response like any other object with properties.
{ "language": "en", "url": "https://stackoverflow.com/questions/75022686", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Neo4j not performing for undirected relationship I've been working with neo4j 4.1 for a while now and whilst I feel that the graph structure should be a good fit for my problem, I can't get it to perform in any reasonable time. I'll detail the model and problem below, but I'm wondering whether (a) graphs are just not a good fit or (b) I've modelled the problem incorrectly. In my domain, I have two labels: Person and Skill. Each as an id attribute and there is an index on this attribute. Skills are related to one another in a parent-child relationship, implying that one or more child skills belong to the parent skill, as follows: (s:Skill)-[r:IS_IN_CAT]->(s2:Skill) A Person is related to a Skill as follows: (p:Person)-[r:HAS_SKILL]->(s:Skill) This is illustrated as below: The question I want to ask is, given a Person who has a skill, find me all paths to all other people via that skill. In the diagram above, if Person A was the person, I'd expect 2 paths: (Person A) - [HAS_SKILL] - (Skill 1-1-1) - [IS_IN_CAT] - (Skill 1-1) - [IS_IN_CAT] - (Skill 1-1-2) - [HAS_SKILL] - (Person B) And (Person A) - [HAS_SKILL] - (Skill 1-1-1) - [IS_IN_CAT] - (Skill 1-1) - [IS_IN_CAT] - (Skill 1) - [IS_IN_CAT] - (Skill 1-2) - [HAS_SKILL] - (Person C) The way I'm asking this query is as follows. MATCH (p:Person {id: 100}) - [h:HAS_SKILL] -> (s:Skill) - [r:IS_IN_CAT*..] - (s2:Skill) <- [h2:HAS_SKILL] - (p2:Person) For any moderately sized graph (10,000 skills, 1000 people, 5 skills per person) this doesn't ever return. I'm fairly sure it's the undirected nature of the [r:IS_IN_CAT*..] part of the query but I don't see how to re-model to make this perform any better. Any help would be appreciated. A: I eventually solved this by changing my query to rely on directed relationships only. This brought the performance down to sub-second for very large data sets. The query ended up looking like: MATCH (p:Person {id: 100}) - [h:HAS_SKILL] -> (s:Skill) - [r:IS_IN_CAT*..] -> (parentSkill:Skill) <- [r:IS_IN_CAT*..] - (s2:Skill) <- [h2:HAS_SKILL] - (p2:Person) The introduction of the parentSkill allowed the relationships to stay directional.
{ "language": "en", "url": "https://stackoverflow.com/questions/65717194", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Convert a context diff to unified diff format I have received a patch in the context diff format, and I need to apply it in Git. As far as I know, Git can only apply patches that are in the unified diff format. Is there any way to convert a context diff into unified diff format so that I can then git apply the modified patch? A: Here's a solution I've recently found when looking for a solution to the same issue. Using quilt: dev-util/quilt-0.65::gentoo was built with the following: USE="-emacs -graphviz" ABI_X86="(64)" from gentoo, and the following command line session, I was able to painlessly convert a context diff into a unified diff, and adjust the strip level (option -p in patch) from -p0 to -p1 (always use -p1 guys, it will make your and others' lives much easier!) $ tar xf SDL2-2.0.8.tar.gz $ cd SDL2-2.0.8 $ quilt new SDL2-2.0.8.unified.patch $ quilt --quiltrc - fold -p 0 < ../SDL2-2.0.8.context.patch # arbitrary -p0 context diff I created for this exercise $ quilt refresh # your new -p1 unified diff can be found at SDL2-2.0.8/patches/SDL2-2.0.8.unified.patch Answering this here as this is one of the highest results in google for queries related to converting a context diff to a unified one. Should work in any distro, I'm just reporting exactly what I have for posterity's sake. Just found a 'better' way, but requires its own sort of prep work. For this you just need the patch file itself. You will require patchutils dev-util/patchutils-0.3.4::gentoo was built with the following: USE="-test" ABI_X86="(64)" $ $EDITOR SDL2-2.0.8.context.patch # remove all lines like: 1 diff -cr SDL2-2.0.8/src/SDL.c SDL2-2.0.8.new/src/SDL.c (apparently not needed in current git) # save and quit $ filterdiff --format=unified < SDL2-2.0.8.context.patch > SDL2-2.0.8.unified.patch # you can even do this inside vim with :%!filterdiff --format=unified Hope this helps! A: Since git diff can only be configured to produce a context diff (or be filtered to produce one), a possible simple approach would be to use patch to apply manually that context diff, then use git add to detect the change.
{ "language": "en", "url": "https://stackoverflow.com/questions/50166072", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Add the data after the header row in google sheets I am successfully able to retrieve data from an API and paste the data to the sheet. This works well. Code.gs const sheet = SpreadsheetApp.getActiveSheet(); const headers = Object.keys(data.data[0]); const values = [headers, ...data.data.map(o => headers.map(h => o[h]))]; sheet.getRange(sheet.getLastRow() + 1, 1, values.length, values[0].length).setValues(values); This is the problem, I am facing: When I run the query or the function for the last 30 days, it pastes the data to the sheet which is all good. But when I run the query again, it pastes the data with the headers and rows again hence the header data is duplicated. What I want If I run the query for yesterday's date, it should paste headers and row with yesterday's data. And then if I run the query in today's date, it should only add a row with today's data. A: In your situation, how about the following modification? From: const values = [headers, ...data.data.map(o => headers.map(h => o[h]))]; To: var values = data.data.map(o => headers.map(h => o[h])); if (sheet.getLastRow() == 0) values.unshift(headers); * *In this modification, by checking whether the 1st row is existing, the header row is added.
{ "language": "en", "url": "https://stackoverflow.com/questions/68014817", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: WCF Async Service or MVC Async controller thread usage? Can anyone explain when to use async controller versus an async wcf service? As in thread usage etc? My async controller looks like this public class EventController : AsyncController { [HttpPost] public void RecordAsync(EventData eventData) { AsyncManager.OutstandingOperations.Increment(); Debug.WriteLine(string.Empty); Debug.WriteLine("****** Writing to database -- n:{0} l:{1} ******", eventData.Name, eventData.Location); new Thread(() => { AsyncManager.Parameters["eventData"] = eventData; AsyncManager.OutstandingOperations.Decrement(); }).Start(); } public ActionResult RecordCompleted(EventData eventData) { Debug.WriteLine("****** Complete -- n:{0} l:{1} ******", eventData.Name, eventData.Location); Debug.WriteLine(string.Empty); return new EmptyResult(); } } vs my WCF Service [ServiceBehavior(Namespace = ServiceConstants.Namespace)] public class EventCaptureService: IEventCaptureService { public EventCaptureInfo EventCaptureInfo { get; set; } public IAsyncResult BeginAddEventInfo(EventCaptureInfo eventInfo, AsyncCallback wcfCallback, object asyncState) { this.EventCaptureInfo = eventInfo; var task = Task.Factory.StartNew(this.PersistEventInfo, asyncState); return task.ContinueWith(res => wcfCallback(task)); } public void EndAddEventInfo(IAsyncResult result) { Debug.WriteLine("Task Completed"); } private void PersistEventInfo(object state) { Debug.WriteLine("Foo:{0}", new object[]{ EventCaptureInfo.Foo}); } } Notice the WCF Service uses Task whereas the Controller uses Thread. I know little about threads and how they work. I was just wondering which of these would be more effecient and not bomb my server. The overall goal is to capture some activity in a Web App and call either the controller or the service (either of which will be on a different domain) and which is a better approach is better. Which is truly async? Do they use the same threads? Any help, tips, tricks, links etc... is always appreciated. My basic question is Async WCF Service or Async MVC Controller? Thanks!
{ "language": "en", "url": "https://stackoverflow.com/questions/11198404", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Integrating Bootstrap Carousel Indicators with ACF Plugin I am using the advanced custom fields Wordpress plugin to create various slides for a slider. To display the slider I am using Bootstraps Carousel. The body of the slider if functioning fine. I don't, however, know how to loop through, count the slides and print a carousel indicator to the page for each slide. I currently have 3 hardcoded at the top of the slider. <ul id="carouselExampleIndicators" class="carousel slide" data-ride="carousel"> <ol class="carousel-indicators"> <li data-target="#carouselExampleIndicators" data-slide-to="0" class="active"></li> <li data-target="#carouselExampleIndicators" data-slide-to="1"></li> <li data-target="#carouselExampleIndicators" data-slide-to="2"></li> </ol> <li class="carousel-inner"> <?php $c = 0; $class = ''; while ( have_rows('slide') ) : the_row(); $c++; if ( $c == 1 ){ $class = ' active';} else{ $class=''; } ?> <?php $image = get_sub_field('image'); ?> <div class="carousel-item <?php echo $class; ?> image" style="background: url('<?php echo $image; ?>') no-repeat; background-size: cover; background-position: left center;"> </div> <?php endwhile; ?> </li> <!-- end li.image --> </ul> <!-- end ul --> I need to find a way to open the ordered list before the slider starts and close it when it ends. At the same time, I need to echo out its li elements for each slide. A: <section id="banner"> <?php if( have_rows('slides') ) { ?> <?php $num = 0; $active = 'active'; ?> <div id="carouselExampleIndicators" class="carousel slide" data-ride="carousel"> <ol class="carousel-indicators"> <?php while( have_rows('slides') ) : the_row() ; ?> <li data-target="#carouselExampleIndicators" data-slide-to="<?php echo $num; ?>" class="<?php echo $active; ?>"></li> <?php $num++; $active = ''; ?> <?php endwhile; ?> </ol> <div class="carousel-inner"> <?php $active = 'active'; ?> <?php while( have_rows('slides') ) : the_row() ; $image = get_sub_field('image'); $mainText = get_sub_field('main_text'); $subText = get_sub_field('sub_text'); ?> <div class="carousel-item <?php echo $active; ?>"> <img class="d-block w-100" src="<?php echo $image['url']; ?>" alt="<?php echo $image['alt']; ?>"> <div class="carousel-caption d-none d-md-block"> <h5><?php echo $mainText; ?></h5> <p><?php echo $subText; ?></p> </div> </div> <?php $active = ''; ?> <?php endwhile; ?> </div> <a class="carousel-control-prev" href="#carouselExampleIndicators" role="button" data-slide="prev"> <span class="carousel-control-prev-icon" aria-hidden="true"></span> <span class="sr-only">Previous</span> </a> <a class="carousel-control-next" href="#carouselExampleIndicators" role="button" data-slide="next"> <span class="carousel-control-next-icon" aria-hidden="true"></span> <span class="sr-only">Next</span> </a> </div> <?php } ?> </section> A: If your HTML is like this : <div id="carouselExampleIndicators" class="carousel slide" data-ride="carousel"> <ol class="carousel-indicators"> <li data-target="#carouselExampleIndicators" data-slide-to="0" class="active"></li> <li data-target="#carouselExampleIndicators" data-slide-to="1"></li> <li data-target="#carouselExampleIndicators" data-slide-to="2"></li> </ol> <div class="carousel-inner" role="listbox"> <div class="carousel-item active"> <img class="d-block img-fluid" src="..." alt="First slide"> </div> <div class="carousel-item"> <img class="d-block img-fluid" src="..." alt="Second slide"> </div> <div class="carousel-item"> <img class="d-block img-fluid" src="..." 
alt="Third slide"> </div> </div> <a class="carousel-control-prev" href="#carouselExampleIndicators" role="button" data-slide="prev"> <span class="carousel-control-prev-icon" aria-hidden="true"></span> <span class="sr-only">Previous</span> </a> <a class="carousel-control-next" href="#carouselExampleIndicators" role="button" data-slide="next"> <span class="carousel-control-next-icon" aria-hidden="true"></span> <span class="sr-only">Next</span> </a> </div> Then use this php code : <?php $sliders = get_field('slide'); if($sliders){ ?> <div id="carouselExampleIndicators" class="carousel slide" data-ride="carousel"> <ol class="carousel-indicators"> <?php $isActive =''; foreach($sliders as $key=>$slider){ if($key==0){ $isActive = 'active'; } echo '<li data-target="#carouselExampleIndicators" data-slide-to="'.$key.'" class="'.$isActive.'"></li>'; } ?> </ol> <div class="carousel-inner" role="listbox"> <?php $activeSlide =''; foreach($sliders as $key=>$sliderimg){ if($key==0){ $activeSlide = 'active'; } echo '<div class="carousel-item '.$activeSlide.'">'; echo '<img class="d-block img-fluid" src="'.$sliderimg['image']." alt="First slide">'; echo '</div>'; ?> </div> <a class="carousel-control-prev" href="#carouselExampleIndicators" role="button" data-slide="prev"> <span class="carousel-control-prev-icon" aria-hidden="true"></span> <span class="sr-only">Previous</span> </a> <a class="carousel-control-next" href="#carouselExampleIndicators" role="button" data-slide="next"> <span class="carousel-control-next-icon" aria-hidden="true"></span> <span class="sr-only">Next</span> </a> <?php } ?>
{ "language": "en", "url": "https://stackoverflow.com/questions/46996997", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Passing value to Function through variables in R I have the following dataframe library(dplyr) vec1 = 1:10 vec2 = 11:20 df = data.frame(col1 = vec1, col2 = vec2) x <- df %>% summarize(mu = mean(col1, na.rm=TRUE)) x Now, instead of directly using col1 in the mean function, I want to first save col1 into a variable and then pass that value to mean. Here is what I want to do. library(dplyr) vec1 = 1:10 vec2 = 11:20 df = data.frame(col1 = vec1, col2 = vec2) var = df$col1 x <- df %>% summarize(mu = mean(var, na.rm=TRUE)) x But R doesn't accept it and throws the following error. Error: unexpected symbol in "x <- df %>% summarize(mu = mean(var, na.rm=TRUE)) x" So how do I pass a value through a variable into mean? A: Your code works without any error, but I think what you were trying to do was: library(dplyr) var = 'col1' x <- df %>% summarize(mu = mean(.data[[var]], na.rm=TRUE)) x
{ "language": "en", "url": "https://stackoverflow.com/questions/67244274", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Javascript/PHP - reCaptcha styles are broken I'm using reCaptcha in this form I build, but for some reason, it loses it's styling... Does anyone have a clue why? The form: -link no longer needed- The code for the form: <td align="center" colspan="2"> <script type="text/javascript">/*<![CDATA[*/ var RecaptchaOptions = { theme : "clean", lang: "en" }; /*]]>*/</script> <script type="text/javascript" src="http://api.recaptcha.net/challenge?k=6LfExcoSAAAAAFuAzQEMIDXCkWN3Y9nRd9uLfetc"></script> <iframe src="http://api.recaptcha.net/noscript?k=6LfExcoSAAAAAFuAzQEMIDXCkWN3Y9nRd9uLfetc" height="250px" width="100%" frameborder="0" title="CAPTCHA test"></iframe><br /> <textarea name="recaptcha_challenge_field" id="tswcaptcha" rows="3" cols="40"></textarea> <input type="hidden" name="recaptcha_response_field" value="manual_challenge" /> </td> It just doesn't make sense that the style is broken, because I've got it to work on another site... Could it be because I load the reCaptcha form using jQuery? A: i cant even see where the reCaptcha should get its style from? reCaptcha Docs A: I ended up using the AJAX API for the ReCaptcha, which works like a charm! More info about there here: http://code.google.com/intl/nl/apis/recaptcha/docs/display.html#AJAX
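For reference, here is a minimal sketch of the AJAX-style rendering that the answer links to (the old reCAPTCHA v1 Recaptcha.create call). The public key and the target element id are placeholders, and the exact script URL and options should be double-checked against the linked documentation:
<div id="recaptcha_div"></div>
<script type="text/javascript" src="http://api.recaptcha.net/js/recaptcha_ajax.js"></script>
<script type="text/javascript">
  // Render the widget into the empty div once it exists (e.g. after the jQuery-loaded form is in the DOM)
  Recaptcha.create("your_public_key", "recaptcha_div", {
    theme: "clean",
    callback: Recaptcha.focus_response_field
  });
</script>
Because the widget is created explicitly from script, this approach also plays nicely with forms that are injected via jQuery after page load, which is likely why it resolved the styling problem here.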
{ "language": "en", "url": "https://stackoverflow.com/questions/8368465", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Ravendb memory leak on query I'm having a hard time solving an issue with RavenDB. At my work we have a process that tries to identify potential duplicates in our database on a specified collection (let's call it the users collection). That means I'm iterating through the collection, and for each document there is a query that tries to find similar entities. So just imagine, it's quite a long task to run. My problem is, when the task starts running, the memory consumption of RavenDB goes higher and higher, it's literally just growing and growing, and it seems to continue until it reaches the maximum memory of the system. But it doesn't really make sense, since I'm only querying, I'm using one single index, and I take the default page size when querying (128). Has anybody met a similar problem like this? I really have no idea what is going on in RavenDB, but it seems like a memory leak. RavenDB version: 3.0.179 A: First, a recommendation: if you don't want duplicates, store them with a well-known ID. For example, suppose you don't want duplicate User objects. You'd store them with an ID that makes them unique: var user = new User() { Email = "[email protected]" }; var id = "Users/" + user.Email; // A well-known ID dbSession.Store(user, id); Then, when you want to check for duplicates, just check against the well known name: public string RegisterNewUser(string email) { // Unlike .Query, the .Load call is ACID and never stale. var existingUser = dbSession.Load<User>("Users/" + email); if (existingUser != null) { return "Sorry, that email is already taken."; } } If you follow this pattern, you won't have to worry about running complex queries nor worry about stale indexes. If this scenario can't work for you for some reason, then we can help diagnose your memory issues. But to diagnose that, we'll need to see your code. A: When I need to do massive operations on large collections, I follow these steps to prevent memory problems (see the sketch below): * *I use Query Streaming to extract all the ids of the documents that I want to process (with a dedicated session) *I open a new session for each id, load the document, and then do what I need.
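A rough sketch of the streaming approach from the second answer, using the RavenDB 3.x client; store is assumed to be the initialized IDocumentStore, the User type and property names are placeholders, and the exact streaming API surface (Advanced.Stream, StreamResult.Key) should be verified against the client version in use:
var ids = new List<string>();

// 1) A dedicated session that only streams the ids of the documents to process
using (var session = store.OpenSession())
{
    var enumerator = session.Advanced.Stream(session.Query<User>());
    while (enumerator.MoveNext())
    {
        ids.Add(enumerator.Current.Key); // document id of the streamed result
    }
}

// 2) A fresh, short-lived session per document keeps each session (and its tracked entities) small
foreach (var id in ids)
{
    using (var session = store.OpenSession())
    {
        var doc = session.Load<User>(id);
        // ... run the duplicate check for this document here ...
    }
}
Streaming bypasses the session's change tracking, and disposing a session per document stops tracked entities from piling up, which is a common cause of memory growth in long-running loops on the client side.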
{ "language": "en", "url": "https://stackoverflow.com/questions/47546798", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Versioning git repo inside other git repo I am working on developing a little piece of software that works with git repositories, and I would like to be able to write some tests. In order to do so, I created a test-repo folder inside my project that itself is a git repository. In my tests I reference that repository so that the commands I run operate against a repository in a precisely known state. My question is: Can I version that nested repo as part of the main repo of the project? Please note this is not really the same problem that submodules solve: The nested repo is really part of the enclosing project, not an externally referenced piece of software. A: I think the problem is that git detects its own .git files and doesn't allow you to work with them. If you however rename your test repo's .git folder to something different, e.g. _git, it will work. The only thing you then need to do is use the GIT_DIR variable or the --git-dir command line argument in your tests to specify the folder (see the sketch below). A: Even though it is not an "externally referenced piece of software", submodules are still a good approach, in that they help to capture a known state of repositories. I would rather put both repo and test-repo within a parent repo "project" (project at the top, with repo and test-repo inside it). That way, I can record the exact SHA1 of both repo and test-repo.
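As a concrete illustration of the first answer (paths and names are just examples): rename the nested repo's .git directory so the outer repo stops treating it as a repository, then point git at it explicitly from the tests:
mv test-repo/.git test-repo/_git
git add test-repo                      # the fixture now versions like ordinary files in the outer repo

# inside the tests, address the fixture repo explicitly
git --git-dir=test-repo/_git --work-tree=test-repo status
GIT_DIR=test-repo/_git GIT_WORK_TREE=test-repo git log --oneline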
{ "language": "en", "url": "https://stackoverflow.com/questions/14634347", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Insert data into blog in WordPress by Json link I want to update my post by JSON link, I have got all post data by this link. http://xyz/wp-json/custom/v1/all-posts. how can I setup cron jobs for auto-update. A: $slices = json_decode(file_get_contents('http://27.109.19.234/decoraidnew/wp-json/custom/v1/all-posts'),true); if ($slices) { foreach ($slices as $slice) { $title = $slice[1]; } } $my_post = array( 'post_title' => $title, 'post_content' => 'This is my content', 'post_status' => 'publish', 'post_author' => 1, 'post_category' => array(8,39) ); $post_id = wp_insert_post( $my_post, $wp_error );
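For the cron part of the question, a sketch using WP-Cron; the hook name and the callback body are illustrative, and the insert code above would normally live inside that callback (ideally with a lookup against existing posts so repeated runs update rather than duplicate):
// Schedule an hourly import if one is not scheduled yet (e.g. from a plugin or theme)
add_action( 'init', function () {
    if ( ! wp_next_scheduled( 'myprefix_import_posts' ) ) {
        wp_schedule_event( time(), 'hourly', 'myprefix_import_posts' );
    }
} );

// WP-Cron fires this hook on schedule; fetch the JSON feed and insert/update posts
add_action( 'myprefix_import_posts', function () {
    $response = wp_remote_get( 'http://xyz/wp-json/custom/v1/all-posts' );
    if ( is_wp_error( $response ) ) {
        return;
    }
    $items = json_decode( wp_remote_retrieve_body( $response ), true );
    foreach ( (array) $items as $item ) {
        // build the $my_post array from $item and call wp_insert_post() or wp_update_post()
    }
} );
Note that WP-Cron only runs when the site gets traffic; for a strict schedule, a real system cron job hitting wp-cron.php is the usual workaround.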
{ "language": "en", "url": "https://stackoverflow.com/questions/53720547", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: What are some tips for troubleshooting builds of complicated software? Sometimes I want to build Python or GCC from scratch just for fun, but I can't parse the errors I get, or don't understand statements like "libtool link error # XYZ". What are some tricks that unix/systems gurus use to compile software of this size from scratch? Of course I already do things like read config.log (if there is one), google around, and post in newsgroups. I'm looking for things that either * *make the process go smoother or *get me more information about the error to help me understand and fix it. It's a little tough to get this information sometimes, because some compile bugs can be quite obscure. What can I do at that point? A: My 5 cents: * *The only way to have things as smooth as possible is to read whatever the INSTALL/README/other instructions say and follow them as closely as possible. Try the default options first. No other silver bullets, really. *If things don't go smooth, bluntly copy-paste the last error message into google. With high probability you are not the first one to get it and you'll easily find the fix. *If you are the first one to get it, re-read INSTALL/README and think twice on what might be the peculiarity of your particular system config. If you don't know of any, prepare for longer battles. Whenever possible, I would avoid dealing with software that gets me to this point. *An exception to the above rule is a linker error. Those are usually easy to understand and most often mean that your system libraries don't match the ones expected by the software. Re-read documentation, fetch the correct libraries, recompile. On some distributions this might be a lot of pain, though. *My personal experience shows that whenever I get a compilation error which I can't resolve easily, going into the code and looking at the specific line which caused the error helps more than anything else. Excuse me if that's all obvious stuff. A: * *Understand what are the steps in general/particular build process (feature checks, dependency checks, generation of derived sources, compilation, linking, installation, etc.) *Understand the tools used for the above (automake might be an exception here :) *Check the prerequisites (OS and libraries versions, additional packages, etc.) *Have a clean build environment - installing everything with all possible features on the same system will sure get you into dependency conflicts sooner or later. Hope this helps.
{ "language": "en", "url": "https://stackoverflow.com/questions/2657151", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Issue using kernel linked list in user space I have a linked list like this. The problem i am facing is i have initialized the list head and then i am doing malloc for that struct object. What i need is i want to make struct nsds_st *r as head of linked list and other nodes after it. #ifndef __LIST_H #define __LIST_H struct list_head { struct list_head *next, *prev; }; typedef struct list_head st_list_head; #define INIT_LIST_HEAD(ptr) do { \ (ptr)->next = (ptr); (ptr)->prev = (ptr); \ } while (0) /* * __list_add: insert new between prev and next. */ static inline void __list_add(struct list_head *new, struct list_head *prev, struct list_head *next) { next->prev = new; new->next = next; new->prev = prev; prev->next = new; } /** * list_add: add a new entry to the end of the list. */ static inline void list_add1(struct list_head *new, struct list_head *head) { __list_add(new, head->prev, head); } /* *__list_del: making the prev/next entries point to each other. */ static inline void __list_del(struct list_head *prev, struct list_head *next) { next->prev = prev; prev->next = next; } /** * list_del: deletes entry from list. */ static inline void list_del(struct list_head *entry) { __list_del(entry->prev, entry->next); entry->next = (void *) 0; entry->prev = (void *) 0; } /** * list_empty: check is list is empty. */ static inline int list_empty(struct list_head *head) { return head->next == head; } /** * list_entry: retrieve the container struct. */ #define list_entry(ptr, type, member) \ ((type *)((char *)(ptr)-(unsigned long)(&((type *)0)->member))) /** * list_for_each: traverse through the list. */ #define list_for_each(pos, head) \ for (pos = (head)->next; pos != (head); \ pos = pos->next) /** * list_for_each_delete: traverse through the list with safe delete */ #define list_for_each_delete(pos, n, head) \ for (pos = (head)->next, n = pos->next; pos != (head); \ pos = n, n = pos->next) #endif In my main file i have a structure like this typedef struct nsds_s { // struct nsds_s *link; char client_id[18]; char nguid[56]; char mac[18]; short int machine; cbt_list_head list; }nsds_st; nsds_st* nvmar_reply_packet (nsds_st *ptr) { char cmd[500]; MYSQL_RES *result; MYSQL_ROW row; int num_rows; nsds_st *r=NULL,*temp=NULL; INIT_LIST_HEAD(&r->list); // i have intialized the head here sprintf(cmd, "SELECT * FROM ABCD_TABLE WHERE MACHINE = %.4x", ptr->machine); /* Running the sql query to check for fields with value in database */ if (mysql_query(g_db_ctxt.db_handle.nsds, cmd)) { FINISH_WITH_ERROR(g_db_ctxt.db_handle.nsds); num_rows = -1; goto done; } result = mysql_store_result(g_db_ctxt.db_handle.nsds); if (result == NULL) { NVMAR_ERROR("Select Query Failed"); FINISH_WITH_ERROR(g_db_ctxt.db_handle.nsds); num_rows = -1; goto done; } num_rows = mysql_num_rows(result); while ((row = mysql_fetch_row(result))) { r= (nsds_st *)malloc(sizeof(nsds_row_st)); memcpy(r->registered_hosts,row[5],strlen(row[5])+1); memcpy(r->client_id,row[0],strlen(row[0])+1); //list_add(struct list_head *new, struct list_head *head) // list_add1(&temp->list,&r->list); //it will go wrong here, i want to make r(structure r as head and then add other strutcure objects) // r->link = NULL; //list_add1(r, ptr); #if 0 if(temp == NULL) { temp =r; ptr=r; } else { temp->link = r; temp = temp->link; } #endif } done: mysql_free_result(result); return r; }
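Since no answer is recorded here, a minimal sketch of the usual pattern with the helpers defined above: keep a standalone list head (a plain st_list_head, not a malloc'd node) and link every allocated nsds_st into it through its embedded list member. The payload copying is abbreviated, and it assumes the embedded member is the list_head type from the list.h shown in the question:
st_list_head head;
INIT_LIST_HEAD(&head);                       /* the head is not a node, just an anchor */

while ((row = mysql_fetch_row(result))) {
    nsds_st *r = (nsds_st *) malloc(sizeof(*r));
    if (r == NULL)
        break;
    memcpy(r->client_id, row[0], strlen(row[0]) + 1);   /* fill the payload fields */
    list_add1(&r->list, &head);              /* append this node at the end of the list */
}

/* traversal: recover each nsds_st from its embedded list member */
struct list_head *pos;
list_for_each(pos, &head) {
    nsds_st *entry = list_entry(pos, nsds_st, list);
    /* use entry->client_id ... */
}
If the list has to outlive the function, the head can live in a caller-owned struct (or be allocated) instead of being a local, and the function can return that rather than the last node.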
{ "language": "en", "url": "https://stackoverflow.com/questions/32658260", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Get the IDs of all the controls of Office 2010 ribbon and interact with ribbon shortcut from custom Add-ins I want to get all the controls list in the powerpoint 2010 ribbon like the one in the powerpoint option -> customize ribbon-> all commands. Furthermore, I want to interact with ribbon shortcut from custom Add-ins A: You will find all office id's you want on microsoft website http://www.microsoft.com/en-us/download/details.aspx?id=6627. You will find your id's in PowerPointControls.xlsx file. For create you own menu : Open your Ribbon.xml And add following after <ribbon> <tabs> <tab idMso="TabAddIns"> <group id="ContentGroup" label="Content"> <button id="textButton" label="Insert Text" screentip="Text" onAction="OnTextButton" supertip="Inserts text at the cursor location."/> <button id="tableButton" label="Insert Table" screentip="Table" onAction="OnTableButton" supertip="Inserts a table at the cursor location."/> </group> </tab> </tabs> For a custom addin shortcut, I think you have to add a new tab : <tab id="YourTab" visible="true" label="Name"> <group id="YourGroup" label="name"> <button onAction="CallAddinsHere();" label="Call add-ins"/> </group> </tab> If you want to interact with custom addin shortcut, have a look at : Automate Office Ribbon through MSAA (CSOfficeRibbonAccessibility)
{ "language": "en", "url": "https://stackoverflow.com/questions/12639337", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: How to print the nodes in bst whose grandparent is a multiple of five? I'm sorry that was my first time for asking question in stackoverflow. I just read the faq and knew I disobeyed the rules. I was not just coping and pasting the questions. I use an in-order traverse method to do the recursion and check whether the node is a multiple of five and I don't know what to do next. Should I use a flag to check something? void findNodes(BSTNode<Key,E> *root) const { if(root==NULL) return; else { if(root->key()%5==0)//I know it's wrong, but I don't know what to do { findNodes(root->left()); cout<<root->key()<<" "; findNodes(root->right()); } else { findNodes(root->left()); findNodes(root->right()); } } } A: Printing nodes whose grandparent is a multiple of 5 is complicated as you have to look "up" the tree. It is easier if you look at the problem as find all the nodes who are a multiple of 5 and print their grandchildren, as you only have to go down the tree. void printGrandChildren(BSTNode<Key,E> *root,int level) const{ if(!root) return; if(level == 2){ cout<<root->key()<<" "; return; }else{ printGrandChildren(root->left(),level+1); printGrandChildren(root->right(),level+1); } } Then modify your findNodes to void findNodes(BSTNode<Key,E> *root) const { if(root==NULL) return; else { if(root->key()%5==0) { printGrandChildren(root,0); } else { findNodes(root->left()); findNodes(root->right()); } } } A: Try this: int arr[height_of_the_tree]; //arr[10000000] if needed; void findNodes(BSTNode<Key,E> *root,int level) const { if(root==NULL) return; arr[level] = root -> key(); findNodes(root -> left(), level + 1); if(2 <= level && arr[level - 2] % 5 == 0) cout << root->key() << " "; findNodes(root -> right(), level + 1); } int main() { ... findNodes(Binary_search_tree -> root,0); ... } A: If you're just trying to print our all child nodes which have an ancestor which has a key which is a multiple of 5, then one way would be to pass a bool to your findNodes function which stores this fact. Something along the lines of: void findNodes(BSTNode<Key,E>* node, bool ancesterIsMultOf5) const { if (node) { if (ancesterIsMultOf5) std::cout << node->key() << std::endl; ancesterIsMultOf5 |= (node->key() % 5 == 0); findNodes(node->left(), ancesterIsMultOf5); findNodes(node->right(), ancesterIsMultOf5); } } Alternately, if you're trying to draw the tree, it has been answered before: C How to "draw" a Binary Tree to the console A: Replace the following cout<<root->key()<<" "; with if(root->left) { if(root->left->left) cout<<root->left->left->key()<< " "; if(root->left->right) cout<<root->left->right->key()<< " "; } if(root->right) { if(root->right->left) cout<<root->right->left->key()<< " "; if(root->right->right) cout<<root->right->right->key()<< " "; }
{ "language": "en", "url": "https://stackoverflow.com/questions/13637946", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Copy & Paste plain text within HTML I have a small templating webapp, where the authors can add placeholders within their richttext editor. To prevent errors I want to provide a list of valid placeholders which then can be copied and pasted. My problem here is the way I can restrict what get's copied. I tried two approaches, both failed. First of all how the list of placeholders looks like: <ul class="placeholders"> <li>${address.name}</li> <li>${address.street}</li> <li>${address.city}</li> <li>${address.zip}</li> </ul> * *Copy to clipboard with JS: This doesn't work as the clipboard cannot be accessed because of security concerns. I tried the ZeroClipboard but it's documentation is not clear for me and even the examples I found here at SO weren't helpful. I want to copy the content of the <li> if the user clicks on it. I tried to set instantiate with new ZeroClipboard(jQuery('ul.placeholders li'). But this didn't work at all. In Firefox as soon as I hover over an li the loading wheel appears. *Just select the whole text with a range object: This basically works with the selection, but when I paste it in the Rich Text Editor, Firefox und IE also paste the li tag. Again as I don't have access to the clipboard I can't control, what gets copied. And as it is a RTE, I don't have much control over how it gets pasted. Has anyone an idea on how I could make either of the approaches work?
{ "language": "en", "url": "https://stackoverflow.com/questions/26468445", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Performance Issue with QClipboard Class I have an instance of QClipboard and I want to shift data to it whenever user clicks somewhere in the Application. It seems like there are sometimes performance issues with QClipboard which causes the application to freeze because data gets put on OS clipboard of linux. QClipboard* clipboard = QApplication::clipboard(); clipboard->setText(QString("Glorious Text"), QClipboard::Clipboard); It does not happen every time but every fifth or sixth click it freezes for some seconds. So I can not really reproduce properly. A: https://www.medo64.com/2019/12/copy-to-clipboard-in-qt/ solved it for me. QClipboard* clipboard = QApplication::clipboard(); clipboard->setText(text, QClipboard::Clipboard); if (clipboard->supportsSelection()) { clipboard->setText(text, QClipboard::Selection); } #if defined(Q_OS_LINUX) QThread::msleep(1); //workaround for copied text not being available... #endif
{ "language": "en", "url": "https://stackoverflow.com/questions/67212286", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: IBM dashDB sql statement for "insert row if not already existing" I am currently working with the IBM dashDB and I need to know the sql statement for inserting a new row if this row does not already exist based to certain criteria. So, something like this: INSERT INTO tablexyz (Col1, Col2, Col3) VALUES (val1, val2, val3) IF NOT EXIST (SELECT * FROM tablexyz WHERE val1 = x, val2 = y) How can I do this? A: Depending on the context you could define a primary key or unique index on (Col1,Col2) and let the plain Insert fail if there is a duplicate. Or define a procedure that runs the Select and checks the return code. However, the closest match to your SQL example would be a MERGE statement like MERGE into tablexyz using ( values (1,2,9) ) newdata(val1,val2,val3) on tablexyz.Col1 = newdata.val1 and tablexyz.Col2 = newdata.val2 when not matched then insert values(val1,val2,val3);
{ "language": "en", "url": "https://stackoverflow.com/questions/40312650", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Angular 8: value used in component.html is not the value in the component.ts (azure app service) I got a really strange behavior in one of my angular apps. In a component.html, I want to display "UAT" and color the angular mat markups with a bright orange when in UAT, otherwise it should be blue and no mention of UAT anywhere (aka PROD). In local, no problem. When I deploy to azure nothing makes sense. So here is a bit of my code. In my angular solution I got this env.json file that looks like this { "production": false, "isTestEnvironment": false, } In VSTS, I build the app once, and use it for all deployments. So in the UAT deployment, I replace isTestEnvironment with "true", and ship it to azure. For the production deployment, I replace production with "true", isTestEnvironment by "false", and ship it. This env.json is extracted using something like this in the app.module.ts const appInitializer = { provide: APP_INITIALIZER, useFactory: configFactory, deps: [AppConfig], multi: true }; and appInitializer is registered in the providers section. And the AppConfig injectable is: import {Injectable} from '@angular/core'; import {HttpClient} from '@angular/common/http'; @Injectable() export class AppConfig { public config: any; constructor(private http: HttpClient) {} public load() { return new Promise((resolve, reject) => { this.http.get('env.json').subscribe((p: string) => { this.config = p; resolve(true); }); }); } } In my component's constructor, I start by getting the isTestEnvironment: this.isTestEnvironment = appConfig.config.isTestEnvironment; At this very moment, when I console.log the values, it is always correct: In UAT, it's true. In PROD, it's false. And in local, it's whatever I put in the env.json, even when I hot change the value while ng serve is running. When I look at the env.json that is deployed (using kudu/powershell), the file is well formed with the right values across all environments. So far so good. But this code then breaks everything in the component.html: <h5 style="margin: auto;" *ngIf="isTestEnvironment">UAT</h5> In UAT and PROD, it always shows up, regardless of isTestEnvironment being true or false just instants before in the console log. However it works just fine in local debug using VSC (1.42.1) and node (v10.18.1). At this point, Angular is telling me true == false, so I'm at a complete loss. I triple checked: at no point in the angular solution is the variable isTestEnvironment set to a value anywhere else than in the env.json. So yeah, I'm either completely missing something obvious or something is really wrong in my code. Any help will be greatly appreciated. Thanks. A: So, I figured it out. TLDR: It was caused by the VSTS release task called "Azure App Service Deploy", which changes the variables to strings in the "File Transform & Variable Substitution Option" section. After looking again at the deployed file I finally saw that the file that is { "isTestEnvironment": true, } in my sources had become { "isTestEnvironment": "true", } on the azure server. This caused every check we made against this supposedly boolean variable to be bogus, since a non-empty string is equivalent to true in js. So I'm now using JSON.parse(myVariable) when I extract it from the env.json file. I could reproduce it in local and tested it. It works. To anyone using the same release task as we do, be careful that booleans will be changed into strings.
I checked our release pipeline variables and they are NOT declared with quotes around them; they are plain true/false values. In any case, thank you guys for giving potential workarounds.
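A small sketch of the JSON.parse guard described above, applied where the component reads the config (everything else as in the question):
const raw = appConfig.config.isTestEnvironment;
// The release task may have turned the boolean into the string "true"/"false",
// so parse strings and pass real booleans through unchanged.
this.isTestEnvironment = typeof raw === 'string' ? JSON.parse(raw) : !!raw;
Unlike a plain truthiness check on the string, JSON.parse('false') correctly yields false.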
{ "language": "en", "url": "https://stackoverflow.com/questions/60227802", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Socket implementation is showing a problem in react js On implementing socket.io-client in my project, it shows an error and the connection status is false. This is the socket code: socket = io('url', { transports: ['websocket'] }) console.log(socket) and I am using socket.io-client version "socket.io-client": "^4.5.1", but the console is showing It seems you are trying to reach a Socket.IO server in v2.x with a v3.x client, but they are not compatible (more information here: https://socket.io/docs/v3/migrating-from-2-x-to-3-0/) A: Changed the version from "socket.io-client": "^4.5.1" to "socket.io-client": "^1.7.4", and changed import { io } from "socket.io-client"; to import io from "socket.io-client";
{ "language": "en", "url": "https://stackoverflow.com/questions/73003253", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How and when to call the base class constructor in C# How and when to call the base class constructor in C# A: It's usually a good practice to call the base class constructor from your subclass constructor to ensure that the base class initializes itself before your subclass. You use the base keyword to call the base class constructor. Note that you can also call another constructor in your class using the this keyword. Here's an example on how to do it: public class BaseClass { private string something; public BaseClass() : this("default value") // Call the BaseClass(string) ctor { } public BaseClass(string something) { this.something = something; } // other ctors if needed } public class SubClass : BaseClass { public SubClass(string something) : base(something) // Call the base ctor with the arg { } // other ctors if needed } A: You can call the base class constructor like this: // Subclass constructor public Subclass() : base() { // do Subclass constructor stuff here... } You would call the base class if there is something that all child classes need to have setup. objects that need to be initialized, etc... Hope this helps.
{ "language": "en", "url": "https://stackoverflow.com/questions/5335488", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Avoid Segfault in C++ code if user redefines initialize() in Ruby with Rice One problem I am struggling with while writing a C++ extension for Ruby is to make it really safe even if the user does silly things. He should get exceptions then, but never a SegFault. A concrete problem is the following: My C++ class has a non-trivial constructor. Then I use the Rice API to wrap my C++ class. If the user redefines initialize() in his Ruby code, then the initialize() function created by Rice is overwritten and the object is neither allocated nor initialized. One toy example could be the following: class Person { public: Person(const string& name): m_name (name) {} const string& name() const { return m_name; } private: string m_name; } Then I create the Ruby class like this: define_class<Person>("Person") .define_constructor(Constructor<Person, const string&>(), Arg("name")) .define_method("name", &Person::name); Then the following Ruby Code causes a Segfault require 'MyExtension' class Person def initialize end end p = Person.new puts p.name There would be two possibilities I would be happy about: Forbid overwriting the initialize function in Ruby somehow or check in C++, if the Object has been allocated correctly and if not, throw an exception. I once used the Ruby C API directly and then it was easy. I just allocated a dummy object consisting of a Null Pointer and a flag that is set to false in the allocate() function and in the initialize method, I allocated the real object and set the flag to true. In every method, I checked for that flag and raised an exception, if it was false. However, I wrote a lot of stupid repetitive code with the Ruby C API, I first had to wrap my C++ classes such that they were accessible from C and then wrap and unwrap Ruby types etc, additionally I had to check for this stupid flag in every single method, so I migrated to Rice, which is really nice and I am very glad about that. In Rice, however, the programmer can only provide a constructor which is called in the initialize() function created by rice and the allocate() function is predefined and does nothing. I don't think there is an easy way to change this or provide an own allocate function in an "official" way. Of course, I could still use the C API to define the allocate function, so I tried to mix the C API and Rice somehow, but then I got really nasty, I got strange SegFaults and it was really ugly, so I abandoned that idea. Does anyone here have experiences with Rice or does anyone know how to make this safe? A: How about this class Person def initialize puts "old" end alias_method :original_initialize, :initialize def self.method_added(n) if n == :initialize && !@adding_initialize_method method_name = "new_initialize_#{Time.now.to_i}" alias_method method_name, :initialize begin @adding_initialize_method = true define_method :initialize do |*args| original_initialize(*args) send method_name, *args end ensure @adding_initialize_method = false end end end end class Person def initialize puts "new" end end Then calling Person.new outputs old new i.e. our old initialize method is still getting called This uses the method_added hook which is called whenever a method is added (or redefined) at this point the new method already exists so it's too late to stop them from doing it. Instead we alias the freshly defined initialize method (you might want to work a little harder to ensure the method name is unique) and define another initialize that calls the old initialize method first and then the new one. 
If the person is sensible and calls super from their initialize then this would result in your original initialize method being called twice - you might need to guard against this You could just throw an exception from method_added to warn the user that they are doing a bad thing, but this doesn't stop the method from being added: the class is now in an unstable state. You could of course realias your original initialize method on top of their one. A: In your comment you say that in the c++ code, this is a null pointer. If it is possible to call a c++ class that way from ruby, I'm afraid there is no real solution. C++ is not designed to be fool-proof. Basically this happens in c++; Person * p = 0; p->name(); A good c++ compiler will stop you from doing this, but you can always rewrite it in a way the compiler cannot detect what is happening. This results in undefined behaviour, the program can do anything, including crash. Of course you can check for this in every non-static function; const string& Person::name() const { if (!this) throw "object not allocated"; return m_name; } To make it easier and avoid double code, create a #define; #define CHECK if (!this) { throw "object not allocated"; } const string& name() const { CHECK; return m_name; } int age() const { CHECK; return m_age; } However it would have been better to avoid in ruby that the user can redefine initialize. A: This is an interesting problem, not endemic to Rice, but to any extension which separates allocation and initialization into separate methods. I do not see an obvious solution. In Ruby 1.6 days, we did not have allocate/initialize; we had new/initialize. There may still be code left over in Rice which defines MyClass.new instead of MyClass.allocate and MyClass#initialize. Ruby 1.8 separated allocation and initialization into separate methods (http://blade.nagaokaut.ac.jp/cgi-bin/scat.rb/ruby/ruby-talk/23358) and Rice uses the new "allocation framework". But it has the problem you pointed out. The allocate method cannot construct the object, because it does not have the parameters to pass to the constructor. Rice could define .new instead (as was done on 1.6), but this doesn't work with #dup and Marshal.load. However, this is probably the safer (and right) solution. A: I now think that this is a problem of the Rice library. If you use Rice in the way it is documented, you get these problems and there is no obvious way to solve it and all workarounds have drawbacks and are terrible. So I guess the solution is to fork Rice and fix this since they seem to ignore bug reports.
{ "language": "en", "url": "https://stackoverflow.com/questions/12496817", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Python class returning multiple instances I'm musing over the design of a class. Currently, I have a list comprehension over a list of data that instantiates an instance of the class on each member of the list, thus returning me a list of instances of my class. Would it be better, or indeed possible, to have instead a class method that takes a list and returns a list of instances? Essentially, I'm wondering, would: data = [lots of data] [MyClass(point) for point in data] or @classmethod def from_list(cls, data_list): return [cls(point) for point in data_list] be better/more pythonic? If it matters, in the usage I intend, I will always be instantiating the class from a list of data. A: As it stands, there isn't much to choose between them. However, the @classmethod has one major plus point; it is available to subclasses of MyClass. This is far more Pythonic than having separate functions for each type of object you might instantiate in a list. A: I would argue that the first method would be better (list comprehension) since you will always be initializing the data from a list. This makes things explicit.
{ "language": "en", "url": "https://stackoverflow.com/questions/21203515", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Are sparse matrices typically stored in column major order or row major order? A little background: I'm interested in doing some research on sparse matrix*vector multiplication. I've been looking through this database of sparse matrices: The University of Florida Sparse Matrix Collection I noticed that there are 3 formats the matrices are available in: * *MATLAB (.mat) *Matrix Market (.mtx) *Harwell-Boeing (.rb) It appears that the matrices are stored in column major order (i.e. columns are stored one right after each other, rather than rows right after each other). However, in the literature it appears that the compressed sparse row (CSR) format is apparently the most common format (see "Scientific Computing Kernels on the Cell Processor Samuel"). I know that somehow just the index (i,j) and the value at those coordinates are stored, but I think I would have to reformat the data first in order to perform the matrix*vector multiplication efficiently. For my implementation, it would make more sense to have the data stored in row major order, so that the elements in a row could be accessed in order because they would be stored in consecutive memory addresses. The CSR format appears to assume data is stored in row major order however. So what I'm wondering is this: How is data typically stored in memory for sparse matrices? And does part of the sparse matrix*vector computation involve regrouping the data from column-major to row-major order? I'm asking because I'm wondering if this conversion is typically considered in sparse matrix benchmark results. A: I am afraid there is no short answer. The best storage scheme depends on the problem you are trying to solve. The things to consider are not only the storage size, but also how efficient, from a computational and hardware perspective, access and operations on this storage format are. For sparse matrix vector multiplication CSR is a good format as it allows linear access to the elements of a matrix row which is good for memory and cache performance. However CSR induces a more irregular access pattern into the multiplicand: fetch elements at different positions, depending on what index you retrieve from the row; this is bad for cache performance. A CSC matrix vector multiplication can remove the irregular access on the multiplicand, at the cost of more irregular access in the solution vector. Depending on your matrix structure you may choose one or another. For example a matrix with a few, long rows, with a similar nonzero distribution may be more efficient to handle in a CSC format. Some examples in the well known software packages/tools: * *To the best of my knowledge Matlab uses a column storage by default. *Scientific codes (and BLAS) based on Fortran also use a column storage by default. This is due mostly to historical reasons since Fortran arrays were AFAIK column oriented to begin with and a large number of Dense/Sparse BLAS codes were originally written in Fortran. *Eigen also uses a column storage by default, but this can be customised. *Intel MKL requires you to choose IIRC. *Boost ublas uses a row based storage format by default. *PetSC, which is a widely used tool in larger scale scientific computing, uses a row based format (SequentialAIJ stands for CSR), but also allows you to choose from a wide variety of storage formats (see the MatCreate* functions on their documentation) And the list could go on. 
As you can see there is some spread between the various tools, and I doubt the criteria was the performance of the SpMV operation :) Probably aspects such as the common storage formats in the target problem domains, typical expectations of programmers in the target problem domain, integration with other library aspects and already existing codes have been the prime reason behind using CSR / CSC. These differ on a per tool basis, obviously. Anyhow, a short overview on sparse storage formats can be found here but many more storage formats were/are being proposed in sparse matrix research: * *There are also block storage formats, which attempt to leverage locally dense substructures of the matrix. See for example "Fast Sparse Matrix-Vector Multiplication by Exploiting Variable Block Structure" by Richard W. Vuduc, Hyun-Jin Moon. *A very brief but useful overview of some common storage formats can be found on the Python scipy documentation on sparse formats http://docs.scipy.org/doc/scipy/reference/sparse.html. *Further information of the advantages of various formats can be found in the following texts (and many others): * *Iterative methods for sparse linear systems, Yousef Saad *SPARSKIT: A basic tool kit for sparse matrix computation, Tech. Rep. CSRD TR 1029, CSRD, University of Illinois, Urbana, IL, 1990. *LAPACK working note 50: Distributed sparse data structures for linear algebra operations, Tech. Rep. CS 92-169, Computer Science Department, University of Tennessee, Knoxville, TN, 1992. I have been doing research in the sparse matrix area on creating custom hardware architectures for sparse matrix algorithms (such as SpMV). From experience, some sparse matrix benchmarks tend to ignore the overhead of conversion between various formats. This is because, in principle, it can be assumed that you could just adapt the storage format of your entire algorithm. SpMV itself is hardly used in isolation, and generally a part of some larger iterative algorithm (e.g. a linear or nonlinear solver). In this case, the cost of converting between formats can be amortised across the many iterations and total runtime of the entire algorithm. Of course you would have to justify that your assumption holds in this situation. As a disclaimer, in my area we are particularly inclined to make as many assumptions as possible, since the cost and time to implement a hardware architecture for a linear solver to benchmark a new SpMV storage format is usually substantial (on the order of months). When working in software, it is a lot easier to test, qualify and quantify your assumptions by running as many benchmarks as possible, which would probably take less than a week to set up :D A: This is not the answer but can't write in the comments. Best representation format depends on the underlying implementation. For example, let M = [m_11 m_12 m_13; m_21 m_22 m_23] == [r1; r2] == [c1 c2 c3], where r1, r2 are the rows and c1, c2, c3 are the columns, and v = [v1; v2; v3]. You can implement M*v as M*v = [r1.v; r2.v] as dot product of vectors, or M*v = v1*c1 + v2*c2 + v3*c3 where * is the scalar vector multiplication. You can minimize the number of operations by choosing the format depending on the sparsity of the matrix. Usually the fewer the rows/columns the better.
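To make the access-pattern argument in the first answer (and the row/column view in the second) concrete, here is a plain C sketch of y = A*x in both layouts; the array and variable names are generic, and real kernels would add things like blocking or vectorization:
#include <stddef.h>

/* CSR: values and column indices stored row by row; row_ptr has n_rows+1 entries.
   Each matrix row is read linearly, but x is gathered at irregular positions. */
void spmv_csr(size_t n_rows, const size_t *row_ptr, const size_t *col_idx,
              const double *val, const double *x, double *y)
{
    for (size_t i = 0; i < n_rows; ++i) {
        double sum = 0.0;
        for (size_t k = row_ptr[i]; k < row_ptr[i + 1]; ++k)
            sum += val[k] * x[col_idx[k]];
        y[i] = sum;
    }
}

/* CSC: values and row indices stored column by column; col_ptr has n_cols+1 entries.
   Here x is read linearly, but y is scattered at irregular positions instead. */
void spmv_csc(size_t n_rows, size_t n_cols, const size_t *col_ptr,
              const size_t *row_idx, const double *val,
              const double *x, double *y)
{
    for (size_t i = 0; i < n_rows; ++i)
        y[i] = 0.0;
    for (size_t j = 0; j < n_cols; ++j)
        for (size_t k = col_ptr[j]; k < col_ptr[j + 1]; ++k)
            y[row_idx[k]] += val[k] * x[j];
}
The CSR loop is the "row dot products" view (M*v = [r1.v; r2.v]) and the CSC loop is the "scaled columns" view (M*v = v1*c1 + v2*c2 + v3*c3) from the second answer.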
{ "language": "en", "url": "https://stackoverflow.com/questions/37309771", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: VBA Dynamically Filter Multi-Column Table Using Column of Product Numbers I am trying to figure out how to filter a data table of products as I type a product # into an Active X Control listbox. I named the data table (found in the Design tab which is visible upon having focus on the table) 'Item' and linked an empty cell 'A1' (not positive what this does but know I need to do it for this solution to work) in the listbox properties. I then wrote the following macro: ActiveSheet.Range("Item").AutoFilter , Field:=1, Criteria1:="" & CStr(Excel.ActiveSheet.Range("A1").Value) & "" After typing the macro I exit Design Mode and go ahead typing in the listbox. If my list of product IDs looked liked this: 101 205 309 413 517 If I type 10 in the listbox, I would like all rows whose product number starts with 10 to remain in the table. However, the only thing this macro is allowing me to do is type the exact product number and then it will display that row. I tried adding a * in the "" following the string value in the Criteria but this did not work to make the Criteria 'begins with'. How can I modify my code to make the search filter as I type a product number to show rows whose product numbers start with the number I have typed? A: Add an “=“ at the beginning of your Criteria and an asterisk at its end Range("Item").AutoFilter , Field:=1, Criteria1:="=" & CStr(Range("A1").Value) & "*" See I omit ”ActiveSheet” since it’s implicitly assumed But you’d better always explicitly qualify your ranges up to workbook and worksheet references like: With WorkBooks("MyWorkbookName").WorkSheets("MySheetName") .Range("Item").AutoFilter , Field:=1, Criteria1:="=" & CStr(.Range("A1").Value) & "*" End With
{ "language": "en", "url": "https://stackoverflow.com/questions/48970216", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to allow one jPOS ISOField to contain Control character? I am using w3c dom object that represnet an XML file to create the jPOS ISOMessage object. (loop over the dom object ans set ISOMessage Fields) The Question is: In the resulting ISOMessage Object, How to allow one ISOField to contain Control character? Note: I am using a Custom Packager that reads the format of the ISOMessage from xml file with such contents: <?xml version="1.0" encoding="UTF-8" standalone="no"?> <!DOCTYPE isopackager SYSTEM "genericpackager.dtd"> <isopackager> <isofield id="0" length="4" name="MESSAGE TYPE INDICATOR" pad="true" class="org.jpos.iso.IFE_NUMERIC"/> <isofield id="1" length="16" name="BIT MAP" class="org.jpos.iso.IFB_BITMAP"/> <isofield id="2" length="19" name="PAN - PRIMARY ACCOUNT NUMBER" pad="false" class="org.jpos.iso.IFE_LLNUM"/> .............. ................. ................... A: You're encoding in UTF-8, so you can just encode the Unicode control characters like any character. But something tells me you mean something else. A: You can also change your fieldpackager to a BINARY type (see IF*BINARY) and use an hex representation, i.e.: <field id="xx" value="0123456789ABCDEF" type="binary" />
{ "language": "en", "url": "https://stackoverflow.com/questions/1412245", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: php json_encode fails without error PHP Version: 5.6.15RC1 Compiler: MSVC11 (Visual C++ 2012) Architecture:x86 I am having problem json_encode ing a multi dimensional php array. The main problem is that no error is generated (json_last_error=0). The array is indexed by a string and testing each of these indexes seperately has been done within the array compilation has been done on the outer indexes by using: $test = json_encode($account[$q_id]); if (strlen($test) < 2) { $error = json_last_error(); } Stepping through several showed correct json output. Then a breakpoint in the if statement was placed to identify any encoding issues, however it never stopped on the $error... line. Database connection: $connection_cfg = array("Database" => $db["database"], "CharacterSet" => "UTF-8", "UID" => $db["uname"], "PWD" => $db["pword"], "ReturnDatesAsStrings" =>true); $this->connection = sqlsrv_connect($db["host"], $connection_cfg); I am stuck on how to proceed to debug this. A: An empty JSON string looks like this: [] That is 2 characters. Your error-detection will not work because it checks for 1 or less characters. Change if (strlen($test) < 2) { $error = json_last_error(); } To if (strlen($test) == 2) { $error = json_last_error(); } And your error-detection should work. If you are unable to resolve the problem that is indicated in the JSON error, please update us with the error. A: Lesson learnt, there should always be some distrust in debugging tools (netbeans watches etc). I created the error check when seeing a random '{' as the output (don't know why netbeans displayed it like this). It was the javascript that was broken in a line such as var a=<?php echo $thejsonstuff ?> and I attempted the php debug from netbeans results first.
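As a side note to the debugging approach above: the string-length check is an indirect signal, since json_encode reports failure itself. A more direct sketch of the check (json_last_error_msg needs PHP 5.5+):
$test = json_encode($account[$q_id]);
if ($test === false || json_last_error() !== JSON_ERROR_NONE) {
    // e.g. "Malformed UTF-8 characters, possibly incorrectly encoded"
    error_log('json_encode failed: ' . json_last_error_msg());
}
That would also have confirmed quickly that json_encode itself was fine and the breakage was in the surrounding JavaScript, as the asker eventually found.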
{ "language": "en", "url": "https://stackoverflow.com/questions/33419117", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to redirect users to IE when content using Active X controls is opened with Edge? Based on this link: Edge redirecting to IE (at Reddit), there is apparently a way to show a special purpose page displaying: This website needs Internet Explorer This website uses technology that will work best in Internet Explorer. Open with Internet Explorer Keep going in Microsoft Edge It will be very helpful for my applications using Active X controls. I already have code such as: <meta http-equiv="X-UA-Compatible" content="IE=10;requiresActiveX=true" /> in all my head sections (added in the past to avoid the Metro style for Windows 8), but it doesn't do the trick (unlike what someone mentioned in the link above). A: One way of doing it is by including your site on the Enterprise Mode Site List so it will open in IE11 automatically: The steps and the details can be found in this blog post by the Microsoft Edge team: http://blogs.windows.com/msedgedev/2015/08/26/how-microsoft-edge-and-internet-explorer-11-on-windows-10-work-better-together-in-the-enterprise/
{ "language": "en", "url": "https://stackoverflow.com/questions/32571929", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Resize view and change layout of controls based on subview In my viewcontroller I have created a container view and some other controls. Based on some conditional logic the appropriate subview (xib) is created and added to the container view in ViewDidLoad() I am trying to work out a how to make the container view change height so it's the same size as the subview and make all the other controls change position (i.e. by moving up/ down) depending on the height. I am trying to do this by using layout constraints because I'm targetting iPhones and iPads. I tried adding a height constraint so the height of the container view is the same as the height of the subview but this did not work: MyContainerView.HeightAnchor.ConstraintEqualTo(MySubView.HeightAnchor, 1).Active = true; As you can see in the screen shots the container view does not change height. What am I doing wrong? Thanks.
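No answer is recorded for this one, so here is only a hedged sketch of one common cause and fix: an equal-height constraint added in code has no effect if the xib-loaded subview still uses autoresizing masks, or if the storyboard gives the container a fixed height that wins. The names mirror the question; the edge-pinning approach below is an assumption about the intended layout:
// In ViewDidLoad, after deciding which subview (xib) to show
MySubView.TranslatesAutoresizingMaskIntoConstraints = false;
MyContainerView.AddSubview(MySubView);

NSLayoutConstraint.ActivateConstraints(new[]
{
    MySubView.TopAnchor.ConstraintEqualTo(MyContainerView.TopAnchor),
    MySubView.LeadingAnchor.ConstraintEqualTo(MyContainerView.LeadingAnchor),
    MySubView.TrailingAnchor.ConstraintEqualTo(MyContainerView.TrailingAnchor),
    MySubView.BottomAnchor.ConstraintEqualTo(MyContainerView.BottomAnchor)
});
With all four edges pinned, the container's height follows the subview's own Auto Layout height, provided any fixed height constraint on the container from the storyboard is removed or given a lower priority; assuming the other controls are constrained to the container's bottom edge, they will then move up or down with it.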
{ "language": "en", "url": "https://stackoverflow.com/questions/47268746", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to set a security context for java HttpUrlConnection? I am testing HTTP endpoints with HttpUrlConnection: HttpURLConnection httpCon = (HttpURLConnection) new URL(url).openConnection(); httpCon.setDoOutput(true); httpCon.setRequestMethod(httpMethod); httpCon.setRequestProperty("Content-Type", "application/json"); requestHeaders.forEach(httpCon::setRequestProperty); I also add a Basic auth header at the end. The only problem is that there is a request filter that reads the Principal from the SecurityContext of the request for authorization. So far I have not found a way to set the SecurityContext and the Principal when using HttpURLConnection. Is there a way to do so? Edit: So the scenario is that there is an embedded Rest Server that serves the requests. I use HttpURLConnection to send requests to a given endpoint. The Rest Server is using basic auth (so I add the credentials in the header), but also does authz check with request filters. This request filter reads the Principal from the request, and does the authz check based on that. I need a way to add the principal to the request I send. A: To download something that requires authentication, I do public InputStream openStream(URL url, String user, String password) throws IOException { if (user != null) { Authenticator.setDefault(new MyAuthenticator(user, password)); } URLConnection connection = url.openConnection(); return connection.getInputStream(); } A more up to date answer since Java 11 would be to use the HttpClient and it's Builder. In there you can directly set the authenticator.
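A sketch of the Java 11 route mentioned at the end of the answer, with the authenticator set on the client builder; the class name, endpoint, credentials and body are placeholders, and the server still derives its Principal from the Basic credentials it receives:
import java.net.Authenticator;
import java.net.PasswordAuthentication;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class BasicAuthProbe {
    public static void main(String[] args) throws Exception {
        String url = "http://localhost:8080/api/thing";   // placeholder endpoint
        HttpClient client = HttpClient.newBuilder()
                .authenticator(new Authenticator() {
                    @Override
                    protected PasswordAuthentication getPasswordAuthentication() {
                        return new PasswordAuthentication("user", "secret".toCharArray());
                    }
                })
                .build();

        HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString("{}"))
                .build();

        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
    }
}
The client answers the server's 401 challenge with those credentials, and the request filter on the server side is typically what turns them into the SecurityContext Principal; the client cannot set the Principal directly.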
{ "language": "en", "url": "https://stackoverflow.com/questions/73908225", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to customize the bottom bar in react native? I am using a bottom bar in react native. How do I change the background color and make the active bar highlighted with a line at bottom as shown in the image? Code - export const InternalStacks = TabNavigator({ Home: { screen: HomeStack }, Graph: { screen: GraphStack } },{ navigationOptions: ({ navigation }) => ({ tabBarIcon: ({ focused, tintColor }) => { const { routeName } = navigation.state; switch(routeName){ case 'Home': iconName = require('../assets/icons/home.png'); iconNameFocused = require('../assets/icons/home.png'); break; case 'Graph': iconName = require('../assets/icons/chart.png'); iconNameFocused = require('../assets/icons/chart.png'); break; } if(focused) return ( <Image style={{width: 20, height: 20, tintColor }} source={iconNameFocused} /> ); else return ( <Image style={{width: 20, height: 20, tintColor }} source={iconName} /> ); } }), tabBarComponent: TabBarBottom, tabBarPosition: 'bottom', tabBarOptions: { activeTintColor: '#FBC530', inactiveTintColor: 'black', }, animationEnabled: false, swipeEnabled: false, }); Current design - Required Design - Tried with the below, tabBarColor: '#E64A19', backgroundColor: 'white', but none of the worked. What is the better way to achieve the required design? PS - Not worried about the icons. A: You have access to more tabBarOptions that might help. Here's how we style ours: { tabBarPosition: 'bottom', tabBarOptions: { showLabel: false, showIcon: true, activeTintColor: black, inactiveTintColor: gray, activeBackgroundColor: white, inactiveBackgroundColor: white, style: { backgroundColor: white, }, tabStyle: { backgroundColor: white, }, }, } as far as adding the bottom bar, you can toggle icons when they are focused like this: HOME: { screen: HomeScreen, navigationOptions: { tabBarIcon: ({ tintColor, focused }) => <HomeIcon focused={focused ? UnderlinedIcon : RegularIcon } />, }, }, So maybe in one of the icons you add a line at the bottom and in the other you don't, and then they'll toggle when focused. Hope this helps!!
{ "language": "en", "url": "https://stackoverflow.com/questions/50363418", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Items not getting placed in the grid Need help with figuring out why items are not getting placed in the grid on this test page: https://codepen.io/srikat-the-vuer/pen/KLdOXN?editors=1100. They are appearing in a single column. @import url("https://fonts.googleapis.com/css?family=Roboto:400,400i,700"); .grid { display: grid; grid-gap: 1rem; grid-template-rows: 1fr 1fr 1fr; grid-template-columns: repeat(7, 1fr); grid-template-areas: "1 1 1 1 4 4 4" "2 2 3 3 4 4 4" "2 2 3 3 4 4 4"; max-width: 1000px; margin: 0 auto; } .pro-features { grid-area: 1; } .feature-privacy { grid-area: 2; } .feature-collab { grid-area: 3; } .feature-assets { grid-area: 4; } a { text-decoration-color: orange; text-decoration-style: double; text-decoration-skip: none; color: inherit; font-weight: bold; display: inline-block; } .grid > div { background: #444; color: white; border-radius: 1rem; padding: 1rem; border-top: 1px solid #666; box-shadow: 0 2px 4px rgba(0, 0, 0, 0.75); } h1, h2 { margin: 0; line-height: 1; } body { background: #222; margin: 0; padding: 1rem; line-height: 1.3; font-family: Roboto, sans-serif; } <div class="grid"> <div class="pro-features"> <h1>CodePen PRO Features</h1> <p>CodePen has many PRO features including these four!</p> </div> <div class="feature-privacy"> <h2>Privacy</h2> <p>You can make as many <a href="https://codepen.io/pro/privacy/">Private</a> Pens, Private Posts, and Private Collections as you wish! Private <a href="https://codepen.io/pro/projects">Projects</a> are only limited by how many total Projects your plan has.</p> </div> <div class="feature-collab"> <h2>Collab Mode</h2> <p><a href="https://blog.codepen.io/documentation/pro-features/collab-mode/">Collab Mode</a> allows more than one person to edit a Pen <em>at the same time</em>.</p> </div> <div class="feature-assets"> <h2>Asset Hosting</h2> <p>You'll be able to <a href="https://blog.codepen.io/documentation/pro-features/asset-hosting/">upload files</a> directly to CodePen to use in anything you build. Drag and drop to the Asset Manager and you'll get a URL to use. Edit your text assets at any time.</p> </div> </div> It is based on this article: https://css-tricks.com/simple-named-grid-areas/. Any ideas? A: Changing the "names" of the grid areas from numbers to strings fixed it. @import url("https://fonts.googleapis.com/css?family=Roboto:400,400i,700"); .grid { display: grid; grid-gap: 1rem; grid-template-rows: 1fr 1fr 1fr; grid-template-columns: repeat(7, 1fr); grid-template-areas: "p1 p1 p1 p1 p4 p4 p4" "p2 p2 p3 p3 p4 p4 p4" "p2 p2 p3 p3 p4 p4 p4"; max-width: 1000px; margin: 0 auto; } .pro-features { grid-area: p1; } .feature-privacy { grid-area: p2; } .feature-collab { grid-area: p3; } .feature-assets { grid-area: p4; } a { text-decoration-color: orange; text-decoration-style: double; text-decoration-skip: none; color: inherit; font-weight: bold; display: inline-block; } .grid > div { background: #444; color: white; border-radius: 1rem; padding: 1rem; border-top: 1px solid #666; box-shadow: 0 2px 4px rgba(0, 0, 0, 0.75); } h1, h2 { margin: 0; line-height: 1; } body { background: #222; margin: 0; padding: 1rem; line-height: 1.3; font-family: Roboto, sans-serif; } <div class="grid"> <div class="pro-features"> <h1>CodePen PRO Features</h1> <p>CodePen has many PRO features including these four!</p> </div> <div class="feature-privacy"> <h2>Privacy</h2> <p>You can make as many <a href="https://codepen.io/pro/privacy/">Private</a> Pens, Private Posts, and Private Collections as you wish! 
Private <a href="https://codepen.io/pro/projects">Projects</a> are only limited by how many total Projects your plan has.</p> </div> <div class="feature-collab"> <h2>Collab Mode</h2> <p><a href="https://blog.codepen.io/documentation/pro-features/collab-mode/">Collab Mode</a> allows more than one person to edit a Pen <em>at the same time</em>.</p> </div> <div class="feature-assets"> <h2>Asset Hosting</h2> <p>You'll be able to <a href="https://blog.codepen.io/documentation/pro-features/asset-hosting/">upload files</a> directly to CodePen to use in anything you build. Drag and drop to the Asset Manager and you'll get a URL to use. Edit your text assets at any time.</p> </div> </div>
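The likely reason the numeric version fails, for anyone wondering: grid-area expects a custom identifier, and a bare number such as 1 is parsed as a grid line number rather than as an area name, so the items never match the template strings. A minimal sketch of the distinction (the class names here are made up for illustration):
/* "1" is read as grid line 1, not as an area called "1" */
.item-a { grid-area: 1; }
/* Area names must be identifiers rather than plain numbers */
.wrapper { grid-template-areas: "a a b"; }
.item-a  { grid-area: a; }
.item-b  { grid-area: b; }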
{ "language": "en", "url": "https://stackoverflow.com/questions/56051640", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Unable to have display: inline-block and width:100% at the same time I am not a CSS professional, so I only know some basics. My task is to have a 100% body width while the contents do not wrap when the browser window resizes. To achieve this I added display: inline-block to my body tag's CSS style, but when I do that the width shrinks. I then tried adding width:100% to the body style; the width changed as I wanted, but display: inline-block stopped working. How can I enable both of these properties at the same time? <body style="display: inline-block; width:100%"><!--content---></body> A: Why do you want to fix the body width? Simply use a container inside the body, like this: CSS body { height: 100%; width: 100%; } .container { width: 1000px; margin: 0 auto; } HTML <body> <div class="container"></div> </body> OR, if you want your content area to be 100%, just change your CSS to this: CSS body { height: 100%; width: 100%; } .container { width: 100%; }
{ "language": "en", "url": "https://stackoverflow.com/questions/13377392", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Best way to update html after promise result I would like to know the best way of binding the result of a promise to an HTML tag using Angular 2 (I use Ionic 2)... As you know, the main problem with async coding is losing the reference to the current object. It seems I should pass the current object as a parameter to the promise-generating function. I searched the internet for a better solution but found nothing! So is there a better approach? Ionic 2 itself uses observables and subscribe for async processing, but the major problem is that this doesn't work for existing functions which are not observable! My approach: Injectable class: export class PromiseComponent { doPromise = function (obj: any) { return new Promise(function (resolve2) { setTimeout(function () { resolve2({ num: 3113, obj: obj }); }, 5000); }); } } Call on click: promiseVal = 0 doMyPromise() { this.myPromise.doPromise(this).then(this.secondFunc);//UPDATED HERE } //UPDATED HERE secondFunc = function (res) { this.promiseVal = res.num } Html: <div>{{promiseVal}} </div> <button (click)="doMyPromise()">Do Promise</button> A: As you know the main problem with async coding is losing reference to the current object That's not true: an arrow function does not bind its own this, therefore you don't need to send this to doPromise export class PromiseComponent { doPromise () { return new Promise(function (resolve) { setTimeout(function () { resolve({ num: 3113 }) }, 5000) }) } } promiseVal = 0 doMyPromise() { this.myPromise.doPromise() .then(res => { this.promiseVal = res.num }) } A: If you want to consume a promise inside your component: promiseVal = 0 doMyPromise() { this.myPromise.doPromise().then((res) => { this.promiseVal = res.num }); } And I don't know the reasoning behind your Service, but it usually looks like this (optional): export class PromiseComponent { doPromise() { //This method will return a promise return new Promise(function (resolve2) { setTimeout(function () { resolve2({ num: 3113, obj: obj }); }, 5000); }); } } After OP edited the post: You can change this: doMyPromise() { this.myPromise.doPromise(this).then(this.secondFunc);//UPDATED HERE } to doMyPromise() { this.myPromise.doPromise(this).then(this.secondFunc.bind(this));//UPDATED HERE }
{ "language": "en", "url": "https://stackoverflow.com/questions/43387183", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Geofencing in background Question up front: how can I add/remove geofences without the app being open? When the user switches off location, all geofences are removed. When geofencing was introduced I tested it and was able to add/remove geofences in the background by handling the "android.location.PROVIDERS_CHANGED" broadcast; I still have this code on a branch. Since API 26 this broadcast no longer fires when declared in the manifest; we need to register (and unregister) for this action at runtime, e.g. from an Activity or Service. That isn't a solution for my case: I want to add the geofences again when location is turned back on, or simply prevent the already-added ones from being removed. I see two solutions for this case, but both have flaws... * *Foreground service - a sticky service which adds/removes geofences when location goes on/off; a Service can receive PROVIDERS_CHANGED. But I don't like having a sticky Notification present all the time just for this trivial purpose... *Job/work - check the location state and, based on the result, add/remove geofences, then re-schedule its own instance for e.g. 1 h, and repeat. Short-lived services (5 sec max) don't need a "visual representation" - a foreground Activity or a sticky Notification - and a few seconds is enough to check the location state and work with the geofences (just logic, so pretty fast). I haven't tested the second approach, but it seems like a workaround and is surely very inefficient. To be fair, I inform users about the purpose of "listening" to location in the background when asking for permission (including ACCESS_BACKGROUND_LOCATION on API 29). So what can I do to keep these fences alive when the app is in the background or killed? A: A foreground service is the better option you have; as of today there are many custom ROMs which kill applications that are in the background. I have used a foreground service for the same purpose and it works perfectly.
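To make the foreground-service suggestion concrete, here is a minimal Kotlin sketch. It is not from the original post: the class, channel and request names are placeholders, the geofence coordinates are made up, and the usual plumbing (declaring the service and FOREGROUND_SERVICE permission in the manifest, starting it with startForegroundService, already-granted ACCESS_FINE_LOCATION / ACCESS_BACKGROUND_LOCATION, and a real GeofenceEventReceiver) is assumed rather than shown.
import android.app.Notification
import android.app.NotificationChannel
import android.app.NotificationManager
import android.app.PendingIntent
import android.app.Service
import android.content.BroadcastReceiver
import android.content.Context
import android.content.Intent
import android.content.IntentFilter
import android.location.LocationManager
import android.os.Build
import android.os.IBinder
import androidx.core.app.NotificationCompat
import com.google.android.gms.location.Geofence
import com.google.android.gms.location.GeofencingRequest
import com.google.android.gms.location.LocationServices

class GeofenceKeeperService : Service() {

    private val geofencingClient by lazy { LocationServices.getGeofencingClient(this) }

    // "android.location.PROVIDERS_CHANGED" must be registered at runtime since API 26,
    // which is exactly what a long-lived foreground service allows.
    private val providersChangedReceiver = object : BroadcastReceiver() {
        override fun onReceive(context: Context, intent: Intent) {
            val lm = context.getSystemService(Context.LOCATION_SERVICE) as LocationManager
            if (lm.isProviderEnabled(LocationManager.GPS_PROVIDER)) reAddGeofences()
        }
    }

    override fun onCreate() {
        super.onCreate()
        startForeground(1, buildNotification())
        registerReceiver(providersChangedReceiver,
            IntentFilter(LocationManager.PROVIDERS_CHANGED_ACTION))
    }

    private fun reAddGeofences() {
        val geofence = Geofence.Builder()
            .setRequestId("placeholder-id")
            .setCircularRegion(52.0, 21.0, 100f)          // placeholder lat/lng/radius
            .setExpirationDuration(Geofence.NEVER_EXPIRE)
            .setTransitionTypes(Geofence.GEOFENCE_TRANSITION_ENTER or Geofence.GEOFENCE_TRANSITION_EXIT)
            .build()
        val request = GeofencingRequest.Builder().addGeofence(geofence).build()
        val pi = PendingIntent.getBroadcast(this, 0,
            Intent(this, GeofenceEventReceiver::class.java),   // hypothetical receiver class
            PendingIntent.FLAG_UPDATE_CURRENT)
        try {
            geofencingClient.addGeofences(request, pi)
        } catch (e: SecurityException) {
            // Location permission was revoked while the service was running.
        }
    }

    private fun buildNotification(): Notification {
        val channelId = "geofence_keeper"                     // placeholder channel id
        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) {
            val nm = getSystemService(Context.NOTIFICATION_SERVICE) as NotificationManager
            nm.createNotificationChannel(
                NotificationChannel(channelId, "Geofences", NotificationManager.IMPORTANCE_LOW))
        }
        return NotificationCompat.Builder(this, channelId)
            .setContentTitle("Keeping geofences active")
            .setSmallIcon(android.R.drawable.ic_menu_mylocation)
            .build()
    }

    override fun onDestroy() {
        unregisterReceiver(providersChangedReceiver)
        super.onDestroy()
    }

    override fun onBind(intent: Intent?): IBinder? = null
}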
{ "language": "en", "url": "https://stackoverflow.com/questions/58608969", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Compare columns in different pandas dataframes I have two dataframes, one with daily info starting in 1990 and one with daily info starting in 2000. Both dataframes contain information ending in 2016. df1: Date A B C 1990-01-01 3.0 40.0 70.0 1990-01-02 20.0 50.0 80.0 1990-01-03 30.0 60.0 90.0 1990-01-04 2.0 1.0 1.0 1990-01-05 1.0 8.0 3.0 df2: Date A B C 2000-01-01 NaN NaN NaN 2000-01-02 5.0 NaN NaN 2000-01-03 1.0 NaN 5.0 2000-01-04 2.0 4.0 8.0 2000-01-05 1.0 3.0 4.0 I need to compare columns in df1 and df2 which have the same name, which wouldn't usually be too complicated, but I need to compare them from the point at which there is data available in both dataframes for a given column (e.g from df2, 2000-01-02 in column 'A', 2000-01-04 in 'B'). I need to return True if they are the same from that point on and False if they are different. I have started by merging, which gives me: df2.merge(df1, how = 'left', on = 'Date') Date A.x B.x C.x A.y B.y C.y 2000-01-01 NaN NaN NaN 3.0 4.0 5.0 2000-01-02 5.0 NaN NaN 5.0 9.0 2.0 2000-01-03 1.0 NaN 5.0 1.0 6.0 5.0 2000-01-04 2.0 4.0 8.0 2.0 4.0 1.0 2000-01-05 1.0 3.0 4.0 1.0 3.0 3.0 I have figured out how to find the common date, but am stuck as to how to do the same/different comparison. Can anyone help me compare the columns from the point at which there is a common value? A dictionary comes to mind as a useful output format, but wouldn't be essential: comparison_dict = { "A" : True, "B" : True, "C" : False } Many thanks. A: Assuming the Date column is the index. * *Stacking will drop nan by default *Align with 'inner' logic *Check equality *Group and check all True pd.Series.eq(*df1.stack().align(df2.stack(), 'inner')).groupby(level=1).all() If Date is not the index pd.Series.eq( *df1.set_index('Date').stack().align( df2.set_index('Date').stack(), 'inner' ) ).groupby(level=1).all() A: Check with eq and isnull Data from user3483203 ((df1.eq(df2))|df2.isnull()|df1.isnull()).all(0) Out[22]: A True B True C False dtype: bool A: Using fillna with eq df2.fillna(df1).eq(df1).all(0) A True B True C False dtype: bool This works by filling in NaN values with valid values from df1, so they will always be equal where df2 is null (essentially the same as ignoring them). Next, we create a boolean mask comparing the two arrays: df2.fillna(df1).eq(df1) A B C 2000-01-01 True True True 2000-01-02 True True True 2000-01-03 True True True 2000-01-04 True True False 2000-01-05 True True False Finally, we assert that all the values for each column are True, in order for the columns to be considered equal. Setup It looks like you copied the wrong DataFrame for df1 based on your desired output and merge, so I derived it from your merge: df1 = pd.DataFrame({'A': {'2000-01-01': 3.0, '2000-01-02': 5.0, '2000-01-03': 1.0, '2000-01-04': 2.0, '2000-01-05': 1.0}, 'B': {'2000-01-01': 4.0, '2000-01-02': 9.0, '2000-01-03': 6.0, '2000-01-04': 4.0, '2000-01-05': 3.0}, 'C': {'2000-01-01': 5.0, '2000-01-02': 2.0, '2000-01-03': 5.0, '2000-01-04': 1.0, '2000-01-05': 3.0}}) df2 = pd.DataFrame({'A': {'2000-01-01': np.nan, '2000-01-02': 5.0, '2000-01-03': 1.0, '2000-01-04': 2.0, '2000-01-05': 1.0}, 'B': {'2000-01-01': np.nan, '2000-01-02': np.nan, '2000-01-03': np.nan, '2000-01-04': 4.0, '2000-01-05': 3.0}, 'C': {'2000-01-01': np.nan, '2000-01-02': np.nan, '2000-01-03': 5.0, '2000-01-04': 8.0, '2000-01-05': 4.0}})
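If the dictionary shape shown in the question (comparison_dict) is preferred over a Series, any of these results converts directly. A small sketch using the fillna/eq variant; tiny stand-in frames are used here instead of the full Setup frames, and the Date index is omitted for brevity:
import pandas as pd
import numpy as np

# Tiny stand-ins for df1/df2 from the question
df1 = pd.DataFrame({'A': [5.0, 1.0], 'B': [9.0, 4.0], 'C': [2.0, 8.0]})
df2 = pd.DataFrame({'A': [5.0, 1.0], 'B': [np.nan, 4.0], 'C': [np.nan, 1.0]})

comparison = df2.fillna(df1).eq(df1).all(0)   # boolean Series, one entry per column
comparison_dict = comparison.to_dict()        # {'A': True, 'B': True, 'C': False}
print(comparison_dict)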
{ "language": "en", "url": "https://stackoverflow.com/questions/52119585", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Making a daemon for a jailbroken iOS I have been looking for a way to launch daemons on the iPhone and I created a little test application with Xcode by learning from the ants application's source code, which taught me that I should use launchctl But unfortunately it is not working. I have installed my application with SSH on my iPod Touch in /Applications/, I then launch it with SSH thru the account mobile and my log says this: Script started on Thu Feb 24 19:33:28 2011 bash-3.2$ ssh [email protected] [email protected]'s password: iPod-van-Henri:~ mobile$ cd /Applications iPod-van-Henri:/Applications mobile$ cd DaemonUtility.app/ iPod-van-Henri:/Applications/DaemonUtility.app mobile$ ./DaemonUtility 2011-02-24 19:35:08.022 DaemonUtility[1369:107] Read 0 bytes 2011-02-24 19:35:09.021 DaemonUtility[1369:107] Read 0 bytes 2011-02-24 19:35:10.021 DaemonUtility[1369:107] Read 0 bytes 2011-02-24 19:35:11.021 DaemonUtility[1369:107] Read 0 bytes Bug: launchctl.c:2367 (24307):13: (dbfd = open(g_job_overrides_db_path, O_RDONLY | O_EXLOCK | O_CREAT, S_IRUSR | S_IWUSR)) != -1 launchctl: CFURLWriteDataAndPropertiesToResource(/private/var/stash/Applications.pwn/DaemonUtility.app/com.developerief2.daemontest.plist) failed: -10 launch_msg(): Socket is not connected 2011-02-24 19:35:12.039 DaemonUtility[1369:107] Read 0 bytes 2011-02-24 19:35:13.021 DaemonUtility[1369:107] Read 0 bytes 2011-02-24 19:35:14.021 DaemonUtility[1369:107] Read 0 bytes 2011-02-24 19:35:15.021 DaemonUtility[1369:107] Read 0 bytes 2011-02-24 19:35:16.021 DaemonUtility[1369:107] Read 0 bytes 2011-02-24 19:35:17.021 DaemonUtility[1369:107] Read 0 bytes 2011-02-24 19:35:18.021 DaemonUtility[1369:107] Read 0 bytes 2011-02-24 19:35:19.021 DaemonUtility[1369:107] Read 0 bytes 2011-02-24 19:35:20.021 DaemonUtility[1369:107] Read 0 bytes 2011-02-24 19:35:21.021 DaemonUtility[1369:107] Read 0 bytes 2011-02-24 19:35:22.021 DaemonUtility[1369:107] Read 0 bytes 2011-02-24 19:35:23.021 DaemonUtility[1369:107] Read 0 bytes 2011-02-24 19:35:24.021 DaemonUtility[1369:107] Read 0 bytes 2011-02-24 19:35:25.021 DaemonUtility[1369:107] Read 0 bytes ^C iPod-van-Henri:/Applications/DaemonUtility.app mobile$ exit logout Connection to 192.168.1.8 closed. bash-3.2$ exit exit Script done on Thu Feb 24 19:34:49 2011 When I launch it with the root (doing it with su), I get the daemon to run, but it doesn't do anything. My daemon should display a UIViewAlert every ten seconds since it's launch: **main.m (Daemon)** // // main.m // DaemonTest // // Created by ief2 on 23/02/11. 
// #import <UIKit/UIKit.h> @interface DAAppDelegate : NSObject <UIApplicationDelegate> { NSDate *_startupDate; NSTimer *_messageTimer; } @property (nonatomic, retain) NSDate *startupDate; @end @interface DAAppDelegate (PrivateMethods) - (void)showMessage:(NSTimer *)timer; @end @implementation DAAppDelegate @synthesize startupDate=_startupDate; - (void)dealloc { [_startupDate dealloc]; [_messageTimer dealloc]; [super dealloc]; } - (void)applicationDidFinishLaunching:(UIApplication *)theApplication { UIAlertView *myView; myView = [[UIAlertView alloc] initWithTitle:@"Daemon Launched" message:@"The daemon was launched" delegate:nil cancelButtonTitle:@"OK" otherButtonTitles:nil]; [myView show]; [myView release]; self.startupDate = [NSDate date]; NSTimer *myTimer = [NSTimer scheduledTimerWithTimeInterval:10 target:self selector:@selector(showMessage:) userInfo:nil repeats:YES]; _messageTimer = [myTimer retain]; } - (void)applicationWillTerminate:(UIApplication *)theApplication { [_messageTimer invalidate]; UIAlertView *myView; myView = [[UIAlertView alloc] initWithTitle:@"Daemon Terminated" message:@"The daemon was terminated" delegate:nil cancelButtonTitle:@"OK" otherButtonTitles:nil]; [myView show]; [myView release]; } - (void)showMessage:(NSTimer *)timer { NSTimeInterval mySec; mySec = [self.startupDate timeIntervalSinceNow]; NSString *format = [NSString stringWithFormat: @"The daemon has been running for %llu seconds", (unsigned long long)mySec]; UIAlertView *myView; myView = [[UIAlertView alloc] initWithTitle:@"Daemon Message" message:format delegate:nil cancelButtonTitle:@"OK" otherButtonTitles:nil]; [myView show]; [myView release]; } @end int main(int argc, const char **argv) { NSAutoreleasePool *mainPool = [[NSAutoreleasePool alloc] init]; UIApplicationMain(argc, (char **)argv, nil, @"DAAppDelegate"); [mainPool drain]; return 0; } The full application's source code can be found on my computer: http://81.82.20.197/DaemonTest.zip Thank you in advance, ief2 A: You are working too hard. All you need to do is create a .plist file with the app identifier and path in it and add it to the /System/Library/LaunchDaemon folder. Then make sure your app is in the /Applications folder. Reboot and it will work each time the phone is booted. Google "Chris Alvares daemon" and look at his tutorial... A: i don't think launchD can trigger GUI-level apps. Anything that is "Aqua" level has to be a "StartupItem" or a "Login Item". You can still start them as root, based on where they are started from, and who they are owned by, but launchd doesn't touch that stuff... I dont think you can even have a menu bar icon if you want launchd to handle it.... also if you are talking jailbroken iphones... if you wan to open a GUI app from the "mobileterminal" you should look in Cydia for the app that "does that". it's not as easy as just launching the executable.. there is some funky springboard interaction.. that that utility takes care of. it is callled...... "AppsThruTerm" (bigboss repo) once installed.. you launch your "app" with the command att blahblahblah
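For reference, the launchd property list the first answer describes might look roughly like the following; the keys shown are a common minimum (Label, ProgramArguments, RunAtLoad), the paths reuse the ones from the question, and anything beyond that (KeepAlive, throttling, etc.) depends on how the daemon should behave, so treat it as an illustration rather than a tested configuration.
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- Identifier reused from the question's plist name -->
    <key>Label</key>
    <string>com.developerief2.daemontest</string>
    <!-- Executable inside the installed app bundle -->
    <key>ProgramArguments</key>
    <array>
        <string>/Applications/DaemonUtility.app/DaemonUtility</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
</dict>
</plist>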
{ "language": "en", "url": "https://stackoverflow.com/questions/5109059", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: jQuery appears undefined in the middle of a working script This is really curious to me and I would like to know your opinion about possible causes for this behaviour. I have a long JavaScript (jQuery) script in a WordPress site. At a certain point (around line 500) the console outputs the following error: TypeError: jQuery(...).each(...) is not a function The script works fine up to this point. The line looks like this: jQuery('.excard').each(function(){ After some trials, I added this conditional statement before the failing line: if(!window.jQuery) { console.log('jQuery is gone??'); } else { jQuery('.excard').each(function(){ Here comes the funny part: after adding this, I don't get any errors or messages in the console and the code now runs smoothly. Can somebody shed some light on this?
{ "language": "en", "url": "https://stackoverflow.com/questions/34864205", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Why is cat printing only the first and last line of file? I have a txt file with a #header. If I use cat the output is only the first and the last lines, despite the file being a few hundred lines long. What is going on? EDIT A minimal example is pretty much what you imagine: file.txt is #header 2.0 2.0 2.1 2.2 ... ... ... 18.2 and the result of cat file.txt is [Desktop]$ cat file.txt #header 2.0 18.2[Desktop]$
{ "language": "en", "url": "https://stackoverflow.com/questions/54290113", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: R - how to append a data set to a list? How can I append a data set to a list in R? I have the code below, and it keeps crashing or freezing my computer when it reaches this line: listData <- append(listData, data1) # Empty list for storing listData later. listData <- list() # Prepare SQL query1. dataQuery <- "SELECT * ...." # Store the result in data1. data1 = dbGetQuery(DB, dataQuery) if(nrow(data1) > 0) { # Append the data to the list. listData <- append(listData, data1) } # Merge data sets. set.seed(1) dataList = listData allData = Reduce(function(...) merge(..., all=T), dataList) Am I appending the data set to the list the wrong way in R? If so, what is the proper way of doing it? A: Find the names of the variables that you want to put in the list: dataVars <- ls(pattern = "^data[[:digit:]]+$") Use mget to retrieve them as a list. dataList <- mget(dataVars, envir = parent.frame())
{ "language": "en", "url": "https://stackoverflow.com/questions/30422776", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: update @HostBinding Angular 4 Animation I am trying to get animation between routes working in an Angular 4 project - but need to be able to alter the direction of the animation (translateX) based on the way the user is navigating through the app. I have discovered that the only way to keep both the entering and exiting components in the DOM is to use void state. Also, I have to bind the animation to the host element. If I try and bind it to an element within the component the exiting component is just replaced with the entering component. I want this because I don't want the exiting component to disappear - I want it to slide off as the entering component slides in to get a smooth native feeling application. I have a single trigger set up with four transitions: trigger('routing',[ state('*',style({transform: 'translateX(0)'})), transition('void => back',[ style({transform: 'translateX(-100%'}), animate('800ms') ]), transition('back => void',[ animate('800ms',style({ transform:'translateX(100%)' })) ]), transition('void => forward',[ style({transform: 'translateX(100%'}), animate('800ms') ]), transition('forward => void',[ animate('800ms',style({ transform:'translateX(-100%)' })) ]) ]) In my components exported class I am binding this to the component host elements with: @HostBinding('@routing') routing I can manipulate routing (setting it to either 'back' or 'forward' to control direction) but this seems to create a new instance of the variable so that if I want to change animation direction the exiting page animates in the opposite direction to the incoming component because the instance of routing doesn't seem to have changed. Is there any way to update the instance of routing bound to the host element? Is there an alternative way to achieve the result I need? Thanks.
{ "language": "en", "url": "https://stackoverflow.com/questions/43384107", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: How to compile java in 1.6 or older version without changing the classpath and path? We only need to use command line to execute the same How can I compile Java for 1.6 or an older version without changing the classpath and path? We only want to do this from the command line, something like: javac var -1.6 Hello.java java Hello A: To compile your code and target Java 1.6 you can specify the target and source compiler options. Something like: javac -target 1.6 -source 1.6 Hello.java As javac -help explains: -source <release> Provide source compatibility with specified release -target <release> Generate class files for specific VM version A: To compile Java for 1.6 or an older version without changing the classpath and path: javac -source 1.6 Test.java Running javac -help will help you understand: -source <release> Provide source compatibility with specified release
{ "language": "en", "url": "https://stackoverflow.com/questions/37506795", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Improve parsing speed of delimited configuration block I have a quite large configuration file that consists of blocks delimited by #start <some-name> ... #end <some-name> were some-name has to be the same for the block. The block can appear multiple times but is never contained within itself. Only some other blocks may appear in certain blocks. I'm not interested in these contained blocks, but on the blocks in the second level. In the real file the names do not start with blockX but are very different from each other. An example: #start block1 #start block2 /* string but no more name2 or name1 in here */ #end block2 #start block3 /* configuration data */ #end block3 #end block1 This is being parsed with regex and is, when run without a debugger attached, quite fast. 0.23s for a 2k 2.7MB file with simple rules like: blocks2 = re.findAll('#start block2\s+(.*?)#end block2', contents) I tried parsing this with pyparsing but the speed is VERY slow even without a debugger attached, it took 16s for the same file. My approach was to produce a pyparsing code that would mimic the simple parsing from the regex so I can use some of the other code for now and avoid having to parse every block now. The grammar is quite extense. Here is what I tried block = [Group(Keyword(x) + SkipTo(Keyword('#end') + Keyword(x)) + Keyword('#end') - x )(x + '*') for x in ['block3', 'block4', 'block5', 'block6', 'block7', 'block8']] blocks = Keyword('#start') + block x = OneOrMore(blocks).searchString(contents) # I also tried parseString() but the results were similar. What am I doing wrong? How can I optimize this to come anywhere close to the speed achieved by the regex implementation? Edit: The previous example was way to easy compared to the real data, so i created a proper one now: /* all comments are C comments */ VERSION 1 0 #start PROJECT project_name "what is it about" /* why not another comment here too! */ #start SECTION where_the_wild_things_are "explain this section" /* I need all sections at this level */ /* In the real data there are about 10k of such blocks. There are around 10 different names (types) of blocks */ #start INTERFACE_SPEC There can be anything in the section. Not Really but i want to skip anything until the matching (hash)end. /* can also have comments */ #end INTERFACE_SPEC #start some_other_section name 'section name' #start with_inner_section number_of_points 3 /* can have comments anywhere */ #end with_inner_section #end some_other_section /* basically comments can be anywhere */ #start some_other_section name 'section name' other_section_attribute X ref_to_section another_section #end some_other_section #start another_section degrees #start section_i_do_not_care_about_at_the_moment ref_to some_other_section /* of course can have comments */ #end section_i_do_not_care_about_at_the_moment #end another_section #end SECTION #end PROJECT For this i had to expand your original suggestion. I hard coded the two outer blocks (PROJECT and SECTION) because they MUST exist. 
With this version the time is still at ~16s: def test_parse(f): import pyparsing as pp import io comment = pp.cStyleComment start = pp.Literal("#start") end = pp.Literal("#end") ident = pp.Word(pp.alphas + "_", pp.printables) inner_ident = ident.copy() inner_start = start + inner_ident inner_end = end + pp.matchPreviousLiteral(inner_ident) inner_block = pp.Group(inner_start + pp.SkipTo(inner_end) + inner_end) version = pp.Literal('VERSION') - pp.Word(pp.nums)('major_version') - pp.Word(pp.nums)('minor_version') project = pp.Keyword('#start') - pp.Keyword('PROJECT') - pp.Word(pp.alphas + "_", pp.printables)( 'project_name') - pp.dblQuotedString + pp.ZeroOrMore(comment) - \ pp.Keyword('#start') - pp.Keyword('SECTION') - pp.Word(pp.alphas, pp.printables)( 'section_name') - pp.dblQuotedString + pp.ZeroOrMore(comment) - \ pp.OneOrMore(inner_block) + \ pp.Keyword('#end') - pp.Keyword('SECTION') + \ pp.ZeroOrMore(comment) - pp.Keyword('#end') - pp.Keyword('PROJECT') grammar = pp.ZeroOrMore(comment) - version.ignore(comment) - project.ignore(comment) with io.open(f) as ff: return grammar.parseString(ff.read()) EDIT: Typo, said it was 2k but it instead it is a 2.7MB file. A: First of all, this code as posted doesn't work for me: blocks = Keyword('#start') + block Changing to this: blocks = Keyword('#start') + MatchFirst(block) at least runs against your sample text. Rather than hard-code all the keywords, you can try using one of pyparsing's adaptive expressions, matchPreviousLiteral: (EDITED) def grammar(): import pyparsing as pp comment = pp.cStyleComment start = pp.Keyword("#start") end = pp.Keyword('#end') ident = pp.Word(pp.alphas + "_", pp.printables) integer = pp.Word(pp.nums) inner_ident = ident.copy() inner_start = start + inner_ident inner_end = end + pp.matchPreviousLiteral(inner_ident) inner_block = pp.Group(inner_start + pp.SkipTo(inner_end) + inner_end) VERSION, PROJECT, SECTION = map(pp.Keyword, "VERSION PROJECT SECTION".split()) version = VERSION - pp.Group(integer('major_version') + integer('minor_version')) project = (start - PROJECT + ident('project_name') + pp.dblQuotedString + start + SECTION + ident('section_name') + pp.dblQuotedString + pp.OneOrMore(inner_block)('blocks') + end + SECTION + end + PROJECT) grammar = version + project grammar.ignore(comment) return grammar It is only necessary to call ignore() on the topmost expression in your grammar - it will propagate down to all internal expressions. Also, it should be unnecessary to sprinkle ZeroOrMore(comment)s in your grammar, if you have already called ignore(). I parsed a 2MB input string (containing 10,000 inner blocks) in about 16 seconds, so a 2K file should only take about 1/1000th as long.
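One further speed lever, not mentioned above but built into pyparsing itself, is packrat parsing (memoisation of intermediate match results); for a grammar with this much backtracking it is often worth a try, and enabling it is a one-liner that must run before the grammar is used:
import pyparsing as pp

# Cache partial parse results so repeated attempts at the same input position
# are not re-parsed from scratch; call once, before building/using the grammar.
pp.ParserElement.enablePackrat()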
{ "language": "en", "url": "https://stackoverflow.com/questions/38662521", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Garbage Collection in R process running inside docker I have a process where we currently handle a large amount of data per day, perform a map-reduce style set of functions and use only the output of those functions. We currently run a code sequence that looks like the below lapply(start_times, function(start_time){ <get_data> <setofoperations> } So currently we loop through start times, which helps us get data for a particular day, analyse it and output dataframes of results per day. The set of operations is a series of functions that each work on and return dataframes. While running this in a docker container with a memory limit, we often see that the process runs out of memory when it is dealing with large data (around 250-500MB) over periods of days, and R isn't able to garbage-collect effectively. I'm monitoring each process using cadvisor and notice spikes, but I'm not really able to understand them better. * *If R does a lazy gc, ideally the process should be able to reuse the memory over and over; is there something that is not being captured through the gc process? *How can an R process reclaim more memory when it is the only primary process running in the docker container?
{ "language": "en", "url": "https://stackoverflow.com/questions/59385510", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: I'm trying to wrap a Stack widget on Flutter into an Expanded widget. How could I make the Stack fit in the Expanded with 0 margin? I'm trying to wrap a Stack widget on Flutter into an Expanded widget. It looks like the Expanded widget forces the stack to be positioned with a default margin. So, How could I make the Stack fit in the Expanded widget with 0 margin? Or what would you recommend to layout widgets (with overlap) and make sure to use the suitable widget ensuring the responsivity of my app. import 'package:WEW/constants.dart'; import 'package:WEW/size_config.dart'; import 'package:flutter/foundation.dart'; import 'package:flutter/material.dart'; // The body of my Scaffold widget in the main class class Body extends StatefulWidget { @override _BodyState createState() => _BodyState(); } // Implementation of state State class _BodyState extends State<Body> { @override Widget build(BuildContext context) { return SafeArea( child: SizedBox( height: getScreenHeight(), width: getScreenWidth(), child: Column( crossAxisAlignment: CrossAxisAlignment.start, mainAxisAlignment: MainAxisAlignment.start, children: <Widget>[ Expanded( // The stack I'm trying to fit into Expanded child: Stack( children: <Widget>[ Positioned( top: 0, left: 0, child: Image.asset( "assets/images/welcome_top.png", height: getProportionateScreenHeight(144), width: getProportionateScreenWidth(193), ), ), Positioned( top: 0, left: 0, child: Image.asset( "assets/images/WEW_logo_light.png", height: getProportionateScreenHeight(149), width: getProportionateScreenWidth(228), ), ), ], ), ), Expanded( flex: 2, child: Column( children: <Widget>[ Text( "Werkgerers & Werknemers", style: TextStyle( fontSize: getProportionateScreenHeight(36), //color: cPrimaryColor, fontWeight: FontWeight.bold, ), ), Text("Verbinden van de wereld van logistiek"), ], ), ), Expanded( child: SizedBox(), ), ], ), ), ); } }
{ "language": "en", "url": "https://stackoverflow.com/questions/65865782", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Windows service is communicating with other servers only for few minutes after restarting it I have a Windows service application which should communicate with another server. The problem is that it works for a while after I restart the Windows service, but after some time it stops working (response.StatusCode = 0). var client = new RestClient("https://..../auth/token") { Proxy = new WebProxy(host, port) }; var request = new RestRequest(Method.POST); request.AddHeader("Content-Type", "application/json"); var content = new { sp = sp, officeId = officeId }; request.AddJsonBody(_javaScriptSerializer.Serialize(content)); var response = client.Execute(request); The Windows service is implemented in .NET Framework 4.5.2. The security protocol for TLS 1.2 is enabled: System.Net.ServicePointManager.SecurityProtocol |= SecurityProtocolType.Tls12; I created a dummy Windows service which contains only this one call to the other server, and that dummy service works fine without any issues. Do you have any idea what I should focus on, or what the root cause could be?
{ "language": "en", "url": "https://stackoverflow.com/questions/67720239", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Vega-lite geojson: only identity projection works, otherwise only shows last feature I'm trying to visualise the World Bank Official Boundaries:World Boundaries GeoJSON - Low Resolution data from https://datacatalog.worldbank.org/dataset/world-bank-official-boundaries with vega-lite. However, it looks like only the last feature is displayed, with all others ignored. For example: { "$schema": "https://vega.github.io/schema/vega-lite/v5.json", "width": 630, "height": 630, "data": { "url": "/map", "format": {"property": "features"} }, "projection": { "type": "mercator" }, "mark": { "type": "geoshape", "stroke": "black", "strokeWidth": 0.5 } } seems to only display New Zealand, which I think is the last feature in the file: A similar example that has the same problem and includes only data for New Zealand and one other country before can be seen in a gist If I change the projection to 'identity', then that projection seems to work as expected. How can I get all the features to display in vega-lite in a non-identity projection? A: It looks like the coordinates of World Bank barrier JSON don't have a winding order that vega-lite interprets properly. I suspect this makes it think some islands are lakes, and puts the land outside them rather than inside. Using https://github.com/mapbox/geojson-rewind fixed it: geojson-rewind --clockwise WB_countries_Admin0_lowres.geojson > WB_countries_Admin0_lowres-clockwise.geojson
{ "language": "en", "url": "https://stackoverflow.com/questions/68248497", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Changing Color Space from RGB to HSV after segmentation (OpenCv Python) I am segmenting an image and then converting it into HSV format. But after converting it into HSV and separating out each of the channels, the granularity of the segmented region is lost. Following is the segmentation code. import cv2 from os import listdir from os.path import isfile, join from mpl_toolkits.mplot3d import Axes3D import numpy as np import matplotlib.pyplot as plt path = "C:/Users/Intern/Desktop/dataset/rust images/" files_test = [f for f in listdir(path+ 'Input/') if isfile(join(path+ 'Input/', f))] for img_name in files_test: img = cv2.imread(path + "Input/" + img_name) gray_img = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY) gray_blur = cv2.GaussianBlur(gray_img, (7, 7), 0) adapt_thresh_im = cv2.adaptiveThreshold(gray_blur, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY_INV, 11, 20) max_thresh, thresh_im = cv2.threshold(gray_img, 100, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU) thresh = cv2.bitwise_or(adapt_thresh_im, thresh_im) kernel = np.ones((3,3),np.uint8) opening = cv2.morphologyEx(thresh,cv2.MORPH_OPEN,kernel, iterations = 2) sure_bg = cv2.dilate(thresh,kernel,iterations=2) img[sure_bg == 0] = [0,0,0] cv2.imwrite(path + "Segmented/" + img_name, img) Following is the input image. Following is the corresponding output. Now, In a new program I try to read this output and convert it into HSV format. Following is the code. import cv2 from os import listdir from os.path import isfile, join import numpy as np path = "C:/Users/Intern/Desktop/dataset/rust images/" files_test = [f for f in listdir(path+ "Segmented/") if isfile(join(path+ "Segmented/", f))] for img_name in files_rust: img = cv2.imread(path + "Segmented/" + img_name) img_hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV) print img_hsv.shape h, s, v = cv2.split(img_hsv) cv2.imshow("hsv image", s) cv2.waitKey(0) Following is the output after converting into HSV. We can observe that compared to the original one the granularity of the black spaces has reduced. How can I solve this problem? Thanks for the help. Photograph taken from 4 A: You code showed you applied GaussianBlur(), cv2.adaptiveThreshold() and cv2.morphologyEx(), all those filtering would likely make the details lost in some degree in the resulted image. If you need to convert color space from BGR to HSV, cv2.cvtColor(img, cv2.COLOR_BGR2HSV), then you may just do minimal preprocessing to reduce the distortion before converting the image to HSV, and before you further any processing in HSV color space.
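As a sketch of that advice (file names here are hypothetical): compute the mask once, convert the original, unfiltered pixels to HSV, and apply the mask to that conversion instead of re-reading a re-saved JPEG, which is lossy and tends to soften the segment edges:
import cv2

img = cv2.imread("input.jpg")                              # hypothetical file name
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, mask = cv2.threshold(gray, 100, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)

hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)                 # convert the original image
hsv_masked = cv2.bitwise_and(hsv, hsv, mask=mask)          # keep only the segmented region
h, s, v = cv2.split(hsv_masked)
cv2.imwrite("s_channel.png", s)                            # PNG avoids further JPEG artefacts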
{ "language": "en", "url": "https://stackoverflow.com/questions/44257117", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Problems with TableView inside of ScrollView I followed the helpful comments by you people and have currently gotten this far: I have a single TableView with a header cell. The problem now is that I am only able to display one set of data and the TableView will not currently scroll (Maybe because of only one set of data being displayed.) Here is my Code: ViewController: struct TableData { var section: String = "" var data = Array<String>() var dataS = Array<String>() init(){} } var data = Array<TableData>() var dataS = Array<TableData>() class MyCustomCell: UITableViewCell { @IBOutlet var label: UILabel! @IBOutlet var labelS: UILabel! } class MyCustomHeader: UITableViewCell { @IBOutlet var header: UILabel! } class TypeViewController: BaseViewController , UITableViewDelegate, UITableViewDataSource { @IBOutlet var tableView: UITableView! @IBOutlet var scrollView: UIScrollView! public func tableView(_ tableView: UITableView, numberOfRowsInSection section: Int) -> Int { return data[section].data.count } public func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell { let cell = tableView.dequeueReusableCell(withIdentifier: "Cell", for: indexPath) as! MyCustomCell cell.label.text = data[indexPath.section].data[indexPath.row] cell.labelS.text = dataS[indexPath.section].data[indexPath.row] return cell } func tableView(_ tableView: UITableView, viewForHeaderInSection section: Int) -> UIView? { let headerCell = tableView.dequeueReusableCell(withIdentifier: "Header") as! MyCustomHeader headerCell.header.text = data[section].section return headerCell } func tableView(_ tableView: UITableView, heightForHeaderInSection section: Int) -> CGFloat { return 50.0 } override func viewDidLoad() { super.viewDidLoad() addSlideMenuButton() addItems() print(data) } override func didReceiveMemoryWarning() { super.didReceiveMemoryWarning() // Dispose of any resources that can be recreated. 
} func addItems() { var new_elements:TableData new_elements = TableData() new_elements.section = "Stuff" new_elements.data.append(obj41); new_elements.data.append(obj42); new_elements.data.append(obj43); new_elements.data.append(obj44); new_elements.data.append(obj45); new_elements.data.append(obj46); new_elements.data.append(obj47); data.append(new_elements) new_elements = TableData() new_elements.section = "More Stuff" new_elements.data.append(obj51); new_elements.data.append(obj52); new_elements.data.append(obj53); new_elements.data.append(obj54); new_elements.data.append(obj55); new_elements.data.append(obj56); new_elements.data.append(obj57); data.append(new_elements) new_elements = TableData() new_elements.section = "Netzach - Eternity" new_elements.data.append(obj61); new_elements.data.append(obj62); new_elements.data.append(obj63); new_elements.data.append(obj64); new_elements.data.append(obj65); new_elements.data.append(obj66); new_elements.data.append(obj67); data.append(new_elements) //Break new_elements = TableData() new_elements.data.append(objS0); new_elements.data.append(objS1); new_elements.data.append(objS2); new_elements.data.append(objS3); new_elements.data.append(objS4); new_elements.data.append(objS5); new_elements.data.append(objS6); new_elements.data.append(objS7); dataS.append(new_elements) new_elements = TableData() new_elements.data.append(objS11); new_elements.data.append(objS12); new_elements.data.append(objS13); new_elements.data.append(objS14); new_elements.data.append(objS15); new_elements.data.append(objS16); new_elements.data.append(objS17); dataS.append(new_elements) new_elements = TableData() new_elements.data.append(objS21); new_elements.data.append(objS22); new_elements.data.append(objS23); new_elements.data.append(objS24); new_elements.data.append(objS25); new_elements.data.append(objS26); new_elements.data.append(objS27); dataS.append(new_elements) } Attached are some Photos of the MainStoryboard: A: Keep in mind below important points regarding to UITableView * *UITableView has inherited property from UIScrollView i.e. UITableView is also below like a UIScrollView so you don't need to take UIScrollView for the specially scroll the UITableView. If you do it behaves weird. *In cellForRow, you are creating condition with param tableView to outlet typeView & typeView1 by comparing tag which is not a standard format. Because tableView.tag may be changed and gives you wrong output. So try to use below format if tableView == typeView { } else { } //tableView == typeView1 Compare UITableView objects with pram tableView. *cellForRow method returns the cell from the if-else so you don't need to write return UITableViewCell(style: UITableViewCellStyle.default, reuseIdentifier: "Cell") If I debug your cellForRow method code then your this above line never executes. First try this standards and remove your mistake and then post your question and issue you are facing. Hope my above work helps you. Edit Your required output will be like this below image. You don't need to take 2 UItableView and thse tableview in single scrollview. You can do the same with one tableView. Go through this tutorial
{ "language": "en", "url": "https://stackoverflow.com/questions/41799327", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How to hold a http request without failing until response coming in Angular This is my HTTP API request: this.dashboardServiceHandler.getTxnInfo([], params). The API returns a response only after about 2 minutes. Here I am trying to hold my request until the response comes, but in the network tab it shows as pending for a long time and then fails. How can I hold my request until the response arrives? delayForFiveSeconds = () => timer(135000); getBookingInfo(dateType: string) { const delayForFiveSeconds = () => timer(135000); const params = []; params.push({code: 'dateType', name: dateType}); params.push({code: 'from', name: '2019-01-01'}); params.push({code: 'to', name: '2019-01-31'}); // this.timeout(4000); return this.ServiceHandler.getTxnInfo([], params); } In this class I am calling the backend API. export class ServiceHandler { getTxnInfo(headers: any[], params: any[]) { return this.apiService.get(environment.rm_url + 'rm-analytics-api/dashboard/txn-info', headers, params); } } getBookingDetails() { this.delayForFiveSeconds().subscribe(() => {this.getBookingInfo('BOOKING').subscribe( bookings => { console.log(bookings); }); }); } A: RxJS has a timeout operator. You can probably use that to increase the timeout: getBookingInfo(dateType: string) { ... return this.ServiceHandler.getTxnInfo([], params).pipe( timeout(10*60*1000) // 10 minutes ); } And then you can update the calling function to getBookingDetails() { this.getBookingInfo('BOOKING').subscribe( bookings => { console.log(bookings); }); } A: You can use the timeout operator of rxjs, including it in a pipe with timeout: import { timeout, catchError } from 'rxjs/operators'; import { of } from 'rxjs/observable/of'; ... getTxnInfo(headers: any[], params: any[]) { this.apiService.get(environment.rm_url + 'rm-analytics-api/dashboard/txn-info', headers, params) .pipe( timeout(20000), catchError(e => { return of(null); }) ); } Using it: this.ServiceHandler.getTxnInfo([], params).subscribe( txnInfos => { console.log(txnInfos ); });
{ "language": "en", "url": "https://stackoverflow.com/questions/61521294", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: running Flask jQuery example under Apache proxy I can successfully run the Flask jQuery example (as referred near bottom of Flask's "AJAX with jQuery" page.) It runs on the flask development server, and is accessible at http://localhost:5000. How do I proxy the page so that I can access the same app under http://localhost/jqueryexample? I added this to my Apache VirtualHost entry thinking it will do the trick: ProxyPass /jqueryexample http://localhost:5000/ ProxyPassReverse /jqueryexample http://localhost:5000/ But the new URL gives the 404 error: GET http://localhost/_add_numbers?a=6&b=2 404 (Not Found) How can I get the example to run correctly under "canonical URL" (not sure if that's the right terminology)? Or, how to change the app or Apache configuration in order to get this jQuery example running for both URLs? BTW, here's how you download and run the vanilla Flask jQuery example in question: git clone http://github.com/mitsuhiko/flask cd flask/examples/jqueryexample/ python jqueryexample.py A: Okay, after looking into this further, I think I answered my own question: Apparently, instead of running the flask development server and trying to proxy it through Apache httpd, it's best to deploy the app directly to Apache using mod_wsgi. Guidelines on how to do this are well documented here. In fact, for production, the dev server is not at all recommended (see here.) As for deploying the jQuery Flask example itself, here's what you do (assuming your DocumentRoot is /var/www/html): # Get the example code. git clone http://github.com/mitsuhiko/flask cd flask/examples/jqueryexample/ # Create WSGI file. echo "\ import sys\ sys.path.insert(0, '/var/www/html/jqueryexample')\ from jqueryexample import app as application\ " > jqueryexample.wsgi # Deploy to httpd. sudo mkdir /var/www/html/jqueryexample sudo cp -r * /var/www/html/jqueryexample/ Now add this to your VirtualHost: WSGIScriptAlias /jqueryexample /var/www/html/jqueryexample/jqueryexample.wsgi <Location /var/www/html/jqueryexample> Allow from all Order allow,deny </Location> Then restart httpd. Now check out the running app at http://localhost/jqueryexample. Voila! A: I don't have an Apache install in front of me but if you are proxying the app shouldn't you change line 6 of the index.html from $.getJSON($SCRIPT_ROOT + '/_add_numbers', { to $.getJSON($SCRIPT_ROOT + '/jqueryexample/_add_numbers', {
{ "language": "en", "url": "https://stackoverflow.com/questions/17435759", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Colab GPU session disconnects within 10-15 mins I'm trying to run this notebook which causes GPU session to disconnect within 10-15 minutes. I tried with different accounts, and I get the same results. If I try to reconnect, I get You cannot currently connect to a GPU due to usage limits in Colab despite not even using the GPU (the notebook never makes it past initial installations).
{ "language": "en", "url": "https://stackoverflow.com/questions/68720168", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Empty underscore template in browserify I use Browserify to load a Backbone View. The view renders some HTML templates with Underscore. The "tmpl2" method generates an empty string when I load the template markup from the HTML template script. Are there any issues between Browserify and Underscore, or why is it rendering an empty string? (I use the latest versions of browserify, underscore, backbone, jquery) View.js: var $ = require('jquery'); var Backbone = require('backbone'); var _ = require('underscore'); Backbone.$ = $; var View = Backbone.View.extend({ tmpl1: _.template("<p>hello: <%= name %></p>"), //HTML hardcoded tmpl2: _.template( $.trim( $('#tmpl').html() ) ), //HTML from template render: function(){ console.log( $.trim( $('#tmpl').html() ) ); //<p>hello: <%= name %></p> <-- OK console.log( this.tmpl1({name : 'moe'}) ); //<p>hello: moe</p> <-- OK console.log( this.tmpl2({name : 'moe'}) ); //(Emptystring) <-- WTF ??? } }); module.exports = View; index.html: <script type="text/template" id="tmpl"> <p>hello: <%= name %></p> </script> A: Your issue is most likely that at the point where you're compiling your template the DOM hasn't loaded, while your render function is presumably not called until later, which is why you are able to log the template at that point. When you declare a Backbone view, the statements that assign a value to its prototype are executed right away. For example, in your case the following line is executed right away tmpl2: _.template( $.trim( $('#tmpl').html() ) ), //HTML from template You can instead compile the template in your initialize function (assuming that it is called after the DOM has loaded). For example initialize: function () { this.tmpl1 = _.template("<p>hello: <%= name %></p>"); //HTML hardcoded this.tmpl2 = _.template( $.trim( $('#tmpl').html() ) ); } However, this has the disadvantage that the template is compiled for every instance of your view; what would probably make more sense in your case is to store your template separately, require it, and use that.
{ "language": "en", "url": "https://stackoverflow.com/questions/31840620", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Shared memory and communication between programs I read this: python singleton into multiprocessing but I didn't find the solution to my problem. I have to run the same program (not the same process) many times at once. The programs work with the same electronic devices, and I must synchronize them: only one program can use a device at a time. Do you have any suggestions on how I can solve this problem? A: You could use lockfiles in the filesystem.
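A minimal Python 3 sketch of that idea, assuming all copies of the program can see the same filesystem path (the path itself is just an example); note that a crashed program leaves a stale lock behind, so a real implementation usually also stores the owner's PID and checks whether it is still alive:
import os
import time

LOCK_PATH = "/tmp/device.lock"   # example path shared by every copy of the program

def acquire_lock():
    # O_EXCL makes the create atomic: it fails if another copy already holds the lock.
    while True:
        try:
            fd = os.open(LOCK_PATH, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
            os.write(fd, str(os.getpid()).encode())
            os.close(fd)
            return
        except FileExistsError:
            time.sleep(0.1)      # another program is using the device; wait and retry

def release_lock():
    os.remove(LOCK_PATH)

acquire_lock()
try:
    pass  # talk to the device here
finally:
    release_lock()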
{ "language": "en", "url": "https://stackoverflow.com/questions/3289040", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: CSS Translate and Scale on Y-Axis? I am taking an online course learning CSS and we are just covering CSS animations. I am trying to practice some of the things I learned (just basic transforms for now) by creating a small animation of a man walking towards the screen down a pathway. Basically, I want to both translate and scale my image at the same time. I got this working fine, but now I also wanted to add some small rotation so that it looks like the man is slightly moving left and right. Here is my code in a jsfiddle, I don't know how to change the transform-origin so that the man is walking in a straight line on the Y-Axis, the scale makes him walk in a diagonal. I hope that makes sense... The commented out part of the code includes the scale, as soon as that is added back, and the part without scale is commented out, it acts funny and I'm thinking this has to do with the origin? https://jsfiddle.net/qLLqdxbm/ HTML: <div class="man-scale"> <img class="man-walk" src="http://clipart-library.com/img/1184697.png"> </div> CSS: .man-walk { width: 100px; height: 125px; position: absolute; top: 0; left: 50px; animation-name: man-walk; animation-duration: 0.45s; animation-iteration-count: infinite; } @keyframes man-walk { 0% { transform: rotate(0deg); } 25% { transform: rotate(1.5deg); } 50% { transform: rotate(0deg); } 75% { transform: rotate(-1.5deg); } 100% { transform: rotate(0deg); } } .man-scale { width: 100px; height: 125px; animation-name: man-scale; animation-duration: 2s; animation-timing-function: linear; animation-iteration-count: infinite; } /* define the animation */ @keyframes man-scale { /* 0% { transform: translate(0px, 5px) scale(1.1); } 25% { transform: translate(0px, 15px) scale(1.5); } 50% { transform: translate(0px, 25px) scale(1.7); } 75% { transform: translate(0px, 35px) scale(2.0); } 100% { transform: translate(0px, 45px) scale(2.3); } */ 0% { transform: translate(0px, 5px); } 25% { transform: translate(0px, 15px); } 50% { transform: translate(0px, 25px); } 75% { transform: translate(0px, 35px); } 100% { transform: translate(0px, 45px); } } Thanks for the help! A: Each time you scale the image along X and Y, the origin shifts in both dimensions by a specific offset. If you can compensate for that offset in the X dimension then a vertical animation could be achieved. In this case in first keyframe the scale increased by 0.1 which is 100 * 0.1 = 10px now origin got offset by 5px in X dimension, compensating in terms of translateX(-5px). Similarly for all the other keyframes. If you want a faster animation in the Y dimension just increase the Y translate values without touching the X translation values. 
.man-walk { width: 100px; height: 125px; position: absolute; top: 0; left: 50px; animation-name: man-walk; animation-duration: 0.45s; animation-iteration-count: infinite; } @keyframes man-walk { 0% { transform: rotate(0deg); } 25% { transform: rotate(1.5deg); } 50% { transform: rotate(0deg); } 75% { transform: rotate(-1.5deg); } 100% { transform: rotate(0deg); } } .man-scale { width: 100px; height: 125px; animation-name: man-scale; animation-duration: 2s; animation-timing-function: linear; animation-iteration-count: infinite; } /* define the animation */ @keyframes man-scale { 0% { transform: translate(-5px, 30px) scale(1.1); } 25% { transform: translate(-20px, 70px) scale(1.4); } 50% { transform: translate(-35px, 120px) scale(1.7); } 75% { transform: translate(-50px, 180px) scale(2.0); } 100% { transform: translate(-65px, 250px) scale(2.3); } } <div class="man-scale"> <img class="man-walk" src="http://clipart-library.com/img/1184697.png"> </div> There might be some advanced CSS techniques to calculate the offset automatically.
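If you would rather not hand-compute each offset, the compensation above follows the rule offsetX = (1 - scale) * half the element width, i.e. (1 - scale) * 50px for this 100px-wide image, and calc() can do that arithmetic in place (showing only the 0%, 50% and 100% keyframes from the snippet above):
@keyframes man-scale {
  0%   { transform: translate(calc((1 - 1.1) * 50px), 30px)  scale(1.1); }
  50%  { transform: translate(calc((1 - 1.7) * 50px), 120px) scale(1.7); }
  100% { transform: translate(calc((1 - 2.3) * 50px), 250px) scale(2.3); }
}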
{ "language": "en", "url": "https://stackoverflow.com/questions/47299038", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: RewriteCond with the famous !-f and !-d conditions is not working I have .htaccess file with the following directives: RewriteEngine on RewriteBase /amit/public/ RewriteCond ${REQUEST_FILENAME} !-f [OR] RewriteCond ${REQUEST_FILENAME} !-d [OR] RewriteCond ${REQUEST_FILENAME} !-s [OR] RewriteCond ${REQUEST_FILENAME} !-l RewriteRule ^(.*[^/])/?$ $1.php [NC,L] Pretty basic redirection. However, I'm getting an infinite internal loop in my apache log files with a 500 error in the browser. The apache log file has the following: r->uri = /amit/public/activities.php.php.php.php.php.php.php.php.php redirected from r->uri = /amit/public/activities.php.php.php.php.php.php.php.php redirected from r->uri = /amit/public/activities.php.php.php.php.php.php.php redirected from r->uri = /amit/public/activities.php.php.php.php.php.php redirected from r->uri = /amit/public/activities.php.php.php.php.php redirected from r->uri = /amit/public/activities.php.php.php.php redirected from r->uri = /amit/public/activities.php.php.php redirected from r->uri = /amit/public/activities.php.php redirected from r->uri = /amit/public/activities.php redirected from r->uri = /amit/activities.php redirected from r->uri = /activities.php That's when I enter the page with the extension. Makes no difference when I try to access the page without the extension. The page exists and I've double checked everything, but it seems to me that the main issue is the RewriteCond. Any help is appreciated. A: You don't need 4 conditions and need to have them ANDed together: RewriteEngine on RewriteBase /amit/public/ RewriteCond %{ENV:REDIRECT_STATUS} ^$ RewriteCond %{REQUEST_FILENAME} !-l RewriteCond ${REQUEST_FILENAME} !-d RewriteRule ^(.+?)/?$ $1.php [L] A: Thanks to @anubhava. I've just realized that my code uses $ for all the server variables. It should be %. A silly typo had hacked my brain for close to 24 hours... So the 'ultimate' code with the suggestions made by @anubhava would be: Options +FollowSymlinks RewriteEngine on RewriteBase /amit/public/ RewriteCond %{ENV:REDIRECT_STATUS} ^$ RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteCond %{REQUEST_FILENAME} !-s RewriteCond %{REQUEST_FILENAME} !-l RewriteRule ^(.*[^/])/?$ $1.php [NC,L] Works like a charm.
{ "language": "en", "url": "https://stackoverflow.com/questions/51279265", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How does Facebook do it? Checking the file type? Possible Duplicate: How to check file types of uploaded files in PHP? If you create a text file, rename it to anything.jpg and try uploading it to Facebook, Facebook detects that the file is not an image and says "Please select an image file" or something like that. How do they do it? I tested it out on my localhost by creating a dummy HTML form with an <input type="file"... element and uploading an "image" created by renaming a text file to something.jpg, and the file type in $_FILES['control_name']['type'] showed image/jpeg... How do I block users from uploading such 'fake' images? I think relying on $_FILES['control_name']['type'] is not a solution, right? A: When you process the image on the server, use an image manipulation library (getimagesize for example) to detect its width and height. When this fails, reject the image. You will probably do this anyway to generate a thumbnail, so it is just one extra if. A: There are many ways of checking the actual files. How Facebook does it, only the people who built it really know, I think :). Most likely they look at the first bytes of the file. All files have certain bytes describing what they truly are. For this, however, you need loads of time/money creating a database or similar against which you can validate the uploads. More common solutions are: FORM attribute In a lot of browsers, of course excluding Internet Explorer, you can set an accept attribute which checks extensions client-side. More info here: File input 'accept' attribute - is it useful? Extension This is not really secure, since a script can be saved with an image extension. Read file MIME TYPE This is the approach you describe in your question. However, it is also easy to bypass and relies on your server being up to date. Processing the image The most reliable option (for most developer skills and available time) is to process the image as a test. Feed it to a library like GD or Imagick; they will raise errors when an "image" is not really an image. This, however, requires you to keep that software up to date. In short, there is no 100% guarantee of catching this without spending tons of hours, and even then you only get 99.9%. You should weigh your available time against the above options and choose what best suits you. As a best practice I recommend a combination of all three. This topic is also discussed in Security: How to validate image file uploads? A: Headers in your file won't be the same.
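As a rough sketch of the getimagesize() check suggested in the first answer (the $_FILES key 'upload' is just an example name):
<?php
$tmp  = $_FILES['upload']['tmp_name'];
$info = @getimagesize($tmp);               // false when the file is not a real image

if ($info === false) {
    die('Please select an image file.');
}

// Check the detected MIME type, not the client-supplied $_FILES[...]['type']
$allowed = array('image/jpeg', 'image/png', 'image/gif');
if (!in_array($info['mime'], $allowed, true)) {
    die('Unsupported image type.');
}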
{ "language": "en", "url": "https://stackoverflow.com/questions/14672724", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
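A short server-side sketch of the getimagesize/finfo checks described above, assuming PHP 7+; the form field name and the MIME whitelist are placeholders, not part of the original question:

<?php
// Reject uploads whose bytes are not a real image, regardless of extension or
// the client-supplied MIME type in $_FILES[...]['type'].
$tmpPath = $_FILES['upload']['tmp_name'] ?? null;

if ($tmpPath === null || !is_uploaded_file($tmpPath)) {
    die('No valid upload.');
}

// getimagesize() parses the actual image header and fails on renamed text files.
$info = @getimagesize($tmpPath);
if ($info === false) {
    die('Please select an image file.');
}

// Optionally double-check the detected MIME type against a whitelist.
$allowed = ['image/jpeg', 'image/png', 'image/gif'];
$mime = (new finfo(FILEINFO_MIME_TYPE))->file($tmpPath);
if (!in_array($mime, $allowed, true)) {
    die('Unsupported image type.');
}

echo 'Looks like a real ' . $mime . ' image, ' . $info[0] . 'x' . $info[1] . ' px.';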
Q: Table with div layers I have a table inside a div 1. Then after that div 1 added another div 2 with position:relative; top:-250; so that div 2 layer will be right on top of the table. But now below the table there is a big space before anything on the page can resume displaying (I guess the second div 2 would have normally been without the -250 position change?) How do I get rid of the space and clear it? I tried this... <div style="clear:both;"></div> ...and it didn't do anything A: Using absolute positioning rather than relative may do the trick. I'll test this theory and edit my answer accordingly. Edit: using position: absolute; margin-top: -250px; seems to be the solution.
{ "language": "en", "url": "https://stackoverflow.com/questions/3844201", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
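A small sketch of the overlay approach from the answer above: the wrapper is position: relative and the layer is position: absolute, so the layer is taken out of the normal flow and no gap is left below the table. The class names are made up for illustration.

<div class="wrapper" style="position: relative;">
  <table>
    <!-- table content -->
  </table>
  <div class="overlay"
       style="position: absolute; top: 0; left: 0; width: 100%; height: 100%;">
    <!-- layer shown on top of the table -->
  </div>
</div>
<p>Content below resumes immediately; the absolutely positioned layer reserves no space.</p>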
Q: R: Plot multiple time series in a loop.Error in xy.coords(x, y, xlabel, ylabel, log) : 'x' and 'y' lengths differ I'm trying to plot many time series from a database with 134 columns, one ts per colum and after that, save it in a .jpg file. I'm using a loop. I notice that the ploblem could be the plot() function. Some data with dput() structure(list(X9003 = c(3L, 8L, 6L, 6L, 2L, 3L, 6L, 2L, 6L, 3L), X9004 = c(NA, NA, NA, NA, NA, NA, NA, NA, NA, 0L), X9007 = c(5L, 8L, 6L, 11L, 4L, 6L, 7L, 9L, 4L, 5L), X9009 = c(1L, 2L, 3L, 0L, 0L, 1L, 0L, 1L, 0L, 4L), X9010 = c(NA, NA, NA, NA, NA, NA, NA, NA, 3L, 15L), X9012 = c(0L, 0L, 0L, 2L, 3L, 0L, 1L, 1L, 0L, 0L ), X9014 = c(NA_integer_, NA_integer_, NA_integer_, NA_integer_, NA_integer_, NA_integer_, NA_integer_, NA_integer_, NA_integer_, NA_integer_), X9016 = c(NA, NA, NA, NA, NA, NA, 12L, 27L, 7L, 12L), X9019 = c(3L, 8L, 27L, 38L, 19L, 27L, 25L, 47L, 5L, 3L), X9020 = c(0L, 1L, 0L, 0L, 4L, 6L, 2L, 2L, 5L, 3L)), row.names = c(NA, 10L), class = "data.frame") And dput(dat_nombres[1:10,1:2]) structure(list(ESTACIONES = c(9003L, 9004L, 9007L, 9009L, 9010L, 9012L, 9014L, 9016L, 9019L, 9020L), CLAVE = c(9003L, 9004L, 9007L, 9009L, 9010L, 9012L, 9014L, 9016L, 9019L, 9020L)), row.names = c(NA, 10L), class = "data.frame") My code: data<- read.csv("GranizoAnualCompleto_1961-2019_columnas.csv", header = TRUE) dat_nombres<-read.csv("Nombres_Estaciones_CLICOM.csv", header=TRUE ) Estaciones<-dat_nombres$ESTACIONES # List Estacionestmp<-data[,-1] # Loop for (i in seq_along (Estacionestmp)) { serie<-ts(Estacionestmp[,i], start = 1961, frequency = 1) mypath<-file.path("C:","00 FENIX","03 Data", "Data_Est_Select_1961- 2019", "SeriesTiempo_GranizoAnual", paste("Ts_Granizo4_", Estaciones[i],".jpg", sep = "")) jpeg(filename = mypath) plot(serie, col="darkblue", main="Días con granizo 1961-2019 \n Estación", Estaciones[i] , cex.main=2, sub=paste("Estación",Estaciones[i]), cex.sub= 1.5, col.main="darkred", xlab= "Tiempo", ylab= "Días con granizo", cex.lab=0.8, col.axis= "gray34", col.lab= "gray25", type= "l" ) dev.off() } The error message are: Error in xy.coords(x, y, xlabel, ylabel, log) : 'x' and 'y' lengths differ But when I just code plot(serie) (without any details in the plot function) in the loop, there are no ploblem with the result and the files are created in the specific path that I have defined. Recently, I used this code but on a dataframe with 20 columns and it worked. Why that details in the plot function argument are causing the error message? How can I fix the loop for create multiple time series plots? Is there any better solution to do it? Thank you so much! EDIT: I find that the problem are when I trying to change the main for name and subpaste every plot, the problem with the Estaciones[i]; I prove write a # before for comment it, and now it works! So, I converted it to a list Estaciones_list<-as.list(Estaciones) and tried with this new list in the main for every plot but it doesn't work either. Any suggestions? Thanks!
{ "language": "en", "url": "https://stackoverflow.com/questions/58783946", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
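For the question above, the error most likely comes from the stray comma in the plot() call: main="...", Estaciones[i] passes Estaciones[i] as an extra unnamed argument (treated as y), so the x and y lengths differ. A sketch of the loop with the title built via paste(); column names and paths follow the question:

for (i in seq_along(Estacionestmp)) {
  serie <- ts(Estacionestmp[, i], start = 1961, frequency = 1)
  mypath <- file.path("C:", "00 FENIX", "03 Data", "Data_Est_Select_1961-2019",
                      "SeriesTiempo_GranizoAnual",
                      paste0("Ts_Granizo4_", Estaciones[i], ".jpg"))
  jpeg(filename = mypath)
  plot(serie,
       col = "darkblue",
       main = paste("Días con granizo 1961-2019\nEstación", Estaciones[i]),
       sub = paste("Estación", Estaciones[i]),
       cex.main = 2, cex.sub = 1.5, col.main = "darkred",
       xlab = "Tiempo", ylab = "Días con granizo",
       cex.lab = 0.8, col.axis = "gray34", col.lab = "gray25",
       type = "l")
  dev.off()
}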
Q: PHP function to extract a field value from a database I'm trying to get a value of a mysql database using a php function. I have a database like this: NAME | EMAIL | PASSWORD | OTHER ------------------------------------------------------- example | [email protected] | password | other ------------------------------------------------------- example2 | [email protected] | password2 | other2 and in my PHP file I've tried to use this function: function selectUserField($email, $field, $connection){ $select_user = "SELECT '$field' FROM users WHERE email='$email' LIMIT 1"; $result = mysqli_query($connection, $select_user); $value = mysqli_fetch_assoc($result); return $value[$field]; } //And I try to echo the result of the function $connection = new mysqli(DB_HOST, DB_USER, DB_PASSWORD, DB_NAME); echo selectUserField("[email protected]", "name", $connection); But as result I get only the field name and not its content (for this example I get "NAME" and not "example"). How can i do to get the content of the database cell? A: Try this function selectUserField($email, $field, $connection){ $select_user = "SELECT `$field` FROM users WHERE `email`='$email' LIMIT 1"; //wrap it with ` around the field or don't wrap with anything at all $result = mysqli_query($connection, $select_user); $value = mysqli_fetch_assoc($result); return $value[$field]; } //And I try to echo the result of the function $connection = new mysqli(DB_HOST, DB_USER, DB_PASSWORD, DB_NAME); echo selectUserField("[email protected]", "name", $connection); A: No quotes around $field. "SELECT $field FROM users WHERE email='$email' LIMIT 1";
{ "language": "en", "url": "https://stackoverflow.com/questions/36549728", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
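A hedged variant of the helper above that uses a prepared statement for the email and a whitelist for the field name (column names cannot be bound as parameters). The whitelist is an assumption based on the table shown, and get_result() requires the mysqlnd driver:

<?php
function selectUserField($email, $field, $connection) {
    // Column names cannot be bound, so restrict them to a known whitelist.
    $allowed = ['name', 'email', 'password', 'other'];
    if (!in_array($field, $allowed, true)) {
        return null;
    }

    $stmt = $connection->prepare("SELECT `$field` FROM users WHERE email = ? LIMIT 1");
    $stmt->bind_param('s', $email);
    $stmt->execute();
    $row = $stmt->get_result()->fetch_assoc();
    $stmt->close();

    return $row[$field] ?? null;
}

$connection = new mysqli(DB_HOST, DB_USER, DB_PASSWORD, DB_NAME);
echo selectUserField("example@mail.com", "name", $connection);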
Q: How to continue the code on the next line in VBA I would like to type a mathematical formula in VBA code that spans many lines, and I would like to split it across several lines. How do I do it? For example: U_matrix(i, j, n + 1) = k * b_xyt(xi, yi, tn) / (4 * hx * hy) * U_matrix(i + 1, j + 1, n) + (k * (a_xyt(xi, yi, tn) / hx ^ 2 + d_xyt(xi, yi, tn) / (2 * hx))) is very long and I would like to split it. I tried this: U_matrix(i, j, n + 1) = k * b_xyt(xi, yi, tn) / (4 * hx * hy) * U_matrix(i + 1, j + 1, n) _+ (k * (a_xyt(xi, yi, tn) / hx ^ 2 + d_xyt(xi, yi, tn) / (2 * hx))) But it is not working. I need some guidance on this. A: If you want to insert this formula =SUMIFS(B2:B10,A2:A10,F2) into cell G2, here is how I did it. Range("G2")="=sumifs(B2:B10,A2:A10," & _ "F2)" To split a line of code, add an ampersand (to concatenate the string parts), then a space and an underscore. A: U_matrix(i, j, n + 1) = k * b_xyt(xi, yi, tn) / (4 * hx * hy) * U_matrix(i + 1, j + 1, n) + _ (k * (a_xyt(xi, yi, tn) / hx ^ 2 + d_xyt(xi, yi, tn) / (2 * hx))) From MS support: To continue a statement from one line to the next, type a space followed by the line-continuation character [the underscore character on your keyboard (_)]. You can break a line at an operator, list separator, or period. A: In VBA (and VB.NET) the line terminator (carriage return) is used to signal the end of a statement. To break long statements into several lines, you need to Use the line-continuation character, which is an underscore (_), at the point at which you want the line to break. The underscore must be immediately preceded by a space and immediately followed by a line terminator (carriage return). (From How to: Break and Combine Statements in Code) In other words: Whenever the interpreter encounters the sequence <space>_<line terminator>, it is ignored and parsing continues on the next line. Note that, even when ignored, the line continuation still acts as a token separator, so it cannot be used in the middle of a variable name, for example. You also cannot continue a comment by using a line-continuation character. To break the statement in your question into several lines you could do the following: U_matrix(i, j, n + 1) = _ k * b_xyt(xi, yi, tn) / (4 * hx * hy) * U_matrix(i + 1, j + 1, n) + _ (k * (a_xyt(xi, yi, tn) / hx ^ 2 + d_xyt(xi, yi, tn) / (2 * hx))) (Leading whitespaces are ignored.) A: To continue a statement onto a new line you use _ Example: Dim a As Integer a = 500 _ + 80 _ + 90 MsgBox a
{ "language": "en", "url": "https://stackoverflow.com/questions/22854386", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "64" }
Q: How can I implement model revisions in Laravel? This question is for my pastebin app written in PHP. I did a bit of a research, although I wasn't able to find a solution that matches my needs. I have a table with this structure: +-----------+------------------+------+-----+---------+----------------+ | Field | Type | Null | Key | Default | Extra | +-----------+------------------+------+-----+---------+----------------+ | id | int(12) unsigned | NO | PRI | NULL | auto_increment | | author | varchar(50) | YES | | | | | authorid | int(12) unsigned | YES | | NULL | | | project | varchar(50) | YES | | | | | timestamp | int(11) unsigned | NO | | NULL | | | expire | int(11) unsigned | NO | | NULL | | | title | varchar(25) | YES | | | | | data | longtext | NO | | NULL | | | language | varchar(50) | NO | | php | | | password | varchar(60) | NO | | NULL | | | salt | varchar(5) | NO | | NULL | | | private | tinyint(1) | NO | | 0 | | | hash | varchar(12) | NO | | NULL | | | ip | varchar(50) | NO | | NULL | | | urlkey | varchar(8) | YES | MUL | | | | hits | int(11) | NO | | 0 | | +-----------+------------------+------+-----+---------+----------------+ This is for a pastebin application. I basically want paste revisions so that if you open paste #1234, it shows all past revisions of that paste. I thought of three ways: Method 1 Have a revisions table with id and old_id or something and for each ID, I would insert all old revisions, so if my structure looks like this: rev3: 1234 rev2: 1233 rev1: 1232 The table will contain this data: +-------+----------+ | id | old_id | +-------+----------+ | 1234 | 1233 | | 1234 | 1232 | | 1233 | 1232 | +-------+----------+ The problem which I have with this is that it introduces a lot of duplicate data. And the more the revisions get, it has not only more data but I need to do N inserts for each new paste to the revisions table which is not great for a large N. Method 2 I can add a child_id to the paste table at the top and just update that. And then, when fetching the paste, I will keep querying the db for each child_id and their child_id and so on... But the problem is, that will introduce too many DB reads each time a paste with many revisions is opened. Method 3 Also involves a separate revisions table, but for the same scenario as method 1, it will store the data like this: +-------+-----------------+ | id | old_id | +-------+-----------------+ | 1234 | 1233,1232 | | 1233 | 1232 | +-------+-----------------+ And when someone opens paste 1234, I'll use an IN clause to fetch all child paste data there. Which is the best approach? Or is there a better approach? I am using Laravel 4 framework that has Eloquent ORM. EDIT: Can I do method 1 with a oneToMany relationship? I understand that I can use Eager Loading to fetch all the revisions, but how can I insert them without having to do a dirty hack? EDIT: I figured out how to handle the above. I'll add an answer to close this question. A: If you are on Laravel 4, give Revisionable a try. This might suite your needs A: So here is what I am doing: Say this is the revision flow: 1232 -> 1233 -> 1234 1232 -> 1235 So here is what my revision table will look like: +----+--------+--------+ | id | new_id | old_id | +----+--------+--------+ | 1 | 1233 | 1232 | | 2 | 1234 | 1233 | | 3 | 1234 | 1232 | | 4 | 1235 | 1232 | +----+--------+--------+ IDs 2 and 3 show that when I open 1234, it should show both 1233 and 1232 as revisions on the list. 
Now the implementation bit: I will have the Paste model have a one to many relationship with the Revision model. * *When I create a new revision for an existing paste, I will run a batch insert to add not only the current new_id and old_id pair, but pair the current new_id with all revisions that were associated with old_id. *When I open a paste - which I will do by querying new_id, I will essentially get all associated rows in the revisions table (using a function in the Paste model that defines hasMany('Revision', 'new_id')) and will display to the user. I am also thinking about displaying the author of each revision in the "Revision history" section on the "view paste" page, so I think I'll also add an author column to the revision table so that I don't need to go back and query the main paste table to get the author. So that's about it! A: There are some great packages to help you keeping model revisions: * *If you only want to keep the models revisions you can use: Revisionable * *If you also want to log any other actions, whenever you want, with custom data, you can use: Laravel Activity Logger Honorable mentions: * *Activity Log. It also has a lot of options.
{ "language": "en", "url": "https://stackoverflow.com/questions/18502697", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
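A rough Eloquent sketch of the self-answer's approach, written in Laravel 4 style as in the question; model, table and column names mirror the description but are otherwise assumptions:

class Paste extends Eloquent {
    // All revision rows that point at this paste as their newest version
    public function revisions()
    {
        return $this->hasMany('Revision', 'new_id');
    }
}

class Revision extends Eloquent {
    protected $fillable = array('new_id', 'old_id', 'author');
}

// When saving a new revision $newId of paste $oldId:
// pair it with the old id and with every id the old revision already had.
$rows = array(array('new_id' => $newId, 'old_id' => $oldId));
foreach (Revision::where('new_id', $oldId)->get() as $rev) {
    $rows[] = array('new_id' => $newId, 'old_id' => $rev->old_id);
}
Revision::insert($rows); // one batch insert

// When viewing a paste, its revision history is simply:
$history = Paste::find($newId)->revisions;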
Q: Why Python.h of python 3.2 must be included as first together with Qt4 I have a qt application and I want to implement python interpreter into it so that I can extend it with python scripts. While this works fine for regular C++ application, including Python.h even for most simple, empty Qt4 project always result in: g++ -c -m64 -pipe -O2 -Wall -W -D_REENTRANT -DQT_WEBKIT -DQT_NO_DEBUG -DQT_CORE_LIB -DQT_SHARED -I/usr/share/qt4/mkspecs/linux-g++-64 -I. -I/usr/include/qt4/QtCore -I/usr/include/qt4 -I/usr/include/python3.2mu -I. -o main.o main.cpp In file included from /usr/include/python3.2mu/Python.h:8:0, from main.cpp:16: /usr/include/python3.2mu/pyconfig.h:1182:0: warning: "_POSIX_C_SOURCE" redefined [enabled by default] /usr/include/features.h:164:0: note: this is the location of the previous definition /usr/include/python3.2mu/pyconfig.h:1204:0: warning: "_XOPEN_SOURCE" redefined [enabled by default] /usr/include/features.h:166:0: note: this is the location of the previous definition In file included from /usr/include/python3.2mu/Python.h:67:0, from main.cpp:16: /usr/include/python3.2mu/object.h:402:23: error: expected unqualified-id before ‘;’ token make: *** [main.o] Error 1 I only implemented this in my .pro file: INCLUDEPATH += "/usr/include/python3.2" now anytime when I do #include <Python.h> in any .h file it makes it unbuildable. Why is that? Note: This all works perfectly with python 2.7, just python 3x doesn't work EDIT: I figured out, that when I include Python.h as first file, before Qt includes, it works, is this a bug in python? Are they missing some safe guards? A: The documentation of the Python C-API states: Note Since Python may define some pre-processor definitions which affect the standard headers on some systems, you must include Python.h before any standard headers are included. It is very likely that some of the Qt headers include standard headers (as evident from the error you get, it does include /usr/include/features.h, or example), therefore #include <Python.h> should be placed before the Qt headers. In fact, it should generally be placed before any other include-statement. Note that this is the case with Python 2.7, too. If a different include order works for you with Python 2.7, then you are simply lucky.
{ "language": "en", "url": "https://stackoverflow.com/questions/20300201", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
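A minimal illustration of the include order the accepted answer calls for; the file and function names are arbitrary. In the .pro file, INCLUDEPATH += /usr/include/python3.2mu and LIBS += -lpython3.2mu (paths as on the asker's system) would typically accompany it.

// mypythonbridge.cpp
// Python.h must come first: it defines macros such as _POSIX_C_SOURCE that
// affect the standard headers pulled in by the Qt includes.
#include <Python.h>

#include <QtCore/QString>
#include <QtCore/QDebug>

void runSnippet(const QString &code)
{
    Py_Initialize();
    PyRun_SimpleString(code.toUtf8().constData());
    Py_Finalize();
}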
Q: Best SCM practices for live databases I've been building out an SCM environment (that has to be PCI compliant, but that's tangential to this issue). I've got to the point where I want to automate database updates, but I'm not 100% sure that's the best way forward. Say I want to add a field to a DB table. Easy enough to add it to the dev environment, but what about rolling out to the live environment? I took a look at MySQL::Diff but the thought of spending time completely automating this seems like overkill for me. I want to have a rollback option, and want to avoid the overkill of complete DB duplication. All the tutorials I've found on SCM appear to either not cover this, or say it can be very messy. Is there a best practice for this? Or should I just use MySQL diff to identify changes and backup individual tables before manually tweaking at rollout? A: We store an SQL script of the changes that are needed in git, with the branch that contains the changes. This SQL script can be applied repeatedly to "fresh" copies of production data, verifying that it will work as expected. It is our strong opinion that a focused DBA or Release Engineer should apply the changes, AFTER making appropriate backups and restoration measures. -- Navicat MySQL is also an amazing tool for helping with that. Their schema diff tool is great for verifying changes, and even applying them.
{ "language": "en", "url": "https://stackoverflow.com/questions/1392024", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
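A sketch of the kind of change script described in the answer, stored in the branch alongside the code; the table and column names are illustrative only:

-- 2013-09-07_add_last_login_to_users.up.sql
-- Applied by the DBA/release engineer after taking a backup.
ALTER TABLE users
    ADD COLUMN last_login DATETIME NULL;

-- 2013-09-07_add_last_login_to_users.down.sql
-- Rollback script kept next to the forward script.
ALTER TABLE users
    DROP COLUMN last_login;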
Q: How to remove previous text from a calculation? I have recently started learning Kivy and made a calculator app, but I can't figure out how to remove the previous result when a button is pressed to start the next calculation; the text only gets removed when Clear is used. Here is the code https://github.com/Rakshan22/Calcy2 . Does anyone here know the answer to this question? Thanks for the assistance! A: You should identify when the text is the final answer and reset the text before adding the new one. from kivy.app import App from kivy.core.window import Window from kivy.uix.widget import Widget Window.size = (350, 450) class MainWidget(Widget): def __init__(self, **kwargs): super().__init__(**kwargs) self.textIsResult = False def clear(self): self.ids.input.text="" def back(self): expression = self.ids.input.text expression = expression[:-1] self.ids.input.text = expression def pressed(self, button): expression = self.ids.input.text if self.textIsResult or "Fault" in expression: expression = "" self.textIsResult = False if expression == "0": self.ids.input.text = f"{button}" else: self.ids.input.text = f"{expression}{button}" def answer(self): expression = self.ids.input.text try: self.ids.input.text = str(eval(expression)) self.textIsResult = True except Exception: self.ids.input.text = "Fault" class TheLabApp(App): pass TheLabApp().run()
{ "language": "en", "url": "https://stackoverflow.com/questions/72669380", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to make Android TestRunner wait until activity finishes I am closing an Activity at the end of a testcase, so that it is null at the beginning of the next testcase: public void test1() throws Exception { //activity is calling the RegisterActivity in onCreate activity = getActivity(); // register next activity that need to be monitored. RegisterActivity nextActivity = (RegisterActivity) getInstrumentation() .waitForMonitorWithTimeout(activityMonitor, 5); // next activity is opened and captured. assertNotNull(nextActivity); if (nextActivity != null) { nextActivity.finish(); } } public void test2() throws Exception { //nextActivity.finish() from the previous test has not yet finished //body of test2 //... //... } If I set a Thread sleep in test1 then problem is solved: if (nextActivity != null) { nextActivity.finish(); Thread.sleep(1500); } Is there a better way to to this? Is there a method that blocks the TestRunner until nextActivity is finished? A: Try creating a new thread for the nextActivity to run in. After calling the thread.start() method, call thread.join() where you want the TestRunner blocked.
{ "language": "en", "url": "https://stackoverflow.com/questions/19811919", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Call C++ DLL functions from C# without blocking the form controls I have an interesting case study for you :) I have a problem with a wrapper integration. I made a DLL written in C++ (CLR, Windows). This DLL is called from a C# application; up to here everything is fine. The C# application comes from a third party and I cannot modify its source code. When I call a C++ function from a button, for example, the application is blocked and I can't do anything else until the C++ function returns. I need the form controls to stay enabled while I am waiting for a C++ function, and I need to run some additional processing in the meantime. I tried to do it with async methods and threads, but I can't figure out how to set it up. C++ function: __declspec(dllexport) HANDLE openport(char *ComPort, int BR); C# function: [DllImport("mydll.dll")] public static extern IntPtr openport(string ComPort, int BR); Thanks in advance for your help. Regards. A: Your code needs to run in a new thread. Look into the System.Threading namespace for instructions and examples of how to create a new thread. Essentially, you create the thread. Here is an example from one of my old test programs. Thread thdOneOfTwo = new Thread(new ParameterizedThreadStart(TextLogsWorkout.DoThreadTask)); In the above example, TextLogsWorkout.DoThreadTask is a static method on class TextLogsWorkout, which also happens to contain the above statement. You have the option of giving each thread a name, and of using a WaitHandle that it can signal when it has completed its assignment. Both are optional, but you must execute the Start method on the instance. Be aware that you are entering the world of multi-threaded programming, where many hazards await the unwary. If you aren't already, I suggest you read up on mutexes, wait handles, and the intrinsic lock () block. Of the three, lock() is the simplest way that a single application can synchronize access to a property. Of the other two, WaitHandles and Mutexes are about equal in complexity. However, while a WaitHandle can synchronize activities within a process, a Mutex is an operating-system object and, as such, can synchronize activities between multiple processes. In that regard, be aware that if a mutex has a name, as it must if it is to synchronize more than one process, that name must begin with "Global\", unless all of the processes run in the same session. Failure to do this bit me really hard a few years ago.
{ "language": "en", "url": "https://stackoverflow.com/questions/31756959", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
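A hedged C# sketch of pushing the blocking P/Invoke call onto a worker thread with Task.Run, which keeps a WinForms UI responsive while the native call runs. The openport import is the one shown in the question; the form, button and port values are made up. Note that this only helps for code you control: if the third-party C# host calls your wrapper synchronously on the UI thread, the wrapper has to expose an asynchronous entry point like this one.

using System;
using System.Runtime.InteropServices;
using System.Threading.Tasks;
using System.Windows.Forms;

public static class MyDllWrapper
{
    [DllImport("mydll.dll")]
    public static extern IntPtr openport(string ComPort, int BR);

    // Wraps the blocking native call so callers can await it off the UI thread.
    public static Task<IntPtr> OpenPortAsync(string comPort, int baudRate)
    {
        return Task.Run(() => openport(comPort, baudRate));
    }
}

public partial class MainForm : Form
{
    private async void openButton_Click(object sender, EventArgs e)
    {
        openButton.Enabled = false;                       // UI stays responsive meanwhile
        IntPtr handle = await MyDllWrapper.OpenPortAsync("COM3", 9600);
        openButton.Enabled = true;
        MessageBox.Show(handle == IntPtr.Zero ? "Failed" : "Port opened");
    }
}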
Q: How to open another activity from Interface method I write an application in Kotlin language on the platform Firebase. And after authentication, I need to open another activity from Interface. While checking I get the error in this line: startActivity(Intent(applicationContext, AnotherActivity::class.java)) Attempt to invoke virtual method 'android.content.Context android.content.Context.getApplicationContext()' on a null object reference My code: Interface: interface OnResultSuccessListener { fun onResultSuccess(isSuccess: Boolean) } class an Authentication: class AuthenticationActivity : AppCompatActivity(), OnResultSuccessListener { override fun onCreate(savedInstanceState: Bundle?) { super.onCreate(savedInstanceState) setContentView(R.layout.activity_authentication) ... } override fun onResultSuccess(isSuccess: Boolean) { if (isSuccess) { startActivity(Intent(applicationContext, AnotherActivity::class.java)) /*ERROR IS HERE*/ overridePendingTransition(R.anim.fade_in, R.anim.fade_out) } } override fun onActivityResult(requestCode: Int, resultCode: Int, data: Intent?) { super.onActivityResult(requestCode, resultCode, data) // Result returned from launching the Intent from GoogleSignInApi.getSignInIntent(...); if (requestCode == RC_SIGN_IN) { val task = GoogleSignIn.getSignedInAccountFromIntent(data) try { // Google Sign In was successful, authenticate with Firebase val account = task.getResult(ApiException::class.java) firebaseAuthWithGoogle(account) } catch (e: ApiException) { } } } } Helper class with CRUD methods Firebase: class FirestoreCRUD { val db = Firebase.firestore var geoPoint: GeoPoint? = null private val authenticationActivity = AuthenticationActivity() val COLLECTION_USERS = "Users" ... ... //add new user to Firestore fun addNewUserToFirestore(firebaseUser: FirebaseUser?) { FirebaseInstanceId.getInstance().instanceId .addOnCompleteListener(OnCompleteListener { task -> if (task.isSuccessful) { ... ... // Add a new document with a generated ID db.collection(COLLECTION_USERS) .document(firebaseUser.uid) .set(userModel) .addOnSuccessListener { documentReference -> /*CALL onResultSuccess IN AuthenticationActivity */ authenticationActivity.onResultSuccess(true) } .addOnFailureListener { e -> Log.d(LOG_TAG, "Error adding document", e) authenticationActivity.onResultSuccess(false) } } else authenticationActivity.onResultSuccess(false) }) } } Call onActivityResult failed from onResultSuccess. I will happy for any help! Thanks a lop! A: The only way the context of an Activity can be null is if you are passing around an instance of your Activity that you instantiated yourself, which you should never do. Would have to see more of your code to know where you are doing that. A: So, I resolved the problem. It was easy. How said Tenfour04: The only way the context of an Activity can be null is if you are passing around an instance of your Activity that you instantiated yourself, which you should never do... It is true. Solution: * *should send context to FirestoreCRUD class: FirestoreUtil(this).addNewUserToFirestore(user) *FirestoreCRUD class receivers straight to Interface: class FirestoreUtil(var onResultSuccessListener: OnResultSuccessListener) {} *And call the Interface method in AuthenticationActivity class from FirestoreCRUD class: onResultSuccessListener!!.onResultSuccess(true) And then my activity doesn't restart and everything runs smoothly! I hope this helps somebody.
{ "language": "en", "url": "https://stackoverflow.com/questions/64229314", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
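A compact Kotlin sketch of the fix described in the self-answer: the activity passes itself (as OnResultSuccessListener) into the helper instead of the helper instantiating an activity. Names follow the question; the userModel parameter is passed in here only to keep the sketch self-contained.

import com.google.firebase.auth.FirebaseUser
import com.google.firebase.firestore.ktx.firestore
import com.google.firebase.ktx.Firebase

class FirestoreUtil(private val listener: OnResultSuccessListener) {

    private val db = Firebase.firestore
    private val COLLECTION_USERS = "Users"

    fun addNewUserToFirestore(firebaseUser: FirebaseUser?, userModel: Any) {
        if (firebaseUser == null) {
            listener.onResultSuccess(false)
            return
        }
        db.collection(COLLECTION_USERS)
            .document(firebaseUser.uid)
            .set(userModel)
            .addOnSuccessListener { listener.onResultSuccess(true) }
            .addOnFailureListener { listener.onResultSuccess(false) }
    }
}

// In AuthenticationActivity (which implements OnResultSuccessListener):
// FirestoreUtil(this).addNewUserToFirestore(user, userModel)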
Q: How to display ImageView below TextView in LinearLayout I'm writing a sports app and I want to place the text exactly under the picture, but I can't get it aligned. If there is a large text somewhere, then the picture is not in the middle. how to make sure that the text is always exactly under the picture, or the picture is always exactly under the text? <ScrollView xmlns:android="http://schemas.android.com/apk/res/android" xmlns:tools="http://schemas.android.com/tools" android:layout_width="match_parent" android:layout_height="match_parent"> <LinearLayout android:layout_width="match_parent" android:layout_height="match_parent" android:layout_margin="10dp" android:background="@drawable/shadow_matches_background" android:orientation="vertical"> <RelativeLayout android:layout_width="match_parent" android:layout_height="190dp"> <LinearLayout android:id="@+id/title_team_list" android:layout_width="match_parent" android:layout_height="wrap_content" android:layout_marginTop="5dp" android:weightSum="2"> <TextView android:id="@+id/title_team_1" android:layout_width="match_parent" android:layout_height="wrap_content" android:layout_marginLeft="35dp" android:layout_weight="1" android:fontFamily="@font/roboto_medium" android:text="Chelsea" android:textColor="@color/black" android:textSize="20sp" /> <TextView android:id="@+id/title_team_2" android:layout_width="match_parent" android:layout_height="wrap_content" android:layout_marginRight="25dp" android:layout_weight="1" android:fontFamily="@font/roboto_medium" android:text="Manchester City" android:textColor="@color/black" android:textSize="20sp" /> </LinearLayout> <RelativeLayout android:id="@+id/image_team_list" android:layout_width="match_parent" android:layout_height="wrap_content"> <ImageView android:id="@+id/img_team_1" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_below="@id/title_team_1" android:src="@drawable/image_3" /> <ImageView android:id="@+id/img_team_2" android:layout_width="wrap_content" android:layout_height="wrap_content" android:src="@drawable/image_5" tools:ignore="DuplicateIds" /> </RelativeLayout> <TextView android:id="@+id/date_team_list" android:layout_width="match_parent" android:layout_height="wrap_content" android:layout_below="@+id/image_team_list" android:layout_marginTop="5dp" android:fontFamily="@font/roboto" android:text="25 сентября 2021 года" android:textAlignment="center" android:textSize="15sp" /> <LinearLayout android:layout_width="match_parent" android:layout_height="wrap_content" android:layout_below="@id/date_team_list"> <Button android:id="@+id/btn_subscribe" android:layout_width="match_parent" android:layout_height="wrap_content" android:layout_weight="1" android:background="#00000000" android:text="@string/btn_subscribe" android:textColor="@color/colorMatchesBtn" /> <Button android:id="@+id/btn_detail" android:layout_width="match_parent" android:layout_height="wrap_content" android:layout_weight="1" android:background="#00000000" android:text="@string/btn_detail" android:textColor="@color/colorMatchesBtn" /> </LinearLayout> </RelativeLayout> <androidx.cardview.widget.CardView android:layout_width="match_parent" android:layout_height="wrap_content" android:background="#00000000"> <RelativeLayout android:layout_width="match_parent" android:layout_height="190dp"> <RelativeLayout android:layout_width="match_parent" android:layout_height="180dp" android:layout_marginStart="20dp" android:layout_marginEnd="20dp" android:layout_marginBottom="20dp"> <LinearLayout 
android:id="@+id/title_team_list" android:layout_width="match_parent" android:layout_height="wrap_content" android:layout_marginTop="5dp" tools:ignore="DuplicateIds"> <TextView android:id="@+id/title_team_1" android:layout_width="match_parent" android:layout_height="wrap_content" android:layout_marginLeft="35dp" android:layout_weight="1" android:fontFamily="@font/roboto_medium" android:text="Everton" android:textColor="@color/black" android:textSize="20sp" tools:ignore="DuplicateIds" /> <TextView android:id="@+id/title_team_2" android:layout_width="match_parent" android:layout_height="wrap_content" android:layout_marginRight="25dp" android:layout_weight="1" android:fontFamily="@font/roboto_medium" android:text="Norvich City" android:textColor="@color/black" android:textSize="20sp" tools:ignore="DuplicateIds" /> </LinearLayout> <LinearLayout android:id="@+id/image_team_list" android:layout_width="match_parent" android:layout_height="wrap_content" android:layout_below="@id/title_team_list" android:layout_marginTop="5dp" tools:ignore="DuplicateIds"> <ImageView android:id="@+id/img_team_1" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_weight="1" android:src="@drawable/image_3" tools:ignore="DuplicateIds" /> <ImageView android:id="@+id/img_team_2" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_weight="1" android:src="@drawable/image_5" tools:ignore="DuplicateIds" /> </LinearLayout> <TextView android:id="@+id/date_team_list" android:layout_width="match_parent" android:layout_height="wrap_content" android:layout_below="@+id/image_team_list" android:layout_marginTop="5dp" android:fontFamily="@font/roboto" android:text="25 сентября 2021 года" android:textAlignment="center" android:textSize="15sp" tools:ignore="DuplicateIds" /> <LinearLayout android:layout_width="match_parent" android:layout_height="wrap_content" android:layout_below="@id/date_team_list"> <Button android:id="@+id/btn_subscribe" android:layout_width="match_parent" android:layout_height="wrap_content" android:layout_weight="1" android:background="#00000000" android:text="@string/btn_subscribe" android:textColor="@color/colorMatchesBtn" tools:ignore="DuplicateIds" /> <Button android:id="@+id/btn_detail" android:layout_width="match_parent" android:layout_height="wrap_content" android:layout_weight="1" android:background="#00000000" android:text="@string/btn_detail" android:textColor="@color/colorMatchesBtn" tools:ignore="DuplicateIds" /> </LinearLayout> </RelativeLayout> </RelativeLayout> </androidx.cardview.widget.CardView> A: And take a look at ConstrainsLayout It will help you out build more complex layouts easily.Also your list of teams is completely hardcoded you should definetely switch to RecyclerView. A: you should follow Constraint layout, for your desired output I have attached some code. I hope, it helps. 
<androidx.constraintlayout.widget.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="match_parent" android:layout_height="wrap_content" xmlns:app="http://schemas.android.com/apk/res-auto"> <TextView android:id="@+id/tvTitle" android:layout_width="wrap_content" android:layout_height="wrap_content" app:layout_constraintLeft_toLeftOf="@id/ivPic" app:layout_constraintRight_toRightOf="@id/ivPic" app:layout_constraintBottom_toTopOf="@id/ivPic" android:text="Your desired text" android:textColor="@color/black" android:textSize="15sp" android:singleLine="true" android:marqueeRepeatLimit="marquee_forever" /> <ImageView android:id="@+id/ivPic" android:layout_width="100dp" android:layout_height="100dp" app:layout_constraintLeft_toLeftOf="parent" app:layout_constraintRight_toRightOf="parent" app:layout_constraintTop_toBottomOf="@id/tvTitle" android:src="@drawable/yourImage" android:scaleType="centerCrop" android:padding="2dp" /> <TextView android:id="@+id/tvClubName" android:layout_width="wrap_content" android:layout_height="wrap_content" app:layout_constraintLeft_toLeftOf="@id/ivPic" app:layout_constraintRight_toRightOf="@id/ivPic" app:layout_constraintTop_toBottomOf="@id/ivPic" android:text="Liverpool" android:textColor="@color/black" android:textSize="15sp" android:singleLine="true" android:padding="3dp" android:gravity="center_vertical|center" android:marqueeRepeatLimit="marquee_forever" /> </androidx.constraintlayout.widget.ConstraintLayout>
{ "language": "en", "url": "https://stackoverflow.com/questions/69196309", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: xslt 2.0 format-number repeating pattern with alternate group-separator I'm trying to understand why XSLT 2.0 is repeating the following pattern when I attempt to provide an alternate grouping-separator to the format-number function like so: <xsl:decimal-format grouping-separator="-" name="hyphenFormatting"/> <xsl:template match="/"> <xsl:value-of select="format-number(642120, '####-##', 'hyphenFormatting')"/> </xsl:template> Output: 64-21-20 when I expected the output to be: 6421-20 Is there a way I can bypass this pattern repetition so it evaluates my mask literally? A: Are you using Saxon? With Saxon 9.8 I get the same behaviour as you do. The specification was rephrased between 2.0 and 3.0. In 2.0 it says: In addition, if these integer-part-grouping-positions are at regular intervals (that is, if they form a sequence N, 2N, 3N, ... for some integer value N, including the case where there is only one number in the list), then the sequence contains all integer multiples of N as far as necessary to accommodate the largest possible number. While 3.0 says the following (the third rule is new): The grouping is defined to be regular if the following conditions apply: * *There is an least one grouping-separator in the integer part of the sub-picture. *There is a positive integer G (the grouping size) such that the position of every grouping-separator in the integer part of the sub-picture is a positive integer multiple of G. *Every position in the integer part of the sub-picture that is a positive integer multiple of G is occupied by a grouping-separator. If the grouping is regular, then the integer-part-grouping-positions sequence contains all integer multiples of G as far as necessary to accommodate the largest possible number. So your grouping is regular under the 2.0 definition but not under the 3.0 definition. Saxon is apparently implementing the 2.0 definition. I suspect the change was intended as a bug fix, and it appears Saxon has not implemented this change. As a workaround, you could define the picture as #-###############################################-## with the extra grouping separator placed so far out to the left that you will never have a number this large. (Raised a Saxon issue here: https://saxonica.plan.io/issues/3669)
{ "language": "en", "url": "https://stackoverflow.com/questions/48770905", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Add “selected” Class to the Menu Item - MVC I followed this tutorial (link) on how to add a "selected" class to the menu in MVC; in other words, the tutorial shows how to create a navigation menu in MVC where a nav button stays selected after you click it. But when I write this code at the beginning using <span class="skimlinks-unlinked">System.Web</span>; using <span class="skimlinks-unlinked">System.Web.Mvc</span>; namespace <span class="skimlinks-unlinked">AdminRole.HtmlHelpers</span> { ... } my Visual Studio 2012 shows an error on every <span class="skimlinks-unlinked"> part and says "Identifier expected". What identifier? Thanks for the help A: OK, I solved it... This code: using <span class="skimlinks-unlinked">System.Web</span>; using <span class="skimlinks-unlinked">System.Web.Mvc</span>; namespace <span class="skimlinks-unlinked">AdminRole.HtmlHelpers</span> should be rewritten to: using System.Web; using System.Web.Mvc; namespace AdminRole.HtmlHelpers Now it works :)
{ "language": "en", "url": "https://stackoverflow.com/questions/23261691", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Cin inside switch statement makes my program stuck My program should have an option that asks the user for input and then saves the input string into a file. My problem is that when I put cin, in any of its forms, inside the switch, the program gets stuck looping indefinitely right after I press Enter once I finish typing the new text. What could cause the problem? #include <iostream> #include <fstream> #include <iterator> #include <string.h> #include <time.h> #include <stdlib.h> #include <algorithm> #include <chrono> #include <random> #include <vector> using namespace std; void changePlainText() { ofstream nFile("plaintext.txt"); string newText; cout << "Enter new plain text" << endl; getline(cin, newText); nFile << newText; nFile.close(); } int main() { int uInput = 0; do { printf("2.Change content of the plain text file: \n"); cin >> uInput; switch (uInput) { case 1: break; case 2: changePlainText(); break; } } while (uInput != 5); cout << "Closing program" << endl; system("pause"); } After I type something in the console and press Enter, the program enters a never-ending loop. It is still stuck even if I just write a simple cin >> i in the switch case. A: Take a look at this question. I debugged your code and I experienced that exact behavior. The accepted answer explains quite well what's happening and also provides a solution. I tried it with Boost: I changed cin >> uInput; to string inputString; getline(cin, inputString); uInput = boost::lexical_cast<int>(inputString); and it's working fine now.
{ "language": "en", "url": "https://stackoverflow.com/questions/73402578", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-2" }
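For the thread above, the root cause is worth spelling out: when cin >> uInput sees non-numeric input, the stream enters a fail state, the offending characters stay in the buffer, and every later extraction fails immediately, hence the endless loop. A Boost-free sketch of the usual fix, using only the standard library:

#include <iostream>
#include <limits>
using namespace std;

int readMenuChoice()
{
    int choice;
    while (!(cin >> choice)) {            // extraction failed (e.g. letters were typed)
        cin.clear();                      // clear the fail state
        cin.ignore(numeric_limits<streamsize>::max(), '\n');  // discard the bad input
        cout << "Please enter a number: ";
    }
    cin.ignore(numeric_limits<streamsize>::max(), '\n');      // discard the rest of the line
    return choice;
}

The trailing ignore likely also matters for the original program: it removes the leftover newline so that the getline() inside changePlainText() reads the new text instead of an empty line.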
Q: Android equivalent of iOS's beginIgnoringInteractionEvents This SO answer suggests iterating through the child views and disabling them. That doesn't smell right to me (performance-wise). What's the best approach to disable ALL touch events in the application and prevent the user from interacting with it (during an animation, let's say)? A: Check out how we solved this by overriding the dispatch methods in Activity.
{ "language": "en", "url": "https://stackoverflow.com/questions/26501184", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
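A sketch of the approach hinted at in the answer: overriding the activity's dispatch method so every touch event can be swallowed while an animation runs. The class and method names other than dispatchTouchEvent are made up.

import android.app.Activity;
import android.view.MotionEvent;

public class BaseActivity extends Activity {

    private boolean ignoreInteractionEvents = false;

    public void beginIgnoringInteractionEvents() { ignoreInteractionEvents = true; }
    public void endIgnoringInteractionEvents()   { ignoreInteractionEvents = false; }

    @Override
    public boolean dispatchTouchEvent(MotionEvent ev) {
        if (ignoreInteractionEvents) {
            return true;                 // consume the event; no view ever sees it
        }
        return super.dispatchTouchEvent(ev);
    }
}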
Q: Calling full trust assembly from a low trust assembly with AllowPartiallyTrustedCallers 'proxy' assembly Suppose you have a third-party assembly that requires full trust for, say, a logging operation (this assembly does not have AllowPartiallyTrustedCallers). You use this assembly through a custom assembly with AllowPartiallyTrustedCallers and then deploy it to the GAC. How can you use your custom assembly from low trust code when its dependency (the third-party assembly) issues security demands? Note: the context is SharePoint. A: Best guess: a missing assert for the security demands. Check out the MSDN tutorial to get started, and the SecurityPermission class for the Assert method.
{ "language": "en", "url": "https://stackoverflow.com/questions/10406366", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Not sure how to pass data from frontend app to backend I need to send data from the 'change' variable below to the backend flask route. Here's how my react app is- import React, {useState, useEffect} from "react"; import ReactQuill from "react-quill"; import { Card, CardBody, Form, FormInput } from "shards-react"; import "react-quill/dist/quill.snow.css"; import "../../assets/quill.css"; function Editor(props) { const[change, setChange] = useState('') const handleChange = (value) => { setChange(value) } console.log(change) console.log(JSON.stringify(change)); const requestOptions = { method: 'POST', headers: { 'Content-Type': 'application/json' }, body: JSON.stringify(change) }; useEffect( () => { fetch('http://127.0.0.1:5000/lang', requestOptions) .then(response => response.json()) .then(data => setChange(change)) console.log('datas') }, [] ) return <Card small className="mb-3"> <CardBody> <Form className="add-new-post" action="http://127.0.0.1:5000/lang" method="POST"> <FormInput size="lg" className="mb-3" placeholder="Your Post Title" /> <ReactQuill className="add-new-post__editor mb-1" value={change} onChange={handleChange} /> <button class="ml-auto btn btn-accent btn-sm" type="submit"><i class="material-icons">file_copy</i> Publish</button> </Form> </CardBody> </Card>; } export default Editor; I have tried to use useEffect above. Upon attempting to post data to flask, Following is the error it returns: 127.0.0.1 - - [06/Jun/2021 04:36:03] "POST /lang HTTP/1.1" 500 - TypeError: 'NoneType' object is not subscriptable My backend server in flask looks like the following- from flask import Flask, render_template, jsonify, request #import objects from the Flask model from flask_cors import CORS from werkzeug.utils import secure_filename import requests import logging import json import pymongo from pymongo import MongoClient from flask_sqlalchemy import SQLAlchemy from localStoragePy import localStoragePy import config app = Flask(__name__) #define app using Flask languages = [{'title' : 'Blog Overview'}, {'subtitle' : 'Dashboard'}, {'file' : 'file'}, {'sm' : '4'}, { 'title2': 'Add New Post'}, {'subtitle2': 'Blog Posts'}] @app.route('/', methods=['GET']) def test(): return jsonify({'message' : 'It works!'}) @app.route('/lang', methods=['POST']) def addOne(): language = {'title' : request.json['title'], 'subtitle' : request.json['subtitle'], 'sm' : request.json['sm'], 'title2': request.json['title2'],'subtitle2': request.json['subtitle2'] } languages.append(language) return jsonify(languages) @app.route('/lang', methods=['GET']) def returnAll(): return jsonify(languages) if __name__ == '__main__': app.run(debug=True, port=8080) #run app on port 8080 in debug mode A: Change request.json to request.form in the AddOne function.
{ "language": "en", "url": "https://stackoverflow.com/questions/67854582", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
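A hedged sketch of the JSON round trip the question is attempting: the React form's default submit is prevented so only the fetch() runs, and the Flask route reads the body with request.get_json(). The payload keys are illustrative and do not match the languages dict in the original route.

// React side: send the editor content as JSON instead of a normal form post
// (attach this via <Form className="add-new-post" onSubmit={handleSubmit}>)
const handleSubmit = (event) => {
  event.preventDefault();                  // stop the <Form> from reloading the page
  fetch("http://127.0.0.1:5000/lang", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ title: "Blog Overview", body: change }),
  })
    .then((response) => response.json())
    .then((data) => console.log(data));
};

# Flask side: read whatever JSON arrives instead of indexing fixed keys
@app.route('/lang', methods=['POST'])
def add_one():
    payload = request.get_json(silent=True) or {}
    languages.append(payload)
    return jsonify(languages)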
Q: Find distinct values group by another field mongodb I have collection with documents like this : { "_id" : ObjectId("5c0685fd6afbd73b80f45338"), "page_id" : "1234", "category_list" : [ "football", "sport" ], "time_broadcast" : "09:13" } { "_id" : ObjectId("5c0685fd6afbd7355f45338"), "page_id" : "1234", "category_list" : [ "sport", "handball" ], "time_broadcast" : "09:13" } { "_id" : ObjectId("5c0694ec6afbd74af41ea4af"), "page_id" : "123456", "category_list" : [ "news", "updates" ], "time_broadcast" : "09:13" } .... now = datetime.datetime.now().time().strftime("%H:%M") What i want is : when "time_broadcast" is equal to "now",i get list of distinct "category_list" of each "page_id". Here is how the output should look like : { { "page_id" : "1234", "category_list" : ["football", "sport", "handball"] }, { "page_id" : "123456", "category_list" : ["news", "updates"] } } I have tried like this : category_list = db.users.find({'time_broadcast': now}).distinct("category_list") but this gives me as output list of distinct values but of all "page_id" : ["football", "sport", "handball","news", "updates"] not category_list by page_id . Any help please ? Thanks A: you need to write an aggregate pipeline * *$match - filter the documents by criteria *$group - group the documents by key field *$addToSet - aggregate the unique elements *$project - project in the required format *$reduce - reduce the array of array to array by $concatArrays aggregate query db.tt.aggregate([ {$match : {"time_broadcast" : "09:13"}}, {$group : {"_id" : "$page_id", "category_list" : {$addToSet : "$category_list"}}}, {$project : {"_id" : 0, "page_id" : "$_id", "category_list" : {$reduce : {input : "$category_list", initialValue : [], in: { $concatArrays : ["$$value", "$$this"] }}}}} ]).pretty() result { "page_id" : "123456", "category_list" : [ "news", "updates" ] } { "page_id" : "1234", "category_list" : [ "sport", "handball", "football", "sport" ] } you can add $sort by page_id pipeline if required
{ "language": "en", "url": "https://stackoverflow.com/questions/53830076", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
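Since the question is asked from Python, the accepted shell pipeline translates almost one-to-one to PyMongo; a sketch assuming the collection is db.users and now is the time string from the question:

pipeline = [
    {"$match": {"time_broadcast": now}},
    {"$group": {"_id": "$page_id", "category_list": {"$addToSet": "$category_list"}}},
    {"$project": {
        "_id": 0,
        "page_id": "$_id",
        "category_list": {"$reduce": {
            "input": "$category_list",
            "initialValue": [],
            "in": {"$concatArrays": ["$$value", "$$this"]},
        }},
    }},
]
results = list(db.users.aggregate(pipeline))

As in the shell answer's own output, values that appear in different source arrays survive the $concatArrays step; replacing it with {"$setUnion": ["$$value", "$$this"]} would remove those duplicates if needed.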
Q: uWSGI: disable logs for given urls I need to disable uWSGI logs just for one url. I played around uWSGI config but didn't find anything that can helps(maybe route directive, but it didn't give me required result). Do you have any ideas about it? A: Use its internal routing framework with the donotlog action: route = ^foo donotlog: But ensure your instance has internal routing support compiled in (if not you should see a warning in the startup logs).
{ "language": "en", "url": "https://stackoverflow.com/questions/25679166", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: How to parse a CSV with multiple key-value pairs? I have a CSV in this format: "Account Name","Full Name","Customer System Name","Sales Rep" "0x7a69","Mike Smith","0x7a69","Tim Greaves" "0x7a69","John Taylor","0x7a69","Brian Anthony" "Apple","Steve Jobs","apple","Anthony Michael" "Apple","Steve Jobs","apple","Brian Anthony" "Apple","Tim Cook","apple","Tim Greaves" ... I would like to parse this CSV (using Java) so that it becomes: "Account Name","Full Name","Customer System Name","Sales Rep" "0x7a69","Mike Smith, John Taylor","0x7a69","Tim Greaves, Brian Anthony" "Apple","Steve Jobs, Tim Cook","apple","Anthony Michael, Brian Anthony, Tim Greaves" Essentially I just want to condense the CSV so that there is one entry per account/company name. Here is what I have so far: String csvFile = "something.csv"; String line = ""; String cvsSplitBy = ","; List<String> accountList = new ArrayList<String>(); List<String> nameList = new ArrayList<String>(); List<String> systemNameList = new ArrayList<String>(); List<String> salesList = new ArrayList<String>(); try (BufferedReader br = new BufferedReader(new FileReader(csvFile))) { while ((line = br.readLine()) != null) { // use comma as separator String[] csv = line.split(cvsSplitBy); accountList.add(csv[0]); nameList.add(csv[1]); systemNameList.add(csv[2]); salesList.add(csv[3]); } So I was thinking of adding them all to their own lists, then looping through all of the lists and comparing the values, but I can't wrap my head around how that would work. Any tips or words of advice are much appreciated. Thanks! A: By analyzing your requirements you can get a better idea of the data structures to use. Since you need to map keys (account/company) to values (name/rep) I would start with a HashMap. Since you want to condense the values to remove duplicates you'll probably want to use a Set. I would have a Map<Key, Data> with public class Key { private String account; private String companyName; //Getters/Setters/equals/hashcode } public class Data { private Key key; private Set<String> names = new HashSet<>(); private Set<String> reps = new Hashset<>(); public void addName(String name) { names.add(name); } public void addRep(String rep) { reps.add(rep); } //Additional getters/setters/equals/hashcode } Once you have your data structures in place, you can do the following to populate the data from your CSV and output it to its own CSV (in pseudocode) Loop each line in CSV Build Key from account/company Try to get data from Map If Data not found Create new data with Key and put key -> data mapping in map add name and rep to data Loop values in map Output to CSV A: Well, I probably would create a class, let's say "Account", with the attributes "accountName", "fullName", "customerSystemName", "salesRep". Then I would define an empty ArrayList of type Account and then loop over the read lines. And for every read line I just would create a new object of this class, set the corresponding attributes and add the object to the list. But before creating the object I would iterate overe the already existing objects in the list to see whether there is one which already has this company name - and if this is the case, then, instead of creating the new object, just reset the salesRep attribute of the old one by adding the new value, separated by comma. I hope this helps :)
{ "language": "en", "url": "https://stackoverflow.com/questions/43876810", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
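A condensed sketch of the Map + Set approach from the first answer, keyed on the account name; the delimiter handling is deliberately simplified (quotes stripped, plain split on "," like the question's code, so commas inside quoted fields are not handled):

import java.io.BufferedReader;
import java.io.FileReader;
import java.util.LinkedHashMap;
import java.util.LinkedHashSet;
import java.util.Map;
import java.util.Set;

public class CsvCondenser {

    static class Data {
        final Set<String> names = new LinkedHashSet<>();
        final Set<String> reps = new LinkedHashSet<>();
        String systemName = "";
    }

    public static void main(String[] args) throws Exception {
        Map<String, Data> byAccount = new LinkedHashMap<>();

        try (BufferedReader br = new BufferedReader(new FileReader("something.csv"))) {
            String line = br.readLine();                     // skip the header row
            while ((line = br.readLine()) != null) {
                String[] csv = line.replace("\"", "").split(",");
                Data data = byAccount.computeIfAbsent(csv[0], k -> new Data());
                data.names.add(csv[1]);
                data.systemName = csv[2];
                data.reps.add(csv[3]);
            }
        }

        byAccount.forEach((account, data) -> System.out.printf(
                "\"%s\",\"%s\",\"%s\",\"%s\"%n",
                account, String.join(", ", data.names),
                data.systemName, String.join(", ", data.reps)));
    }
}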
Q: Giving a different object name for the content of the returned list of purrr::map2() I'm trying to conduct a certain a calculation using purrr::map2() taking two different arguments. purrr::map2( .x = c(1, 3), .y = c(10, 20), function(.x, .y)rnorm(1, .x, .y) ) purrr::map2() returns a list, but I want to assign a distinct object name to each content within the list. For example, I want to name the first list [[1]] [1] -5.962716 as model1 and [[2]] [1] -29.58825 as model2. In other words, I'd like to automate the object naming like model* <- purrr::map2[[*]]. Would anybody tell me a better way? > purrr::map2( + .x = c(1, 3), + .y = c(10, 20), + function(.x, .y)rnorm(1, .x, .y) + ) [[1]] [1] -5.962716 [[2]] [1] -29.58825 This question is similar to this, though note that I need the results of the calculation in separate objects for my purpose. A: You could assign the name to the result using setNames : result <- purrr::map2( .x = c(1, 3), .y = c(10, 20), function(.x, .y)rnorm(1, .x, .y) ) %>% setNames(paste0('model', seq_along(.))) Now you can access each individual objects like : result$model1 #[1] 6.032297 If you want them as separate objects and not a part of a list you can use list2env. list2env(result, .GlobalEnv)
{ "language": "en", "url": "https://stackoverflow.com/questions/64572192", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: PRTweenOperation timingFunction member not found in Swift I'm trying to use the PRTween library in a Swift iPhone app. Original example code from GitHub: PRTweenPeriod *period = [PRTweenPeriod periodWithStartValue:100 endValue:200 duration:3]; PRTweenOperation *operation = [[PRTweenOperation new] autorelease]; operation.period = period; operation.target = self; operation.timingFunction = &PRTweenTimingFunctionLinear; My Swift port: var period = PRTweenPeriod.periodWithStartValue(100, endValue: 200, duration: 3) as PRTweenPeriod var operation = PRTweenOperation() operation.period = period operation.target = self operation.timingFunction = PRTweenTimingFunctionLinear Xcode is giving me this error: 'PRTweenOperation' does not have a member named 'timingFunction' I'm not sure how to fix this. I can clearly see the member definition in PRTween.h. I'm thinking it might be related to the fact that this is where the definition of PRTweenTimingFunction takes me. typedef CGFloat(*PRTweenTimingFunction)(CGFloat, CGFloat, CGFloat, CGFloat); Has anyone else seen an error like this? Any suggestions for fixes? P.S. I'm not really sure what to call that typedef. Is it a function pointer? EDIT As a workaround, I used this code that does not ask for a timing function: let period = PRTweenPeriod.periodWithStartValue(100, endValue: 200, duration: 2) as PRTweenPeriod PRTween.sharedInstance().addTweenPeriod(period, updateBlock: { (p: PRTweenPeriod!) in NSLog("\(Int(p.tweenedValue))" }, completionBlock: { NSLog("Completed tween") }) A: Yes, that's a function pointer. This is a current limitation of C interoperability: Note that C function pointers are not imported in Swift. You might consider filing a bug if you'd like this to work. (Note that block-based APIs are fine and work with Swift closures.)
{ "language": "en", "url": "https://stackoverflow.com/questions/24541746", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }