Q: Complex shape with rainbow gradient I'm trying to draw a figure on a canvas, to be filled with a rainbow-colored gradient. The wanted result is something like this: Creating the shape itself is pretty easy, just creating a path and drawing the lines. However, actually filling it with a gradient appears to be somewhat more difficult, as it seems only radial and linear gradients are supported. The closest I have gotten is this: var canvas = document.getElementById("canvas"); var ctx = canvas.getContext("2d"); var gradient=ctx.createLinearGradient(0,0,0,100); gradient.addColorStop (0, 'red'); gradient.addColorStop (0.25, 'yellow'); gradient.addColorStop (0.5, 'green'); gradient.addColorStop (0.75, 'blue'); gradient.addColorStop (1, 'violet'); ctx.moveTo(0,40); ctx.lineTo(200,0); ctx.lineTo(200,100); ctx.lineTo(0, 50); ctx.closePath(); ctx.fillStyle = gradient; ctx.fill(); <body onload="draw();"> <canvas id="canvas" width="400" height="300"></canvas> </body> The gradient colors and such are correct, but the gradient should of course be more triangular-like, rather than being rectangular and cropped. A: Native html5 canvas doesn't have a way to stretch one side of a gradient fill. But there is a workaround: Create your stretch gradient by drawing a series of vertical gradient lines with an increasing length. Then you can use transformations to draw your stretched gradient at your desired angle Example code and a Demo: var canvas=document.getElementById("canvas"); var ctx=canvas.getContext("2d"); var cw=canvas.width; var ch=canvas.height; var length=200; var y0=40; var y1=65 var stops=[ {stop:0.00,color:'red'}, {stop:0.25,color:'yellow'}, {stop:0.50,color:'green'}, {stop:0.75,color:'blue'}, {stop:1.00,color:'violet'}, ]; var g=stretchedGradientRect(length,y0,y1,stops); ctx.translate(50,100); ctx.rotate(-Math.PI/10); ctx.drawImage(g,0,0); function stretchedGradientRect(length,startingHeight,endingHeight,stops){ var y=startingHeight; var yInc=(endingHeight-startingHeight)/length; // create a temp canvas to hold the stretched gradient var c=document.createElement("canvas"); var cctx=c.getContext("2d"); c.width=length; c.height=endingHeight; // clip the path to eliminate "jaggies" on the bottom cctx.beginPath(); cctx.moveTo(0,0); cctx.lineTo(length,0); cctx.lineTo(length,endingHeight); cctx.lineTo(0,startingHeight); cctx.closePath(); cctx.clip(); // draw a series of vertical gradient lines with increasing height for(var x=0;x<length;x+=1){ var gradient=cctx.createLinearGradient(0,0,0,y); for(var i=0;i<stops.length;i++){ gradient.addColorStop(stops[i].stop,stops[i].color); } cctx.beginPath(); cctx.moveTo(x,0); cctx.lineTo(x,y+2); cctx.strokeStyle=gradient; cctx.stroke(); y+=yInc; } return(c); } #canvas{border:1px solid red; margin:0 auto; } <h4>Stretched gradient made from vertical strokes</h4> <canvas id="canvas" width=300 height=200></canvas>
{ "language": "en", "url": "https://stackoverflow.com/questions/35317997", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: .Net HttpClient Class within Xamarin only works in Full Debug info mode I'm new in Xamarin development.I build a small demo which use .Net Httpclient to send and receive message from remote server.Everything is fine except the httpclient segament If I choose Debug and output full debug info the response is "Success". If I choose Debug and output pdb-only or none then I got into the Exception. Things are same when I build Release, but if I output none debug info with Release mode the App will crash on my cell phone. Does anybody have the similar problem ? What should I do? I will greatly appreciate a solution. The related code is: string response = string.Empty; var client = new HttpClient(); client.MaxResponseContentBufferSize = 1024 * 1024; //client.Timeout = new TimeSpan( 0, 0, 0, 5 ); Task<string> result = null; try { result = client.GetStringAsync( reqInfo_ ); response = result.Result; response = "Success"; } catch ( Exception e ) { //output the exception } Exception is: A: http://forums.xamarin.com/discussion/7327/system-net-webexception-error-nameresolutionfailure Owen's solution works for me: "For me, in release mode, I checked the box marked Internet in the Required permissions section of the AndroidManifest.xml file found under the Properties folder. Debug mode doesn't need this checked but Release mode does - which you need if sending the APK by email for installation - which works great by the way. In case anyone is interested, you also have to sign your app and produce the private key following these instructions here: http://developer.xamarin.com/guides/android/deployment,testing,_and_metrics/publishing_an_application/part_2-_publishing_an_application_on_google_play/"
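For reference, ticking that box adds the standard Internet permission element to AndroidManifest.xml, which looks like this:

<uses-permission android:name="android.permission.INTERNET" />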
{ "language": "en", "url": "https://stackoverflow.com/questions/40234992", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Anomaly Detection Using Keras - Low Prediction Rate I built an Anomaly detection system using Autoencoder, implemented in keras. My input is a normalized vector with length 13. My dataset contains about 25,000 non anomaly inputs which dedicated for learning. I get about 10^-5 MSE after learning with 1-3 epochs. The problem is that although I get to a small MSE my AE can't detect anomalies good enough.. Model class: class AutoEncoder: def __init__(self, inputLen, modelName,Batch,epochs): self.modelName = modelName self.DL_BATCH_SIZE = Batch self.DL_EPOCHS = epochs self.DL_LAYER1 = 200 self.DL_LAYER2 = 150 self.DL_LAYER3 = 100 self.DL_LAYEROUT = 13 self.DL_LOADMODEL = None #self.DL_SAVEMODEL = fileConfig.getConfigParam(DL_SAVEMODEL) #print(tensorflow.version.VERSION) if self.DL_LOADMODEL == None or self.DL_LOADMODEL == 0: my_init = keras.initializers.glorot_uniform(seed=1) self.dlModel = keras.models.Sequential() self.dlModel.add(keras.layers.Dense(units=self.DL_LAYER1, activation='tanh', input_dim=inputLen,kernel_initializer=my_init)) self.dlModel.add(keras.layers.Dense(units=self.DL_LAYER2, activation='tanh',kernel_initializer=my_init)) self.dlModel.add(keras.layers.Dense(units=self.DL_LAYER3, activation='tanh',kernel_initializer=my_init)) self.dlModel.add(keras.layers.Dense(units=self.DL_LAYER2, activation='tanh',kernel_initializer=my_init)) self.dlModel.add(keras.layers.Dense(units=self.DL_LAYER1, activation='tanh',kernel_initializer=my_init)) self.dlModel.add(keras.layers.Dense(units=self.DL_LAYEROUT, activation='tanh',kernel_initializer=my_init)) #sgd = keras.optimizers.SGD(lr=0.0001, decay=0.0005, momentum=0, nesterov=True) #adam = keras.optimizers.Adam(learning_rate=0.005,decay=0.005) simple_adam=keras.optimizers.Adam() self.dlModel.compile(loss='mse', optimizer=simple_adam, metrics=['accuracy']) else: self.dlModel = keras.models.load_model(self.DL_LOADMODEL + ".h5") After training I find the max reconstruction MSE on a specific dataset of 2500 non anomalies. Then i test my Anomaly detector and mark as anomaly every input that its reconstruction has more than the max MSE*0.9 value. find max error: N = len(non_anomaly) max_se = 0.0; max_ix = 0 second_max=0 predicteds = kerasNN.predict(non_anomaly) for i in range(N): curr_se = np.square(np.subtract(non_anomaly[i],predicteds[i])).mean() if curr_se > max_se: second_max=max_se max_se = curr_se; max_ix = i Testing the model: predicteds=kerasNN.predict(x_train_temp) #errors vector generation anomaly_binary_vector = [] i=0 anomalies=0 for x_original,x_reconstructed in zip(x_train_temp,predicteds): MSE=np.square(np.subtract(x_original,x_reconstructed)).mean() if(MSE>=0.95*max_se): anomaly_binary_vector.append(1) else: anomaly_binary_vector.append(0) i+=1 Output: anomalies 2419 not detected anomalies 2031 non anomaly but marked as anomaly 2383 percentage of non anomaly instructions that marked as anomalies out of non anomalies : 0.3143384777733808 percentage of anomaly instructions that wasn't detected out of anomalies: 0.8396031417941298 How can I Improve My anomaly detection? A: A problem can be that you are "blowing up" you're autoencoder. You have an input of 13 and than map that to 200, 150, 100 and 13 again. There is no bottleneck in the network where it has to learn a compressed representation of the input. Maybe try 13 -> 6 -> 4 -> 6 -> 13 or something like that. Than it lears a compressed representation and the task to reconstruct the input is not trivial anymore. 
Additionally, play around with other hyperparameters such as the activation function. Maybe change it to 'relu' in the intermediate layers.
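A minimal sketch of the suggested bottleneck shape (13 -> 6 -> 4 -> 6 -> 13); the layer sizes, activations and optimizer settings here are illustrative assumptions rather than tuned values:

import keras

input_len = 13
model = keras.models.Sequential()
model.add(keras.layers.Dense(6, activation='tanh', input_dim=input_len))  # encoder
model.add(keras.layers.Dense(4, activation='tanh'))                       # bottleneck
model.add(keras.layers.Dense(6, activation='tanh'))                       # decoder
model.add(keras.layers.Dense(input_len, activation='tanh'))               # reconstruction
model.compile(loss='mse', optimizer=keras.optimizers.Adam())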
{ "language": "en", "url": "https://stackoverflow.com/questions/65275763", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Saving of Numeric with decimal to txt File from datagridview in vb.net This is my code for saving the values from data grid view to text file: Private Sub TextFileToolStripMenuItem_Click(sender As System.Object, e As System.EventArgs) Handles TextFileToolStripMenuItem.Click Dim filename As String = String.Empty Dim sfd1 As New SaveFileDialog() sfd1.Filter = "txt files (*.txt)|*.txt|All files (*.*)|*.*" sfd1.FilterIndex = 2 sfd1.RestoreDirectory = True sfd1.Title = "Save Text File" If sfd1.ShowDialog() = DialogResult.OK Then If sfd1.FileName = String.Empty Then MsgBox("Please input filename") Else filename = sfd1.FileName.ToString Saveto_TextFile(dvList, filename) End If End If End Sub Sub Saveto_TextFile(ByVal dvList As DataGridView, ByVal filename As String) Dim numCols As Integer = dvList.ColumnCount - 1 Dim numRows As Integer = dvList.RowCount Dim strDestinationFile As String = "" & filename & ".txt" Dim tw As TextWriter = New StreamWriter(strDestinationFile) For dvRow As Integer = 0 To numRows - 1 'checking if the checkbox is checked, then write to text file If dvList.Rows(dvRow).Cells.Item(0).Value = True Then tw.Write("True") tw.Write(", ") Else tw.Write("False") tw.Write(", ") End If 'write the remaining rows in the text file For dvCol As Integer = 1 To numCols tw.Write(dvList.Rows(dvRow).Cells(dvCol).Value) If (dvCol <> numCols) Then tw.Write(", ") End If Next tw.WriteLine() Next tw.Close() End Sub This code is perfectly working, but my only concern is that I set up the property of my data grid view to Numeric with 2 decimal places. When I'm saving it to the text file, it removes the decimal places. What can I do to keep the decimal places in the text file? A: I modified your SaveTo_TextFile method. I added two columns to my dvList [Column1] and [Column2]. I was able to save the decimal value I entered in [Column2] successfully. I do not know how you formatted your DataGridView column but mine is only a DataGridViewTextBoxCell with no formatting. If I used formatting, this is what I would set my numeric column's row cellstyle to: dvList.Columns("Column2").DefaultCellStyle.Format = "N2" SaveTo_TextFile method Private Sub Saveto_TextFile(ByVal dvList As DataGridView, ByVal filename As String) Dim numCols As Integer = dvList.ColumnCount - 1 Dim numRows As Integer = dvList.RowCount Dim strDestinationFile As String = "" & filename & ".txt" Dim tw As TextWriter = New StreamWriter(strDestinationFile) For dvRow As Integer = 0 To numRows - 1 'checking if the checkbox is checked, then write to text file If dvList.Rows(dvRow).Cells("Column1").Value = True Then tw.WriteLine(dvList.Rows(dvRow).Cells("Column2").Value) 'Column2 is the name of the column ... You can also use an index here Else tw.WriteLine("Not Checked") End If 'write the remaining rows in the text file For dvCol As Integer = 1 To numCols tw.WriteLine(dvList.Rows(dvRow).Cells(dvCol).Value) If (dvCol <> numCols) Then tw.WriteLine("???") End If Next tw.WriteLine() Next tw.Close() End Sub
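If the decimals are still dropped when writing, another option is to format the numeric cell value explicitly at the point where it is written. A minimal sketch, assuming the numeric column is named "Column2" as in the answer above:

Dim cellValue As Object = dvList.Rows(dvRow).Cells("Column2").Value
If cellValue IsNot Nothing AndAlso IsNumeric(cellValue) Then
    tw.Write(Convert.ToDecimal(cellValue).ToString("N2")) 'always writes two decimal places
Else
    tw.Write(cellValue)
End If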
{ "language": "en", "url": "https://stackoverflow.com/questions/18839379", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Link to current page does nothing in AngularJS If I am on the /contact page of my website and my nav bar has a link to /contact, clicking it does nothing. But if I click any other link that isn't the current url, Angular responds correctly. How do I make it reload the current page when clicking a link to itself in AngularJS? A: Create a function which reloads the controller and attach that function to the view: $scope.reload = function () { $state.go($state.current, {}, { reload: true }); }; And attach it to the link: <a ng-click="reload()">Contacts</a> A: If the link is to the current page, it's actually smart that it doesn't reload. Try using a plain JavaScript function, e.g. location.reload(): http://www.w3schools.com/jsref/met_loc_reload.asp
{ "language": "en", "url": "https://stackoverflow.com/questions/34477983", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: onchange event not alerting Could someone help me figure out why this isn't triggering? $("input:radio[name=cm-fo-ozlkr]").change( function(){ alert('Handler for .change() called.'); }); HTML <input type="radio" checked="checked" class="styled" value="1397935" id="cm1397935" name="cm-fo-ozlkr"><input type="radio" class="styled" value="1397934" id="cm1397934" name="cm-fo-ozlkr"> A: Your attribute selector was missing quotes; $("input:radio[name='cm-fo-ozlkr']").change( function(){ alert('Handler for .change() called.'); }); A: Is the radio button HTML getting generated dynamically e.g. on an ajax refresh? If so, you want to use jQuery live: $("input:radio[name=cm-fo-ozlkr]").live('change', function () { alert('Handler for .change() called.'); }); A: Use the click event instead of change. Also, the correct selector is input[name=cm-fo-ozlkr]:radio. A: Try this.... $(document).ready(function(){ $("input:radio[name='cm-fo-ozlkr']").change( function(){ alert('Handler for .change() called.'); }); }); A: if you haven't already done so... $(document).ready(function() { $("input:radio[name=cm-fo-ozlkr]").change( function(){ alert('Handler for .change() called.'); }); });
{ "language": "en", "url": "https://stackoverflow.com/questions/5966339", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Working with API's in Node, security of var I'm a PHP developer learning how to work with Node. When you work with PHP and a private API which requires an API key, you're reasonably safe since the PHP code can't be viewed in the developer console. But now I'm working with an API in Node which has this structure: var AdwordsUser = require('node-adwords-es5'); var user = new AdwordsUser({ developerToken: 'INSERT DEVELOPER TOKEN', //your adwords developerToken userAgent: 'Geen', //any company name clientCustomerId: 'INSERT CLIENT ID', //the Adwords Account id (e.g. 123-123-123) client_id: 'INSERT_OAUTH2_CLIENT_ID_HERE', //this is the api console client_id client_secret: 'INSERT_OAUTH2_CLIENT_SECRET_HERE', refresh_token: 'INSERT_OAUTH2_REFRESH_TOKEN_HERE' }); Since this is JavaScript, I assume all of this will be visible in the developer console, which is not safe. How do people usually solve this, or am I worrying about nothing? A: Node runs on the server side, so it's not possible to view this code in the web console. You can only see the data coming from the Node server over HTTP or socket calls, so relax and happy coding.
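Even though the code only runs on the server, a common extra precaution is to keep the credentials out of the source entirely and read them from environment variables. A minimal sketch, where the environment variable names are illustrative assumptions:

var AdwordsUser = require('node-adwords-es5');

var user = new AdwordsUser({
    developerToken: process.env.ADWORDS_DEVELOPER_TOKEN,
    userAgent: 'Geen',
    clientCustomerId: process.env.ADWORDS_CLIENT_CUSTOMER_ID,
    client_id: process.env.OAUTH2_CLIENT_ID,
    client_secret: process.env.OAUTH2_CLIENT_SECRET,
    refresh_token: process.env.OAUTH2_REFRESH_TOKEN
});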
{ "language": "en", "url": "https://stackoverflow.com/questions/44804038", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: ZF2 dont validate element inside a collection i have problems validating a Form with a element of type Collection, First, I create a "Collection" type element and then I add several text type elements. the form is rendered correctly, The problem is that the Form is always valid. How i can validate a collection type element? Form class: class TestForm extends Form { private $inputFilter; public function __construct($name = null) { parent::__construct($name); $this->add(array( 'name' => 'submit', 'type' => 'Zend\Form\Element\Submit', 'options' => array( 'label' => 'Submit', ), 'attributes' => array( 'class' => 'form-control', 'value' => 'submit' ), )); $docs = array( array('name' => "doc A"), array('name' => "doc B") ); // add collection of docs. $collection = new \Zend\Form\Element\Collection(); $collection->setName('docs'); foreach ($docs as $key => $doc) { $element = new \Zend\Form\Element\Text($key); $element->setOptions(array( 'label' => $doc['name'], )); $element->setAttributes(array( 'class' => 'form-control input-sm', )); $collection->add($element); } $this->add($collection); } public function getInputFilter() { $this->inputFilter = new InputFilter(); $this->inputFilter->add(array( 'name' => "docs", 'required' => true, 'filters' => array( array('name' => 'StripTags'), array('name' => 'StringTrim'), ), )); return $this->inputFilter; } } Controller class: class IndexController extends AppController { public function indexAction() { $form = new \Application\Model\Form\TestForm(); $request = $this->getRequest(); if ($request->isPost()) { $data = $this->params()->fromPost(); $form->setData($data); $form->setInputFilter($form->getInputFilter()); if ($form->isValid()) { pr("is valid"); } else { pr($form->getMessages()); } } return new ViewModel(array( 'form' => $form )); } View class: <?php $form->prepare(); echo $this->form()->openTag($form); echo $this->formRow($form->get('docs')); echo $this->formRow($form->get('submit')); echo $this->form()->closeTag(); ?> A: Your Collection is always valid, because it contains fields. You can't do this that way. You should considering to add Validators to DocAand DocBfields instead. This will work as follow to set correct input filters : $form->getInputFilter()->get('docs')->get('DocA')->getValidatoChain()->attachByName('YourValidatorName'); for a custom validator. OR : $form->getInputFilter()->get('docs')->get('DocA')->setRequired(true); $form->getInputFilter()->get('docs')->get('DocA')->setAllowEmpty(false); And you can also add Zend validators to them. $form->getInputFilter()->get('docs')->get('DocA')->getValidatorChain()->attach(new NotEmpty([with params look docs for that]) Careful if you don't use ServiceManager for retrieve validators you'll need to set the translator as options. Don't forget to set your validationGroup correctly or don't specify one to use VALIDATE_ALL. The same way as validators you can also add Filters as follow : $form->getInputFilter()->get('docs')->get('DocA')->getFilterChain()->getFilters()->insert(new StripTags())->insert(new StringTrim())
{ "language": "en", "url": "https://stackoverflow.com/questions/48795575", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Where can I get a blacklisted spam email domain dataset? I want to create an email classifier, split into separate classifiers for the sender address, subject and content. For the sender-address classifier, I need a list of blacklisted domains such as @blablabla.com, @cacacaca.com, etc., like this set here, but up to date. Where can I get one? Thanks. A: I wonder if a good way might be to go to mxtoolbox, do a blacklist test, then get a list of blacklist sites and see if you can contact them to get a list? I suspect that such companies may consider those datasets their intellectual property and probably won't publish these - it may not be possible. Good luck! Also Akismet may have such a dataset? Additionally, the more powerful email classifying software works by using patterns that you can make. Check out MailMarshall88 for example. You could use this to build your own dataset, but remember that just because someone is on a blacklist today, doesn't mean that they're always bad. For example, you might get a virus outbreak in your company which spams people and gets your IP blacklisted. You then fix the virus and are now incorrectly blacklisted. In this scenario a pattern would work much better.
{ "language": "en", "url": "https://stackoverflow.com/questions/49606612", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: Error 0x800706F7 "The stub received bad data" on Windows XP SP3 In my VB6 application I make several calls to a COM server my team created from a Ada project (using GNATCOM). There are basically 2 methods available on the COM server. Their prototypes in VB are: Sub PutParam(Param As Parameter_Type, Value) Function GetParam(Param As Parameter_Type) where Parameter_Type is an enumerated type which distinguishes the many parameters I can put to/get from the COM server and 'Value' is a Variant type variable. PutParam() receives a variant and GetParam() returns a variant. (I don't really know why in the VB6 Object Browser there's no reference to the Variant type on the COM server interface...). The product of this project has been used continuously this way for years without any problems in this interface on computers with Windows XP with SP2. On computers with WinXP SP3 we get the error 0x800706F7 "The stub received bad data" when trying to put parameters with the 'Long' type. Does anybody have any clue on what could be causing this? The COM server is still being built in a system with SP2. Should make any difference building it on a system with SP3? (like when we build for X64 in X64 systems). One of the calls that are causing the problem is the following (changed some var names): Dim StructData As StructData_Type StructData.FirstLong = 1234567 StructData.SecondLong = 8901234 StructData.Status = True ComServer.PutParam(StructDataParamType, StructData) Where the definition of StructData_Type is: Type StructData_Type FirstLong As Long SecondLong As Long Status As Boolean End Type (the following has been added after the question was first posted) The definition of the primitive calls on the interface of the COM server in IDL are presented below: // Service to receive data HRESULT PutParam([in] Parameter_Type Param, [in] VARIANT *Value); //Service to send requested data HRESULT GetParam([in] Parameter_Type Param, [out, retval] VARIANT *Value); The definition of the structure I'm trying to pass is: struct StructData_Type { int FirstLong; int SecondLong; VARIANT_BOOL Status; } StructData_Type; I found it strange that this definition here is using 'int' as the type of FirstLong and SeconLong and when I check the VB6 object explorer they are typed 'Long'. Btw, when I do extract the IDL from the COM server (using a specific utility) those parameters are defined as Long. Update: I have tested the same code with a version of my COM server compiled for Windows 7 (different version of GNAT, same GNATCOM version) and it works! I don't really know what happened here. I'll keep trying to identify the problem on WinXP SP3 but It is good to know that it works on Win7. If you have a similar problem it may be good to try to migrate to Win7. A: I'll focus on explaining what the error means, there are too few hints in the question to provide a simple answer. A "stub" is used in COM when you make calls across an execution boundary. It wasn't stated explicitly in the question but your Ada program is probably an EXE and implements an out-of-process COM server. Crossing the boundary between processes in Windows is difficult due to their strong isolation. This is done in Windows by RPC, Remote Procedure Call, a protocol for making calls across such boundaries, a network being the typical case. To make an RPC call, the arguments of a function must be serialized into a network packet. COM doesn't know how to do this because it doesn't know enough about the actual arguments to a function, it needs the help of a proxy. 
A piece of code that does know what the argument types are. On the receiving end is a very similar piece of code that does the exact opposite of what the proxy does. It deserializes the arguments and makes the internal call. This is the stub. One way this can fail is when the stub receives a network packet and it contains more or less data than required for the function argument values. Clearly it won't know what to do with that packet, there is no sensible way to turn that into a StructData_Type value, and it will fail with "The stub received bad data" error. So the very first explanation for this error to consider is a DLL Hell problem. A mismatch between the proxy and the stub. If this app has been stable for a long time then this is not a happy explanation. There's another aspect about your code snippet that is likely to induce this problem. Structures are very troublesome beasts in software, their members are aligned to their natural storage boundary and the alignment rules are subject to interpretation by the respective compilers. This can certainly be the case for the structure you quoted. It needs 10 bytes to store the fields, 4 + 4 + 2 and they align naturally. But the structure is actually 12 bytes long. Two bytes are padded at the end to ensure that the ints still align when the structure is stored in an array. It also makes COM's job very difficult, since COM hides implementation detail and structure alignment is a massive detail. It needs help to copy a structure, the job of the IRecordInfo interface. The stub will also fail when it cannot find an implementation of that interface. I'll talk a bit about the proxy, stub and IRecordInfo. There are two basic ways a proxy/stub pair are generated. One way is by describing the interfaces in a language called IDL, Interface Description Language, and compile that with MIDL. That compiler is capable of auto-generating the proxy/stub code, since it knows the function argument types. You'll get a DLL that needs to be registered on both the client and the server. Your server might be using that, I don't know. The second way is what VB6 uses, it takes advantage of a universal proxy that's built into Windows. Called FactoryBuffer, its CLSID is {00000320-0000-0000-C000-000000000046}. It works by using a type library. A type library is a machine readable description of the functions in a COM server, good enough for FactoryBuffer to figure out how to serialize the function arguments. This type library is also the one that provides the info that IRecordInfo needs to figure out how the members of a structure are aligned. I don't know how it is done on the server side, never heard of GNATCOM before. So a strong explanation for this problem is that you are having a problem with the type library. Especially tricky in VB6 because you cannot directly control the guids that it uses. It likes to generate new ones when you make trivial changes, the only way to avoid it is by selecting the binary compatibility option. Which uses an old copy of the type library and tries to keep the new one as compatible as possible. If you don't have that option turned on then do expect trouble, especially for the guid of the structure. Kaboom if it changed and the other end is still using the old guid. Just some hints on where to start looking. Do not assume it is a problem caused by SP3, this COM infrastructure hasn't changed for a very long time. 
But certainly expect this kind of problem due to a new operating system version being installed and having to re-register everything. SysInternals' ProcMon is a good utility to see the programs use the registry to find the proxy, stub and type library. And you'd certainly get help from a COM Spy kind of utility, albeit that they are very hard to find these days. A: If it suddenly stopped working happily on XP, the first culprit I'd look for is type mismatches. It is possible that "long" on such systems is now 64-bits, while your Ada COM code (and/or perhaps your C ints) are exepecting 32-bits. With a traditionally-compiled system this would have been checked for you by your compiler, but the extra indirection you have with COM makes that difficult. The bit you wrote in there about "when we compile for 64-bit systems" makes me particularly leery. 64-bit compiles may change the size of many C types, you know. A: This Related Post suggests you need padding in your struct, as marshalling code may expect more data than you actually send (which is a bug, of course). Your struct contains 9 bytes (assuming 4 bytes for each of the ints/longs and one for the boolean). Try to add padding so that your struct contains a multiple of 4 bytes (or, failing that, multiple of 8, as the post isn't clear on the expected size) A: I am also suggesting that the problem is due to a padding issue in your structure. I don't know whether you can control this using a #pragma, but it might be worth looking at your documentation. I think it would be a good idea to try and patch your struct so that the resulting type library struct is a multiple of four (or eight). Your Status member takes up 2 bytes, so maybe you should insert a dummy value of the same type either before or after Status - which should bring it up to 12 bytes (if packing to eight bytes, this would have to be three dummy variables).
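Building on the padding suggestion in the last two answers, a minimal sketch of an explicitly padded version of the structure; the dummy member name is illustrative, and the resulting size should be verified against what the marshaller expects:

struct StructData_Type {
    int          FirstLong;   /* 4 bytes */
    int          SecondLong;  /* 4 bytes */
    VARIANT_BOOL Status;      /* 2 bytes */
    short        Padding;     /* 2 dummy bytes to bring the total to 12 */
} StructData_Type;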
{ "language": "en", "url": "https://stackoverflow.com/questions/15821387", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How can I introduce my VS 2019 to Unreal Engine? I have installed VS 2019 and UE4 v.4.11.0 but when I want to open a C++ project in UE4 I see that it can not recognize my VS2019 compiler. How can I fix it? A: Support for Visual Studio 2019 has been added to Unreal Engine 4.22: Release Notes. After updating to Unreal Engine 4.22 or newer, ensure that you have set the Source Code Editor to Visual Studio 2019 in the Editor Preferences. Additionally, if you have created your project with an earlier version of Visual Studio, you have to regenerate your project files: right click your .uproject file and click Generate Visual Studio project files.
{ "language": "en", "url": "https://stackoverflow.com/questions/63260738", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to set an array in json response goLang-gin I have an array of structs stored in a variable. The struct is type myStruct struct { id int64 `db:"id" json:"id"` Name string `form:"name" db:"name" json:"name" binding:"required"` Status string `form:"status" db:"status" json:"status" binding:"required"` } My array looks like this and is stored in a variable 'myArray'. This array is formed by iterating over a set of rows coming from the database. [{1 abc default} {2 xyz default}] I am using gin as the http server. How do I set this array into the JSON response using c.JSON? Something like [ { id: 1, name : 'abc' status: 'default' }, { id: 2, name : 'xyz' status: 'default' } ] A: ok c.JSON(http.StatusOK, myArray) worked. But I cannot see the Id field in the response. Any reason why? Is it because of the 'int64' dataType?
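It is most likely not the int64 type: encoding/json (which gin's c.JSON uses) only serializes exported fields, and the lowercase `id` field is unexported, so it is skipped. A minimal sketch of the struct with that field exported:

type myStruct struct {
    ID     int64  `db:"id" json:"id"`
    Name   string `form:"name" db:"name" json:"name" binding:"required"`
    Status string `form:"status" db:"status" json:"status" binding:"required"`
}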
{ "language": "en", "url": "https://stackoverflow.com/questions/39095502", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Rebuild existing EC2 instance from snapshot? I have an existing linux EC2 instance with a corrupted root volume. I have a snapshot of the root that is not corrupted. Is it possible with terraform to rebuild the instance based on the snapshot ID of the snapshot ? A: Of course it is possible, this simple configuration should do the job: resource "aws_ami" "aws_ami_name" { name = "aws_ami_name" virtualization_type = "hvm" root_device_name = "/dev/sda1" ebs_block_device { snapshot_id = "snapshot_ID” device_name = "/dev/sda1" volume_type = "gp2" } } resource "aws_instance" "ec2_name" { ami = "${aws_ami.aws_ami_name.id}" instance_type = "t3.large" } A: It's not really a Terraform-type task, since you're not deploying new infrastructure. Instead, do it manually: * *Create a new EBS Volume from the Snapshot *Stop the instance *Detach the existing root volume (make a note of the device identifier such as /dev/sda1) *Attach the new Volume with the same identifier *Start the instance
{ "language": "en", "url": "https://stackoverflow.com/questions/69857931", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: NetworkX: structuring a simple flow study on a graph If you consider the following graph: from __future__ import division import networkx as nx import matplotlib.pyplot as plt import numpy as np from numpy.linalg import inv G = nx.Graph() pos={1:(2,3),2:(0,0),3:(6,0)} G.add_nodes_from(pos.keys()) nx.set_node_attributes(G, 'coord', pos) PE={1:0,2:60,3:40} nx.set_node_attributes(G,'PE',PE) q={1:100,2:0,3:0} nx.set_node_attributes(G,'q',q) G.add_edge(1,2) G.add_edge(1,3) G.add_edge(2,3) import math lengths={} inv_lengths={} for edge in G.edges(): startnode=edge[0] endnode=edge[1] lengths[edge]=round(math.sqrt(((pos[endnode][1]-pos[startnode][1])**2)+ ((pos[endnode][0]-pos[startnode][0])**2)),2) inv_lengths[edge]=round(1/lengths[edge],3) nx.set_edge_attributes(G, 'length', lengths) nx.set_edge_attributes(G, 'inv_length', inv_lengths) nx.draw(G,pos,node_size=1000,node_color='r',with_labels=True) nx.draw_networkx_edge_labels(G,pos) plt.show() And the following flow problem: where 1 is a supply-only node and 2 and 3 are demand-only, how come that the following solution yields weird values of flow through each edge? It seems like q1=100 is not even considered, and I expect L2 to have flow=0. m=nx.laplacian_matrix(G,weight='inv_length') a=m.todense() flow={} res2=np.dot(a,b) #No inverse is required: x=ab res2=[round(item,3) for sublist in res2.tolist() for item in sublist] print res2 for i,e in enumerate(G.edges()): flow[e]=res2[i] b=[] for i,v in enumerate(PE.values()): b.append(v) res2=np.dot(a,b) #No inverse is required: x=ab res2=[round(item,3) for sublist in res2.tolist() for item in sublist] print res2 #res2=[-24.62, 19.96, 4.66] A: I have taken the liberty to compute edge lengths in a simpler way: from scipy.spatial.distance import euclidean lengths = {} inv_lengths = {} for edge in G.edges(): startnode = edge[0] endnode = edge[1] d = euclidean(pos[startnode], pos[endnode]) lengths[edge] = d inv_lengths[edge] = 1/d And this is how I implemented the matrix equation   : E = np.array([[0], [60], [40]], dtype=np.float64) L1 = lengths[(1, 2)] L2 = lengths[(2, 3)] L3 = lengths[(1, 3)] L = np.array([[1/L1 + 1/L3, -1/L1, -1/L3], [ -1/L1, 1/L1 + 1/L2, -1/L2], [ -1/L3, -1/L2, 1/L2 + 1/L3]], dtype=np.float64) qLE = np.dot(L, E) The code above yields the same result (approximately) than yours: In [55]: np.set_printoptions(precision=2) In [56]: qLE Out[56]: array([[-24.64], [ 19.97], [ 4.67]]) In summary, I think this is not a programming issue. Perhaps you should revise the flow model...
{ "language": "en", "url": "https://stackoverflow.com/questions/42348315", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Starting apache derby programmatically I am a newbie to Java. In my project, I have used JDBC. It worked fine as long as NetBeans was running. But when I turned off NetBeans and executed the jar file, I got the following error: java.sql.SQLNonTransientConnectionException: java.net.ConnectException : Error connecting to server localhost on port 1527 with message Connection refused. I read the answer at https://stackoverflow.com/a/9725496/2464420, but I could not achieve the desired result. I got the following:- Please help me. A: There is a compilation error in your project. You must add the derbynet.jar dependency to your classpath in order to embed the server. http://db.apache.org/derby/papers/DerbyTut/ns_intro.html#ns_config_env Client and server dependencies are two different jars.
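For reference, a minimal sketch of starting the Derby network server in-process once derbynet.jar is on the classpath; the host, port and logging choices here are illustrative:

import java.io.PrintWriter;
import java.net.InetAddress;
import org.apache.derby.drda.NetworkServerControl;

public class StartDerby {
    public static void main(String[] args) throws Exception {
        // Listen on localhost:1527, Derby's default network port
        NetworkServerControl server =
                new NetworkServerControl(InetAddress.getByName("localhost"), 1527);
        server.start(new PrintWriter(System.out, true)); // log server messages to stdout
    }
}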
{ "language": "en", "url": "https://stackoverflow.com/questions/20031034", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Cakephp 3 Appending args variable in all pagination links I am using CakePHP V3.5.10 Once I cake bake my code any paginator works because is appending this args a variable that looks like this: domain.com/admin/categories?args=&page=3 So when I var_dump it I get this : var_dump($this->request->params); array(9) { ["controller"]=> string(10) "Categories" ["pass"]=> array(0) { } ["action"]=> string(5) "index" ["prefix"]=> string(5) "admin" ["plugin"]=> NULL ["_matchedRoute"]=> string(18) "/admin/:controller" ["?"]=> array(1) { ["args"]=> string(0) "" } ["_ext"]=> NULL ["isAjax"]=> bool(false) } if you can see the place where normally should be "page" is "args" instead so I check a diff application of mine and how normally should look is like this: Healthy URL: domain.com/admin/categories?page=3 array(9) { ["controller"]=> string(10) "Categories" ["pass"]=> array(0) { } ["action"]=> string(5) "index" ["prefix"]=> string(5) "admin" ["plugin"]=> NULL ["_matchedRoute"]=> string(18) "/admin/:controller" ["?"]=> array(1) { ["page"]=> string(1) "2" } ["_ext"]=> NULL ["isAjax"]=> bool(false) } As you can see here I have array(1) { ["page"]=> string(1) "2" So that is why paginator doesn't work because can't read the ?page plus is passing this ?args that is empty... but what is not known is why paginator will behave in this way? and what to do to make it work? If is there someone could help with this I'll be really thankful. I've been comparing even codes of the paginator and certainly no idea why this is happening. A: OK if you find this error is because of the way the data is being passing Ex: ?page=1&sort=name&direction:asc is not a CakePHP way of handling params. A right format url in cake should be nice looking like this: domain.com/2/name/asc The way I end it up passing the params in routes.php: $routes->connect('/productoptions/:page/:sort/:direction', ['controller' => 'Productoptions', 'action' => 'index'], ['param' => ['page'], 'param2' => ['sort'], 'param3' => ['direction'] ]); So I can use $this->request->params['page] , $this->request->params['direction] etc... In this way, I use CakePHP way of passing and handling params instead of $_GET
{ "language": "en", "url": "https://stackoverflow.com/questions/48915349", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to add a prefix to every row of a particular column in a CSV file via command line on Linux I am trying to achieve the following. File before editing. column-1, column-2, column-3, column-4, column-5 Row-1-c1, Row-1-c2, Row-1-c3, Row-1-c4, Row-1-c5 Row-2-c1, Row-2-c2, Row-2-c3, Row-2-c4, Row-2-c5 Row-3-c1, Row-3-c2, Row-3-c3, Row-3-c4, Row-3-c5 Row-4-c1, Row-4-c2, Row-4-c3, Row-4-c4, Row-4-c5 Row-5-c1, Row-5-c2, Row-5-c3, Row-5-c4, Row-5-c5 File after editing column-1, column-2, column-3, column-4, column-5 Row-1-c1, Row-1-c2, Prefix-Row-1-c3, Row-1-c4, Row-1-c5 Row-2-c1, Row-2-c2, Prefix-Row-2-c3, Row-2-c4, Row-2-c5 Row-3-c1, Row-3-c2, Prefix-Row-3-c3, Row-3-c4, Row-3-c5 Row-4-c1, Row-4-c2, Prefix-Row-4-c3, Row-4-c4, Row-4-c5 Row-5-c1, Row-5-c2, Prefix-Row-5-c3, Row-5-c4, Row-5-c5 Notice that column-3 is the column that the prefix is added to each individual row except the column heading. I was wondering which editor would be the best editor to use and find out how to use the commands to get the desired result. A: Maybe a better question would be "How many different tools could you use for the job?" I'd probably go with awk as the easiest tool that does the job reasonably simply: awk -F, 'NR == 1 { print; OFS="," } NR > 1 { sub(/^ +/, "&Prefix-", $3); print }' The sub operation adds Prefix- after the spaces at the start of column 3. The code does not attempt to adjust the content of line 1 (the heading); if you want spaces added after $3, then I suppose this does the job (because of the placement of commas, you prefix the extra spaces to column 4 of line 1): awk -F, 'NR == 1 { OFS=","; $4 = " " $4; print } NR > 1 { sub(/^ +/, "&Prefix-", $3); print }' Do you know how to do the same thing with sed? Yes, like this: sed -e ' 1s/^\(\([^,]*,[[:space:]]*\)\{3\}\)/\1 /' \ -e '2,$s/^\(\([^,]*,[[:space:]]*\)\{2\}\)/\1Prefix-/' "$@" The first expression deals with the first line; it puts as many spaces as there are in the prefix (here that's "Prefix-" so it's 7 spaces) after the third column. The second expression deals with the remaining lines; it adds the prefix before the third column. To deal with column N instead of column 3, change the 3 to N and the 2 inside \{2\} to N-1. I rechecked the second Awk script; it produces the correct output for me on the sample data from the question. So, within its limitations, does the first Awk script. Make sure you're using something other than the C shell (it gets upset by multi-line quoted strings), and that you were careful with your copying. 
Example output $ cat data column-1, column-2, column-3, column-4, column-5 Row-1-c1, Row-1-c2, Row-1-c3, Row-1-c4, Row-1-c5 Row-2-c1, Row-2-c2, Row-2-c3, Row-2-c4, Row-2-c5 Row-3-c1, Row-3-c2, Row-3-c3, Row-3-c4, Row-3-c5 Row-4-c1, Row-4-c2, Row-4-c3, Row-4-c4, Row-4-c5 Row-5-c1, Row-5-c2, Row-5-c3, Row-5-c4, Row-5-c5 $ bash manglesed.sh data column-1, column-2, column-3, column-4, column-5 Row-1-c1, Row-1-c2, Prefix-Row-1-c3, Row-1-c4, Row-1-c5 Row-2-c1, Row-2-c2, Prefix-Row-2-c3, Row-2-c4, Row-2-c5 Row-3-c1, Row-3-c2, Prefix-Row-3-c3, Row-3-c4, Row-3-c5 Row-4-c1, Row-4-c2, Prefix-Row-4-c3, Row-4-c4, Row-4-c5 Row-5-c1, Row-5-c2, Prefix-Row-5-c3, Row-5-c4, Row-5-c5 $ bash mangleawk.sh data column-1, column-2, column-3, column-4, column-5 Row-1-c1, Row-1-c2, Prefix-Row-1-c3, Row-1-c4, Row-1-c5 Row-2-c1, Row-2-c2, Prefix-Row-2-c3, Row-2-c4, Row-2-c5 Row-3-c1, Row-3-c2, Prefix-Row-3-c3, Row-3-c4, Row-3-c5 Row-4-c1, Row-4-c2, Prefix-Row-4-c3, Row-4-c4, Row-4-c5 Row-5-c1, Row-5-c2, Prefix-Row-5-c3, Row-5-c4, Row-5-c5 $ cat manglesed.sh sed -e ' 1s/^\(\([^,]*,[[:space:]]*\)\{3\}\)/\1 /' \ -e '2,$s/^\(\([^,]*,[[:space:]]*\)\{2\}\)/\1Prefix-/' "$@" $ cat mangleawk.sh awk -F, 'NR == 1 { OFS=","; $4 = " " $4; print } NR > 1 { sub(/^ +/, "&Prefix-", $3); print }' "$@" $ awk -F, 'NR == 1 { print; OFS="," } NR > 1 { sub(/^ +/, "&Prefix-", $3); print }' data column-1, column-2, column-3, column-4, column-5 Row-1-c1, Row-1-c2, Prefix-Row-1-c3, Row-1-c4, Row-1-c5 Row-2-c1, Row-2-c2, Prefix-Row-2-c3, Row-2-c4, Row-2-c5 Row-3-c1, Row-3-c2, Prefix-Row-3-c3, Row-3-c4, Row-3-c5 Row-4-c1, Row-4-c2, Prefix-Row-4-c3, Row-4-c4, Row-4-c5 Row-5-c1, Row-5-c2, Prefix-Row-5-c3, Row-5-c4, Row-5-c5 $
{ "language": "en", "url": "https://stackoverflow.com/questions/24226003", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: NSMutableArray addObject freezes my app Tons of similar questions, but none of their answers seem to answer mine... I'm creating my array as such: imgArray = [NSMutableArray arrayWithCapacity:10]; Later (in another function) I am trying to add an object to it: Attachment *newAttachment = [[[Attachment alloc] init] autorelease]; newAttachment.fileName = filename; newAttachment.file = file; [imgArray addObject:newAttachment]; This results in the iPhone app freezing up. The simulator seems fine; the clock on the status bar keeps ticking, I don't get any error messages, but my app is no longer responding. What am I doing wrong? A: It seems you are not retaining imgArray. Are you? Try imgArray = [[NSMutableArray alloc] initWithCapacity:10]; if not. A: Just do imgArray = [[NSMutableArray arrayWithCapacity:10] retain]; Any class method that returns an object returns it with a retain count of 1 and already in the autorelease pool, so if you want to use that object beyond the current block you should always retain it; otherwise the reference is lost outside the block.
{ "language": "en", "url": "https://stackoverflow.com/questions/6224773", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to assign weight for a word? I have a list of words for four categories. There are 10 words for each category. Now I want to assign a weight for each word in the list. For example if one category has words: disease fall have ventilator tumour country sewing demo precaution analysis Now for each of these words I want to give a weight. For example disease should be given weight 1(by dividing 10 by 10), fall should be given weight 0.9 (by dividing 9 by 10), have should be given 0.8 and so on. How to write this code in R? Can anyone help me in this regard? Thanks in advance. I want my output to be: Death Malfunction Weight distance malformed 1 fall unformed 0.9 have intensive 0.8 ventilator malfunctioned 0.7 tumour front 0.6 country icu 0.5 sewing injury 0.4 demo care 0.3 precaution disease 0.2 analysis diagnosis 0.1
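A minimal sketch in R that reproduces the desired output above; the word vectors are copied from the expected-output table, and the weight is simply the descending rank divided by the number of words:

death <- c("distance", "fall", "have", "ventilator", "tumour",
           "country", "sewing", "demo", "precaution", "analysis")
malfunction <- c("malformed", "unformed", "intensive", "malfunctioned", "front",
                 "icu", "injury", "care", "disease", "diagnosis")
weight <- seq(length(death), 1) / length(death)   # 1.0, 0.9, ..., 0.1
result <- data.frame(Death = death, Malfunction = malfunction, Weight = weight)
result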
{ "language": "en", "url": "https://stackoverflow.com/questions/32626718", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: C# Saving Panel as image at Multithread I don't have any problem saving panel as image with UI thread but i have only a black rectangle when i save this panel at another thread except UI thread : using (Bitmap bmp = new Bitmap(panel1.Width, panel1.Height, System.Drawing.Imaging.PixelFormat.Format24bppRgb)) { if (panel1.InvokeRequired) { panel1.BeginInvoke((MethodInvoker)delegate () { panel1.DrawToBitmap(bmp, new System.Drawing.Rectangle(Point.Empty, bmp.Size)); }); Bitmap bb = bmp.Clone(new System.Drawing.Rectangle(0, 0, 1016, 648), PixelFormat.Format24bppRgb); bb.Save(@"C:\sample.bmp", ImageFormat.Bmp); } else { panel1.DrawToBitmap(bmp, new System.Drawing.Rectangle(Point.Empty, bmp.Size)); Bitmap bb = bmp.Clone(new System.Drawing.Rectangle(0, 0, 1016, 648), PixelFormat.Format24bppRgb); bb.Save(@"C:\sample.bmp", ImageFormat.Bmp); } } This problem is related with locking mechanism? Or how can i solve this problem? Thanks in advance. A: Universal answer (with explanation): BeginInvoke is function that send a message 'this function should be executed in different thread' and then directly leaves to continue execution in current thread. The function is executed at a later time, when the target thread has 'free time' (messages posted before are processed). When you need the result of the function, use Invoke. The Invoke function is 'slower', or better to say it blocks current thread until the executed function finishes. (I newer really tested this in C#, but it s possible, that the Invoke function is prioritized; e.g. when you call BeginInvoke and directly after it Invoke to the same thread, the function from Invoke will probably be executed before the function from BeginInvoke.) Use this alternative when you need the function to be executed before the next instruction are processed (when you need the result of the invoked function). Simple (tl;dr): When you need to need to only set a value (e.g. set text of edit box), use BeginInvoke, but when you need a result (e.g. get text from edit box) use always Invoke. In your case you need the result (bitmap to be drawn) therefore you need to wait for the function to end. (There are also other possible options, but in this case the simple way is the better way.)
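Applying that advice to the original code, a minimal sketch that blocks with Invoke so the panel has actually been drawn before the bitmap is cloned and saved; the dimensions and file path are taken from the question:

using (Bitmap bmp = new Bitmap(panel1.Width, panel1.Height, PixelFormat.Format24bppRgb))
{
    if (panel1.InvokeRequired)
    {
        // Invoke blocks until the UI thread has finished drawing into bmp
        panel1.Invoke((MethodInvoker)delegate ()
        {
            panel1.DrawToBitmap(bmp, new Rectangle(Point.Empty, bmp.Size));
        });
    }
    else
    {
        panel1.DrawToBitmap(bmp, new Rectangle(Point.Empty, bmp.Size));
    }

    using (Bitmap bb = bmp.Clone(new Rectangle(0, 0, 1016, 648), PixelFormat.Format24bppRgb))
    {
        bb.Save(@"C:\sample.bmp", ImageFormat.Bmp);
    }
}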
{ "language": "en", "url": "https://stackoverflow.com/questions/49433837", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to import csv files with sorted file names? I am trying to import multiple csv files and when I run the below code it does work. allfiles = glob.glob('*.csv') allfiles However, this results in: ['file_0.csv', 'file_1.csv', 'file_10.csv', 'file_100.csv', 'file_101.csv, ... ] As you can see, the imported files are not sorted numbers. What I want is to have my numbers in my file names to be in ascending order: ['file_0.csv', 'file_1.csv', 'file_2.csv', 'file_3.csv', ... ] How do I solve the problem? A: allfiles = glob.glob('*.csv') allfiles.sort(key= lambda x: int(x.split('_')[1].split('.')[0])) A: You can't do that with glob, you need to sort the resultant files yourself by the integer each file contains: allfiles = glob.iglob('*.csv') allfiles_sorted = sorted(allfiles, key=lambda x: int(re.search(r'\d+', x).group())) Also note that, i've used glob.iglob instead of glob.glob as there is no need to make an intermediate list where an iterator would do the job fine. A: Check with natsort from natsort import natsorted allfiles=natsorted(allfiles) A: os.listdir() will give the list of files in that folder and sorted will sort it import os sortedlist = sorted(os.listdir()) EDIT: just specify key = len to count the length of an element sorted(os.listdir(),key = len) A: This is also a way to do that. This algorithm will sort with length of file name string. import glob all_files = glob.glob('*.csv') def sort_with_length(file_name): return len(file_name) new_files = sorted(all_files, key = sort_with_length ) print("Old files:") print(all_files) print("New files:") print(new_files) Sample output: Old files: ['file1.csv', 'file101.csv', 'file102.csv', 'file2.csv', 'file201.csv', 'file3.csv'] New files: ['file1.csv', 'file2.csv', 'file3.csv', 'file101.csv', 'file102.csv', 'file201.csv']
{ "language": "en", "url": "https://stackoverflow.com/questions/57234211", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Ionic Build iOS - Fail I've been workign on this project for months now with no major issues. Today I can't even get it to build out. I ran "$ionic resources" and now I can't get my build to work at all. I'm getting this error: ** BUILD FAILED ** The following build commands failed: CompileAssetCatalog build/emulator/Stopper.app Stopper/Images.xcassets (1 failure) ERROR building one of the platforms: Error code 65 for command: xcodebuild with args: -xcconfig,/Users/colemanjeff/GitHub/StopperRC1/platforms/ios/cordova/build-debug.xcconfig,-project,Stopper.xcodeproj,ARCHS=i386,-target,Stopper,-configuration,Debug,-sdk,iphonesimulator,build,VALID_ARCHS=i386,CONFIGURATION_BUILD_DIR=/Users/colemanjeff/GitHub/StopperRC1/platforms/ios/build/emulator,SHARED_PRECOMPS_DIR=/Users/colemanjeff/GitHub/StopperRC1/platforms/ios/build/sharedpch You may not have the required environment or OS to build this project Error: Error code 65 for command: xcodebuild with args: -xcconfig,/Users/colemanjeff/GitHub/StopperRC1/platforms/ios/cordova/build-debug.xcconfig,-project,Stopper.xcodeproj,ARCHS=i386,-target,Stopper,-configuration,Debug,-sdk,iphonesimulator,build,VALID_ARCHS=i386,CONFIGURATION_BUILD_DIR=/Users/colemanjeff/GitHub/StopperRC1/platforms/ios/build/emulator,SHARED_PRECOMPS_DIR=/Users/colemanjeff/GitHub/StopperRC1/platforms/ios/build/sharedpch I'm not sure what caused the problem or how to fix it. Anyone have any idea? A: Turns out I actually did need to uninstall the platform, remove the plugin json file, and then reinstall everything. A: Run (this will remove the old ionic ios platform) sudo ionic platform rm ios Then (this will install a new platform with privileges) sudo ionic platform add ios Then build your code ios/android ionic build ios ionic build android This fixed it for me! Might be that you should also run sudo ionic resources to generate new icon and splash screens. A: Basically, re-installation of Platform and Plugins once again resolves the issue.
{ "language": "en", "url": "https://stackoverflow.com/questions/37055792", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Total the hours worked, then filter out 30mins where the segment per datekey contains lunch I have a table called Shifts, below is a sample of the data ID Key DateKey Start End Hours Segment 1 1001 20210101 2021-01-01 09:00:00 2021-01-01 09:00:00 4.000000 On-Call 2 1001 20210101 2021-01-01 11:00:00 2021-01-01 11:15:00 0.250000 Break 3 1001 20210102 2021-01-02 13:00:00 2021-01-01 19:00:00 6.000000 On-Call 4 1001 20210102 2021-01-02 15:00:00 2021-01-01 15:15:00 0.250000 Break 5 1001 20210102 2021-01-02 17:00:00 2021-01-01 17:30:00 0.500000 Lunch 6 1001 20210103 2021-01-03 09:00:00 2021-01-03 16:00:00 7.000000 On-Call 7 1001 20210103 2021-01-03 11:00:00 2021-01-03 11:15:00 0.250000 Break 8 1001 20210103 2021-01-03 13:00:00 2021-01-03 13:30:00 0.500000 Lunch 9 1002 20210104 2021-01-04 09:00:00 2021-01-04 09:00:00 4.000000 On-Call 10 1002 20210104 2021-01-04 11:00:00 2021-01-04 11:15:00 0.250000 Break 11 1002 20210105 2021-01-05 07:00:00 2021-01-05 14:00:00 7.000000 On-Call 12 1002 20210105 2021-01-05 09:00:00 2021-01-05 09:15:00 0.250000 Break 13 1002 20210105 2021-01-05 11:00:00 2021-01-05 11:30:00 0.500000 Lunch I'm trying to: * *Total the hours per DateKey e.g DateKey 20210101 and ID 1 and 2 total hours worked is 4.250000. *After totalling the hours worked for that day subtract 30mins from the total work hours if the segment for that datekey contains lunch e.g 20210102 total hours worked ( ID 3,4,5 ) is 6.750000 subtract 30mins. A: --Group data on datekey, then sum it without lunch select datekey,sum(hours)[total] from newtbl where segment!='lunch' group by datekey A: to find total hours per columns, I would use something like this: Select Key,sum(hours) from Shifts where DateKey == 20210101; to exclude the lunchtime from total work: SELECT key,sum(hours) FROM Shifts where seg <> "lunch" GROUP by key
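One way to express both requirements in a single query, using the table and column names from the question: Hours is summed per Key and DateKey, and 0.5 is subtracted only when that day contains a Lunch segment.

SELECT [Key],
       DateKey,
       SUM(Hours)
         - CASE WHEN MAX(CASE WHEN Segment = 'Lunch' THEN 1 ELSE 0 END) = 1
                THEN 0.5
                ELSE 0
           END AS TotalHours
FROM Shifts
GROUP BY [Key], DateKey;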
{ "language": "en", "url": "https://stackoverflow.com/questions/72810475", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Getting incorrect creation dates using 'aws s3' I have two buckets. In the AWS Console they have the following "Date created" 2017-11-22 14:07:03 i-repo 2018-01-12 17:16:31 l-repo Using the AWS CLI (aws-cli/1.16.90 Python/3.7.2 Darwin/17.7.0 botocore/1.12.80) command aws s3 ls, I get the following: 2018-02-08 12:49:03 i-repo 2018-12-19 15:55:29 l-repo Using the AWS CLI command aws s3api list-buckets, I get the same incorrect dates. I have confirmed that the dates that the AWS CLI reports relate to the date of the most recent bucket policy change, NOT the bucket create date. Am I missing something, or is this a bug? A: Looks like this is a known issue/intended. See below: After further investigation and discussion with the S3 team, I have found that this is expected behavior due to the design of the service. The GET Service call in S3 (s3api list-buckets or s3 ls with no further arguments in the CLI) works differently when being run against different regions. All bucket creations are mastered in us-east-1, then replicated on a global scale - the resulting difference is that there are no "replication" events to the us-east-1 region. The Date Created field displayed in the web console is according to the actual creation date registered in us-east-1, while the AWS CLI and SDKs will display the creation date depending on the specified region (or the default region set in your configuration). When using an endpoint other than us-east-1, the CreationDate you receive is actually the last modified time according to the bucket's last replication time in this region. This date can change when making changes to your bucket, such as editing its bucket policy. This experienced behavior is result of how S3's architecture has been designed and implemented, making it difficult to change without affecting customers that already expect this behavior. S3 does intend to change this behavior so that the actual bucket creation date is shown regardless of the region in which the GET Service call is issued, however to answer your question we do not yet have an ETA for the implementation of this change. This change would most likely be announced in the AWS Forums for S3 if you'd like to know when it takes place. Source
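Given that explanation, pointing the call at the us-east-1 endpoint should return the original creation dates; for example (standard AWS CLI options, behaviour inferred from the quoted answer):

aws s3api list-buckets --region us-east-1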
{ "language": "en", "url": "https://stackoverflow.com/questions/54353373", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: "Compile Server Error." while building OpenCL kernels I am trying to compile OpenCL kernels on OS X. Everything is ok when there are just a few lines. However, after the code grows over 1.5k lines, clGetProgramBuildInfo with CL_PROGRAM_BUILD_LOG flag returned "Compile Server Error." every time. I googled but found nothing about it. Could anyone help me? A: You can learn the meaning of OpenCL error codes by searching in cl.h. In this case, -11 is just what you'd expect, CL_BUILD_PROGRAM_FAILURE. It's certainly curious that the build log is empty. Two questions: 1.) What is the return value from clGetProgramBuildInfo? 2.) What platform are you on? If you are using Apple's OpenCL implementation, you could try setting CL_LOG_ERRORS=stdout in your environment. For example, from Terminal: $ CL_LOG_ERRORS=stdout ./myprog It's also pretty easy to set this in Xcode (Edit Scheme -> Arguments -> Environment Variables). Please find the original answer by @James A: This unhelpful error message indicates that there is bug in Apple's compiler. You can inform them of such bugs by using the Apple Bug Reporting System.
{ "language": "en", "url": "https://stackoverflow.com/questions/30000473", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Azure App Service - Site Not Found Only From Specific Locations We have a website hosted via Azure App Service that we can access from many of our locations throughout the world. Yet there are some locations that return "This site can't be reached" (ERR_NAME_RESOLUTION_FAILED). Any ideas? Cannot ping the IP address from the client PC either. In Azure, I have all IP Addresses open in the Firewall. I feel like I have exhausted everything so far. I also activated the site through another server that is not part of Azure and I was able to access it from locations that cannot access the Azure Site. Why I'm pretty sure it is an Azure issue (unless it is their provider blocking something)? A: ERR_NAME_RESOLUTION_FAILED is a DNS failure error. If you have recently changed the DNS of your domain, it should take a while to become available in all regions. If you've already changed for quite a time, check if the computer where you are trying to access is having any other DNS errors / change DNS servers to known public service (Google/Cloudflare/OpenDNS). As you can see here, your DNS is failing in several regions: https://www.whatsmydns.net/#A/www.scaleitusa.com
{ "language": "en", "url": "https://stackoverflow.com/questions/52122340", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Export Cronbach Alpha Results - R I'm following THIS R Blogger tutorial to calculate Cronbach alpha, which works perfectly. I'd like to learn how to export the results, either into a data.frame or text file. Any ideas how I might be able to export the results from the following code: psych::alpha(d)? Note, I looked at the stargazer package, but couldn't get it work work with Cronbach outputs only regression and descriptive statistics. Thank you. A: The output can be saved as a txt file this way. You can also subset the object created with the alpha function using the $ operator to get only the information you are interested in. setwd("~/Desktop") out <- psych::alpha(d) capture.output(out,file = "alpha.txt") A: As is true of everything R, there are many ways of doing what you want to do. The first thing is to look at the help menu for the function (in this case ?alpha). There you will see that a number of objects are returned from the alpha function. (This is what is listed in Values part of the help file.) When you print the output of alpha you are shown just a subset of these objects. However, to see the entire list of objects that are returned, use the "str" command my.results <- alpha(my.data) str(my.results) #or just list the names of the objects names(my.alpha) [1] "total" "alpha.drop" "item.stats" "response.freq" "keys" "scores" "nvar" "boot.ci" [9] "boot" "Unidim" "Fit" "call" "title" You can then choose to capture any of those objects for your own use. Thus my.alpha <- alpha(ability) #use the ability data set in the psych package my.alpha #will give the normal (and nicely formatted output) totals <- my.alpha$total #just get one object from my.alpha totals #show that object will produce a single line (without the fancy output): raw_alpha std.alpha G6(smc) average_r S/N ase mean sd 0.8292414 0.8307712 0.8355999 0.2347851 4.909159 0.006384736 0.5125148 0.2497765 You can do this for any of the objects returned. Most of us who write packages print what we think are the essential elements of the output of the function, but include other useful information. We also allow for other functions (such as summary) to print out other information. So, using the example from above, summary(my.alpha) #prints the rounded to 2 decimals my.alpha$total object Reliability analysis raw_alpha std.alpha G6(smc) average_r S/N ase mean sd 0.83 0.83 0.84 0.23 4.9 0.0064 0.51 0.25 A final word of caution. Many of us do not find alpha a particularly useful statistic to describe the structure of a scale. You might want to read the tutorial on how to find coefficient omega using the psych package at http://personality-project.org/r/psych/HowTo/R_for_omega.pdf
{ "language": "en", "url": "https://stackoverflow.com/questions/43431385", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: SQL Server 2008 CLR vs T-SQL: Is there an efficiency/speed difference? I'm a C# developer who has done some basic database work in T-SQL. However, I need to write a very complicated stored procedure, well above my T-SQL knowledge. Will writing a stored procedure in C# using the .net CLR as part of SQL Server 2008 cause my stored procedure to be less efficient than if it were written in T-SQL? Is the difference (if any) significant? Why? A: CLR require some communication overhead (to pass data between the CLR and SQL Server) Rule of thumb is: * *If your logic mostly includes transformations of massive sets of data, which can be performed using set operations, then use TSQL. *If your logic mostly includes complex computations of relatively small amounts of data, use CLR. With set operations much more can be done than it seems. If you post your requirements here, probably we'll be able to help. A: Please see Performance of CLR Integration: This topic discusses some of the design choices that enhance the performance of Microsoft SQL Server integration with the Microsoft .NET Framework common language runtime (CLR). A: The question of "Will writing a stored procedure in C# using the .net CLR as part of SQL Server 2008 cause my stored procedure to be less efficient than if it were written in T-SQL?" is really too broad to be given a meaningful answer. Efficiency varies greatly depending on not just what types of operations are you doing, but also how you go about those operations. You could have a CLR Stored Procedure that should out-perform an equivalent T-SQL Proc but actually performs worse due to poor coding, and vice versa. Given the general nature of the question, I can say that "in general", things that can be done in T-SQL (without too much complexity) likely should be done in T-SQL. One possible exception might be for TVFs since the CLR API has a very interesting option to stream the results back (I wrote an article for SQL Server Central--free registration required--about STVFs). But there is no way to know for certain without having both CLR and T-SQL versions of the code and testing both with Production-level data (even poorly written code will typically perform well enough with 10k rows or less). So the real question here boils down to: I know C# better than I know T-SQL. What should I do? And in this case it would be best to simply ask how to tackle this particular task in T-SQL. It could very well be that there are non-complex solutions that you just don't happen to know about yet, but would be able to understand upon learning about the feature / technique / etc. And you can still code an equivalent solution in SQLCLR and compare the performance between them. But if there is no satisfactory answer for handling it in T-SQL, then do it in SQLCLR. That being said, I did do a study 3 years ago regarding SQLCLR vs T-SQL Performance and published the results on Simple-Talk.
{ "language": "en", "url": "https://stackoverflow.com/questions/2097958", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Xcode with .cxx_construct I have a little problem with Xcode. I import a header which contains this: - (id).cxx_construct; - (void).cxx_destruct; so I try to set GCC_OBJC_CALL_CXX_CDTORS to yes in my build settings under User-Defined! But I still have this error: Expected selector for Objective-C method What can I do? Thanks, A: These selectors are generated by the compiler. They are the reserved selectors for C++ ivar construction and destruction. Furthermore, the runtime calls these methods for you when GCC_OBJC_CALL_CXX_CDTORS is enabled. There is no need to call or declare them yourself; declaring them would result in a compilation error. What can I do? Choose a unique name for your selectors, and don't implement the ones which are generated for you (when GCC_OBJC_CALL_CXX_CDTORS is enabled). What is it you are trying to do here?
{ "language": "en", "url": "https://stackoverflow.com/questions/10485920", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Color reproduced on screen is different between platforms I have an app on both Android and iOS. Both use the exact same render code with OpenGL ES. I noticed that even when using the same RGBA values, the colour reproduced on screen by the devices is not the same. I want suggestions on what might be the cause. I have considered that it might just be the display hardware, and if so, how would you recommend handling this case, and is this usually handled at all? Thanks. A: Take a screenshot on both devices, email it to a PC and compare the actual colours in Photoshop. It's probably just the screen; they can vary wildly, and I wouldn't worry about it unless your app has some specific reason to be concerned about exact colour reproduction. Even on a single device there are sometimes settings which change the look completely, e.g. the Samsung Galaxy S6: http://www.phonearena.com/news/Samsung-Galaxy-S6-Review-of-the-various-display-modes_id69968
{ "language": "en", "url": "https://stackoverflow.com/questions/36323103", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Android Calculator - Editview cannot input decimal places I am new to Android code development...I am developing a Android calculator apps and does not understand why the two EditTexts (first input and second input) cannot accept decimal places but can only input integers...Here attached as follows are the codes: Thanks! =============Main Activity=============================== package com.trial.jm4_calculator; import android.os.Bundle; import android.app.Activity; import android.view.Menu; import android.view.MenuItem; import android.view.View; import android.widget.Button; import android.widget.CheckBox; import android.widget.EditText; import android.widget.RadioButton; import android.widget.TextView; import android.support.v4.app.NavUtils; public class MainActivity extends Activity { private TextView output; @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); Button btn1 = (Button) findViewById(R.id.button1); btn1.setOnClickListener(btn1Listener); output = (TextView) findViewById(R.id.lblOutput); } View.OnClickListener btn1Listener = new View.OnClickListener() { public void onClick(View v) { double opd1, opd2; double result = 0.0; EditText txtOpd1, txtOpd2; RadioButton rdbAdd, rdbSubtract, rdbMultiply, rdbDivide; CheckBox chkDivide; txtOpd1 = (EditText) findViewById(R.id.txtOpd1); txtOpd2 = (EditText) findViewById(R.id.txtOpd2); opd1 = Double.parseDouble(txtOpd1.getText().toString()); opd2 = Double.parseDouble(txtOpd2.getText().toString()); rdbAdd = (RadioButton) findViewById(R.id.rdbAdd); if (rdbAdd.isChecked()) { result = opd1 + opd2; } rdbSubtract = (RadioButton) findViewById(R.id.rdbSubtract); if (rdbSubtract.isChecked()) { result = opd1 - opd2; } rdbMultiply = (RadioButton) findViewById(R.id.rdbMultiply); if (rdbMultiply.isChecked()) { result = opd1 * opd2; } rdbDivide = (RadioButton) findViewById(R.id.rdbDivide); if (rdbDivide.isChecked()) { result = opd1 / opd2; } output.setText("Answer = " + result); } }; } ====================Main.xml=================================== <?xml version="1.0" encoding="UTF-8"?> <LinearLayout android:layout_height="fill_parent" android:layout_width="fill_parent" android:orientation="vertical" xmlns:android="http://schemas.android.com/apk/res/android"> <LinearLayout android:layout_height="wrap_content" android:layout_width="fill_parent" android:orientation="horizontal"> <TextView android:layout_height="wrap_content" android:layout_width="wrap_content" android:text="First Input: "/> <EditText android:layout_height="wrap_content" android:layout_width="fill_parent" android:inputType="number" android:id="@+id/txtOpd1"/> </LinearLayout> <RadioGroup android:layout_height="wrap_content" android:layout_width="fill_parent" android:orientation="horizontal" android:id="@+id/rdgOp"> <RadioButton android:layout_height="wrap_content" android:layout_width="wrap_content" android:text="+ " android:id="@+id/rdbAdd"/> <RadioButton android:layout_height="wrap_content" android:layout_width="wrap_content" android:text="- " android:id="@+id/rdbSubtract"/> <RadioButton android:layout_height="wrap_content" android:layout_width="wrap_content" android:text="* " android:id="@+id/rdbMultiply"/> <RadioButton android:layout_height="wrap_content" android:layout_width="wrap_content" android:text="/ " android:id="@+id/rdbDivide"/> </RadioGroup> <LinearLayout android:layout_height="wrap_content" android:layout_width="fill_parent" android:orientation="horizontal"> <TextView 
android:layout_height="wrap_content" android:layout_width="wrap_content" android:text="Second Input: "/> <EditText android:layout_height="wrap_content" android:layout_width="fill_parent" android:inputType="number" android:id="@+id/txtOpd2"/> </LinearLayout> <Button android:layout_height="wrap_content" android:layout_width="wrap_content" android:text="Compute" android:id="@+id/button1"/> <TextView android:layout_height="wrap_content" android:layout_width="wrap_content" android:id="@+id/lblOutput"/> </LinearLayout> A: If you want to use Decimal Number only on your EditText use the xml attribute android:inputType="numberDecimal" in your EditText widget your EditText declaration will be like this: <EditText android:id="@+id/editText1" android:layout_width="match_parent" android:layout_height="wrap_content" android:ems="10" android:inputType="numberDecimal" /> If you want to use Signed Decimal Number than combine the two Xml attributes android:inputType="numberDecimal" and android:inputType="numberSigned". Your EditText declaration will be like this: <EditText android:id="@+id/editText1" android:layout_width="match_parent" android:layout_height="wrap_content" android:ems="10" android:inputType="numberDecimal|numberSigned" > </EditText> A: Change android:inputType from "number" to "numberDecimal". See the documentation for even more options for inputType. A: inputType="number" doesnt allow floats. try changing: android:inputType="number" to: android:numeric="integer|decimal" A: You need to change the input type of your EditText in the XML code. Change the inputType attribute of the EditText from android:inputType="number" to android:inputType="numberDecimal" <EditText android:layout_height="wrap_content" android:layout_width="fill_parent" android:inputType="numberDecimal" android:id="@+id/txtOpd1"/>
{ "language": "en", "url": "https://stackoverflow.com/questions/12021068", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: TWCS is not generating the SStables according to the window_size and window_unit I need to generate the SSTables after a certain time like 10 minutes, but using TWCS and setting up the "compaction_window_size" and "compaction_window_unit", I am not able to understand when the SSTables would be getting generated. I have tried all many combinations but I am not able to figure out the when the SSTables will be created CREATE TABLE twcs.twcs2 ( id int PRIMARY KEY, age int, name text ) WITH bloom_filter_fp_chance = 0.01 AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'} AND comment = '' AND compaction = {'class': 'org.apache.cassandra.db.compaction.TimeWindowCompactionStrategy', 'compaction_window_size': '1', 'compaction_window_unit': 'MINUTES', 'max_threshold': '32', 'min_threshold': '4'} AND compression = {'chunk_length_in_kb': '64', 'class': 'org.apache.cassandra.io.compress.LZ4Compressor'} AND crc_check_chance = 1.0 AND dclocal_read_repair_chance = 0.1 AND default_time_to_live = 3600 AND gc_grace_seconds = 60 AND max_index_interval = 2048 AND memtable_flush_period_in_ms = 0 AND min_index_interval = 128 AND read_repair_chance = 0.0 AND speculative_retry = '99PERCENTILE'; Here I have set the 'compaction_window_unit'='MINUTES' and 'compaction_window_size': '1' , So according to this the SSTables should be generated after every 1 minute if some operation is performed on the table(insertion/deletion/updation of data), but this is not happening. A: TWCS is a compaction strategy. Compaction strategies have nothing to do with sstables being generated. It's a reconcile and cleanup "algorithm" once they are created. The way that TWCS works is that sstables will be consolidated into windows. The key word here is "consolidated". There is no guarantee that sstables will be "generated" in that timeframe, but whatever IS generated, after the window expires, will be consolidated together. So if you have, say, hourly windows/buckets, during that hour sstables may or may not get generated. If multiple sstables are created during the window they are compacted/consolidated/reconciled using STCS (similar sized sstables consolidated together). After the hour passes, any sstables that remain for that window will be compacted together into a single sstable. Over time you will see one sstable per window (or none if nothing was generated during that window). After your TTL and gc_grace passes, entire windows simply get removed (instead of the large effort of merging with others and then removing expired records). TWCS works very well if there is no overlap for rows within windows. If there is overlap, then the oldest windowed sstable will be unable to be removed until the newest sstable with overlapped records expire. In other words, TWCS works well for INSERTS that do not cross windows (remember updates and deletes are also considered inserts). You need to be sure to use TTL for cleanup (i.e. don't run delete statements as that will cause overlap). Also, from what I have discovered from using this, ensure to turn off repair for the tables that have TWCS as that can cause big problems (unseen overlap). So in short, TWCS does not cause sstables to get generated (there are rules for when sstables are created that have nothing to do with compaction strategies), it simply another method of keep things "clean". Hope that helps. 
A: There are several resources that may help you: * *https://academy.datastax.com/units/21017-time-window-compaction-dse-operations-apache-cassandra *https://www.youtube.com/watch?v=PWtekUWCIaw *https://thelastpickle.com/blog/2016/12/08/TWCS-part1.html
{ "language": "en", "url": "https://stackoverflow.com/questions/58212745", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: An alternative to CodeTabs B+ I'm looking for an alternative to the jQuery library CodeTabs B+. It should support drag-and-drop on the tab navigation (important for mobile users). Do you know of any? A: I found http://owlgraphic.com/. It covers some of the features CodeTabs B+ has.
{ "language": "en", "url": "https://stackoverflow.com/questions/26544132", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Use Images in a swift framework I need a picture to appear in a framework. The only way I found required knowing the name of the app it is in. Is there another way to get assets into your framework? (For the record, my background search didn't help.) A: Almost 5 years ago I posted this answer. It contains two pieces of code to pull out an asset from a Framework's bundle. The key piece of code is this: public func returnFile(_ resource:String, _ fileName:String, _ fileType:String) -> String { let identifier = "com.companyname.appname" // replace with framework bundle identifier let fileBundle = Bundle.init(identifier: identifier) let filePath = (fileBundle?.path(forResource: resource, ofType: "bundle"))! + "/" + fileName + "." + fileType do { return try String(contentsOfFile: filePath) } catch let error as NSError { return error.description } } So what if your framework, which needs to know two things (app bundle and light/dark mode), tweaked this code? Move identifier out to be accessible to the app and not local to this function. Then create either a new variable (I think this is the best way) or a new function to work with the correct set of assets based on light or dark mode. Now your apps can import your framework, and set things up in its consumers appropriately. (I haven't tried this, but in theory I think it should work.)
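To make that suggestion concrete, here is a rough Swift sketch of what the tweaked framework API could look like. The bundle identifier, the asset-name suffixes and the type name are all assumptions for illustration; if you use asset catalogs with appearance variants, UIImage(named:in:compatibleWith:) can also resolve light/dark automatically:
import UIKit

public enum FrameworkAssets {
    // The host app can overwrite this if its framework copy uses a different identifier.
    public static var identifier = "com.companyname.frameworkname"

    public static func image(named name: String, darkMode: Bool) -> UIImage? {
        guard let bundle = Bundle(identifier: identifier) else { return nil }
        let suffix = darkMode ? "-dark" : "-light"   // assumed naming convention
        return UIImage(named: name + suffix, in: bundle, compatibleWith: nil)
    }
}
This keeps the bundle lookup inside the framework while letting each consuming app pick the asset set it needs.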
{ "language": "en", "url": "https://stackoverflow.com/questions/67236950", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-2" }
Q: Limit subject line of git commit message to 50 characters I often use vim to format my git commit messages. A trend that I am seeing with increasing popularity is that the first line of the commit message should be limited to 50 characters and then subsequent lines should be limited to 72 characters. I already know how to make my commit wrap at 72 characters based on my vimrc file: syntax on au FileType gitcommit set tw=72 Is there a way to make vim autowrap the first line at 50 characters and then 72 characters after that? An equally good answer could highlight everything after the 50th column on only the first line to indicate that my header is too long ... A: You can use the CursorMoved and CursorMovedI autocommands to set the desired textwidth (or any other setting) based on the line the cursor is currently on: augroup gitsetup autocmd! " Only set these commands up for git commits autocmd FileType gitcommit \ autocmd CursorMoved,CursorMovedI * \ let &l:textwidth = line('.') == 1 ? 50 : 72 augroup end The basic logic is simple: let &l:textwidth = line('.') == 1 ? 50 : 72, although the nested autocommands make it look rather funky. You could extract some of it out to a script-local function (fun! s:setup_git()) and call that, if you prefer. The &:l syntax is the same as setlocal, by the way (but with setlocal we can't use an expression such as on the right-hand-side, only a simple string). Some related questions: * *Line number specific text-width setting. *Set paste mode when at beginning of line but auto-indent mode when anywhere else on the line? Note that the default gitcommit.vim syntax file already stops highlighting the first line after 50 characters. From /usr/share/vim/vim80/syntax/gitcommit.vim: syn match gitcommitSummary "^.\{0,50\}" contained containedin=gitcommitFirstLine nextgroup=gitcommitOverflow contains=@Spell [..] hi def link gitcommitSummary Keyword Only the first 50 lines get highlighted as a "Keyword" (light brown in my colour scheme), after that no highlighting is applied. If also has: syn match gitcommitOverflow ".*" contained contains=@Spell [..] "hi def link gitcommitOverflow Error Notice how it's commented out, probably because it's a bit too opinionated. You can easily add this to your vimrc though: augroup gitsetup autocmd! " Only set these commands up for git commits autocmd FileType gitcommit \ hi def link gitcommitOverflow Error \| autocmd CursorMoved,CursorMovedI * \ let &l:textwidth = line('.') == 1 ? 50 : 72 augroup end Which will make everything after 50 characters show up as an error (you could, if you want, also use less obtrusive colours by choosing a different highlight group).
{ "language": "en", "url": "https://stackoverflow.com/questions/43929991", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: free form wordpress I have a voucher to distribute electronically and I would like people to complete a form on my WordPress site and, when they click submit, receive an e-mail with the voucher. Is there any plugin out there for that? If there is not, how could I accomplish this? A: Use the Contact Form 7 plugin: http://wordpress.org/extend/plugins/contact-form-7/
{ "language": "en", "url": "https://stackoverflow.com/questions/5032733", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to display tabset again after .close()? I am working on a small application with ipywidgets that has multiple frames. This means users can click a button, and then the original set of widgets will be closed and a new set of widgets will appear. The function which I am using has a simplified structure as below: def to_next_page(x): current_page.close() display(next_page) I have a button in current_page that goes to next_page, and vice versa. When I tried to go back from next_page to current_page, the following message appeared instead of the widget: Tab(children=(VBox(children=(Text(value='', description='Username:'), Password(description='Password:'), Button(description='Login', style=ButtonStyle()), Button(description='Forget Password', style=ButtonStyle()))), VBox(children=(Text(value='', description='Username:'), Password(description='Password:'), Button(description='Login', style=ButtonStyle()), Button(description='Forget Password', style=ButtonStyle()), Button(description='Sign up', style=ButtonStyle())))), _titles={'0': 'Staff', '1': 'Member'}) Is there any way to go back and forth between widget sets? Thank you. A: You can't use .close(); use this instead: from ipywidgets import Output out = Output() def to_next_page(x): out.clear_output() with out: display(next_page) For displaying every time, use: with out: display(anything) And for closing, use: out.clear_output()
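Putting that pattern together, a minimal runnable sketch of two pages swapping back and forth inside a single Output area looks like this (the widget names are just examples):
from ipywidgets import Button, Output, Text, VBox
from IPython.display import display

out = Output()
next_btn = Button(description="Go to next page")
back_btn = Button(description="Back")
current_page = VBox([Text(description="Username:"), next_btn])
next_page = VBox([Text(description="Something:"), back_btn])

def show(page):
    out.clear_output()
    with out:
        display(page)

next_btn.on_click(lambda _: show(next_page))
back_btn.on_click(lambda _: show(current_page))

display(out)
show(current_page)
Because nothing is ever .close()d, both widget sets stay alive and can be redisplayed any number of times.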
{ "language": "en", "url": "https://stackoverflow.com/questions/74086941", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Which of these options is good practice for assigning a string value to a variable in C? I have this snippet of C code: #include<stdio.h> #include<stdlib.h> #include<string.h> typedef struct Date { int date; char* month; int year; } Date_t; typedef Date_t* pDate_t; void assignMonth(pDate_t birth) { //1) birth->month = "Nov"; //2) //birth->month = malloc(sizeof(char) * 4); //birth->month = strcpy(birth->month, "Nov"); } int main() { Date_t birth; birth.date = 13; assignMonth(&birth); birth.year = 1969; printf("%d %s %d\n",birth.date, birth.month, birth.year); return 0; } In the function assignMonth I have two possibilities for assigning month. Both give me the same result in the output, so what is the difference between them? I think that the second variant is the good one, am I wrong? If yes, why? If not, why? Thanks in advance for any help. P.S. I'm interested in what is going on in memory in both cases. A: It depends on what you want to do with birth.month later. If you have no intention of changing it, then the first is better (quicker, no memory cleanup requirement required, and each Date_t object shares the same data). But if that is the case, I would change the definition of month to const char *. In fact, any attempt to write to *birth.month will cause undefined behaviour. The second approach will cause a memory leak unless you remember to free(birth.month) before birth goes out of scope. A: You're correct, the second variant is the "good" one. Here's the difference: With 1, birth->month ends up pointing to the string literal "Nov". It is an error to try to modify the contents of birth->month in this case, and so birth->month should really be a const char* (many modern compilers will warn about the assignment for this reason). With 2, birth->month ends up pointing to an allocated block of memory whose contents are "Nov". You are then free to modify the contents of birth->month, and the type char* is accurate. The caveat is that you are also now required to free(birth->month) in order to release this memory when you are done with it. The reason that 2 is the correct way to do it in general, even though 1 seems simpler in this case, is that 1 in general is misleading. In C, there is no string type (just sequences of characters), and so there is no assignment operation defined on strings. For two char*s, s1 and s2, s1 = s2 does not change the string value pointed to by s1 to be the same as s2, it makes s1 point at exactly the same string as s2. This means that any change to s1 will affect the contents of s2, and vice-versa. Also, you now need to be careful when deallocating that string, since free(s1); free(s2); will double-free and cause an error. That said, if in your program birth->month will only ever be one of several constant strings ("Jan", "Feb", etc.) variant 1 is acceptable, however you should change the type of birth->month to const char* for clarity and correctness. A: Neither is correct. Everyone's missing the fact that this structure is inherently broken. Month should be an integer ranging from 1 to 12, used as an index into a static const string array when you need to print the month as a string. A: I suggest either: const char* month; ... birth->month = "Nov"; or: char month[4]; ... strcpy(birth->month, "Nov"); avoiding the memory allocation altogether. A: With option 1 you never allocate memory to store "Nov", which is okay because it's a static string. A fixed amount of memory was allocated for it automatically. 
This will be fine so long as it's a string that appears literally in the source and you never try to modify it. If you wanted to read a value in from the user, or from a file, then you'd need to allocate first. A: In the first case you cannot do something like birth->month[i] = 'c'. In other words you cannot modify the string literal "Nov" pointed to by birth->month because it is stored in the read-only section of memory. In the second case you can modify the contents of birth->month because "Nov" resides on the heap. Also you need to deallocate the allocated memory using free in this case. A: Separate from your question; why is struct Date typedefed? It has a type already - "struct Date". You can use an incomplete type if you want to hide the structure declaration. In my experience, people seem to typedef because they think they should - without actually thinking about the effect of doing so. A: For this example, it doesn't matter too much. If you have a lot of Date_t variables (in, say, an in-memory database), the first method will lead to less memory usage over-all, with the gotcha that you should not, under any circumstances, change any of the characters in the strings, as all "Nov" strings would be "the same" string (a pointer to the same 4 chars). So, to an extent, both variants are good, but the best one would depend on expected usage pattern(s).
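For completeness, here is a small self-contained example (an illustration, not the original program) showing the second variant done properly, with allocation, copy and free, and the first variant kept behind a const pointer:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct Date {
    int date;
    char *month;   /* option 2: heap memory, safe to modify, must be freed */
    int year;
};

int main(void)
{
    const char *literal_month = "Nov";       /* option 1: shared, read-only */
    struct Date birth = { 13, NULL, 1969 };

    birth.month = malloc(strlen("Nov") + 1); /* option 2 */
    if (birth.month == NULL)
        return 1;
    strcpy(birth.month, "Nov");
    birth.month[0] = 'n';                    /* legal only with option 2 */

    printf("%d %s %d (literal: %s)\n", birth.date, birth.month, birth.year, literal_month);
    free(birth.month);
    return 0;
}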
{ "language": "en", "url": "https://stackoverflow.com/questions/3821112", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: SSRS expression, iif statement with date conditions So my default values for startDate and endDate in SSRS were set up with the following ssrs expressions. first day of previous month ssrs expression =DateAdd(DateInterval.Month, -1, DateSerial(Year(Date.Now), Month(Date.Now), 1)) last day of previous month ssrs expression =DateAdd(DateInterval.Day, -1, DateSerial(Year(Date.Now), Month(Date.Now), 1)) But that won't work alone in my case unless I want to go in on the 16th of every month and generate this report for the people requesting it for the first 15 days of the current month. So in my default value expression for start date i am trying this iif statement... = iif( DatePart(DateInterval.Day, Today() <> "20", DateInterval.Month, -1, DateSerial(Year(Date.Now), Month(Date.Now), 1), DateInterval.Month, 1, DateSerial(Year(Date.Now), Month(Date.Now), 1) ) Not working out so well. So what i'm trying to do is..... Change the default start and end date based on what day of the current month it is, So if current day of the current month equals 16, make start date 1 of current month and end date 15 of current month, if current day of the month isn’t 16 make start date first of previous month and end date last day of previous month. So then the only thing needed is to get subscription emails and what day to send them out on. A: Untested, but what if you try this? (for your start date parameter): = iif( DatePart(DateInterval.Day, Today()) <> "16", DateAdd(DateInterval.Month, -1, DateSerial(Year(Date.Now), Month(Date.Now), 1)), DateSerial(Year(Date.Now), Month(Date.Now), 1) )
{ "language": "en", "url": "https://stackoverflow.com/questions/26552747", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How can I deliver an iOS app IPA to a customer to be signed with their own Enterprise provisioning profile We have developed an iOS app that has been delivered to the customer as an IPA with an ad-hoc distribution profile that allowed a set of their employees to install it on their devices. The customer now wishes to distribute that app internally to all their employees using their iOS Enterprise Developer program credentials. I had hoped that the customer could simply re-codesign the ad-hoc IPA with their own enterprise identity. However, they say they can't do that. They say they "need an IPA file with the removal of the limitation to only certain devices". So, what do I do? * *Do I need to somehow create an "unsigned" IPA for them? (And if so, how do I do that?) *Do I need them to generate an Enterprise distribution provisioning profile for me so I can build the app with that profile? *Do I need to just send them the source or build output and let them build the package? I have looked at the following documents, but they have not enlightened me: * *TN2250: iOS code Signing Setup, Process, and Troubleshooting *Distributing Enterprise Apps for iOS Devices A: It's completely possible to take any IPA and resign it with your own details, modifying the Info.plist, bundle ID, etc. in the process. I do this all the time with IPAs that have been signed by other developers using their own provisioning profiles and signing identities. If they aren't familiar with the codesign command line tool and all the details of replacing embedded.mobileprovision files and entitlements, the easiest way for them to do this is for you to "Archive" the app via Xcode, and send them the generated archive file (*.xcarchive). They can import that into Xcode so it is visible in the Organizer, and from there they can choose "Distribute" and sign it with their enterprise identity. To import the .xcarchive file into Xcode, they just need to copy the file into the ~/Library/Developer/Xcode/Archives directory and it should appear in the Xcode organizer. Then they click "Distribute" and follow the instructions:
{ "language": "en", "url": "https://stackoverflow.com/questions/11035601", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Can I install an NPM module from GitHub from a certain branch or tag? When I have now: "dependencies": { "mymodule": "owner/repo" } or "dependencies": { "mymodule": "git+ssh://[email protected]/owner/repo.git" } NPM installs the module from GitHub from the master branch. Is there a way to tell NPM to install a certain tag or the HEAD of a branch other than master? A: https://docs.npmjs.com/files/package.json#git-urls-as-dependencies "dependencies": { "mymodule": "git+ssh://[email protected]/owner/repo.git#commit-ish" } The commit-ish can be any tag, sha, or branch which can be supplied as an argument to git checkout. The default is master.
{ "language": "en", "url": "https://stackoverflow.com/questions/30648882", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: CKEditor Getting image width and height before inserting element (preloading) Is there a way of getting the width and height of an image before actually inserting it into the editor? I have the following code, but width and height always return 0 var imageElement = editor.document.createElement('img'); imageElement.setAttribute('src', imageSource); var width = imageElement.$.width; var height = imageElement.$.height; if (width > 0) { this.imageElement.setAttribute('width', width); } if (height > 0) { this.imageElement.setAttribute('height', height); } editor.insertElement(imageElement); Help would be greatly appreciated A: I fixed this problem by preloading the image manually, however I do not know if this is the CKEditor way to achieve this Code: var imageElement = editor.document.createElement('img'); imageElement.setAttribute('src', imageSource); function setWidthAndHeight() { if (this.width > 0) { imageElement.setAttribute('width', this.width); } if (this.height > 0) { imageElement.setAttribute('height', this.height); } return true; } var tempImage = new Image(); tempImage.src = imageSource; tempImage.onload = setWidthAndHeight; editor.insertElement(imageElement);
{ "language": "en", "url": "https://stackoverflow.com/questions/7427434", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Use coupon once or twice based on the coupons table- Laravel 8 - Livewire. Apply Discounts Laravel I have a project where I have implemented coupon system. Problem is, one user can use a coupon multiple times until the coupon expires. I would lke: Limiting coupon usage based on max usage defined in coupon table; I have also looked for various sites in google. Which includes stackoverflow along with others. id code type value cart_value expiry_date timestamps max Max- maximum time a coupon can be used. Pivot table: user_coupons id coupon_id user_id code The code column here stores how many times a specific coupon has been used. Apply Coupon Method. public function applyCouponCode() { $promo = Coupon::where('code', $this->couponCode) ->where('expiry_date', '>=', Carbon::today()) ->where('cart_value', '<=', Cart::instance('cart')->subtotal()) ->first(); $coupon = Coupon::with('userCoupon') ->where('expiry_date', '>=', Carbon::today()) ->where('code', '!=', 'user_coupons.code') ->where('cart_value', '<=', Cart::instance('cart')->subtotal()) ->first(); if ($coupon->userCoupon()->code === $this->couponCode) { $this->alert('error', 'Code already used!'); return; // dd($coupon->code); } else if (!$promo) { $this->alert('error', 'Invalid code!'); return; } else if ($coupon) { $this->alert('success', 'Code ok!'); return; } //this part never appears. Even though coupon is valid $this->alert('success', 'Coupon is applied'); } Issues: 1.codes previously used are recognised. *invalid codes are recognised. But valid codes also says Invalid code. What am I missing? I am using Laravel 8 with livewire. I have tried many methods. Nothing seems to work. I have tried query builder. At some point I was able to get the codes used by the user by joining coupons using inner join with the user_coupons table. I have also tried using model relationship however it says collection does not exist. A: I do implementing coupon system on my project. And I think we have the same term for this. You might try my way: * *This is my vouchers table attributes. I declared it as fillable attributes in Voucher model. protected $fillable = [ 'service_id', 'code', 'name', 'description', 'percentage', // percentage of discount 'maximum_value', // maximum value of discount 'maximum_usage', // maximum usage per user. If it's 1, it means user only can use it once. 'user_terms', // additional terms 'amount', // amount of voucher. If it's 1, it means the voucher is only can be redeemed once. 'expired_at', 'is_active', // for exception if we want to deactivate the voucher (although voucher is valid) ]; *This is my voucher_redemptions table, this table is used when the user redeem the voucher. I declared it in VoucherRedemption model. protected $fillable = [ 'redeemer_id', // user id who redeems the voucher 'voucher_id', // voucher 'item_id', // product item ]; *This is my function to redeem voucher /** * Apply voucher to an item * @param int $redeemerId user id * @param string $voucherCode voucher code * @param int $itemId project batch package id * @return VoucherRedemption $redemption */ public static function apply($redeemerId, $voucherCode, $itemId) { $voucher = Voucher::where('code', $voucherCode)->first(); // Make sure item is exist and the batch has not been checked out $item = ProjectBatchPackage::where('id', $itemId)->first(); if (!$item) { return; } else if ($item->batch->status != null) { return; } // Make sure voucher exist if (!$voucher) { return; } // Make sure is voucher active, not expired and available. 
if ($voucher->is_active == false || $voucher->isExpired() || !$voucher->isAvailable()) { return; } // Make sure voucher usage for user if ($voucher->maximum_usage != null) { $user_usages = VoucherRedemption::where('redeemer_id', $redeemerId) ->where('voucher_id', $voucher->id) ->get() ->count(); if ($user_usages >= $voucher->maximum_usage) { return; } } // Apply voucher to project batch package (item) $redemption = VoucherRedemption::create([ 'redeemer_id' => $redeemerId, 'voucher_id' => $voucher->id, 'item_id' => $itemId ]); return $redemption; } Thank you.
{ "language": "en", "url": "https://stackoverflow.com/questions/71492196", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Visual studio 2010, project linking with cuda files I hope I have a simple problem, but I could not solve it today, so I ask for your help. I'm building a CUDA project in Visual Studio 2010, using the CUDA toolkit. My project contains several files, and the three most important are: //tvector_traits_kernel.cu //contains two functions: the first is a CUDA kernel, and the second is its wrapper, to call it in .cpp files template < typename _T > __global__ void DaxpyKernel(lint64 _m, lint64 _na, lint64 _nb, _T *_amatr, _T *_bmatr, _T *_cmatr) { ... } template < typename _T > void DaxpyKernelWrapper(lint64 _m, lint64 _na, lint64 _nb, _T *_amatr, _T *_bmatr, _T *_cmat) { ... } //tvector_traits_kernel.h //contains the wrapper function's prototype template < typename _T > void DaxpyKernelWrapper(lint64 _m, lint64 _na, lint64 _nb, _T *_amatr, _T *_bmatr, _T *_cmat); //main.cpp //it just calls the DaxpyKernelWrapper function and includes tvector_traits_kernel.h but while linking I have an error: Error 3 error LNK2019: external symbol unresolved "void __cdecl DaxpyKernelWrapper<float>(__int64,__int64,__int64,float *,float *,float *)" (??$DaxpyKernelWrapper@M@@YAX_J00PAM11@Z) in functions "public: static void __cdecl CTVect_traits<float>::CudaBlockDaxpy(__int64,__int64,__int64,float *,float *,float *)" (?CudaBlockDaxpy@?$CTVect_traits@M@@SAX_J00PAM11@Z) C:\Users\ckhgjh\Documents\GPU\Tesis\Test\test.obj Test I wonder why, because "tvector_traits_kernel.cu" is among the project's source files and its object file was successfully created. I'm new to Visual Studio; previously I used gcc, so I managed the linking process myself. So my question may be very stupid :( Thank you for your attention!
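There is no answer in this thread, but the symptoms match the classic template/translation-unit problem: the definition of DaxpyKernelWrapper lives only in the .cu file, main.cpp sees just the declaration from the header, and nothing ever instantiates the <float> version, so the linker cannot find it. A sketch of the usual fix, assuming that is indeed the cause, is to add explicit instantiations at the bottom of tvector_traits_kernel.cu for every type you actually use:
// tvector_traits_kernel.cu -- explicit instantiations so the object file
// really contains the <float>/<double> wrappers that main.cpp links against
template void DaxpyKernelWrapper<float>(lint64, lint64, lint64, float*, float*, float*);
template void DaxpyKernelWrapper<double>(lint64, lint64, lint64, double*, double*, double*);
The alternative of moving the wrapper's definition into the header does not work well here, because the wrapper launches a kernel and therefore has to be compiled by nvcc.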
{ "language": "en", "url": "https://stackoverflow.com/questions/25251100", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Using Console.Clear() in a Windows Docker Image? I'm trying to containerize a .net framework console application for testing and learning. The application works just fine outside the container. However, I'm getting this error for every Console.Clear() call: Unhandled Exception: System.IO.IOException: The handle is invalid. at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath) at System.Console.GetBufferInfo(Boolean throwOnNoConsole, Boolean& succeeded) at System.Console.Clear() at project.Program.Main(String[] args) in C:\Users\project\Program.cs:line xxx I can "sort" this out by encasing the Console.Clear() lines in try-catch, but that would be a mess and won't really solve the issue, just hide it under the carpet. I want to understand why is this happening and how to solve it. For propietary reasons I can't post the entire solution here. This is the dockerfile i'm using: FROM mcr.microsoft.com/windows/servercore:ltsc2019 ADD "release" "c:/release" CMD powershell "C:/release/project.exe" EXPOSE 80 443 I suspect this is because a console is not well handled by a container, but why exactly is the Clear() method of Console falling and not everything else? Why does the container lacks the handler for that particular method? Is it because it's windows core? A: You are getting an error because System.Console.Clear (along with other methods that attempt to control/query the console such as System.Console.[Get|Set]CursorPosition) requires a console/TTY but none is attached to the program. To run your code as-is, you should be able to use the --tty option to docker run to allocate a pseudo-TTY, e.g. docker run --tty <image>. To modify your code to not require this, you'd probably want to create your own wrapper for System.Console.Clear that wraps it in a try-catch: void ClearConsole() { try { System.Console.Clear(); } catch (System.IO.IOException) { // do nothing } } If only targeting Windows, you can alternatively do a P/Invoke call to GetConsoleWindow to check whether a console exists before calling System.Console.Clear: class Program { [System.Runtime.InteropServices.DllImport("kernel32.dll")] static extern System.IntPtr GetConsoleWindow(); static void ClearConsole() { if (GetConsoleWindow() != System.IntPtr.Zero) { System.Console.Clear(); } } }
{ "language": "en", "url": "https://stackoverflow.com/questions/63925467", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How to find out rotation angle using dot product I'm trying to rotate a DIV element using the mousemove event on an HTML page. I want to find a rotation angle using the dot product. I know that it's possible using Math.atan2 too, but I'd like to use dot product in my example. So far, I tried to implement the following formula: cos(angle) = dot(a, b) / (length(a) * length(b)) But the below implementation doesn't work well. What could be the issue? Thanks Codepen: https://codepen.io/1rosehip/pen/qBrLYLo const $box = document.getElementById('box'); const shapeRect = $box.getBoundingClientRect(); const shapeCenterX = shapeRect.x + shapeRect.width / 2; const shapeCenterY = shapeRect.y + shapeRect.height / 2; /** * get vector magnitude * @param {Array.<number>} v * @return {number} */ const length = v => { return Math.sqrt(v[0] ** 2 + v[1] ** 2); }; /** * dot product * @param {Array.<number>} v1 * @param {Array.<number>} v2 * @return {number} */ const dot = (v1, v2) => { return v1[0] * v2[0] + v1[1] * v2[1]; }; /** * handle rotation */ document.addEventListener('mousemove', (evt) => { // vector #1 - shape center const centerVector = [shapeCenterX, shapeCenterY]; const centerVectorLength = length(centerVector); // vector #2 - mouse position const mouseVector = [evt.pageX, evt.pageY]; const mouseVectorLength = length(mouseVector); // cos(angle) = dot(a, b) / (length(a) * length(b)) const radians = Math.acos(dot(centerVector, mouseVector) / (centerVectorLength * mouseVectorLength)); const degrees = radians * (180 / Math.PI); const angle = (degrees + 360) % 360; $box.style.transform = `rotate(${degrees}deg)`; }); #box{ position: absolute; background: #111; left: 100px; top: 100px; width: 100px; height: 100px; } <div id="box"></div> A: I've found the issues. (1) The vectors were defined from the wrong origin (top left corner of the page instead of the shape center). (2) Math.acos returns results in the range range [0,pi] instead of [0,2*pi]. It should be fixed by (360 - degrees) when the mouse moves to the left and passes the shape center. The codepen with fixed version: https://codepen.io/1rosehip/pen/JjWwaYE const $box = document.getElementById('box'); const shapeRect = $box.getBoundingClientRect(); const shapeCenterX = shapeRect.x + shapeRect.width / 2; const shapeCenterY = shapeRect.y + shapeRect.height / 2; /** * get vector magnitude * @param {Array.<number>} v * @return {number} */ const length = v => { return Math.sqrt(v[0] ** 2 + v[1] ** 2); }; /** * dot product * @param {Array.<number>} v1 * @param {Array.<number>} v2 * @return {number} */ const dot = (v1, v2) => { return v1[0] * v2[0] + v1[1] * v2[1]; }; /** * handle rotation */ document.addEventListener('mousemove', (evt) => { // vector #1 - shape center const centerVector = [evt.pageX - shapeCenterX, 0 - shapeCenterY]; const centerVectorLength = length(centerVector); // vector #2 - mouse position const mouseVector = [evt.pageX - shapeCenterX, evt.pageY - shapeCenterY]; const mouseVectorLength = length(mouseVector); // cos(angle) = dot(a, b) / (length(a) * length(b)) const radians = Math.acos(dot(centerVector, mouseVector) / (centerVectorLength * mouseVectorLength)); let degrees = radians * (180 / Math.PI); // const angle = (degrees + 360) % 360; if(evt.pageX < shapeCenterX){ degrees = 360 - degrees; } $box.style.transform = `rotate(${degrees}deg)`; }); #box{ position: absolute; background: #111; left: 100px; top: 100px; width: 100px; height: 100px; } <div id="box"></div>
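The pageX comparison works for this particular reference vector; the general version of the same idea uses the sign of the 2D cross product to decide which half-plane the mouse is in. A sketch, reusing dot() and length() from the code above:
// general rule: acos gives [0, pi], the cross product's sign picks the half-plane
const cross = (v1, v2) => v1[0] * v2[1] - v1[1] * v2[0];

let degrees = Math.acos(dot(centerVector, mouseVector) /
    (centerVectorLength * mouseVectorLength)) * (180 / Math.PI);
if (cross(centerVector, mouseVector) < 0) {
    degrees = 360 - degrees;
}
With screen coordinates (y grows downward) the sign convention is mirrored compared to textbook axes, so check which branch your setup needs; for the vectors used in the fixed snippet this condition is equivalent to the evt.pageX < shapeCenterX test.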
{ "language": "en", "url": "https://stackoverflow.com/questions/67989514", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Blank Canvas => 'Refused to display document because display forbidden by X-Frame-Options.' when the canvas app is loaded within the iframe nothing is display and on Chrome Firebug Console I see the error: Refused to display document because display forbidden by X-Frame-Options. I tried this solution: Overcoming "Display forbidden by X-Frame-Options" class ApplicationController < ActionController::Base protect_from_forgery before_filter :set_xframeoption def set_xframeoption response.headers["X-Frame-Options"]='GOFORIT' end end But I have the same error. Any solution? Thanks - FB Resquest Header - GET /dropis_app/ HTTP/1.1 Host: apps.facebook.com Connection: keep-alive Cache-Control: max-age=0 User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_2) AppleWebKit/535.7 (KHTML, like Gecko) Chrome/16.0.912.77 Safari/535.7 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Encoding: gzip,deflate,sdch Accept-Language: en-US,en;q=0.8 Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3 Cookie: [lot of stuff] - FB Response Header - HTTP/1.1 200 OK Cache-Control: private, no-cache, no-store, must-revalidate Expires: Sat, 01 Jan 2000 00:00:00 GMT P3P: CP="Facebook does not have a P3P policy. Learn why here: http://fb.me/p3p" Pragma: no-cache X-Content-Type-Options: nosniff X-Frame-Options: DENY X-XSS-Protection: 1; mode=block Set-Cookie: wd=deleted; expires=Thu, 01-Jan-1970 00:00:01 GMT; path=/; domain=.facebook.com; httponly Content-Encoding: gzip Content-Type: text/html; charset=utf-8 X-FB-Debug: JGyR/rXLGOKtchBAPFmyYiPZrd5npWbORZgq4sirM1Q= X-Cnection: close Transfer-Encoding: chunked Date: Wed, 01 Feb 2012 17:58:00 GMT - iFrame Request Header - Request URL:https://foobar.herokuapp.com/ Request Method:POST Status Code:302 Found Request Headersview source Accept:text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Charset:ISO-8859-1,utf-8;q=0.7,*;q=0.3 Accept-Encoding:gzip,deflate,sdch Accept-Language:en-US,en;q=0.8 Cache-Control:max-age=0 Connection:keep-alive Content-Length:433 Content-Type:application/x-www-form-urlencoded Host:dropis.herokuapp.com Origin:https://apps.facebook.com Referer:https://apps.facebook.com/foobar/ User-Agent:Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_2) AppleWebKit/535.7 (KHTML, like Gecko) Chrome/16.0.912.77 Safari/535.7 Form Dataview URL encoded - iFrame Form Data - signed_request: [removed] - iFrame Response Header - Response Headersview source Cache-Control:no-cache Connection:keep-alive Content-Length:195 Content-Type:text/html; charset=utf-8 Date:Thu, 02 Feb 2012 16:35:27 GMT Location:https://graph.facebook.com/oauth/authorize?client_id=[removed]&redirect_uri=https://foobar.herokuapp.com/users/callback Server:WEBrick/1.3.1 (Ruby/1.9.2/2011-07-09) Set-Cookie:_dropis_static_session=[removed]; path=/; HttpOnly X-Rack-Cache:invalidate, pass X-Runtime:0.001540 X-Ua-Compatible:IE=Edge,chrome=1 A: If anyone else has this problem, I fixed it by simply adding this to my link: :target => "_top" That makes it loads the auth into the top window. From here: https://developers.facebook.com/docs/authentication/canvas/
{ "language": "en", "url": "https://stackoverflow.com/questions/9100916", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: Defining view function for displaying postgres data in django I created the wines database in PostgreSQL (containing ID, names etc.), and inserted around 300 observations. I would like to display the name of every wine in a drop-down menu with Django. The urls.py is properly set up. What have I done so far: models.py from django.db import connection from django.db import models class ListWines(models.Model): name = models.CharField(max_length=200) views.py from django.shortcuts import render from wineapp.models import ListWines def showallwines(request): wines = ListWines.objects return render(request, 'main.html', { 'name':wines } ) main.html <!DOCTYPE html> <head> <body> <select> <option disabled = "True" selected>Select your favourite wine!</option> {% for wines in showallwines %} <option>{{wines.name}}</option> {% endfor %} </select> </body> </head> The Postgres database (the column containing the data that I want to display is name) is connected to the app via settings.py, however it doesn't show the names. How should I redefine my functions in order to see them displayed in the main.html drop-down menu? A: To get the list of all objects, you must use Model.objects.all() So make these changes in your view def showallwines(request): wines = ListWines.objects.all() # changed # changed context name to wines since it is a list of wines, not names return render(request, 'main.html', { 'wines': wines } ) main.html You have to use the context name wines since we are passing the context wines from the view {% for wine in wines %} <option>{{ wine.name }}</option> {% endfor %} A: views.py Instead of ListWines.objects use ListWines.objects.all() and update the context name to "showallwines", like this: def showallwines(request): wines = ListWines.objects.all() return render(request, 'main.html', { 'showallwines':wines } )
{ "language": "en", "url": "https://stackoverflow.com/questions/70397161", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to add in a CMake project a global file extension (*.pde) to GCC which is treated like C++ code I have a very simple CMake script. Unfortunately, the project uses a *.pde file which is plain C++ or C code. CMake is working with any file ending, but I get a compiler error, because GCC does not know how to handle it. How can I add a global file extension to GCC, so that the *.pde file is compiled as a usual *.cpp file? The -x c++ foo.pde command is nice if I want to use the console, but for CMake it is (I think) not applicable. cmake_minimum_required(VERSION 2.8) project(RPiCopter) SET( RPiCopter absdevice containers device exceptions navigation frame vehicle receiver scheduler tinycopter.pde ) message( STATUS "Include ArduPilot library directories" ) foreach( DIR ${AP_List} ${AP_List_Linux} ${AP_Headers} ) include_directories( "../libraries/${DIR}" ) endforeach() include_directories( zserge-jsmn ) # *************************************** # Build the firmware # *************************************** add_subdirectory ( zserge-jsmn ) #set(CMAKE_CXX_FLAGS "-x c++ *.pde") ADD_EXECUTABLE ( RPiCopter ${RPiCopter} ) target_link_libraries ( RPiCopter -Wl,--start-group ${AP_List} ${AP_List_Linux} jsmn -Wl,--end-group ) A: You should be able to use set_source_files_properties along with the LANGUAGE property to mark the file(s) as C++ sources: set_source_files_properties(${TheFiles} PROPERTIES LANGUAGE CXX) As @steveire pointed out in his own answer, this bug will require something like the following workaround: set_source_files_properties(${TheFiles} PROPERTIES LANGUAGE CXX) if(CMAKE_CXX_COMPILER_ID STREQUAL "GNU") add_definitions("-x c++") endif() A: Normally you should be able to just extend CMAKE_CXX_SOURCE_FILE_EXTENSIONS. This would help, if you have a lot of files with unknown file extensions. But this variable is not cached - as e.g. CMAKE_CXX_FLAGS is - so the following code in CMakeCXXCompiler.cmake.in will always overwrite/hide whatever you will set: set(CMAKE_CXX_IGNORE_EXTENSIONS inl;h;hpp;HPP;H;o;O;obj;OBJ;def;DEF;rc;RC) set(CMAKE_CXX_SOURCE_FILE_EXTENSIONS C;M;c++;cc;cpp;cxx;mm;CPP) I consider this non-caching being a bug in CMake, but until this is going to be changed I searched for a workaround considering the following: * *You normally don't want to change files in your CMake's installation *It won't have any effect if you change CMAKE_CXX_SOURCE_FILE_EXTENSIONS after project()/enable_language() (as discussed here). I have successfully tested the following using one of the "hooks"/configuration variables inside CMakeCXXCompiler.cmake.in: cmake_minimum_required(VERSION 2.8) set(CMAKE_CXX_SYSROOT_FLAG_CODE "list(APPEND CMAKE_CXX_SOURCE_FILE_EXTENSIONS pde)") project(RPiCopter CXX) message("CMAKE_CXX_SOURCE_FILE_EXTENSIONS ${CMAKE_CXX_SOURCE_FILE_EXTENSIONS}") add_executable(RPiCopter tinycopter.pde) A: I decided to use this approach. I just remove the file ending by cmake in the temporary build directory. So GCC is not confused anymore because of the strange Arduino *.pde file extension. # Exchange the file ending of the Arduino project file configure_file(${CMAKE_CURRENT_SOURCE_DIR}/tinycopter.pde ${CMAKE_CURRENT_BINARY_DIR}/tinycopter.cpp) A: CMake doesn't do this for you: http://public.kitware.com/Bug/view.php?id=14516
{ "language": "en", "url": "https://stackoverflow.com/questions/30556429", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Python How to show on screen which game player's turn it is I am building a Pictionary game which lets two players (in the same room) draw, and it should keep track of which player's turn it is and what the score is. I have no idea how to do this. Any help would be great. #add controls to custom widget self.vbdock.addWidget(QLabel("Current Turn: -")) self.vbdock.addSpacing(20) self.vbdock.addWidget(QLabel("Scores:")) self.playerOne = QLabel("Player 1: ") self.vbdock.addWidget(self.playerOne) self.playerTwo = QLabel("Player 2: ") self.vbdock.addWidget(self.playerTwo) self.vbdock.addStretch(1) # buttons to get the players' names self.btn1 = QPushButton("Player 1 name",self) self.btn2 = QPushButton("Player 2 name",self) self.vbdock.addWidget(self.btn1) self.vbdock.addWidget(self.btn2) self.btn1.clicked.connect(self.showDialog1) self.btn2.clicked.connect(self.showDialog2) self.vbdock.addWidget(QPushButton("Skip player"))
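There is no answer in this thread, but one way to approach it is to keep references to the labels plus a little game state, and update the label text whenever the turn changes. A hedged sketch; names such as self.turnLabel, self.names, self.scores and nextTurn are assumptions, not from the post:
# keep a reference instead of an anonymous QLabel("Current Turn: -")
self.turnLabel = QLabel("Current Turn: -")
self.vbdock.addWidget(self.turnLabel)

self.names = ["Player 1", "Player 2"]
self.scores = [0, 0]
self.current = 0          # index of the player whose turn it is

def nextTurn(self, guessed_correctly=False):
    if guessed_correctly:
        self.scores[self.current] += 1
    self.current = 1 - self.current
    self.turnLabel.setText(f"Current Turn: {self.names[self.current]}")
    self.playerOne.setText(f"{self.names[0]}: {self.scores[0]}")
    self.playerTwo.setText(f"{self.names[1]}: {self.scores[1]}")
The existing "Skip player" button can then be wired up with clicked.connect(lambda: self.nextTurn(False)), just like the two name buttons.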
{ "language": "en", "url": "https://stackoverflow.com/questions/74414166", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to map a Java embedded object with JSON I am building a Google Calendar API, and I misunderstand a point with my JSON files. I succeeded in creating my Java objects from my JSON files, but here is the issue: I have two classes: public class User { private String email; private String firstname; private String lastname; Entity entity; } and my Entity: public class Entity { private String name; private String entityType; private Entity rootEntity; } Here is my JSON file for users: [ { "firstname": "Jean-Marc", "lastname": "Chevereau", "email": "[email protected]", "entity": { "name":"BFA", "entityType":"secteur" } }, { "firstname": "Florent", "lastname": "Hamlin", "email": "[email protected]", "entity": { "name":"IT", "entityType":"secteur" } }, { "firstname": "Benoit", "lastname": "Micaud", "email": "[email protected]", "entity": { "name":"EX", "entityType":"offre", "rootEntity":{ "name":"BFA" } } } ] And an Entity JSON file: [ { "name": "BFA", "entityType": "secteur", "rootEntity": "" }, { "name": "EX", "entityType": "Offre", "rootEntity": "BFA" } ] But here is the trouble: if in my User.json I write the Entity name, I don't want to also write entityType and rootEntity, because if the Entity name is BFA, it will always have the same entityType and rootEntity. In other words, my JSON Entity will always be the same, and if I just put the name, we know that it refers to an Entity object. For instance, in this user.json file, I would just need to put [ { "firstname": "Jean-Marc", "lastname": "Chevereau", "email": "[email protected]", "entity": { "name":"BFA" } }, { "firstname": "Florent", "lastname": "Hamlin", "email": "[email protected]", "entity": { "name":"IT" } }, { "firstname": "Benoit", "lastname": "Micaud", "email": "[email protected]", "entity": { "name":"EX" } } ] A: I suppose com.fasterxml.jackson's @JsonIgnore annotation should help. public class Entity { private String name; @JsonIgnore private String entityType; @JsonIgnore private Entity rootEntity; } A: In Json-lib you have a JsonConfig to specify the allowed fields: JsonConfig jsonConfig=new JsonConfig(); jsonConfig.registerPropertyExclusion(Entity.class,"rootEntity"); jsonConfig.registerPropertyExclusion(Entity.class,"entityType"); JSON json = JSONSerializer.toJSON(objectToWrite,jsonConfig);
{ "language": "en", "url": "https://stackoverflow.com/questions/56276875", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Mapping a single child The mapping below works, but I was wondering if it can be done with less configuration. I've tried playing around with ForAllMembers and ForSourceMember but I haven't found anything that works so far. Classes public class User { [Key] public int ID { get; set; } public string LoginName { get; set; } public int Group { get; set; } ... } public class UserForAuthorisation { public string LoginName { get; set; } public int Group { get; set; } } public class Session { [Key] [DatabaseGenerated(DatabaseGeneratedOption.Identity)] public Guid ID { get; set; } public virtual User User { get; set; } ... } Configuration Mapper.CreateMap<Session, UserForAuthorisation>() .ForMember(u => u.LoginName, m => m.MapFrom(s => s.User.LoginName)) .ForMember(u => u.Group, m => m.MapFrom(s => s.User.Group)); Query UserForAuthorisation user = this.DbContext.Sessions .Where(item => item.ID == SessionID ) .Project().To<UserForAuthorisation>() .Single(); Edit This works for the reverse. Mapper.CreateMap<UserForAuthorisation, User>(); Mapper.CreateMap<UserForAuthorisation, Session>() .ForMember(s => s.User, m => m.MapFrom(u => u)); var source = new UserForAuthorisation() { Group = 5, LoginName = "foo" }; var destination = Mapper.Map<Session>(source); Unfortunately, Reverse() isn't the easy solution, mapping doesn't work. Mapper.CreateMap<UserForAuthorisation, User>().ReverseMap(); Mapper.CreateMap<UserForAuthorisation, Session>() .ForMember(s => s.User, m => m.MapFrom(u => u)).ReverseMap(); var source = new Session() { User = new User() { Group = 5, LoginName = "foo" } }; var destination = Mapper.Map<UserForAuthorisation>(source); A: I can see only one option to do less configurations. You can use benefit of flattering by renaming properties of UserForAuthorisation class to: public class UserForAuthorisation { public string UserLoginName { get; set; } public int UserGroup { get; set; } } In this case properties of nested User object will be mapped without any additional configuration: Mapper.CreateMap<Session, UserForAuthorisation>();
{ "language": "en", "url": "https://stackoverflow.com/questions/14876575", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: SSIS - Calling second table based on first table I have 2 tables T1 and T2 like this : Create table #T1(ID int) Insert into #T1 values(10),(20),(30) Create table #T2(Val varchar(10)) Insert into #T2 values ('A'),('B'),('C'),('D') output: ----------- Table1 - ID ----------- 10 20 30 ---------- Table2 - Val ---------- A B C D I want to store output in Flat file destination such that it looks like 10,A,B,C,D 20,A,B,C,D 30,A,B,C,D. I know how to use joins but don't want to use them. Please help me out as I am comparatively new to SSIS 2012. I am trying to implement using For-Each loop but not getting success at all. A step wise solution will be appreciated. It should be like for each entry of T1, a loop will run for T2. A: * *Create variables for object and item *Create a SQL statement to extract data from T1 and store in object variable. Set the ResultSet to "Full result set" and map the Result SetResult Name(3) *Add a Foreach Loop Container using the Foreach ADO enumerator Use the Object variable as the source variable and map to the item (5). *Add a dataflow. In the data flow assign the T2 table as the DB source *Add a derived column and add the item variable as an extra column *Map the derived column and the T2 data to your output flatfile
{ "language": "en", "url": "https://stackoverflow.com/questions/33343296", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-2" }
Q: ImageView does not respect Pivot when using setRotation() I have been struggling too much with this problem. I tried every possible solution I found on the web. NOTHING works!!! Anyway, I am trying to make an analog clock view using svg Vectors for my app. I had successfully implemented the clock face and was able to display the clock dials in the center. However, I can not rotate the clock's dials from the center. I am pretty much sure that my pivotX and pivotY variables are pointing to exactly the center. But still, the hands rotate to a different pivot I don't even know where is it. I even tried duplicating the VectorDrawableCompat class and accessing the Vector tag to manually change position but it didn't work. I already tried every possible solution I thought of and everything that might help me but still no result. package com.analogclock.me; import android.content.Context; import android.content.SharedPreferences; import android.graphics.drawable.Drawable; import android.os.Build; import android.preference.PreferenceManager; import android.support.v4.graphics.drawable.DrawableCompat; import android.support.v7.widget.AppCompatImageView; import android.util.Log; import android.util.TypedValue; import android.view.View; import android.view.ViewTreeObserver; import android.widget.LinearLayout; import android.widget.RelativeLayout; import com.analogclock.me.vectors.VectorDrawableCompat; import java.util.Calendar; public class AnalogClockView extends RelativeLayout{ int color; int fontSize; float alpha; AppCompatImageView analogFace; AppCompatImageView analogHour; AppCompatImageView analogMinute; AppCompatImageView analogSecond; Drawable hour; Drawable minute; Drawable second; Drawable face; public View c; SharedPreferences prefs; public AnalogClockView(Context ctx) {//// TODO: try adding pivot parameters. super(ctx); main(ctx); setClipChildren(false); } com.analogclock.vectors.VectorDrawableCompat.VGroup vGroup; ViewTreeObserver.OnGlobalLayoutListener layoutListener; int centerX; int centerY; public void main(Context ctx) { prefs = PreferenceManager.getDefaultSharedPreferences(ctx); alpha = PreferenceManager.getDefaultSharedPreferences(ctx).getFloat("opacity", 0.25f); fontSize = PreferenceManager.getDefaultSharedPreferences(ctx).getInt("font size", 40); color = PreferenceManager.getDefaultSharedPreferences(ctx).getInt("clock color", 0xffffffff); //todo Make a method for setting size based on @fontSize variable if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.LOLLIPOP) { face = ctx.getDrawable(R.drawable.analog_clock_1); hour = ctx.getDrawable(R.drawable.hours_hand); minute = ctx.getDrawable(R.drawable.minutes_hand); second = ctx.getDrawable(R.drawable.second_hand); }else{ face = getResources().getDrawable(R.drawable.analog_clock_1); hour = getResources().getDrawable(R.drawable.hours_hand); minute = getResources().getDrawable(R.drawable.minutes_hand); second = getResources().getDrawable(R.drawable.second_hand); } face = DrawableCompat. wrap(face); hour = DrawableCompat. 
wrap(hour); minute = DrawableCompat.wrap(minute); second = DrawableCompat.wrap(second); DrawableCompat.setTint(face.mutate(),color); DrawableCompat.setTint(hour.mutate(),color); DrawableCompat.setTint(minute.mutate(),color); DrawableCompat.setTint(second.mutate(),color); second = DrawableCompat.unwrap(second); inflate(ctx,R.layout.style2_analog,this); analogFace = (AppCompatImageView)findViewById(R.id.face); analogHour = (AppCompatImageView)findViewById(R.id.hour); analogMinute = (AppCompatImageView)findViewById(R.id.minute); analogSecond = (AppCompatImageView)findViewById(R.id.second); c = findViewById(R.id.c); //square it analogFace.setAdjustViewBounds(true); analogHour.setAdjustViewBounds(true); analogMinute.setAdjustViewBounds(true); analogSecond.setAdjustViewBounds(true); analogFace.setImageDrawable(face); analogHour.setImageDrawable(hour); analogMinute.setImageDrawable(minute); analogSecond.setImageDrawable(second); final int dp = (int)TypedValue.applyDimension(TypedValue.COMPLEX_UNIT_DIP,7.0f,getResources().getDisplayMetrics()); //analogHour.setPadding(0,0,0,); layoutListener = new ViewTreeObserver.OnGlobalLayoutListener() { @Override public void onGlobalLayout() { Log.e("analoghour",analogHour.getX()+"x-y"+analogHour.getY()); centerY = (analogFace.getTop() + analogFace.getBottom()) / 2; centerX = (analogFace.getLeft() + analogFace.getRight()) / 2; analogHour.setY(analogHour.getTop() - analogHour.getHeight()/2 + dp); analogMinute.setY(analogMinute.getTop() - analogMinute.getHeight()/2 + dp); analogSecond.setY(analogSecond.getTop() - analogSecond.getHeight()/2 + dp); com.analogclock.vectors.VectorDrawableCompat compat = com.analogclock.vectors.VectorDrawableCompat.create(getResources(),R.drawable.second_hand,null); if (compat != null) { compat.setAllowCaching(false); analogSecond.setImageDrawable(compat); VectorDrawableCompat.VGroup vGroup = (VectorDrawableCompat.VGroup) compat.getTargetByName("rotation"); Log.e("isNull", (vGroup == null) + ""); if (vGroup != null) { vGroup.setPivotX(centerX); vGroup.setPivotY(centerX); Log.e("fddf y",""+ vGroup.getPivotY()); Log.e("fddf x",""+ vGroup.getPivotX()); analogSecond.setAdjustViewBounds(true); analogSecond.setClipBounds(analogFace.getClipBounds()); analogSecond.setPivotX(centerX); analogSecond.setPivotY(centerX); analogSecond.setRotation(50); vGroup.setRotation(50); compat.invalidateSelf(); analogSecond.invalidate(); AnalogClockView.this.invalidate(); ((LinearLayout)AnalogClockView.this.getParent()).invalidate(); } } invalidate(); setTime(); if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.JELLY_BEAN) { getViewTreeObserver().removeOnGlobalLayoutListener(layoutListener); }else{ getViewTreeObserver().removeGlobalOnLayoutListener(layoutListener); } } }; getViewTreeObserver().addOnGlobalLayoutListener(layoutListener); } public void setTime() { Calendar calendar = Calendar.getInstance(); int curSecond = calendar.get(Calendar.SECOND); int curMinute = calendar.get(Calendar.MINUTE); int curHour = calendar.get(Calendar.HOUR); } } Thanks!
{ "language": "en", "url": "https://stackoverflow.com/questions/39357498", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: using .htaccess to add www prefix I have a site hosted on godaddy. uses apache. I used this code in .htaccess to add www prefix to the domain automatically RewriteEngine on RewriteCond %{HTTP_HOST} !^www\. RewriteRule ^(.*)$ http://www.%{HTTP_HOST}/../$1 [R=301,L] but instead of 'www.example.com' it goes to 'www.example.com/web' I just want to convert 'example.com' to 'www.example.com' A: If you just want to convert example.com to www.example.com then you just need to use: RewriteEngine on RewriteCond %{HTTP_HOST} ^example.com [NC] RewriteRule ^(.*)$ http://www.example.com/$1 [L,R=302,NC] You can also lay it out like this: RewriteEngine On RewriteCond %{HTTP_HOST} !^www\. RewriteRule ^(.*)$ http://www.%{HTTP_HOST}%{REQUEST_URI} [R=302,L,NE] Make sure you clear your cache before testing this. You will notice I've just the flag R=302. This is a temporary redirect, use this while you're testing. If you're happy with the RewriteRule and everything is working, change these to R=301, which is a permanent redirect. A: solved by using this RewriteEngine on RewriteCond %{HTTP_HOST} ^example.com$ [NC] RewriteRule (.*)$ http://www.example.com/$1 [R=301] RedirectMatch 301 ^/web/$ http://www.example.com/
{ "language": "en", "url": "https://stackoverflow.com/questions/42989587", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Adding query in FutureBuilder to retrieve data just for logged user and his information Flutter I have this list into a Futurebuilder Widget that retrieve documents from Firestore database-. I have created query to retrieve data from a specific collection in Firestore. I need to add into the query a instance to retrieve data just for logged user and his information not the information of the other. this is my code: class CollectData extends StatefulWidget { @override _CollectDataState createState() => _CollectDataState(); } class _CollectDataState extends State<CollectData> { final String phone; final String wife; final String location; _CollectDataState({ this.phone, this.wife, this.location, }); Future<QuerySnapshot> getData() async { var User = await FirebaseAuth.instance.currentUser(); return await Firestore.instance .collection("dataCollection") .getDocuments(); } @override Widget build(BuildContext context) { return FutureBuilder( future: getData(), builder: (context, AsyncSnapshot<QuerySnapshot> snapshot) { if (snapshot.connectionState == ConnectionState.done) { return ListView.builder( shrinkWrap: true, itemCount: snapshot.data.documents.length, itemBuilder: (BuildContext context, int index) { return Column( children: [ Text('${snapshot.data.documents[index].data["wife"]}'), Text('${snapshot.data.documents[index].data["phone"]}'), Text('${snapshot.data.documents[index].data["location"]}'), ], ); }); } else if (snapshot.connectionState == ConnectionState.none) { return Text("No data"); } return Center(child: CircularProgressIndicator()); }, ); } A: Solution found: Future<QuerySnapshot> getData() async { var User = await FirebaseAuth.instance.currentUser(); return await Firestore.instance .collection("dataCollection") .where(FieldPath.documentId, isEqualTo: User.uid ) .getDocuments(); }
{ "language": "en", "url": "https://stackoverflow.com/questions/63522996", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Help understanding mod_rewrite Let's say I have the following filesystem setup on my webserver: /www/web/foo/widget.php ... /www/app/mvc/controllers/WidgetController.php I need to figure out how to use mod_rewrite to map page requests (and their respective GET/POST data) for widget.php to its controller WidgetController.php. It looks like mod_rewrite is super-powerful and thus complex. Is there a quick and easy way for someone to explain to me how to accomplish this? What files do I have to change? Can someone show me a sample rule for this "widget" example? Thanks! A: Nothing is quick and easy. Setup First you must make sure that you have the package installed. To use mod_rewrite, you need to load the extension. Usually this is done by importing the rewrite.so module in the apache2 global configuration (/etc/apache2/apache2.conf). Usually all mod_rewrite instructions are written in the virtual host definition. (Say: /etc/apache2/site-available/000default) Usage First step To enable rewriting for one site, you have to ask for it with: RewriteEngine On Then you can begin to write rules. The basics you need to write rules are described by the following diagram: (See also: How does url rewrite works?) To help understand how it works, always consider it from the server side (not the client side). You receive a URL from the client. This URL has a certain format that you have defined (e.g. http://blog.com/article/myarticle-about-a-certain-topic). But Apache can't understand this by itself, so we need to help it. We know that the controller is page.php and that it can look up an article by name. Getting information So now we forge a regex to extract information from the URL. All regexes are matched against what follows your domain name (here: article/myarticle-about-a-certain-topic, without the leading / -- it can be included, though, on recent versions of mod_rewrite). Here we need the article's name: ^article/(.*)$ will do the job of matching the URL against article/<something> and capturing <something> into $1. (For the meaning of the characters, I advise you to look at a tutorial on regexes. Here ^ is the beginning of the string, an invisible position after the .com/, and $ is the end of the URL.) So now we need to inform Apache that this URL means http://myblog.com/page.php?article=myarticle-about-a-certain-topic This is achieved by using a RewriteRule: RewriteRule ^article/(.*)$ page.php?article=$1 Restricting to conditions To go into slightly more advanced topics, you may want to apply this rule only if the article name is fetched by the GET method. To do this, you can include a RewriteCond like RewriteCond %{REQUEST_METHOD} GET It goes BEFORE a RewriteRule in the file but is tested AFTER it. Flags If you are doing a lot of redirections/rewrites, you will have to understand flags. The most used are [L] and [R]. A little explanation of those: * *[R] asks for a redirection; it can be tuned like [R=302], where 302 is a redirection status code of the HTTP protocol. This will force the client to make a new request with the rewritten URL, so the client will see the rewritten URL in the address bar. *[L] forces Apache to stop processing further rules. Be advised that this means the current incoming URL will stop being modified on this pass, but the rewritten URL WILL go through the rewriting process again. Keep this in mind if you want to avoid loops.
Conclusion So you end up with the following block of instructions RewriteEngine On RewriteCond %{REQUEST_METHOD} GET RewriteRule ^article/(.*)$ page.php?article=$1 See also You can find additional resources here : * *A basic tester http://martinmelin.se/rewrite-rule-tester/ *Cheat sheet : http://www.ranzs.com/?p=43 *Query_String examples : http://statichtml.com/2010/mod-rewrite-baseon-on-query-string.html *Tips : http://www.noupe.com/php/10-mod_rewrite-rules-you-should-know.html and http://www.ranzs.com/?p=35
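As a quick illustrative cross-check (a sketch only, not part of the original answer): the capture group used in the rule above can be tried out with any regex engine, for example Python's re module. Apache itself evaluates the rule with its own PCRE engine, and the blog URL below is the same hypothetical example used above.

import re

# Same pattern as the RewriteRule: capture everything after "article/"
pattern = re.compile(r'^article/(.*)$')

# Apache matches against the part after the domain, without the leading slash
incoming = "article/myarticle-about-a-certain-topic"

match = pattern.match(incoming)
if match:
    # $1 in the RewriteRule corresponds to group(1) here
    print("page.php?article=" + match.group(1))   # page.php?article=myarticle-about-a-certain-topic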
{ "language": "en", "url": "https://stackoverflow.com/questions/6446080", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Net core 2.1 elasticsearch working in development, it doesn’t work in release net core 2.1 elasticsearch working in development, it doesn't work in release, this is my elasticsearch class {public static class ElasticsearchExtensions { public static void AddElasticsearch( this IServiceCollection services, IConfiguration configuration) { var defaultIndex ="honadonz"; var settings = new ConnectionSettings(new Uri("https://711e0c87a+++++++=========dc2528152891.us-east-1.aws.found.io:9243")) .DefaultIndex(defaultIndex) .BasicAuthentication("elastic", "q1vqu++++++++yfV7RFS5WR6"); // AddDefaultMappings(settings); var client = new ElasticClient(settings); services.AddSingleton<IElasticClient>(client); CreateIndex(client, defaultIndex); } private static void AddDefaultMappings(ConnectionSettings settings) { settings .DefaultMappingFor<ElasticSearchModel>(m => m); } private static void CreateIndex(IElasticClient client, string indexName) { var createIndexResponse = client.CreateIndex("honadonz", c => c .Mappings(m => m.Map<ElasticSearchModel>(mm => mm .AutoMap() )) .RequestConfiguration(r => r .DisableDirectStreaming() ) ); Console.WriteLine("Writeline is :" + createIndexResponse); } } } I use Elasticsearch version 6.8.1 in cloud.elastic.co and nest 6.8.0 Reply StartUp.cs class using System; using System.Collections.Generic; using System.IO; using System.Linq; using System.Threading.Tasks; using Autofac; using Autofac.Extensions.DependencyInjection; using AutoMapper; using Microsoft.AspNetCore.Builder; using Microsoft.AspNetCore.Hosting; using Microsoft.EntityFrameworkCore; using Microsoft.Extensions.Configuration; using Microsoft.Extensions.DependencyInjection; using Microsoft.Extensions.Logging; using Microsoft.Extensions.Options; using Swashbuckle.AspNetCore.Swagger; using XonadonUz.Core.Extensions; using XonadonUz.Core.DAL; using System.Net; using Microsoft.AspNetCore.Diagnostics; using Microsoft.AspNetCore.Http; using XonadonUz.Core.Filters; using Microsoft.AspNetCore.Identity; using XonadonUz.Core.Helpers; using Microsoft.AspNetCore.Http.Features; using Microsoft.Extensions.FileProviders; using Microsoft.AspNetCore.SpaServices.AngularCli; namespace XonadonUz { public class Startup { private readonly IConfigurationRoot _config; public IContainer ApplicationContainer { get; private set; } public Startup(IHostingEnvironment env,IConfiguration configuration) { Configuration = configuration; var builder = new ConfigurationBuilder() .AddJsonFile("appsettings.json", optional: true, reloadOnChange: true) .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true) .SetBasePath(env.ContentRootPath); _config = builder.Build(); ; } public IConfiguration Configuration { get; } public IServiceProvider ConfigureServices(IServiceCollection services) { services.AddCustomizedMvc(); services.AddCustomAutoMapper(); services.AddDbContext();(Configuration.GetSection("blog")); services.AddSpaStaticFiles(configuration => { configuration.RootPath = "ClientApp/dist"; }); services.AddCors(option => option.AddPolicy("AllowAll", p => p.AllowAnyOrigin() .AllowAnyMethod() .AllowAnyHeader() ) ); services.AddCustomAuthentication(); services.AddCustomAuthorization(); services.AddCustomIdentity(); (Configuration.GetSection("ElasticConnectionSettings")); services.AddElasticsearch(Configuration); services.Configure<FormOptions>(x => { // set MaxRequestBodySize property to 200 MB x.MultipartBodyLengthLimit = 209715200; }); services.AddSwaggerGen(c => { c.SwaggerDoc("doc", new Info { Title = "HonadonUz API" }); 
c.OperationFilter<FileOperationFilter>(); }); return services.ConfigureAutofac(ApplicationContainer); } public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory, IApplicationLifetime appLifetime) { EmailTemplates.Initialize(env); Core.ServiceProvider.Services = app.ApplicationServices; #region Configure swagger endpoints app.UseSwagger(); app.UseSwaggerUI(c => { c.RoutePrefix = AppSettings.Instance.SwaggerRoutePrefix; c.SwaggerEndpoint("/swagger/doc/swagger.json", "doc"); }); #endregion if (env.IsDevelopment()) { app.UseDeveloperExceptionPage(); app.UseDatabaseErrorPage(); } else{ app.UseExceptionHandler("/Error"); app.UseHsts(); } // app.UseExceptionHandler(builder => // { // builder.Run(async context => // { // context.Response.StatusCode = (int)HttpStatusCode.InternalServerError; // context.Response.Headers.Add("Access-Control-Allow-Origin", "*"); // var error = context.Features.Get<IExceptionHandlerFeature>(); // if (error != null) // { // context.Response.AddApplicationError(error.Error.Message); // await context.Response.WriteAsync(error.Error.Message).ConfigureAwait(false); // } // }); // }); app.UseHttpsRedirection(); app.UseStaticFiles(); app.UseSpaStaticFiles(); app.UseStaticFiles(new StaticFileOptions() { FileProvider = new PhysicalFileProvider(Path.Combine(Directory.GetCurrentDirectory(), "Storage")), RequestPath = new PathString("/Storage") }); app.UseAuthentication(); app.UseCors("AllowAll"); app.UseMvc(); app.UseSpa(spa => { // To learn more about options for serving an Angular SPA from ASP.NET Core, // see https://go.microsoft.com/fwlink/?linkid=864501 spa.Options.SourcePath = "ClientApp"; if (env.IsDevelopment()) { spa.UseAngularCliServer(npmScript: "start"); spa.Options.StartupTimeout = TimeSpan.FromSeconds(120); } }); } } } When I run in localhost its working, before I used aws cloud and its working without any problem(same code),now I changed to cloud.elastic.co csproj <Project Sdk="Microsoft.NET.Sdk.Web"> <PropertyGroup> <TargetFramework>netcoreapp2.1</TargetFramework> <TypeScriptToolsVersion>3.0</TypeScriptToolsVersion> <!-- qo'shilgan --> <TypeScriptCompileBlocked>true</TypeScriptCompileBlocked> <IsPackable>false</IsPackable> <SpaRoot>ClientApp\</SpaRoot> <DefaultItemExcludes>$(DefaultItemExcludes);$(SpaRoot)node_modules\**</DefaultItemExcludes> <BuildServerSideRenderer>false</BuildServerSideRenderer> <!-- qo'shilgan --> </PropertyGroup> <ItemGroup> <None Remove="Core\DAL\Entities\Manual\Country.cs~RF134f573d.TMP" /> </ItemGroup> <ItemGroup> <Folder Include="Core\Enums\" /> <Folder Include="Seed\" /> <Folder Include="Storage\" /> <Folder Include="Storage\Building\Media\" /> <Folder Include="Storage\Building\Photo\" /> </ItemGroup> <ItemGroup> <PackageReference Include="Autofac" Version="4.8.1" /> <PackageReference Include="Autofac.Extensions.DependencyInjection" Version="4.3.1" /> <PackageReference Include="AutoMapper" Version="7.0.1" /> <PackageReference Include="AutoMapper.Extensions.Microsoft.DependencyInjection" Version="5.0.1" /> <PackageReference Include="FluentValidation.AspNetCore" Version="7.5.2" /> <PackageReference Include="MailKit" Version="2.1.0.3" /> <PackageReference Include="Microsoft.AspNetCore.All" Version="2.0.8" /> <PackageReference Include="Microsoft.AspNetCore.HttpsPolicy" Version="2.2.0" /> <PackageReference Include="Microsoft.AspNetCore.SpaServices.Extensions" Version="2.2.0" /> <PackageReference Include="Microsoft.CodeAnalysis.Common" Version="2.10.0" /> <PackageReference 
Include="Microsoft.CodeAnalysis.CSharp" Version="2.10.0" /> <PackageReference Include="Microsoft.EntityFrameworkCore" Version="2.1.4" /> <PackageReference Include="Microsoft.EntityFrameworkCore.Design" Version="2.1.4" /> <PackageReference Include="Microsoft.EntityFrameworkCore.Proxies" Version="2.1.4" /> <PackageReference Include="Microsoft.EntityFrameworkCore.SqlServer" Version="2.1.4" /> <PackageReference Include="Microsoft.EntityFrameworkCore.SqlServer.Design" Version="1.1.6" /> <PackageReference Include="Microsoft.EntityFrameworkCore.Tools" Version="2.1.4" /> <PackageReference Include="Microsoft.VisualStudio.Web.CodeGeneration.Design" Version="2.0.3" /> <PackageReference Include="morelinq" Version="3.1.0" /> <PackageReference Include="NEST.JsonNetSerializer" Version="6.8.0" /> <PackageReference Include="Npgsql.EntityFrameworkCore.PostgreSQL" Version="2.1.2" /> <PackageReference Include="Npgsql.EntityFrameworkCore.PostgreSQL.NetTopologySuite" Version="2.1.1" /> <PackageReference Include="OpenIddict" Version="2.0.0" /> <PackageReference Include="OpenIddict.EntityFrameworkCore" Version="2.0.0" /> <PackageReference Include="OpenIddict.Mvc" Version="2.0.0" /> <PackageReference Include="PagedList.Core" Version="1.17.4" /> <PackageReference Include="Serilog" Version="2.7.1" /> <PackageReference Include="Serilog.Extensions.Logging" Version="2.0.2" /> <PackageReference Include="Serilog.Sinks.Seq" Version="4.0.0" /> <PackageReference Include="Swashbuckle.AspNetCore" Version="4.0.1" /> </ItemGroup> <ItemGroup> <Reference Include="System"> <HintPath>System</HintPath> </Reference> <Reference Include="System.ComponentModel.Composition"> <HintPath>System.ComponentModel.Composition</HintPath> </Reference> <Reference Include="System.ComponentModel.DataAnnotations"> <HintPath>System.ComponentModel.DataAnnotations</HintPath> </Reference> <Reference Include="System.Data"> <HintPath>System.Data</HintPath> </Reference> </ItemGroup> <ItemGroup> <Service Include="{508349b6-6b84-4df5-91f0-309beebad82d}" /> </ItemGroup> <!-- qo'shilgan --> <ItemGroup> <!-- Don't publish the SPA source files, but do show them in the project files list --> <Content Remove="$(SpaRoot)**" /> <None Include="$(SpaRoot)**" Exclude="$(SpaRoot)node_modules\**" /> </ItemGroup> <Target Name="DebugEnsureNodeEnv" BeforeTargets="Build" Condition=" '$(Configuration)' == 'Debug' And !Exists('$(SpaRoot)node_modules') "> <!-- Ensure Node.js is installed --> <Exec Command="node --version" ContinueOnError="true"> <Output TaskParameter="ExitCode" PropertyName="ErrorCode" /> </Exec> <Error Condition="'$(ErrorCode)' != '0'" Text="Node.js is required to build and run this project. To continue, please install Node.js from https://nodejs.org/, and then restart your command prompt or IDE." /> <Message Importance="high" Text="Restoring dependencies using 'npm'. This may take several minutes..." 
/> <Exec WorkingDirectory="$(SpaRoot)" Command="npm install" /> </Target> <Target Name="PublishRunWebpack" AfterTargets="ComputeFilesToPublish"> <!-- As part of publishing, ensure the JS resources are freshly built in production mode --> <Exec WorkingDirectory="$(SpaRoot)" Command="npm install" /> <Exec WorkingDirectory="$(SpaRoot)" Command="npm run build --prod" /> <Exec WorkingDirectory="$(SpaRoot)" Command="npm run build:ssr --prod" Condition=" '$(BuildServerSideRenderer)' == 'true' " /> <!-- Include the newly-built files in the publish output --> <ItemGroup> <DistFiles Include="$(SpaRoot)dist\**; $(SpaRoot)dist-server\**" /> <DistFiles Include="$(SpaRoot)node_modules\**" Condition="'$(BuildServerSideRenderer)' == 'true'" /> <ResolvedFileToPublish Include="@(DistFiles->'%(FullPath)')" Exclude="@(ResolvedFileToPublish)"> <RelativePath>%(DistFiles.Identity)</RelativePath> <CopyToPublishDirectory>PreserveNewest</CopyToPublishDirectory> </ResolvedFileToPublish> </ItemGroup> </Target> <!-- qo'shilgan --> </Project> Appsettings.development.json { "Logging": { "IncludeScopes": false, "LogLevel": { "Default": "Debug", "System": "Information", "Microsoft": "Information" } } } Starndart Appsettings.json { "SwaggerRoutePrefix": "docs", "Hosts": "http://localhost:54555", "DatabaseConnectionString": "Data Source=SQL6005+++++++;Initial Catalog=DB_+++++++++=n;User Id=DB_+++++++======admin;Password=+++++++;", "Globalization": { "DefaultCulture": "uz", "Cultures": [ { "Code": "uz", "Name": "Uzbek", "IsActive": true }, { "Code": "ru", "Name": "Russian", "IsActive": true }, { "Code": "en", "Name": "English", "IsActive": true } ] }, "JwtIssuerOptions": { "Issuer": "webApi", "Audience": "http://localhost:54555/" }, "elasticsearch": { "index": "honadonz", "url": "https://=========.aws.found.io:9243" }, "Storage": { "DefaultPhotos": "Storage/DefaultPhotos", "UserPhotos": "Storage/UserPhoto", "BuildingMedias": "Storage/Building/Media", "BuildingPhotos": "Storage/Building/Photo" }, "SmtpConfig": { "Host": "smtp.gmail.com", "Port": 465, "UseSSL": true, "Name": "+++++++", "Username": "============", "EmailAddress": "=================", "Password": "===========" }, "Logging": { "IncludeScopes": false, "Debug": { "LogLevel": { "Default": "Warning" } }, "Console": { "LogLevel": { "Default": "Warning" } } } } +++++++++++++++++++++++++++++++ Error 2019-07-28 09:54:38.745 +02:00 [Warning] Invalid NEST response built from a unsuccessful () low level call on POST: /honadon/_search?typed_keys=true Audit trail of this API call: * *[1] BadRequest: Node: https://+++++++++b155b5877928e.europe-west1.gcp.cloud.es.io:9243/ Took: 00:00:00.0034149 OriginalException: System.Net.Http.HttpRequestException: An attempt was made to access a socket in a way forbidden by its access permissions ---> System.Net.Sockets.SocketException: An attempt was made to access a socket in a way forbidden by its access permissions at System.Net.Http.ConnectHelper.ConnectAsync(String host, Int32 port, CancellationToken cancellationToken) --- End of inner exception stack trace --- at System.Net.Http.ConnectHelper.ConnectAsync(String host, Int32 port, CancellationToken cancellationToken) at System.Threading.Tasks.ValueTask1.get_Result() at System.Net.Http.HttpConnectionPool.CreateConnectionAsync(HttpRequestMessage request, CancellationToken cancellationToken) at System.Threading.Tasks.ValueTask1.get_Result() at System.Net.Http.HttpConnectionPool.WaitForCreatedConnectionAsync(ValueTask1 creationTask) at System.Threading.Tasks.ValueTask1.get_Result() at 
System.Net.Http.HttpConnectionPool.SendWithRetryAsync(HttpRequestMessage request, Boolean doRequestAuth, CancellationToken cancellationToken) at System.Net.Http.RedirectHandler.SendAsync(HttpRequestMessage request, CancellationToken cancellationToken) at System.Net.Http.HttpClient.FinishSendAsyncUnbuffered(Task`1 sendTask, HttpRequestMessage request, CancellationTokenSource cts, Boolean disposeCts) at Elasticsearch.Net.HttpConnection.Request[TResponse](RequestData requestData) A: This problem by hosting provider not allowing outbound connections on port 9243. (Smarterasp.net) They answered to me : "upgrade your hosting account to our .net premium plan, so you can enable spacial port to the remote server."
{ "language": "en", "url": "https://stackoverflow.com/questions/57131415", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: XHR not suited for chat applications? When we send an XMLHttpRequest, we always send hundreds of extra bytes with it. For normal usage that is fine, but when building applications that need speed, this is not good for reliability. function update(){ var xhr = getXMLHttp(); // Normal catch handler for XHR xhr.open("POST", "update.php?r=" + "&chatvslog=" + user, true); xhr.send(); window.setTimeout("update();",300); } Each request takes anywhere from 170 to 360 ms to send. The problem is that I need this job done faster. Is there a way of improving my XMLHttpRequest, or should I be doing this another way? A: Polling is a bad workaround that does the job on a small scale, but it is not efficient and is ugly to implement. Modern browsers support WebSockets as a much better way to allow bidirectional communication. With something such as node.js' Socket.IO you can even use a high-level WebSocket abstraction layer that falls back to whatever is available in the browser - it can use WebSockets (preferred) and techniques such as Flash sockets, AJAX long-polling or JSONP long-polling without you having to care about what's used. A: This is what WebSockets is designed for, but it's not yet supported in IE (though it's coming in IE 10) or in some older but still-used versions of the other big browsers. http://caniuse.com/#search=websockets Until then, check out Comet: http://en.wikipedia.org/wiki/Comet_(programming)
{ "language": "en", "url": "https://stackoverflow.com/questions/12326177", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Cannot move capture in lambda without 'auto' keyword Letting the compiler determine the lambda type with auto works just fine: #include <memory> #include <functional> int main(void) { std::unique_ptr<int> ptr(new int(1)); auto fn = [ capture = std::move(ptr) ] () {}; } If the lambda type is explicitly defined, however, a compilation error occurs saying there was an attempt to call unique_ptr's deleted copy constructor: #include <memory> #include <functional> int main(void) { std::unique_ptr<int> ptr(new int(1)); std::function<void()> fn = [ capture = std::move(ptr) ] () {}; } Reduced output: /media/hdd/home/vitor/Documents/parallel-tools/tests/reusable_thread.cpp:115:59: error: use of deleted function ‘std::unique_ptr<_Tp, _Dp>::unique_ptr(const std::unique_ptr<_Tp, _Dp>&) [with _Tp = int; _Dp = std::default_delete<int>]’ In file included from /usr/include/c++/7/memory:80:0, from /media/hdd/home/vitor/Documents/parallel-tools/tests/reusable_thread.cpp:110: /usr/include/c++/7/bits/unique_ptr.h:388:7: note: declared here unique_ptr(const unique_ptr&) = delete; ^~~~~~~~~~ What exactly is going on here?
{ "language": "en", "url": "https://stackoverflow.com/questions/59460667", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How to implement UserStore.GetClaimsAsync method I have a custom user store that implements IUserClaimStore. That way, I need to implement GetClaimsAsync method, but I am stuck here. This is my current implementation: public Task<IList<Claim>> GetClaimsAsync(T user) { return Task.FromResult(new List<Claim>()); } This is not valid since List does not implement IList. So the question is, how can I do it? Thanks Jaime
{ "language": "en", "url": "https://stackoverflow.com/questions/57629690", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Array of 600+ Strings in excel VBA I am doing a loop for each string in an array such that filename = Array(file1,file2.....file600) However VBA gets a compile error that is due to the array taking up 8 lines. As far as I am aware it only allows 1 line (the error says expected list or separator). I am new to VBA, sorry. A: You can escape new lines in VBA with _. So your solution might look like filename = Array("file1", _ "file2", _ "file3") See How to break long string to multiple lines and If Statement With Multiple Lines. If you have 100's of names, however, you might be better off storing them in a worksheet and reading them in, rather than hard-coding them. A: Should the strings in the array actually be "buildable" following a pattern (like per your examples: "file1", "file2", ...,"file600"), then you could have a Function define them for you, like follows: Function GetFileNames(nFiles As Long) As String() Dim iFile As Long ReDim filenames(1 To nFiles) As String For iFile = 1 To nFiles filenames(iFile) = "file" & iFile Next GetFileNames = filenames End Function which you'd call in your "main" code as follows Sub main() Dim filenames() As String filenames = GetFileNames(600) '<--| this way the 'filenames' array gets filled with 600 values like "file1", "file2" and so on End Sub A: The amount of code that can be loaded into a form, class, or standard module is limited to 65,534 lines. A single line of code can consist of up to 1023 bytes. Up to 256 blank spaces can precede the actual text on a single line, and no more than twenty-four line-continuation characters ( _) can be included in a single logical line. From VB6's Help. A: When programming, you never build an array this big manually. Either you store each string inside a cell and at the end you build the array like this: Option Explicit Sub ArrayBuild() Dim Filenames() 'as Variant, and yes I presume when using multiple files the variable name should have an "s" With ThisWorkbook.Sheets("temp") 'or another sheet name Max = .Cells(.Rows.Count, 1).End(xlUp).Row '=max number of rows in column 1 Filenames = .Range(.Cells(1, 1), .Cells(Max, 1)).Value2 'this example uses a one-column range from A1 to A[Max], but you could also use a multi-column one by changing the second .Cells to `.Cells(Max, ColMax)` End With 'do stuff Erase Filenames 'free memory End Sub Or you build the array like you build a house, by adding one brick at a time, like this: Dim Filenames() As String 'this time you can declare it as String, but not in the previous example Dim i& 'counter For i = 1 To Max 'same Max as in the previous example, adapt the code please... ReDim Preserve Filenames(1 To UBound(Filenames) + 1) 'this is an example for an unknown-size array which grows, but in your case you know the size (here Max), so you could declare it `Dim Filenames(1 To Max)` from the start; I just wanted to show every option here Filenames(i) = Cells(i, 1).Value2 'for example, but it can be anything else. Also I'm being lazy and did not reference the cell to its sheet, which I'd never do in actual coding... Next i EDIT I did re-read your question, and it is even easier (basically because you omitted the brackets in your post and corrected it as a comment...): use user3598756's code please. I thought File1 was a variable, when it should be written as "File1". EDIT 2 Why bother building an array where Filename(x)="Filex" anyway? You know the result beforehand.
{ "language": "en", "url": "https://stackoverflow.com/questions/41415892", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: How to test Select2 (Multi-select) in Protractor? I have a Select2 (multi-select); I want to type admin and select it. This is my HTML code: <select class="js-select2" multiple="multiple"> <option> admin </option> <option> John Doe </option> </select> This is my test code: describe('when select admin and press save button', function () { beforeAll(function () { browser.get('http://example'); element(by.css("*[id='technician'] + span.select2")).click(); browser.sleep(1000); element(by.css(".select2-search__field")).sendKeys('admin'); browser.sleep(1000); element(by.css('.select2-results__options li:nth-of-type(1)')).click(); element(by.buttonText('save')).click(); }); it('You must see a successful message', function () { expect(element(by.css(".alert")).getText()).toContain('Settings saved successfully'); }); }); When I execute the code, Protractor gives this message: Failed: element not interactable Where did I make a mistake, and what should I do? A: Select2 is a jQuery plugin which implements the dropdown with CSS & JavaScript; it is not a native dropdown implemented purely by select. For such a CSS dropdown, the visible options do not come from the select, and the select is either invisible or visible but with a very small size (like 1 * 1) to prevent the user from operating it. The code example below was tested against the demo on the Select2 site: describe('handsontable', function(){ it('input text into cell', function(){ browser.ignoreSynchronization = true; browser.get('https://select2.org/selections'); browser.sleep(3000); // click to make the input box and options display element(by.css('select.js-example-basic-multiple-limit + span' + ' .select2-selection--multiple')).click(); browser.sleep(1000); element(by.css("select.js-example-basic-multiple-limit + span input")) .sendKeys('Hawaii'); element(by.xpath("//li[@role='treeitem'][text()='Hawaii']")).click(); browser.sleep(3000); }); })
{ "language": "en", "url": "https://stackoverflow.com/questions/52351877", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: SELECT statements have different number of columns I am making a small php website in which you can follow others and then see their posts. I have three tables: 1. Posts, which has post_id and author_id; 2. follow, which has following and follower; 3. users, which has id, username, and all the other stuff. I try the following in SQL: SELECT * FROM posts,follow,users WHERE posts.author_id=users.id AND users.id=follow.following AND follow.follower='$id' UNION SELECT * FROM posts,users WHERE posts.author_id=users.id AND users.id='$id' where $id is the id of the logged-in user. It displays the following error: #1222 - The used SELECT statements have a different number of columns I have searched for hours but I cannot find answers that match my query. I would really appreciate an answer with a better version of the above code. Thanks in advance. A: When you UNION two queries together, the columns of both must match. You select from posts,follow,users in the first query and from posts,users in the second; this won't work. From the MySQL manual: The column names from the first SELECT statement are used as the column names for the results returned. Selected columns listed in corresponding positions of each SELECT statement should have the same data type A: Perhaps a JOIN would serve you better ... something like this: SELECT * FROM posts JOIN users ON posts.author_id=users.id JOIN follow ON users.id=follow.following WHERE follow.follower='$id'
{ "language": "en", "url": "https://stackoverflow.com/questions/48083351", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-2" }
Q: Cake PHP 2.x model validation returns error on field despite data set I'm working with a Cake PHP 2 project and am utilising the model validation and have implemented validation into my controller I've set up my model to contain a $validate array with my fields and some basic rules, but when validating the request data with the validates method, the validation is failing for some reason. My model looks ike: <?php class MyModelName extends AppModel { /* ** Model name */ public $name = 'MyModelName'; /* ** Table prefix */ public $tablePrefix = 'tlp_'; /* ** Table to save data to */ public $useTable = 'some_table'; /* ** Validation */ public $validate = array( 'age' => array( 'required' => array( 'rule' => 'required', 'message' => 'This field is required' ), 'numeric' => array( 'rule' => 'numeric', 'message' => 'This field can only be numeric' ) ) ); } And then I'm validating like this... $data = [ 'age' => $this->request->data['age'] ?? null, ]; $this->MyModelName->set($this->request->data); if (!$this->MyModelName->validates()) { $json['errors'] = $this->MyModelName->validationErrors; echo json_encode($json); exit(); } $hop = $this->MyModelName->save($data); Why would $this->MyModelName->validates() always be returning false? I've checked that age is in $this->request->data and it is, and it's absolutely a numeric value. $this->request->data looks like the following: { "age": 50 }
{ "language": "en", "url": "https://stackoverflow.com/questions/68411228", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: CMSMS meta tags not retrieved I built a site with CMSMS. I'm having trouble retrieving the meta tags. I assume this needs to be done in the header.inc.php file. As of now, the description meta tag's content is empty: <meta name="description" content=""> According to some documentation I found online I need to put <meta name="description" content="{content}"> but that just outputs literally that in the markup. What is the right way? <meta name="description" content="{$content}"> <meta name="description" content={content}> <meta name="description" content="<?php $content ?>"> <meta name="description" content="<?php echo $content ?>"> And what about {keywords}? None of those work. A: You don't need to touch header.inc.php; you are using CMS Made Simple, not CMS made difficult :). Go to 'Site Admin > Settings - Global Settings > General Settings tab > Global Metadata', add all your tags in there, and put the Smarty tag {metadata} in a page template. More details here: https://docs.cmsmadesimple.org/configuration/global-settings Hope that helps. PS: plenty of fast advice on the Slack channel also.
{ "language": "en", "url": "https://stackoverflow.com/questions/61502044", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How can I generate multi-line build commands? In SCons, my command generators create ridiculously long command lines. I'd like to be able to split these commands across multiple lines for readability in the build log. e.g. I have a SConscipt like: import os # create dependency def my_cmd_generator(source, target, env, for_signature): return r'''echo its a small world after all \ its a small world after all''' my_cmd_builder = Builder(generator=my_cmd_generator, suffix = '.foo') env = Environment() env.Append( BUILDERS = {'MyCmd' : my_cmd_builder } ) my_cmd = env.MyCmd('foo.foo',os.popen('which bash').read().strip()) AlwaysBuild(my_cmd) When it executes, I get: scons: Reading SConscript files ... scons: done reading SConscript files. scons: Building targets ... echo its a small world after all \ its a small world after all its a small world after all sh: line 1: its: command not found scons: *** [foo.foo] Error 127 scons: building terminated because of errors. Doing this in the python shell with os.system and os.popen works -- I get a readable command string and the sub-shell process interprets all the lines as one command. >>> import os >>> cmd = r'''echo its a small world after all \ ... its a small world after all''' >>> print cmd echo its a small world after all \ its a small world after all >>> os.system( cmd) its a small world after all its a small world after all 0 When I do this in SCons, it executes each line one at a time, which is not what I want. I also want to avoid building up my commands into a shell-script and then executing the shell script, because that will create string escaping madness. Is this possible? UPDATE: cournape, Thanks for the clue about the $CCCOMSTR. Unfortunately, I'm not using any of the languages that SCons supports out of the box, so I'm creating my own command generator. Using a generator, how can I get SCons to do: echo its a small world after all its a small world after all' but print echo its a small world after all \ its a small world after all ? A: Thanks to cournape's tip about Actions versus Generators ( and eclipse pydev debugger), I've finally figured out what I need to do. You want to pass in your function to the 'Builder' class as an 'action' not a 'generator'. This will allow you to actually execute the os.system or os.popen call directly. Here's the updated code: import os def my_action(source, target, env): cmd = r'''echo its a small world after all \ its a small world after all''' print cmd return os.system(cmd) my_cmd_builder = Builder( action=my_action, # <-- CRUCIAL PIECE OF SOLUTION suffix = '.foo') env = Environment() env.Append( BUILDERS = {'MyCmd' : my_cmd_builder } ) my_cmd = env.MyCmd('foo.foo',os.popen('which bash').read().strip()) This SConstruct file will produce the following output: scons: Reading SConscript files ... scons: done reading SConscript files. scons: Building targets ... my_action(["foo.foo"], ["/bin/bash"]) echo its a small world after all \ its a small world after all its a small world after all its a small world after all scons: done building targets. The other crucial piece is to remember that switching from a 'generator' to an 'action' means the target you're building no longer has an implicit dependency on the actual string that you are passing to the sub-process shell. You can re-create this dependency by adding the string into your environment. 
e.g., the solution that I personally want looks like: import os cmd = r'''echo its a small world after all \ its a small world after all''' def my_action(source, target, env): print cmd return os.system(cmd) my_cmd_builder = Builder( action=my_action, suffix = '.foo') env = Environment() env['_MY_CMD'] = cmd # <-- CREATE IMPLICIT DEPENDENCY ON CMD STRING env.Append( BUILDERS = {'MyCmd' : my_cmd_builder } ) my_cmd = env.MyCmd('foo.foo',os.popen('which bash').read().strip()) A: You are mixing two totally different things: the command to be executed, and its representation in the command line. By default, scons prints the command line, but if you split the command line, you are changing the commands executed. Now, scons has a mechanism to change the printed commands. They are registered per Action instances, and many default ones are available: env = Environment() env['CCCOMSTR'] = "CC $SOURCE" env['CXXCOMSTR'] = "CXX $SOURCE" env['LINKCOM'] = "LINK $SOURCE" Will print, assuming only C and CXX sources: CC foo.c CC bla.c CXX yo.cc LINK yo.o bla.o foo.o
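A related idiom worth a quick sketch (this assumes SCons's documented Value() and Depends() APIs and is not taken from the answers above): instead of stashing the command string in a throwaway construction variable, record it as a Value node, so the target is rebuilt whenever the string changes.

import os

cmd = r'''echo its a small world after all \
its a small world after all'''

def my_action(target, source, env):
    print(cmd)
    return os.system(cmd)

env = Environment()
env.Append(BUILDERS={'MyCmd': Builder(action=my_action, suffix='.foo')})

my_cmd = env.MyCmd('foo.foo', os.popen('which bash').read().strip())

# A Value node carries the command string; if cmd changes, foo.foo is rebuilt.
env.Depends(my_cmd, env.Value(cmd))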
{ "language": "en", "url": "https://stackoverflow.com/questions/466293", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Preventing commandbar from going into landscape mode I'm making an app, my first UWP app, I already spent hours looking for this. I need the commandbar to keep in portrait mode while all the rest of the application goes to landscape mode. I need something like this: in portrait mode And in landscape mode, see the bar didn't moved. here is my code so far: <Page.BottomAppBar> <CommandBar IsSticky="True" Name="cmdBar_ed"> <CommandBar.PrimaryCommands> <AppBarButton Name="newBtn" Label="" Width="30" Click="newFile"> <AppBarButton.Icon> <BitmapIcon UriSource="ms-appx:///Assets/test.png"/> </AppBarButton.Icon> </AppBarButton> <AppBarButton Name="newBtn1" Label="" Width="30" Click="newFile"> <AppBarButton.Icon> <BitmapIcon UriSource="ms-appx:///Assets/test.png"/> </AppBarButton.Icon> </AppBarButton> <AppBarButton Name="newBtn2" Label="" Width="30" Click="newFile"> <AppBarButton.Icon> <BitmapIcon UriSource="ms-appx:///Assets/test.png"/> </AppBarButton.Icon> </AppBarButton> <AppBarButton Name="newBtn3" Label="" Width="30" Click="newFile"> <AppBarButton.Icon> <BitmapIcon UriSource="ms-appx:///Assets/test.png"/> </AppBarButton.Icon> </AppBarButton> <AppBarButton Name="newBtn4" Label="" Width="30" Click="newFile"> <AppBarButton.Icon> <BitmapIcon UriSource="ms-appx:///Assets/test.png"/> </AppBarButton.Icon> </AppBarButton> <AppBarButton Name="btnTab" Label="" Width="30" Click="insertTab"> <AppBarButton.Icon> <BitmapIcon UriSource="ms-appx:///Assets/btnIcon/Tab.png"/> </AppBarButton.Icon> </AppBarButton> </CommandBar.PrimaryCommands> <CommandBar.SecondaryCommands> <AppBarButton Name="btn_save" Icon="Save" Label="Save" Click="save"/> <AppBarButton Name="btn_saveAs" Icon="Save" Label="Save As" Click="saveAs"/> <AppBarButton Name="btnClose" Icon="Cancel" Label="Close File" Click="close" /> </CommandBar.SecondaryCommands> </CommandBar> </Page.BottomAppBar> Also, another Strange thing is that when the on screen Keyboard is activated, this bar moves with it to stay on top of the on screen keyboard and not in a a fixed position. how would I do it to fix? A: If you put commandbar in <Page.BottomAppBar/>, it always changes its position according to current ApplicationViewOrientation, you cannot control it. 
But if you put it into a panel(e.g, Grid): <Page x:Class="AppCommandBar.MainPage" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:local="using:AppCommandBar" xmlns:d="http://schemas.microsoft.com/expression/blend/2008" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" mc:Ignorable="d"> <Grid Background="{ThemeResource ApplicationPageBackgroundThemeBrush}"> <Grid.RowDefinitions> <RowDefinition Height="9*"></RowDefinition> <RowDefinition Height="*"></RowDefinition> </Grid.RowDefinitions> <Grid.ColumnDefinitions> <ColumnDefinition></ColumnDefinition> <ColumnDefinition></ColumnDefinition> </Grid.ColumnDefinitions> <CommandBar Name="cmdBar_ed" Grid.Row="1" Grid.ColumnSpan="2"> <CommandBar.PrimaryCommands> <AppBarButton Name="newBtn" Label="test" Width="30"> </AppBarButton> </CommandBar.PrimaryCommands> <CommandBar.SecondaryCommands> <AppBarButton Name="btn_save" Icon="Save" Label="Save"/> <AppBarButton Name="btn_saveAs" Icon="Save" Label="Save As"/> <AppBarButton Name="btnClose" Icon="Cancel" Label="Close File"/> </CommandBar.SecondaryCommands> </CommandBar> </Grid> You could manually set its position by current view's orientation.About how to detect current view's orientation, please check this thread How to detect orientation changes and change layout
{ "language": "en", "url": "https://stackoverflow.com/questions/41850955", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Filter list within list to get all second numbers Im trying to filter the following list: filtered = [[1.0, 3.0], [2.0, 70.0], [40.0, 3.0], [5.0, 50.0], [6.0, 5.0], [7.0, 21.0]] To get every second number in the list within list, resulting in the following: filtered = [[3.0], [70.0], [3.0], [50.0], [5.0], [21.0]] I tried the following which does not work: from operator import itemgetter a = map(itemgetter(0), filtered) print(a) The following also doesn't work: from operator import itemgetter b = map(filtered,key=itemgetter(1))[1] print(b) In the last line of code i have shown, if I change map to max, it does find the largest value of all the second floats in the lists. So i assume that i am close to a solution? A: You can use a list comprehension. x = [[el[1]] for el in filtered] or: x = [[y] for x,y in filtered] You can also use map with itemgetter. To print it, iterate over the iterable object returned by map. You can use list for instance. from operator import itemgetter x = map(itemgetter(1), filtered) print(list(x)) A: You are not closer to a solution trying to pass a key to map. map only takes a function and an iterable (or multiple iterables). Key functions are for ordering-related functions (sorted, max, etc.) But you were actually pretty close to a solution in the start: a = map(itemgetter(0), filtered) The first problem is that you want the second item (item 1), but you're passing 0 instead of 1 to itemgetter. That obviously won't work. The second problem is that a is a map object—a lazily iterable. It does in fact have the information you want: >>> a = map(itemgetter(1), filtered) >>> for val in a: print(val, sep=' ') 3.0 70.0 3.0 50.0 5.0 21.0 … but not as a list. If you want a list, you have to call list on it: >>> a = list(map(itemgetter(1), filtered)) >>> print(a) [3.0, 70.0, 3.0, 50.0, 5.0, 21.0] Finally, you wanted a list of single-element lists, not a list of elements. In other words, you want the equivalent of item[1:] or [item[1]], not just item[1]. You can do that with itemgetter, but it's a pretty ugly, because you can't use slice syntax like [1:] directly, you have to manually construct the slice object: >>> a = list(map(itemgetter(slice(1, None)), filtered)) >>> print(a) [[3.0], [70.0], [3.0], [50.0], [5.0], [21.0]] You could write this a lot more nicely by using a lambda function: >>> a = list(map(lambda item: item[1:], filtered)) >>> print(a) [[3.0], [70.0], [3.0], [50.0], [5.0], [21.0]] But at this point, it's worth taking a step back: map does the same thing as a generator expression, but map takes a function, while a genexpr takes an expression. We already know exactly what expression we want here; the hard part was turning it into a function: >>> a = list(item[1:] for item in filtered) >>> print(a) Plus, you don't need that extra step to turn it into a list with a genexpr; just swap the parentheses with brackets and you've got a list comprehension: >>> a = [item[1:] for item in filtered] >>> print(a)
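Putting the pieces together on the sample data (a small usage sketch; variable names follow the question):

from operator import itemgetter

filtered = [[1.0, 3.0], [2.0, 70.0], [40.0, 3.0], [5.0, 50.0], [6.0, 5.0], [7.0, 21.0]]

# list of single-element lists, as asked for
wrapped = [item[1:] for item in filtered]
print(wrapped)        # [[3.0], [70.0], [3.0], [50.0], [5.0], [21.0]]

# plain list of the second values, which is what max() was already finding the largest of
seconds = list(map(itemgetter(1), filtered))
print(seconds)        # [3.0, 70.0, 3.0, 50.0, 5.0, 21.0]
print(max(seconds))   # 70.0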
{ "language": "en", "url": "https://stackoverflow.com/questions/50320657", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How can I use a script's session outside the script's folder, e.g. in the home folder? I use a script on my website at localhost/panel/index.php. In index.php the script sets up the session like this: $cookie_lifetime = ini_get('session.cookie_lifetime') ? ini_get('session.cookie_lifetime') : 0; session_set_cookie_params($cookie_lifetime , rtrim(dirname($_SERVER["SCRIPT_NAME"]),"/") . "/", $_SERVER['SERVER_NAME'], $ssl, true); session_start(); I need to be able to use this same session on localhost/index.php. A: To share the session cookie across the whole site you don't need any calculation: pass "/" as the path, so the cookie is sent for every path on the site: session_set_cookie_params($cookie_lifetime, '/', $_SERVER['SERVER_NAME'], $ssl, true); It's worth noting that / is also the default value of session.cookie_path.
{ "language": "en", "url": "https://stackoverflow.com/questions/44287991", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Decimal error on using a LINQ-style filter with a DataColumn I have some simple code that runs through all the text columns of a DataGrid. I use a For Each loop to build a filter statement that searches for a value in each column. The problem I have is when a column holds decimal values. I intend to add an If check that asks whether the column is decimal before it goes into the filter. How can I check what the column's data type is? If String.IsNullOrWhiteSpace(Me.DisplayMemberPath) = False AndAlso mBindingListCollectionView IsNot Nothing AndAlso editableTextBox IsNot Nothing Then Dim str As New StringBuilder For Each item As DataGridTextColumn In Me.Columns Dim b As Binding = item.Binding If b IsNot Nothing AndAlso b.Path IsNot Nothing Then str.Append(" " & b.Path.Path & " LIKE '%" & editableTextBox.Text & "%' OR") End If Next Dim query As String If str.ToString.Trim.EndsWith("OR") Then query = str.ToString(0, str.Length - 2) Else query = str.ToString End If mBindingListCollectionView.CustomFilter = query This is a CustomControl in WPF. The custom control allows me to add data columns, and then I hook up an ItemsSource in my window. When I start typing, this little function searches all the available columns and shows whatever is similar to the entered text.
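No answer is recorded in this thread. A hedged sketch of one way to look up the bound column's type before appending it to the filter follows; it assumes the collection view wraps a DataView (that assumption, and the Convert(...) quoting for numeric columns, are mine, not from the question):

```vb
' Sketch: inspect the underlying DataColumn's DataType before building the LIKE clause.
Dim dv As DataView = TryCast(mBindingListCollectionView.SourceCollection, DataView)
For Each item As DataGridTextColumn In Me.Columns
    Dim b As Binding = TryCast(item.Binding, Binding)
    If b Is Nothing OrElse b.Path Is Nothing Then Continue For

    Dim columnName As String = b.Path.Path
    Dim isNumeric As Boolean = False
    If dv IsNot Nothing AndAlso dv.Table.Columns.Contains(columnName) Then
        Dim colType As Type = dv.Table.Columns(columnName).DataType
        isNumeric = (colType Is GetType(Decimal) OrElse colType Is GetType(Double))
    End If

    If isNumeric Then
        ' LIKE needs a string, so convert the numeric column first.
        str.Append(" CONVERT(" & columnName & ", 'System.String') LIKE '%" & editableTextBox.Text & "%' OR")
    Else
        str.Append(" " & columnName & " LIKE '%" & editableTextBox.Text & "%' OR")
    End If
Next
```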
{ "language": "en", "url": "https://stackoverflow.com/questions/25207675", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Passing data between fragments when swiped I am a newbie to Android. I am trying to develop an application which contains a ViewPager with tabs. My aim is to pass the data entered in the first fragment (i.e. tab 1) to the second fragment. I am confused about how to get the data from FragmentA and show it in FragmentB; besides, my ViewPager is in MainActivity. Some of my code is: mviewpager.addOnPageChangeListener(new ViewPager.OnPageChangeListener() { @Override public void onPageScrolled(int position, float positionOffset, int positionOffsetPixels) { // Toast.makeText(getApplicationContext(),""+servloc,Toast.LENGTH_SHORT).show(); } @Override public void onPageSelected(int position) { mviewpager.setCurrentItem(position); int index = mviewpager.getCurrentItem(); if(position==0) { Fragmentpageadapter adapter = (Fragmentpageadapter) mviewpager.getAdapter(); CreateInspectionFragmentA ca= (CreateInspectionFragmentA) adapter.getFragment(index); } The code above is the ViewPager's setOnPageChangeListener, and my page adapter is: @Override public Fragment getItem(int position) { switch (position) { case 0: CreateInspectionFragmentA createInspectionFragmenta=new CreateInspectionFragmentA(); mPageReferenceMap.put(position, createInspectionFragmenta); return createInspectionFragmenta; case 1: CreateInspectionFragmentB createInspectionFragmentb = new CreateInspectionFragmentB(); // createInspectionFragmentb.deliverData(data1,data2); mPageReferenceMap.put(position,createInspectionFragmentb); return createInspectionFragmentb; case 2: CreateInspectionFragmentC createInspectionFragmentc = new CreateInspectionFragmentC(); mPageReferenceMap.put(position,createInspectionFragmentc); return createInspectionFragmentc; default: return null; } } My requirement is that the data I entered in an EditText in FragmentA should be visible in FragmentB when I swipe, not on a button click or anything like that. Possibilities: 1. If there is a way to get the UI controls of FragmentA in onPageSelected, I can bind the EditText view and get the data. 2. Using a Bundle. Please help me resolve this problem. Thanks in advance. A: You can try this. In FragmentA you put this code: private static NameOfTheFragment instance = null; public static NameOfTheFragment getInstance() { return instance; } Then create a function to return what you want, like a ListView: public ListView getList(){ return list; } Then in FragmentB you do this in onCreateView: ListView list = NameOfTheFragment.getInstance().getList(); Then you show the list in FragmentB.
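A compact Java sketch of the asker's first possibility (reading FragmentA's field in onPageSelected and handing it to FragmentB through the adapter shown above); getEnteredText() and setData() are hypothetical helpers that would have to be added to the two fragments:

```java
// Inside the OnPageChangeListener registered on mviewpager.
@Override
public void onPageSelected(int position) {
    Fragmentpageadapter adapter = (Fragmentpageadapter) mviewpager.getAdapter();
    if (position == 1) { // FragmentB has just become visible
        CreateInspectionFragmentA fragA =
                (CreateInspectionFragmentA) adapter.getFragment(0);
        CreateInspectionFragmentB fragB =
                (CreateInspectionFragmentB) adapter.getFragment(1);
        if (fragA != null && fragB != null) {
            // e.g. getEnteredText() returns editText.getText().toString()
            fragB.setData(fragA.getEnteredText());
        }
    }
}
```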
{ "language": "en", "url": "https://stackoverflow.com/questions/32479685", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to set Json string array to Java object using Java I have following JSON string that I need to set to the Java objects of POJO class. What method should I follow? {"status":"FOUND","messages":null,"sharedLists": [{"listId":"391647d","listName":"/???","numberOfItems":0,"colla borative":false,"displaySettings":true}] } I tried using Gson but it did not work for me. Gson gson = new Gson(); SharedLists target = gson.fromJson(sb.toString(), SharedLists.class); Following is my SharedLists pojo public class SharedLists { @SerializedName("listId") private String listId; @SerializedName("listName") private String listName; @SerializedName("numberOfItems") private int numberOfItems; @SerializedName("collaborative") private boolean collaborative; @SerializedName("displaySettings") private boolean displaySettings; public int getNumberOfItems() { return numberOfItems; } public void setNumberOfItems(int numberOfItems) { this.numberOfItems = numberOfItems; } public boolean isCollaborative() { return collaborative; } public void setCollaborative(boolean collaborative) { this.collaborative = collaborative; } public boolean isDisplaySettings() { return displaySettings; } public void setDisplaySettings(boolean displaySettings) { this.displaySettings = displaySettings; } public String getListId() { return listId; } public void setListId(String listId) { this.listId = listId; } } A: Following is your JSON string. { "status": "FOUND", "messages": null, "sharedLists": [ { "listId": "391647d", "listName": "/???", "numberOfItems": 0, "colla borative": false, "displaySettings": true } ] } Clearly sharedLists is a JSON array within the outer JSON object. So I have two classes as follows (created from http://www.jsonschema2pojo.org/ by providing your JSON as input) ResponseObject - Represents the outer object public class ResponseObject { @SerializedName("status") @Expose private String status; @SerializedName("messages") @Expose private Object messages; @SerializedName("sharedLists") @Expose private List<SharedList> sharedLists = null; public String getStatus() { return status; } public void setStatus(String status) { this.status = status; } public Object getMessages() { return messages; } public void setMessages(Object messages) { this.messages = messages; } public List<SharedList> getSharedLists() { return sharedLists; } public void setSharedLists(List<SharedList> sharedLists) { this.sharedLists = sharedLists; } } and the SharedList - Represents each object within the array public class SharedList { @SerializedName("listId") @Expose private String listId; @SerializedName("listName") @Expose private String listName; @SerializedName("numberOfItems") @Expose private Integer numberOfItems; @SerializedName("colla borative") @Expose private Boolean collaBorative; @SerializedName("displaySettings") @Expose private Boolean displaySettings; public String getListId() { return listId; } public void setListId(String listId) { this.listId = listId; } public String getListName() { return listName; } public void setListName(String listName) { this.listName = listName; } public Integer getNumberOfItems() { return numberOfItems; } public void setNumberOfItems(Integer numberOfItems) { this.numberOfItems = numberOfItems; } public Boolean getCollaBorative() { return collaBorative; } public void setCollaBorative(Boolean collaBorative) { this.collaBorative = collaBorative; } public Boolean getDisplaySettings() { return displaySettings; } public void setDisplaySettings(Boolean displaySettings) { this.displaySettings = 
displaySettings; } } Now you can parse the entire JSON string with GSON as follows Gson gson = new Gson(); ResponseObject target = gson.fromJson(inputString, ResponseObject.class); Hope this helps.
{ "language": "en", "url": "https://stackoverflow.com/questions/44006902", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: _WebTryThreadLock error I am facing an error while manipulating a UIWebView in an iPad application. I have to fill the UIWebView with an HTML string, and since the content comes from an API I implemented the work on a background thread. Below is the error message: bool _WebTryThreadLock(bool), 0xb2aa410: Tried to obtain the web lock from a thread other than the main thread or the web thread. This may be a result of calling to UIKit from a secondary thread. Crashing now... A: You are making UI changes on a secondary thread. You must do all UI changes on the main thread. Check your threading code.
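A minimal Objective-C sketch of that fix; fetchHTMLFromAPI and the webView outlet are illustrative names, not from the thread:

```objc
// Do the network work off the main thread, but touch the UIWebView only on the main queue.
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    NSString *html = [self fetchHTMLFromAPI]; // hypothetical helper that calls the API

    dispatch_async(dispatch_get_main_queue(), ^{
        [self.webView loadHTMLString:html baseURL:nil];
    });
});
```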
{ "language": "en", "url": "https://stackoverflow.com/questions/5662816", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Rounded Border Corner I'm wondering about how to make the top-border like the red line on the picture below? I've tried border-radius, but I can't figure out how to round the bottom part of the top-border. A: I would use :before to create a pseudo-element that you can style, as it is only used for presentation, so having an empty element would be unnecessarily verbose. Here’s an example of how this could be done: .splitter { border: 1px solid #ddd; border-top: 0; } .splitter:before { content: ' '; display: block; position: relative; top: -5px; width: 100%; height: 8px; background-color: red; border-radius: 100px / 70px; } .myContent { padding: 0 20px; } <div class="splitter"> <div class="myContent"> <h1>React or Angular</h1> <p>Lorem ipsum Mollit qui sunt consequat deserunt exercitation veniam.</p> </div> </div> Which can also be seen working on JS Bin: http://jsbin.com/hoqizajada/edit?html,css,output A: I think it's not possible with a single div. However, you can place a div above it and trick with border-radius. .inbox { width: 200px; } #top-border { border: red 4px solid; border-radius: 4px; } #test{ padding: 4px; height: 200px; background: #EEEEEE; } <!DOCTYPE html> <html> <head> <meta charset="utf-8"> <meta name="viewport" content="width=device-width"> <title>JS Bin</title> </head> <body> <div id="top-border" class="inbox"></div> <div id="test" class="inbox"></div> </body> </html> JSBIN: http://jsbin.com/pofolanuje/edit?html,css,output A: Here is the complete code, hope this helps you; .container{ width:320px; height:520px; background:#fff; -webkit-border-radius: 5px; -moz-border-radius: 5px; border-radius: 5px; border:1px solid #e4e4e4; } .border-red{ background:red; width:100%; height:10px; -webkit-border-radius: 5px; -moz-border-radius: 5px; border-radius: 5px; } <div class="container"> <div class="border-red"> </div> </div>
{ "language": "en", "url": "https://stackoverflow.com/questions/36838062", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Should I install PWA offline caching at the start of the development? As the question suggests, should I install/add Progressive Web Application (PWA) offline caching at the start of the development, or should I only add it after development is complete? Because I've heard/read that installing it at the start may impact how we develop our application and how we connect with the backend. However, the part where I'm most confused is that the data is "cached" throughout the development, and I may not be able to get "fresh" data immediately unless I use incognito.
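No answer is recorded here. As a hedged aside (my sketch, not advice from the thread), a common way to avoid stale caches during development is to register the service worker only in production builds, so local development always hits the network:

```javascript
// main.js - register the service worker only for production builds.
// Assumes a bundler that replaces process.env.NODE_ENV at build time.
if (process.env.NODE_ENV === 'production' && 'serviceWorker' in navigator) {
  window.addEventListener('load', () => {
    navigator.serviceWorker
      .register('/service-worker.js')
      .catch((err) => console.error('Service worker registration failed:', err));
  });
}
```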
{ "language": "en", "url": "https://stackoverflow.com/questions/67919278", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: jQuery element value = $(this).value I would like to select the element with class of Level1 whose value is equal to $(this).val(). I read that you can't do $(.Level1[someAttribute=value]) which makes sense as val() isn't an available DOM attribute in that sense. This may be an easy one, but I am stumped at the moment. Unfortunately i do not have the ability to change how the HTML below is rendered. $('.Level0').each(function () { var sortOfChildren = $('.Level1').val() = $(this).val //not real code }); HTML <div class="Row Level0 PCITrainingVerification">x</div> <div class="Row Level1 PCITrainingVerification">x</div> <div class="Row Level1 PCITrainingVerification">x</div> <div class="Row Level1 PCITrainingVerification">x</div> <div class="Row Level1 PCITrainingVerification">x</div> <div class="Row Level0 Training2">y</div> <div class="Row Level0 Training3">z</div> <div class="Row Level1 Training3">z</div> ANSWER $('.Level0').each(function () { var targetValue = $(this).val(); var matching = $(".Level1").filter(function() { return $(this).val() === targetValue; }); // ...use `matching` here... }); A: Your question talks about .val and the "value" of elements, but none of the elements in your question is a form field, and therefore they don't have values. If they were form fields: I read that you can't do $(.Level1[someAttribute=value]) which makes sense as val() isn't an available DOM attribute in that sense Right, your only option is to loop, probably via filter: $('.Level0').each(function () { var targetValue = $(this).val(); var matching = $(".Level1").filter(function() { return $(this).val() === targetValue; }); // ...use `matching` here... }); Note that matching may have zero, or multiple, elements in it depending on how many .Level1 elements matched the value. But because they aren't: ...you may want .text or .html instead of .val in the above, or possibly a :contains selector instead of filter.
{ "language": "en", "url": "https://stackoverflow.com/questions/31747740", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Using Tuples in map, flatmap,... partial functions If I do: val l = Seq(("un", ""), ("deux", "hehe"), ("trois", "lol")) l map { t => t._1 + t._2 } It's ok. If I do: val l = Seq(("un", ""), ("deux", "hehe"), ("trois", "lol")) l map { case (b, n) => b + n } It's ok too. But if I do: val l = Seq(("un", ""), ("deux", "hehe"), ("trois", "lol")) l map { (b, n) => b + n } It will not work. Why should I use "case" keyword to use named tuples? A: The error message with 2.11 is more explanatory: scala> l map { (b, n) => b + n } <console>:9: error: missing parameter type Note: The expected type requires a one-argument function accepting a 2-Tuple. Consider a pattern matching anonymous function, `{ case (b, n) => ... }` l map { (b, n) => b + n } ^ <console>:9: error: missing parameter type l map { (b, n) => b + n } ^ For an apply, you get "auto-tupling": scala> def f(p: (Int, Int)) = p._1 + p._2 f: (p: (Int, Int))Int scala> f(1,2) res0: Int = 3 where you supplied two args instead of one. But you don't get auto-untupling. People have always wanted it to work that way. A: This situation can be understand with the types of inner function. First, the type syntax of parameter function for the map function is as follows. Tuple2[Int,Int] => B //Function1[Tuple2[Int, Int], B] The first parameter function is expand to this. (t:(Int,Int)) => t._1 + t._2 // type : Tuple2[Int,Int] => Int This is ok. Then the second function. (t:(Int, Int)) => t match { case (a:Int, b:Int) => a + b } This is also ok. In the failure scenario, (a:Int, b:Int) => a + b Lets check the types of the function (Int, Int) => Int // Function2[Int, Int, Int] So the parameter function type is wrong. As a solution, you can convert multiple arity functions to tuple mode and backward with the helper functions in Function object. You can do following. val l = Seq(("un", ""), ("deux", "hehe"), ("trois", "lol")) l map(Function.tupled((b, n) => b + n )) Please refer Function API for further information. A: The type of a function argument passed to map function applied to a sequence is inferred by the type of elements in the sequence. In particular, scenario 1: l map { t => t._1 + t._2 } is same as l map { t: ((String, String)): (String) => t._1 + t._2 } but shorter, which is possible because of type inference. Scala compiler automatically inferred the type of the argument to be (String, String) => String scenario 2: you can also write in longer form l map { t => t match { case(b, n) => b + n } } scenario 3: a function of wrong type is passed to map, which is similar to def f1 (a: String, b: String) = a + b def f2 (t: (String, String)) = t match { case (a, b) => a + b } l map f1 // won't work l map f2
{ "language": "en", "url": "https://stackoverflow.com/questions/22944468", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Issue in VS Code I am a new VS Code user. When I installed VS Code and ran a C "Hello World" program (and other programs), I ran into one issue: undefined reference to winMain@16 I also have the MinGW compiler. I searched all over the internet for this issue but didn't find anything about it in relation to Visual Studio Code. If you have a solution to this, please reply. ~~ Joydeep
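There is no answer in this thread. As a hedged note (not a confirmed diagnosis), MinGW typically emits this linker error when it cannot find a main function, for example when an empty or unsaved file is compiled or main is misspelled, so a quick sanity check is to build a minimal program directly from a terminal:

```c
/* hello.c - compile with: gcc hello.c -o hello */
#include <stdio.h>

int main(void)
{
    printf("Hello World\n");
    return 0;
}
```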
{ "language": "en", "url": "https://stackoverflow.com/questions/62317326", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Comparing two datasets, to ensure an administrator can only edit users in their area I have created an app where users have many areas (through subscriptions). Some users have the :admin role, and these users also have many administrative_areas (through administrations. I'm struggling getting my head round the best way to ensure that given an admin of Area A goes to the users index page, they only see other users in Area A, but not those outside of this area. My model is set up as follows: class User < ActiveRecord::Base has_many :subscriptions has_many :areas, through: :subscriptions has_many :administrations has_many :administrative_areas, through: :administrations, :source => :area, :class_name => "Area" end class Area < ActiveRecord::Base has_many :subscriptions has_many :users, :through => :subscriptions has_many :administrations has_many :admins, :through => :administrations, :class_name => "User", :source => :user end I can successfully set up the users / admins, but am struggling with getting the right ActiveRecord Query to return only relevant users for the admin. Any thoughts on solving this, or suggestions for a better approach? Thanks! A: Assuming @admin_user as some user with admin role. @admin_areas = User.joins(:subscriptions).where('subscriptions.area_id' => @admin_user.administrative_areas.map(&:id)) You can use cancan gem to manage roles and giving controls to different roles. EDIT : You can try something like below In user.rb scope :users_to_manage, lambda{ joins(:subscriptions).where('subscriptions.area_id' => current_user.administrative_areas.map(&:id)) #if you have access current_user in model In ability.rb can :manage, Area.users_to_manage
{ "language": "en", "url": "https://stackoverflow.com/questions/15055819", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Why is my java app spending 30% of its time in young generation gc? I was looking at the gc log file for our Solr installation, and I noticed its taking 30% of its time doing young generation gc. The log snippet below spans about a second, and the gc times add up to .34 seconds. My question is: is this a problem, and if so what's causing it? I am running jdk 1.6.0_24 on Linux 1004626.109: [GC 1004626.109: [ParNew: 74847K->5219K(76672K), 0.0838750 secs] 10831779K- 10762151K(11525824K), 0.0841790 secs] [Times: user=0.24 sys=0.00, real=0.09 secs] 1004626.320: [GC 1004626.320: [ParNew: 73379K->5468K(76672K), 0.0527070 secs] 10830311K->10762874K(11525824K), 0.0529680 secs] [Times: user=0.20 sys=0.00, real=0.06 secs] 1004626.511: [GC 1004626.511: [ParNew: 73628K->4986K(76672K), 0.0591070 secs] 10831034K->10763002K(11525824K), 0.0593820 secs] [Times: user=0.20 sys=0.00, real=0.05 secs] 1004626.698: [GC 1004626.698: [ParNew: 73146K->5611K(76672K), 0.0523060 secs] 10831162K->10764169K(11525824K), 0.0525820 secs] [Times: user=0.21 sys=0.00, real=0.05 secs] 1004626.902: [GC 1004626.902: [ParNew: 73771K->6878K(76672K), 0.0653490 secs] 10832329K->10765868K(11525824K), 0.0656210 secs] [Times: user=0.22 sys=0.00, real=0.06 secs] A: No, that's not a problem. It means that your objects come and go - you create them in scope, use them, and then they're eligible for GC. I don't think that's an indication that something's wrong. The other extreme would be an issue: objects are created, age, and stick around too long. That's where memory leaks and filled perm space happens. A: I think you do have a problem here. I'm not an expert on reading GC logs, but I think it is saying that you have a 'young' space of 76672K, and a total heap size of 11525824K. Furthermore, the total heap usage after each GC cycle is 10765868K ... and growing. And it is (apparently) spending ~30% of its time garbage collecting. My diagnosis is that your heap is nearly full, and you are spending a significant (and increasing) percentage of time garbage collection as a direct result. My advice would be restart the application (short term), and look for memory leaks (long term). If there are no memory leaks (i.e. your application is using all of that heap space) then you need to look for ways to reduce you application's memory usage. Your application does seem to be generating a fair bit of garbage, but that is not necessarily a worry. The HotSpot GCs can reclaim garbage pretty efficiently. It is the amount of non-garbage that it has to deal with that causes performance issues.
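As a hedged aside before hunting for the leak, it can help to capture more evidence than the application GC log alone; the flags below are standard HotSpot options for JDK 6, but verify them against your exact JVM before relying on them:

```sh
# Detailed GC logging plus an automatic heap dump if the heap eventually fills up.
java -verbose:gc \
     -XX:+PrintGCDetails \
     -XX:+PrintGCTimeStamps \
     -Xloggc:gc.log \
     -XX:+HeapDumpOnOutOfMemoryError \
     -XX:HeapDumpPath=/tmp \
     -jar start.jar   # illustrative Solr/Jetty launcher
```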
{ "language": "en", "url": "https://stackoverflow.com/questions/10259032", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: The type or namespace name 'Input' does not exist in the namespace 'UnityEngine.Experimental' I am using unity's new input system and getting this error, I tried a lot to fix it but can't find the problem. Please help me. Error: Assets\Player01.cs(4,32): error CS0234: The type or namespace name 'Input' does not exist in the namespace 'UnityEngine.Experimental' (are you missing an assembly reference?) Full Script(C#): using System.Collections; using System.Collections.Generic; using UnityEngine; using UnityEngine.Experimental.Input; public class Player01 : MonoBehaviour { public InputMaster controls; void Awake () { controls.Player01.Shoot.performed += _ => Shoot(); } void Shoot () { Debug.Log("We shot the sherif!"); } private void OnEnable() { controls.Enable(); } private void OnDisable() { controls.Disable(); } } A: The problem is that there is no namespace of UnityEngine.Experimental.Input. But, it only says “'Input' does not exist in the namespace 'UnityEngine.Experimental'”. ‘Experimental’ does exist and ‘Input’ does not. However, ‘InputSystem’ does. And that is the one you are looking for, not ‘Input’. You should change the first line to this: using System.Collections; using System.Collections.Generic; using UnityEngine; using UnityEngine.Experimental.InputSystem; public class Player01 : MonoBehaviour ... A: Just in case, if the first solution did not work, Close vs studio / code if open, then go to Edit -> Project Settings -> Input System Manager. There will be only one option available. Click the button, save your scene. Open a script. This worked for me. A lot of tutorials do not mention this part.
{ "language": "en", "url": "https://stackoverflow.com/questions/67446941", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: ToggleButton with custom image and custom text color I have this toggle: <ToggleButton android:id="@+id/toggle_1" android:layout_width="0dp" android:layout_height="wrap_content" android:layout_weight="1" android:background="@drawable/custom_toggle_bg" android:gravity="bottom|center_horizontal" /> and this is the drawable resource: <?xml version="1.0" encoding="utf-8"?> <selector xmlns:android="http://schemas.android.com/apk/res/android"> <item android:drawable="@drawable/active" android:state_checked="true"/> <item android:drawable="@drawable/default" android:state_checked="false"/> </selector> Now I would like the text of the ToggleButton (on and off) to change color when the toggle changes state (only the text, not the background of the entire ToggleButton). How can I do this? A: You can add the attribute android:textColor="@color/color_selector" to your <ToggleButton> //res/color/color_selector.xml <selector xmlns:android="http://schemas.android.com/apk/res/android"> <item android:state_checked="true" android:color="@color/text_on" /> <item android:state_checked="false" android:color="@color/text_off" /> </selector> A: Make another selector at res/color/my_selector.xml and set it on the ToggleButton with android:textColor="@color/my_selector". Example my_selector.xml: <?xml version="1.0" encoding="utf-8"?> <selector xmlns:android="http://schemas.android.com/apk/res/android"> <item android:color="@color/text_active" android:state_checked="true"/> <item android:color="@color/text_default" android:state_checked="false"/> </selector> A: In your code, use an OnClickListener. protected void onCreate(Bundle savedInstanceState) { final ToggleButton tb = new ToggleButton(DataHandler.getContext()); tb.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View view) { if (tb.isChecked()) { tb.setTextColor(Color.GREEN); } else { tb.setTextColor(Color.RED); } } }); }
{ "language": "en", "url": "https://stackoverflow.com/questions/32358100", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: RequestContext.RouteData and Request.UrlReferrer info is lost on FormMethod.Post I am trying to post a form with one of the variables being the Action of the previous URL, however whenever I POST it, the variable becomes null. How can I persist that info at least until it reaches the POST Action? I've tried using both Url.RequestContext.RouteData.Values["action"] and Request.UrlReferrer to get the Action. In my View I'm trying to POST a file, Action name, and an id: Html.BeginForm("Edit", "Expenses", FormMethod.Post, new { enctype = "multipart/form-data", actionName = actionName, id = idInt }) If I keep it the way it is, the Action name is null, but if I remove the FormMethod.Post portion from the BeginForm method, the Action name is posted successfully and the file is null. A: You are using the Html.BeginForm helper method incorrectly! You mixed route values and html attributes to a single object ! Your current call matches the below overload public static MvcForm BeginForm( this HtmlHelper htmlHelper, string actionName, string controllerName, FormMethod method, IDictionary<string, object> htmlAttributes ) The last parameter is the htmlAttributes. So with your code, it will generate the form tag markup like this <form action="/Expenses/Edit" actionname="someActionName" enctype="multipart/form-data" id="22" method="post"> </form> You can see that Id and action became 2 attributes of the form ! Try this overload where you can specify both route values and htmlAttributes @using (Html.BeginForm("AddToCart", "Home", new { actionName = "Edit", id = 34 }, FormMethod.Post,new { enctype = "multipart/form-data", })) { <input type="submit"/> } which will generate the correct form action attribute value using the route values you provided.
{ "language": "en", "url": "https://stackoverflow.com/questions/39236965", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Prevent Jackson from accepting numbers into Date fields in the request body How can I stop Jackson from accepting numeric values into Date fields in the request body? Jackson seems to be treating them as timestamps and doing the conversion but I would rather have it return a Bad Request response instead. The field is defined as follows: @JsonFormat(shape = JsonFormat.Shape.STRING, pattern = "yyyy-MM-dd'T'HH:mm:ss'Z'") private final Date someDate; Example request: { "someDate": 1 } Is deserialized to: 1970-01-01 01:00:00 CET
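No answer is recorded for this question. One hedged sketch (my code, not from the thread) is a strict custom deserializer that rejects anything that is not a JSON string before parsing it with the same pattern; the framework can then translate the parse failure into a 400 response:

```java
import java.io.IOException;
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;

import com.fasterxml.jackson.core.JsonParseException;
import com.fasterxml.jackson.core.JsonParser;
import com.fasterxml.jackson.core.JsonToken;
import com.fasterxml.jackson.databind.DeserializationContext;
import com.fasterxml.jackson.databind.JsonDeserializer;

// Register with @JsonDeserialize(using = StrictDateDeserializer.class) on someDate.
public class StrictDateDeserializer extends JsonDeserializer<Date> {
    private static final String PATTERN = "yyyy-MM-dd'T'HH:mm:ss'Z'";

    @Override
    public Date deserialize(JsonParser p, DeserializationContext ctxt) throws IOException {
        if (p.getCurrentToken() != JsonToken.VALUE_STRING) {
            throw new JsonParseException(p, "someDate must be a string in format " + PATTERN);
        }
        try {
            return new SimpleDateFormat(PATTERN).parse(p.getText());
        } catch (ParseException e) {
            throw new JsonParseException(p, "Invalid date value: " + p.getText(), e);
        }
    }
}
```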
{ "language": "en", "url": "https://stackoverflow.com/questions/57095666", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Unable to create database tables with Generate Database from Model I am using Entity Framework 4.1 and SQL Express. I have been trying to create models inside the emdx file and from that create the tables inside the a .mdf file. However, I am unable to get that work. However, I am able to get the "Update Model from Database" to work, so there don't seem to be a problem with the connection string. What am I doing wrong? A: I tend to copy the generated SQL script and execute it myself using SQL (Enterprise manager 2008 in my case), gives you better feedback and more control. Haven't really bothered setting it up so that it executes automatically, because EF sometimes makes mistakes in its scripting (e.g. trying to delete every FK twice. Once in the beginning, and then again before the containing table will be deleted). Also, if you made a lot of changes or dropped some tables, sometimes the script isn't 100% compatible with deleting the existing database. I then just drop all FK's and tables (not just what the script tells me to) and then execute the script. But that's just how I like to do it.
{ "language": "en", "url": "https://stackoverflow.com/questions/8490503", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Ignoring non-existent URLs with htmlParse() in R G'day Everyone, I have a very long list of place names (~15,000) that I want to use to look up wiki pages and extract data from them. Unfortunately not all of the places have wiki pages and when htmlParse() hits them it stops the function and returns an error. Error: failed to load HTTP resource I can't go through and remove every place name that creates a non-existent URL so I was wondering if there is a way to get the function to skip places that don't have a wiki page? # Town names to be used towns <- data.frame('recID' = c('G62', 'G63', 'G64', 'G65'), 'state' = c('Queensland', 'South_Australia', 'Victoria', 'Western_Australia'), 'name' = c('Balgal Beach', 'Balhannah', 'Ballan', 'Yunderup'), 'feature' = c('POPL', 'POPL', 'POPL', 'POPL')) towns$state <- as.character(towns$state) towns$name <- sub(' ', '_', as.character(towns$name)) # Function that extract data from wiki wiki.tables <- function(towns) { require(RJSONIO) require(XML) u <- paste('http://en.wikipedia.org/wiki/', sep = '', towns[,1], ',_', towns[,2]) res <- lapply(u, function(x) htmlParse(x)) tabs <- lapply(sapply(res, getNodeSet, path = '//*[@class="infobox vcard"]') , readHTMLTable) return(tabs) } # Now to run the function. Yunderup will produce a URL that # doesn't exist. So this will result in the error. test <- wiki.tables(towns[,c('name', 'state')]) # It works if I don't include the place that produces a non-existent URL. test <- wiki.tables(towns[1:3,c('name', 'state')]) Is there a way to identify these non-existent URLs and either skip them or remove them? Thanks for you help! Cheers, Adam A: You can use the 'url.exists' function from `RCurl` require(RCurl) u <- paste('http://en.wikipedia.org/wiki/', sep = '', towns[,'name'], ',_', towns[,'state']) > sapply(u, url.exists) http://en.wikipedia.org/wiki/Balgal_Beach,_Queensland TRUE http://en.wikipedia.org/wiki/Balhannah,_South_Australia TRUE http://en.wikipedia.org/wiki/Ballan,_Victoria TRUE http://en.wikipedia.org/wiki/Yunderup,_Western_Australia TRUE A: Here's another option that uses the httr package. (BTW: you don't need RJSONIO). Replace your wiki.tables(...) function with this: wiki.tables <- function(towns) { require(httr) require(XML) get.HTML<- function(url){ resp <- GET(url) if (resp$status_code==200) return(htmlParse(content(resp,type="text"))) } u <- paste('http://en.wikipedia.org/wiki/', sep = '', towns[,1], ',_', towns[,2]) res <- lapply(u, get.HTML) res <- res[sapply(res,function(x)!is.null(x))] # remove NULLs tabs <- lapply(sapply(res, getNodeSet, path = '//*[@class="infobox vcard"]') , readHTMLTable) return(tabs) } This runs one GET request and tests the status code. The disadvantage of url.exists(...) is that you have to query every url twice: once to see if it exists, and again to get the data. Incidentally, when I tried your code the Yunderup url does in fact exist ??
{ "language": "en", "url": "https://stackoverflow.com/questions/22085773", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Cannot loop through array comparing text values, undefined result with Protractor I am new to protractor and JavaScript and struggling trying to compare a delimited string to an array. What I am trying to do is to find a list of all elements and then from each element, loop through the arrays to compare the text values to the delimited string but the delimited string values are 'undefined' element.all(by.css('.itemField')).then(function (allFieldItems) { var toCompare= ["AGO", "9"] for (var i = 0; i < toCompare.length; i++) { var valueToCompare = toCompare[i] allFieldItems[i].getText().then(function (text) { if(text != valueToCompare[i]){ console.log("Values don't match") } }.bind( i)) } }) The problem is that the line "if(text != valueToCompare[i]) the "valueToCompare[i]" is always 'undefined' and I am looking for help on how to resolve this without using expect statements. A: You can call getText() on the result of element.all() directly: var toCompare = ["AGO", "9"]; element.all(by.css('.itemField')).getText().then(function (texts) { for (var i = 0; i < toCompare.length; i++) { if (texts[i] != toCompare[i]) { console.log("Values don't match"); } } }); Or, you may even expect it like this (not sure if this is what you are actually trying to do): expect($$('.itemField').getText()).toEqual(toCompare);
{ "language": "en", "url": "https://stackoverflow.com/questions/35809291", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Do I need to make fields in schema.xml to use DataImportHandler I have many different column names and types that I want to import. Do I need to change my schema.xml to have entries for each of these specific field types, or is there a way for the importhandler to generate the schema.xml from the underlying SQL data? A: You need to define the fields you need to import in the schema.xml. The DIH does not autogenerate the fields and it is better to create the fields if the amount the fields are less. Solr also allows you to define Dynamic fields, where the fields need not be explicitly defined but just needs to match the regex pattern. <dynamicField name="*_i" type="integer" indexed="true" stored="true"/> You can also define a catch field with Solr, however the behaviour cannot be control as same analysis would be applied to all the fields.
{ "language": "en", "url": "https://stackoverflow.com/questions/18950750", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: JPA Criteria Query for Left joins I am new to JPA and I have a Left Join scenario. I have my native sql query as below and I am using left join to fetch the complete records from V_MONITORING table for st.id = 10001. I have some null values for id_legislature which also needs to be selected and hence using a left join select distinct mv.id_set, mv.id_travel, mv.id_legislature from V_MONITORING mv left join v_set st on mv.id_set = st.id left join v_travel fg on mv.id_travel = fg.id left join v_legislature gg on mv.id_legislature = gg.id; The same thing I am implementing using the JPA criteria query, I am unable to fetch the null records Below is the code Predicate predicate = cb.conjunction(); Root<MonitoringBE> mvRoot = criteriaQuery.from(MonitoringBE.class); mvRoot.fetch(MonitoringBE.id_set, JoinType.LEFT); mvRoot.fetch(MonitoringBE.id_travel, JoinType.LEFT); mvRoot.fetch(MonitoringBE.id_legislature, JoinType.LEFT); final Path<Object> serieneinsatzterminPath= mvRoot.get(MonitoringBE.id_set); predicate = cb.and(predicate, cb.EqualTo(serieneinsatzterminPath.get(SetBE.GUELTIG_VON_DATUM), startSetDate)); criteriaQuery.multiselect( mvRoot.get(MonitoringBE.id_travel).alias("id_travel"), mvRoot.get(MonitoringBE.id_set).alias("id_set"), mvRoot.get(MonitoringBE.id_legislature).alias("id_legislature")) .distinct(true).where(predicate); Can some one guide me. A: You have to replace the mvRoot.get(MonitoringBE.id_travel) and others in the multiselect-statement with mvRoot.join(MonitoringBE.id_travel, JoinType.LEFT). Otherwise they will end up in a inner join.
{ "language": "en", "url": "https://stackoverflow.com/questions/54328260", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Karate mock API: /__admin/stop endpoint not stopping mock service (standalone jar v1.3.0) I am running a mock service using the karate standalone jar version 1.3.0. Mocking is working fine. However, when I make a GET request to the /__admin/stop endpoint, the process registers that the endpoint has been called, closes the listen port, but does not stop. Execution output and scripts... on startup: java -jar /opt/karate/karate.jar -m src/test/java/mocks/fs/john.feature -p 80 16:17:16.671 [main] INFO com.intuit.karate - Karate version: 1.3.0 16:17:17.315 [main] INFO com.intuit.karate - mock server initialized: src/test/java/mocks/fs/john.feature 16:17:17.425 [main] DEBUG com.intuit.karate.http.HttpServer - server started: aad-9mpcfg3:65080 netstat on listen socket: netstat -na | grep 65080 tcp6 0 0 :::65080 :::* LISTEN submission of curl commands: curl -v http://localhost:65080/john/transactionservice/ping * Trying 127.0.0.1:65080... * TCP_NODELAY set * Connected to localhost (127.0.0.1) port 65080 (#0) > GET /john/transactionservice/ping HTTP/1.1 > Host: localhost:65080 > User-Agent: curl/7.68.0 > Accept: */* > * Mark bundle as not supporting multiuse < HTTP/1.1 200 OK < content-type: application/json < content-length: 49 < server: Armeria/1.18.0 < date: Mon, 28 Nov 2022 16:19:37 GMT < * Connection #0 to host localhost left intact {"message":"this is the JOHN TransactionService"} curl -v http://localhost:65080/__admin/stop * Trying 127.0.0.1:65080... * TCP_NODELAY set * Connected to localhost (127.0.0.1) port 65080 (#0) > GET /__admin/stop HTTP/1.1 > Host: localhost:65080 > User-Agent: curl/7.68.0 > Accept: */* > * Mark bundle as not supporting multiuse < HTTP/1.1 202 Accepted < content-type: text/plain; charset=utf-8 < content-length: 12 < server: Armeria/1.18.0 < date: Mon, 28 Nov 2022 16:19:54 GMT < * Connection #0 to host localhost left intact output after curl commands: java -jar /opt/karate/karate.jar -m src/test/java/mocks/fs/john.feature -p 80 16:17:16.671 [main] INFO com.intuit.karate - Karate version: 1.3.0 16:17:17.315 [main] INFO com.intuit.karate - mock server initialized: src/test/java/mocks/fs/john.feature 16:17:17.425 [main] DEBUG com.intuit.karate.http.HttpServer - server started: aad-9mpcfg3:65080 16:19:37.312 [armeria-common-worker-epoll-2-1] DEBUG com.intuit.karate - scenario matched at line 3: pathMatches('john/transactionservice/ping') && methodIs('get') 16:19:54.603 [armeria-common-worker-epoll-2-2] DEBUG com.intuit.karate.http.HttpServer - received command to stop server: /__admin/stop At this point, netstat on listen port shows socket is closed, but the process continues to run: ps -ef | grep karate matt 15070 15069 1 16:17 pts/3 00:00:03 java -jar karate-1.3.0.jar -m src/test/java/mocks/fs/john.feature -p 65080 This is the test mock that I am using (src/test/java/mocks/fs/john.feature): Feature: JOHN TransactionService mock Scenario: pathMatches('john/transactionservice/ping') && methodIs('get') * def response = {} * set response.message = 'this is the JOHN TransactionService' * def responseStatus = 200 Question: Is there something else I should be doing to make the mock process stop? I think I'm following the guidance at https://github.com/karatelabs/karate/tree/master/karate-netty#stopping Thank you in anticipation of responses.
{ "language": "en", "url": "https://stackoverflow.com/questions/74603710", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Angular2: How to use on the frontend "crypto.pbkdf2Sync" function from node.js There is a function crypto.pbkdf2Sync() in node.js API and I whant to use it in my Angular2 project. I tried to import it and use it. Project compiles with no error, but in browser I get an error: TypeError: webpack_require.i(...) is not a function at createHashSlow (hash.ts:4) Here is the hash.ts module: import { pbkdf2Sync } from 'crypto'; import { CONFIG } from '../config'; export function createHashSlow(password, salt) { return pbkdf2Sync( password, salt, CONFIG.crypto.hash.iterations, CONFIG.crypto.hash.length, 'SHA1' ).toString('base64'); }; What did I do wrong? How to make it work? A: The crypto module is based on OpenSSL that is not available in the browser The crypto module provides cryptographic functionality that includes a set of wrappers for OpenSSL's hash, HMAC, cipher, decipher, sign and verify functions. I suggest to use WebCryptographyApi that is available in all modern browsers. See an example here Angular JS Cryptography. pbkdf2 and iteration
{ "language": "en", "url": "https://stackoverflow.com/questions/42538280", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Variable Never Assigned to C# I'm missing something simple here, I know it. My private variables are "not assigned," but from my (limited) knowledge they are. What am I missing? I've looked at the other similar questions, but I don't quite understand the answers (or even the questions!). Please help. public class Weapon { private string mName; private double mRange; private double mDamage; public string Name { get { return mName; } set { mName=value; } } public double Range { get { return mRange; } set { if (value >= 0) mRange=value; else throw new ArgumentException("Invalid Range"); } } public double Damage { get { return mDamage; } set { if (value >= 0) mDamage=value; else throw new ArgumentException("Invalid Damage"); } } public Weapon(string n, double d) { n = Name; d = Damage; } public Weapon (string n, double r, double d) { n = Name; r = Range; d = Damage; } A: You have your constructor assignments backwards. This: public Weapon(string n, double d) { n = Name; d = Damage; } public Weapon (string n, double r, double d) { n = Name; r = Range; d = Damage; } Should be this: public Weapon(string n, double d) { Name = n; Damage = d; } public Weapon (string n, double r, double d) { Name = n; Range = r; Damage = d; } A: Try this: public class Weapon { private string _name; private double _range; private double _damage; public string Name { get { return _name; } set { _name = value; } } public double Range { get { return _range; } set { if (value >= 0) _range = value; else throw new ArgumentException("Invalid Range"); } } public double Damage { get { return _damage; } set { if (value >= 0) _damage = value; else throw new ArgumentException("Invalid Damage"); } } public Weapon(string n, double d) { Name = n; Damage = d; } public Weapon(string n, double r, double d) { Name = n; Range = r; Damage = d; } }
{ "language": "en", "url": "https://stackoverflow.com/questions/35925146", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: how to avoid ugly composer hell of folders? ... the problem is about use clause position and path, at my projetc/src folder where I run composer require jenssegers/imagehash now I have as ls: composer.json composer.lock sync.php /vendor autoload.php /composer /jenssegers /imagehash composer.json README.md /src ImageHash.php Implementation.php /Implementations Them, at my projects folder I run php sync.php ... ERROR PHP Fatal error: Uncaught Error: Class 'Jenssegers\ImageHash\ImageHash' not found How to fix? ... And how to organize or install correctly all folders with KISS and Convention over configuration principles? At sync.php I have PHP code, use Jenssegers\ImageHash\ImageHash; // after composer update $hasher = new ImageHash; die("\ndebug\n"); A: Add require __DIR__ . '/vendor/autoload.php'; to sync.php. You can read more about it here.
{ "language": "en", "url": "https://stackoverflow.com/questions/41043902", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-3" }
Q: How to use a razor variable mixed with html ID text? I am getting an error because the razor and html is getting confused by the compiler I imagine. <div id="@Model.MyLabelCars" ...> My model variable is: Model.MyLabel The "Cars" is just raw text that should be in the HTML. So say Model.MyLabel's value is "123" the ID should be: id="123Car" How can I seperate the model's variable name and HTML? A: You could use regular string add operator <div id="@(Model.MyLabel + "Car")"></div> Or C# 6's string interpolation. <div id="@($"{Model.MyLabel}Car")"></div> A: What you want is to use the <text></text> pseudo tags <div id="@Model.MyLabel<text>Cars</text>" ...>
{ "language": "en", "url": "https://stackoverflow.com/questions/45888411", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: In Vue router, why would this happen? this$1.pending !== route In most cases, when I use this.$router.push() everything works fine. However, there is one case where I'm doing that that throws an exception. The page changes just fine - it is just that the message vue-router.esm.js?8c4f:2007 Uncaught (in promise) appears in the console. I don't see anything different in the way I call this particular route than any other. The code below is where it fails in the router. this$1.pending1 and route are both objects. I put a breakpoint there and checked the following: JSON.stringify(this$1.pending) === JSON.stringify(route) and that returns true, so they have identical data. In javascript, objects are not considered equal unless they are the same, but I don't know why, in this case, the object is a clone instead of being identical. runQueue(queue, iterator, function () { var postEnterCbs = []; var isValid = function () { return this$1.current === route; }; // wait until async components are resolved before // extracting in-component enter guards var enterGuards = extractEnterGuards(activated, postEnterCbs, isValid); var queue = enterGuards.concat(this$1.router.resolveHooks); runQueue(queue, iterator, function () { if (this$1.pending !== route) { // EXCEPTION ON THIS LINE return abort() } this$1.pending = null; onComplete(route); if (this$1.router.app) { this$1.router.app.$nextTick(function () { postEnterCbs.forEach(function (cb) { cb(); }); }); } }); }); A: This isn't exactly an answer, but here's a discussion of other people who have run into the same thing: https://github.com/vuejs/vue-router/issues/2932 It doesn't sound like there is a resolution, but since it appears harmless, (except for the message in the console), I'm going to not worry about it at the moment.
{ "language": "en", "url": "https://stackoverflow.com/questions/57897511", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: UITableViewCell Contents Are Lost When Scrolling I have a Parser object that contains 15 items from the internet (articles). I am trying to load this items in my TableView. My problem is that I have 8 items visible at start (4 inch retina simulator) but when it starts scrolling, almost all my contents are lost and I cannot see the rest of the 7 items. Not sure what I am doing wrong, this is my code: - (void)viewDidLoad { [super viewDidLoad]; parser = [[Parser alloc] init]; } - (NSInteger)numberOfSectionsInTableView:(UITableView *)tableView { return 1; } - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section { return [[parser items] count]; } - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath { ArticleCell *cell = [tableView dequeueReusableCellWithIdentifier:@"ArticleCell"]; Article *article = [parser items][indexPath.row]; cell.title.text = article.title; cell.date.text = article.date; return cell; } Edit: This is what it shows when scrolling, if I log the data before returning the cell inside cellRowAtIndexPath: 2013-09-29 13:37:05.341 Inter[3685:a0b] Article for index: 0 . Title: Cagliari-Inter, tutte le curiosità ; date: (null) . 2013-09-29 13:37:05.343 Inter[3685:a0b] Article for index: 1 . Title: Cerrone "Oggi abbiamo vinto da squadra" ; date: (null) . 2013-09-29 13:37:05.344 Inter[3685:a0b] Article for index: 2 . Title: Le immagini del 10° "Memorial Prisco" ; date: (null) . 2013-09-29 13:37:05.345 Inter[3685:a0b] Article for index: 3 . Title: Primavera, Udinese-Inter 0-1 ; date: (null) . 2013-09-29 13:37:05.345 Inter[3685:a0b] Article for index: 4 . Title: Tutte le immagini della vigilia di Cagliari-Inter ; date: (null) . 2013-09-29 13:37:05.346 Inter[3685:a0b] Article for index: 5 . Title: Udinese-Inter Primavera, 0-0 a fine primo tempo ; date: (null) . 2013-09-29 13:37:05.347 Inter[3685:a0b] Article for index: 6 . Title: Mazzarri "Rischio buccia di banana in un momento di euforia" ; date: (null) . 2013-09-29 13:37:05.348 Inter[3685:a0b] Article for index: 7 . Title: Inter Campus in Bosnia Erzegovina: passi avanti per i progetti a Sarajevo e Domanovici ; date: (null) . 2013-09-29 13:37:11.053 Inter[3685:a0b] Article for index: 8 . Title: (null) ; date: (null) . 2013-09-29 13:37:28.181 Inter[3685:a0b] Article for index: 9 . Title: (null) ; date: (null) . 2013-09-29 13:37:29.499 Inter[3685:a0b] Article for index: 10 . Title: (null) ; date: (null) . 2013-09-29 13:37:29.591 Inter[3685:a0b] Article for index: 11 . Title: (null) ; date: (null) . 2013-09-29 13:37:35.642 Inter[3685:a0b] Article for index: 1 . Title: (null) ; date: (null) . 2013-09-29 13:37:35.767 Inter[3685:a0b] Article for index: 0 . Title: (null) ; date: (null) . Edit: Complete code here. A: Try changing the property from (nonatomic, weak) NSMutableString *title; NSMutableString *date; to (nonatomic, strong) should solve your problem. A: -(void) ViewDidLoad { [super ViewDidLoad]; parser = [Parser alloc] init]; You Reallocated and reinitialised parser, is there any part of the code that you are reloading data? If not your data store will be empty. A: My suggestions: * *Please check, if your tableView's delegate and dataSource are set up correctly. I haven't seen that in your code. *Please check, what cell class is registered to your table view. I mean, you need to use tableView's registerClass:forCellReuseIdentifier: or registerNib:forCellReuseIdentifier: somewhere. 
Anyway, I'm used to working with the GUI without IB or storyboards (I do everything programmatically), so it's possible that you don't need to do that. Good luck. A: I had a similar issue before. The cause of my problem was that my view controller got released after I scrolled my table view. Can you please check whether you have the same cause? A: Try the following: - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath { static NSString * MyIdentifier = @"ArticleCell"; ArticleCell *cell = [tableView dequeueReusableCellWithIdentifier: MyIdentifier]; Article *article = [parser items][indexPath.row]; cell.title.text = article.title; cell.date.text = article.date; return cell; } A: The problem is that the identifier used should be a static string. So declare a static identifier and assign it: static NSString *cellId = @"cellIdentifier"; ArticleCell *cell = [tableView dequeueReusableCellWithIdentifier: cellId]; Also make sure the storyboard cell has the same identifier value, i.e. here 'cellIdentifier'. A: Check what happens with [[parser items] count] when you scroll. My guess is that it decreases, which decreases the row count in your table. I may be wrong here because I don't know what Parser is (probably your custom class?), but in case it is used to parse an HTTP response, perhaps the item count decreases when the parsing has been done? To check the parser item count, add the following to your tableView:cellForRowAtIndexPath: NSLog(@"Parser item count: %d", parser.items.count);
{ "language": "en", "url": "https://stackoverflow.com/questions/19075847", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How to configure Pomelo.EntityFrameworkCore.MySql in XML configuration? I am trying to set up Entity Framework Core, with a Pomelo provider, in a .NET Framework (4.6.2) project. At runtime I get an error message The Entity Framework provider type 'Pomelo.EntityFrameworkCore.MySql, Pomelo.EntityFrameworkCore.MySql' registered in the application config file for the ADO.NET provider with invariant name 'MySqlConnector' could not be loaded. I am just guessing at the "invariantName" to use. According to Microsoft documentation, invariantName identifies the core ADO.NET provider that this EF provider targets I am also uncertain what class within the DLL is the actual data provider. Here is configuration in Web.config at the moment: <entityFramework> <providers> <provider invariantName="System.Data.SqlClient" type="System.Data.Entity.SqlServer.SqlProviderServices, EntityFramework.SqlServer" /> <provider invariantName="MySqlConnector" type="Pomelo.EntityFrameworkCore.MySql, Pomelo.EntityFrameworkCore.MySql" /> </providers> I've been unable to find documentation for setup in XML config files among the Pomelo pages. Any help would be appreciated.
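No answer is recorded for this question. As a heavily hedged aside (my sketch, not something confirmed by the thread), EF Core providers such as Pomelo are normally configured in code on the DbContext rather than through an <entityFramework> provider entry; the connection string and context name below are placeholders, and the exact UseMySql overload depends on the Pomelo version:

```csharp
using Microsoft.EntityFrameworkCore;

// Sketch: wiring up Pomelo's MySQL provider for EF Core in code.
public class AppDbContext : DbContext
{
    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        optionsBuilder.UseMySql(
            "Server=localhost;Database=mydb;User=root;Password=secret;");
    }
}
```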
{ "language": "en", "url": "https://stackoverflow.com/questions/57614123", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How to create directories using foreach in chef (with attributes) How do I create multiple directories if my array has attributes in it? From the chef directory resource documentation I have tested with sample code below and this works. However, I am having trouble if the array contains attributes, and I am not quite sure how to see what it's doing. %w( /foo /foo/bar /foo/bar/baz ).each do |path| I have printed all of my variables and observed node.default[:user_home] to be /home/chefuser # this creates /home/chefuser/.local directory 'for storing local binaries' do path "#{node.default[:user_home]}/.local" owner 'chefuser' group 'chefuser' mode '0755' action :create end # this does not create /home/chefuser/.local or /home/chefuser/.local/bin (however it doesn't fail) ["#{node.default[:user_home]}/.local", "#{node.default[:user_home]}/.local/bin"].each do |path| directory 'for storing local binaries' do owner 'chefuser' group 'chefuser' mode '0755' action :create end end A: The issue is because the name property of the resource is only one (for storing local binaries), and it does not iterate over the attributes passed as array. For this foreach loop to work, you need to use the loop variable path in the resource. Example of using it as "resource name": [ "#{node.default['user_home']}/.local", "#{node.default['user_home']}/.local/bin" ].each do |path| directory path do owner 'chefuser' group 'chefuser' mode '0755' action :create end end
{ "language": "en", "url": "https://stackoverflow.com/questions/69356059", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Gradle: overwrite plugin repository from init script How can I overwrite all plugin repositories in Gradle via the init script? allprojects { repositories { maven { url "foo" } } } only works for regular repositories, not for plugin repositories. Plugins are registered via the old buildscript{ repositories { }} notation. A: Simply adding: allprojects { buildscript{ repositories{ maven { url "https://foo.com" } } } repositories { maven { url "https://foo.com" } } } is the solution
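For builds that use the newer plugins {} DSL instead of buildscript, a hedged companion sketch is to override the plugin repositories from the init script via settingsEvaluated (verify against your Gradle version):

```groovy
// init.gradle
settingsEvaluated { settings ->
    settings.pluginManagement {
        repositories {
            maven { url "https://foo.com" }
        }
    }
}
```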
{ "language": "en", "url": "https://stackoverflow.com/questions/50683821", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }