Q: ASIFormDataRequest upload video works in iPhone simulator but fails with iPhone for files over 1.5 MB I am using ASIFormDataRequest to upload a video file. I know my code works, because when I upload a video under 1.5 MB (low quality, under 1 minute 40 seconds) on the iPhone, it posts successfully. I was convinced it was a server issue, so we ran every test we could and came back with no success. It wasn't until the other day that I was able to successfully upload a large file within the iPhone simulator. The file was over 5 MB and high quality. So I am forced to believe that it has something to do with the iPhone device itself. I've googled up a storm and keep getting the same response: it's a server issue. Well, it's not. Here is my code:

-(void) post:(NSData *)pVideoData progressBarView:(UIProgressView *)pProgressBarView {
    pVideoData = [NSData dataWithContentsOfFile:m_pAppDelegate.m_pAppData.m_pPhotoURL];
    NSString *urlAddress = @"https://api.sitename.com/action/upload";
    NSURL *url = [NSURL URLWithString:urlAddress];
    ASIFormDataRequest *request = [ASIFormDataRequest requestWithURL:url];
    [request setPostValue:API_KEY forKey:@"api_key"];
    [request setPostValue:m_pAppDelegate.m_pAppData.m_sTokenKey forKey:@"user_token"];
    [request setPostValue:@"somename" forKey:@"tag"];
    [request setPostValue:[AppData getIPAddress] forKey:@"ip"];
    [request setData:pVideoData forKey:@"file"];
    request.numberOfTimesToRetryOnTimeout = 3;
    [request setDelegate:self];
    [request setShowAccurateProgress:YES];
    [request setShouldStreamPostDataFromDisk:YES];
    [request setDidFinishSelector:@selector(postRequestSuccess:)];
    [request setDidFailSelector:@selector(postRequestFailed:)];
    [request setUploadProgressDelegate:pProgressBarView]; // set before starting the request
    [request startAsynchronous];
}

Everything works fine. Please note that I have already tried the following: a synchronous request, and setFile:. But I don't believe that the problem lies in my code or my server.
Here is the data I get while uploading; please note the differences between the two.

SUCCESSFUL VIDEO UPLOAD:

2012-03-13 14:31:06.413 MYAPP[7805:707] [STATUS] Starting asynchronous request <ASIFormDataRequest: 0xb9bc00>
2012-03-13 14:31:06.543 MYAPP[7805:8c07] ==== Building a multipart/form-data body ====
--0xKhTmLbOuNdArY-26765320-0965465416-4395-BA24-AA5CF2B58A66
Content-Disposition: form-data; name="api_key"
*SECRET
--0xKhTmLbOuNdArY-26765320-0916-4395-BA24-AA5CF2B58A66
Content-Disposition: form-data; name="user_token"
Br300MCw4P-l06SFITFCHdiifBHdOCPczX0Y830Yfabox3wLMPs2s7MlWTAS7F2TlwuhL2kiZ7mEXeDWkmDi5g
--0xKhTmLbOuNdArY-26765320-0916-4395-BA24-AA5CF2B58A66
Content-Disposition: form-data; name="tag"
NPSH Live
--0xKhTmLbOuNdArY-26765320-0916-4395-BA24-AA5CF2B58A66
Content-Disposition: form-data; name="ip"
61.197.151.129
--0xKhTmLbOuNdArY-26765320-0916-4395-BA24-AA5CF2B58A66
Content-Disposition: form-data; name="longitude"
-122.357832
--0xKhTmLbOuNdArY-26765320-0916-4395-BA24-AA5CF2B58A66
Content-Disposition: form-data; name="latitude"
47.781178
--0xKhTmLbOuNdArY-26765320-0916-4395-BA24-AA5CF2B58A66
Content-Disposition: form-data; name="file"; filename="file"
Content-Type: application/octet-stream
[920870 bytes of data]
--0xKhTmLbOuNdArY-26765320-0916-4395-BA24-AA5CF2B58A66--
==== End of multipart/form-data body ====
2012-03-13 14:31:06.550 MYAPP[7805:8c07] [CONNECTION] Request <ASIFormDataRequest: 0xb9bc00> will not use a persistent connection
2012-03-13 14:31:06.819 MYAPP[7805:8c07] [THROTTLING] ===Used: 238 bytes of bandwidth in last measurement period===
wait_fences: failed to receive reply: 10004003
2012-03-13 14:31:07.992 MYAPP[7805:8c07] [THROTTLING] ===Used: 327680 bytes of bandwidth in last measurement period===
2012-03-13 14:31:09.053 MYAPP[7805:8c07] [THROTTLING] ===Used: 655360 bytes of bandwidth in last measurement period===
2012-03-13 14:31:09.820 MYAPP[7805:8c07] [STATUS] Request <ASIFormDataRequest: 0xb9bc00> finished uploading data
2012-03-13 14:31:10.070 MYAPP[7805:8c07] [THROTTLING] ===Used: 860720 bytes of bandwidth in last measurement period===
2012-03-13 14:31:11.320 MYAPP[7805:8c07] [THROTTLING] ===Used: 0 bytes of bandwidth in last measurement period===
2012-03-13 14:31:11.835 MYAPP[7805:8c07] [STATUS] Request <ASIFormDataRequest: 0xb9bc00> received response headers
2012-03-13 14:31:11.843 MYAPP[7805:8c07] [STATUS] Request <ASIFormDataRequest: 0xb9bc00> finished downloading data (105 bytes)
2012-03-13 14:31:11.854 MYAPP[7805:8c07] [STATUS] Request finished: <ASIFormDataRequest: 0xb9bc00>

FAILED UPLOAD:

[STATUS] Starting asynchronous request <ASIFormDataRequest: 0xb8e200>
wait_fences: failed to receive reply: 10004003
2012-03-13 14:21:03.093 MYAPP[7805:8c07] ==== Building a multipart/form-data body ====
--0xKhTmLbOuNdArY-E6B28382AE4C-F123-499E-8076-19CC8D41F46E
Content-Disposition: form-data; name="api_key"
*secret
--0xKhTmLbOuNdArY-E6B2AE4C-F123-499E-8076-19CC8D41F46E
Content-Disposition: form-data; name="user_token"
Br300MCw4P-l06SFITFCHdiifBHdOCPczX0Y830Yfabox3wLMPs2s7MlWTAS7F2TlwuhL2kiZ7mEXeDWkmDi5g
--0xKhTmLbOuNdArY-E6B2AE4C-F123-499E-8076-19CC8D41F46E
Content-Disposition: form-data; name="tag"
test
--0xKhTmLbOuNdArY-E6B2AE4C-F123-499E-8076-19CC8D41F46E
Content-Disposition: form-data; name="ip"
81.197.151.699
--0xKhTmLbOuNdArY-E6B2AE4C-F123-499E-8076-19CC8D41F46E
Content-Disposition: form-data; name="longitude"
-122.357874
--0xKhTmLbOuNdArY-E6B2AE4C-F123-499E-8076-19CC8D41F46E
Content-Disposition: form-data; name="latitude"
47.781192
--0xKhTmLbOuNdArY-E6B2AE4C-F123-499E-8076-19CC8D41F46E
Content-Disposition: form-data; name="file"; filename="file"
Content-Type: application/octet-stream
[3280300 bytes of data]
--0xKhTmLbOuNdArY-E6B2AE4C-F123-499E-8076-19CC8D41F46E --
==== End of multipart/form-data body ====
2012-03-13 14:21:03.099 MYAPP[7805:8c07] [CONNECTION] Request <ASIFormDataRequest: 0xb8e200> will not use a persistent connection
2012-03-13 14:21:03.354 MYAPP[7805:8c07] [THROTTLING] ===Used: 2500 bytes of bandwidth in last measurement period===
2012-03-13 14:21:04.605 MYAPP[7805:8c07] [THROTTLING] ===Used: 0 bytes of bandwidth in last measurement period===
2012-03-13 14:21:05.698 MYAPP[7805:8c07] [THROTTLING] ===Used: 458752 bytes of bandwidth in last measurement period===
2012-03-13 14:21:06.744 MYAPP[7805:8c07] [THROTTLING] ===Used: 688128 bytes of bandwidth in last measurement period===
2012-03-13 14:21:07.755 MYAPP[7805:8c07] [THROTTLING] ===Used: 1114112 bytes of bandwidth in last measurement period===
2012-03-13 14:21:08.855 MYAPP[7805:8c07] [THROTTLING] ===Used: 950272 bytes of bandwidth in last measurement period===
2012-03-13 14:21:09.897 MYAPP[7805:8c07] [THROTTLING] ===Used: 458752 bytes of bandwidth in last measurement period===
2012-03-13 14:21:11.050 MYAPP[7805:8c07] [THROTTLING] ===Used: 393216 bytes of bandwidth in last measurement period===
2012-03-13 14:21:12.085 MYAPP[7805:8c07] [THROTTLING] ===Used: 917504 bytes of bandwidth in last measurement period===
2012-03-13 14:21:13.104 MYAPP[7805:8c07] [THROTTLING] ===Used: 458752 bytes of bandwidth in last measurement period===
2012-03-13 14:21:13.858 MYAPP[7805:8c07] [STATUS] Request <ASIFormDataRequest: 0xb8e200> finished uploading data
2012-03-13 14:21:14.354 MYAPP[7805:8c07] [THROTTLING] ===Used: 1123122 bytes of bandwidth in last measurement period===
2012-03-13 14:21:15.014 MYAPP[7805:8c07] [STATUS] Request <ASIFormDataRequest: 0xb8e200> received response headers
2012-03-13 14:21:15.017 MYAPP[7805:8c07] [STATUS] Request <ASIFormDataRequest: 0xb8e200> finished downloading data (76 bytes)
2012-03-13 14:21:15.034 MYAPP[7805:8c07] [STATUS] Request finished: <ASIFormDataRequest: 0xb8e200>

So what do you guys think the problem could be? Another note: I have tried using other wifi connections, as well as turning wifi off and using 3G, and turning 3G off and only using wifi. Do you think there are limits to how much a user can upload on the iPhone?
ONE LAST NOTE: The upload apparently is successful, but I guess the file I upload becomes invalid altogether. My server responds successfully, telling me that everything was set and okay except the video file itself. The video file is invalid. Yet it works in the simulator...

EDIT: I just finished trying out a different method for my uploading, and I get the exact same errors under the exact same circumstances, so I am starting to think that it has something to do with my core files? Maybe it's a memory thing?

NSURL *remoteUrl = [NSURL URLWithString:@"http://api.mysite.com/upload"];
// local movie path, so fileURLWithPath rather than URLWithString
NSURL *myfile = [NSURL fileURLWithPath:m_pAppDelegate.m_pAppData.m_PendingInfo.m_pMovieURL];
NSData *photoImageData = [NSData dataWithContentsOfFile:m_pAppDelegate.m_pAppData.m_PendingInfo.m_pMovieURL];
NSError *error = nil;
// [photoImageData writeToFile:filePath atomically:YES];
AFHTTPClient *httpClient = [[AFHTTPClient alloc] initWithBaseURL:remoteUrl];
NSMutableURLRequest *afRequest = [httpClient multipartFormRequestWithMethod:@"POST"
                                                                       path:@""
                                                                 parameters:nil
                                                  constructingBodyWithBlock:^(id <AFMultipartFormData> formData) {
    [formData appendPartWithFormData:[API_KEY dataUsingEncoding:NSUTF8StringEncoding] name:@"api_key"];
    [formData appendPartWithFormData:[m_pAppDelegate.m_pAppData.m_sTokenKey dataUsingEncoding:NSUTF8StringEncoding] name:@"user_token"];
    [formData appendPartWithFileData:photoImageData name:@"file" fileName:@"test.mov" mimeType:@"video/quicktime"];
    // note: this appends the file a second time under the same part name
    [formData appendPartWithFileURL:myfile name:@"file" error:&error];
}];
AFHTTPRequestOperation *operation = [[AFHTTPRequestOperation alloc] initWithRequest:afRequest];
[operation setUploadProgressBlock:^(NSUInteger bytesWritten, long long totalBytesWritten, long long totalBytesExpectedToWrite) {
    NSLog(@"Sent %lld of %lld bytes", totalBytesWritten, totalBytesExpectedToWrite);
}];
[operation setCompletionBlock:^{
    NSLog(@"%@", operation.responseString); //Gives a very scary warning
}];
[operation start];

A: You need to increase the
timeOutSeconds of your request. The default timeout of a request is 60 seconds. You might increase it to 10 minutes (600 seconds):

[request setTimeOutSeconds:600];

You also need to check the maximum upload file size specified in your PHP configuration. Please go through this link; it might help you: http://drupal.org/node/97193

A: Turned out to be a problem in my .htpconfig file. I had a hard limit set in it. Adjusting it worked like a charm!
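If the server side is PHP, as the first answer suggests checking, the relevant limits live in php.ini. A sketch with illustrative values only — the 64M and 600 figures are examples, not taken from this question:

```ini
; php.ini (example values) — both limits must cover the whole request;
; post_max_size should be at least as large as upload_max_filesize
upload_max_filesize = 64M
post_max_size = 64M
; give slow cellular uploads time to finish
max_execution_time = 600
```

After changing these, the web server (or PHP-FPM) typically needs a reload for the new limits to take effect.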
{ "language": "en", "url": "https://stackoverflow.com/questions/9692769", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Is case-insensitive comparison of strings possible for a 'where' clause in vuex ORM? While filtering data from the store, I need to check whether the 'name' field of the data is 'stackoverflow'. So I use:

data() {
  return { myname: 'stackoverflow' };
},
computed: {
  user() {
    return this.$store.getters['entities/users/query']().where('name', this.myname).first();
  }
}

It works perfectly if the name is given as 'stackoverflow', but not for 'StackOverflow'. Can the 'where' clause be modified so that it checks case-insensitively?

A: I have never used vuex-orm, but I think this should work, according to the docs: https://vuex-orm.github.io/vuex-orm/guide/store/retrieving-data.html#simple-where-clauses

computed: {
  user() {
    return this.$store.getters['entities/users/query']()
      .where(user => user.name.toUpperCase() === this.myname.toUpperCase())
      .first();
  }
}

Or even:

computed: {
  user() {
    return this.$store.getters['entities/users/query']()
      .where('name', value => value.toUpperCase() === this.myname.toUpperCase())
      .first();
  }
}
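The comparison the answer relies on can be sketched and tested independently of vuex-orm; `nameMatches` below is a hypothetical helper name of mine, not part of the library:

```javascript
// Case-insensitive equality check, mirroring the where() callbacks above.
function nameMatches(value, target) {
  return value.toUpperCase() === target.toUpperCase();
}

console.log(nameMatches('StackOverflow', 'stackoverflow')); // true
console.log(nameMatches('stackexchange', 'stackoverflow')); // false
```

For locale-aware matching (accents, etc.), `String.prototype.localeCompare` with `{ sensitivity: 'accent' }` is an alternative to the `toUpperCase` trick.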
{ "language": "en", "url": "https://stackoverflow.com/questions/56289709", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: mysql query array counter Apologies if I have the terminology wrong. I have a for loop in PHP which runs a mysql query...

for ($i = 0; $i < count($user_id_pc); $i++) {
    $query2 = "SELECT job_title, job_info FROM job_description WHERE postcode_ss = '$user_id_pc[$i]'";
    $job_data = mysqli_query($dbc, $query2);
    $job_results = array();
    while ($row = mysqli_fetch_array($job_data)) {
        array_push($job_results, $row);
    }
}

The results that are given when I insert a...

print_r($job_results);

...on screen -> Array(). If I change the query from $user_id_pc[$i] to $user_id_pc[14], for example, I receive one set of results. If I include this code after the query and inside the for loop:

echo $i;
echo $user_id_pc[$i] . "<br>";

I receive the number the counter $i is on, followed by the data inside the array for that counter position. I am not sure why the array $job_results is empty from the query using the counter $i but not if I enter the number manually? Is it a special character I need to escape? The full code:

<?php
print_r($user_id_pc);
//Select all columns to see if user has a profile
$query = "SELECT * FROM user_profile WHERE user_id = '" . $_SESSION['user_id'] . "'";
//If the user has an empty profile direct them to the home page
$data = mysqli_query($dbc, $query);
if (mysqli_num_rows($data) == 0) {
    echo '<br><div class="alert alert-warning" role="alert"><h3>Your appear not to be logged on please visit the<a href="index.php"> home page</a> to log on or register.
<em>Thank you.</em></h3></div>';
}
//Select data from user and assign them to variables
else {
    $data = mysqli_query($dbc, $query);
    if (mysqli_num_rows($data) == 1) {
        $row = mysqli_fetch_array($data);
        $cw_job_name = $row['job_description'];
        $cw_rate = $row['hourly_rate'];
        $job_mileage = $row['mileage'];
        $job_postcode = $row['postcode'];
        $response_id = $row['user_profile_id'];
    }
}
for ($i = 0; $i < count($user_id_pc); $i++) {
    $query2 = "SELECT job_title, job_info FROM job_description WHERE postcode_ss = '{$user_id_pc[$i]}'";
    $job_data = mysqli_query($dbc, $query2);
    $job_results = array();
    while ($row = mysqli_fetch_array($job_data)) {
        array_push($job_results, $row);
    }
    echo $i;
    ?>
    <br>
    <?php
}
print($query2);
print $user_id_pc[$i];
?>

A: This is primarily a syntax error; the correct syntax should be:

$query2 = "SELECT job_title, job_info FROM job_description WHERE postcode_ss = '{$user_id_pc[$i]}'";

Note that this is correct syntax but still wrong!! For two reasons: the first is that it's almost always better (faster, more efficient, takes fewer resources) to do a join, a subquery, or a simple IN(array) type query rather than to loop and query multiple times. The second issue is that passing parameters in this manner leaves you vulnerable to SQL injection. You should use prepared statements. The correct way:

if (count($user_id_pc)) {
    $placeholders = implode(',', array_fill(0, count($user_id_pc), '?'));
    $stmt = mysqli_prepare($dbc, "SELECT job_title, job_info FROM job_description WHERE postcode_ss IN ($placeholders)");
    mysqli_stmt_bind_param($stmt, str_repeat('s', count($user_id_pc)), ...$user_id_pc); // PHP 5.6+ argument unpacking
    mysqli_stmt_execute($stmt);
}

Note that the for loop has been replaced by a simple if.

A: You have to check the query variable; instead of:

$query2 = "SELECT job_title, job_info FROM job_description WHERE postcode_ss = '$user_id_pc[$i]'";

have you tried this:

$query2 = "SELECT job_title, job_info FROM job_description WHERE postcode_ss = '" . $user_id_pc[$i] . 
"' "; And another thing, try something different like this: while ($row = mysqli_fetch_array($job_data)) { $job_results[] = array("job_title" => $row["job_title"], "job_info" => $row["job_info"); } Then try to print the values. A: Sorry but I like foreach(), so your working code is: <?php // To store the result $job_results = []; foreach($user_id_pc as $id ){ // selecting matching rows $query2 ="SELECT job_title, job_info FROM job_description WHERE postcode_ss = '".$id."'"; $job_data = mysqli_query($dbc, $query2); // checking if query fetch any result if(mysqli_num_rows($job_data)){ // fetching the result while ($row = mysqli_fetch_array($job_data)){ // storing resulting row $job_results[] = $row; } } } // to print the result var_dump($job_results);
{ "language": "en", "url": "https://stackoverflow.com/questions/39783391", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Open file for appending with is-new indication I need to append data to a file, but if the file does not exist I need to add a header before appending. If I open the file with FileMode.Append, I cannot see a way to work out whether the file is new or not. If I open the file with:

FileStream file;
bool isNew;
try {
    file = File.Open(path, FileMode.CreateNew);
    isNew = true;
} catch (IOException ex) {
    file = File.Open(path, FileMode.Append);
    isNew = false;
}

I run the risk of another process deleting the file between the two open calls and not detecting the creation of the new file. What is the recommended way of opening for appending and detecting whether it is a create or an append?

A: Does this do what you need?

try {
    var file = File.Open(path, FileMode.OpenOrCreate, FileAccess.ReadWrite);
    if (file.Length == 0) {
        // do header stuff
    }
    // do the rest
} catch (IOException ex) {
    // handle io ex.
}

A: Try something like this:

if (!File.Exists(path)) {
    file = File.Open(path, FileMode.CreateNew);
    isNew = true;
    return;
}
// otherwise append to existing file
file = File.Open(path, FileMode.Append);
isNew = false;
{ "language": "en", "url": "https://stackoverflow.com/questions/44306018", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Is there any way to return the elements that do not pass a filter in a stream? I have a list of string elements and I want to apply a filter to divide the list into two sub-lists: elements that start with "el", and the others. Is there any way to divide the list using just one filter?

List<String> elements = List.of("e1", "el2", "el3", "4 el", "5 el");

example:

elements.stream()
    .filter(s -> s.startsWith("el"))
    .collect( /* something that holds both the elements that pass the filter and the elements that do not */ );

A: You can do it like so:

* use groupingBy and create a Map<String, List<String>>
* if the string starts with "el", group using the "els" key
* otherwise, use the "others" key

List<String> elements = List.of("e1", "el2", "el3", "4 el", "5 el");
Map<String, List<String>> map = elements.stream().collect(
        Collectors.groupingBy(str -> str.startsWith("el") ? "els" : "others"));
map.entrySet().forEach(System.out::println);

prints:

els=[el2, el3]
others=[e1, 4 el, 5 el]

A: Since you have a threshold condition which determines whether an item goes to group A or group B, I suggest using partitioningBy for this. For example:

List<String> elements = List.of("e1", "el2", "el3", "4 el", "5 el");
//create the partition map
Map<Boolean, List<String>> partitionMap = elements.stream()
    .collect(Collectors.partitioningBy(s -> s.startsWith("el")));
//getting the partitions
List<String> startsWithEl = partitionMap.get(true);
List<String> notStartingWithEl = partitionMap.get(false);

Now each list will hold its partition.
{ "language": "en", "url": "https://stackoverflow.com/questions/69166471", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: IIS redirect rules to multiple 500 error pages I need to return a different static page for the 500 response type, depending on the language segment of the request (fr, es, pt, etc.). Is it possible to have one static page for each language (500-fr.aspx, 500-es.aspx, 500-pt.aspx) and, depending on the language segment, return the appropriate static page? E.g.: mysite.com/fr/somepage is broken and returns a 500 response, and then IIS redirects the user to the 500-fr.aspx page. Thank you

A: There is a very simple and fast way that does not even require you to manually write URL rewriting rules. In the Error Pages module, IIS can serve different pages for the same error according to the client's language; you can see that "Try to return the error file in the client language" is checked, and if you enter a file path, error pages for the different languages already exist in the different language folders. So if you want to respond with a custom page, just change those pages, or create new error pages, store them in the different language folders, and change the file name in the file path.
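For reference, the Error Pages setting the answer describes corresponds to the httpErrors section in web.config or applicationHost.config. A sketch only — the file name and directory layout here are illustrative, assuming per-language copies of 500.htm live in language-named subfolders:

```xml
<configuration>
  <system.webServer>
    <httpErrors errorMode="Custom">
      <remove statusCode="500" />
      <!-- prefixLanguageFilePath makes IIS insert a folder named after the
           client's language before the file name, e.g.
           C:\inetpub\custerr\fr-FR\500.htm for a French client -->
      <error statusCode="500"
             prefixLanguageFilePath="true"
             path="C:\inetpub\custerr\500.htm"
             responseMode="File" />
    </httpErrors>
  </system.webServer>
</configuration>
```

This matches languages via the request's Accept-Language header rather than a /fr/ URL segment; if the URL segment must drive the choice, URL Rewrite rules would still be needed.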
{ "language": "en", "url": "https://stackoverflow.com/questions/65217804", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: I am converting a python/kivy project to an apk in Google Colab and some extensions are not working, so which extensions should I use for audio/font files? I use a jpg file for an image, but when I run my app on the phone it won't open, while it works with a PNG file. For audio I use mp3 and OGG files and my app won't open; when I use a wav file my app works, but the sound doesn't play. And for font files, neither the TTF nor the OTF extension works, i.e. my app won't open.

A: I solved my problem a bit: the wav file now works correctly because I added the wav extension in the buildozer.spec file that Google Colab created after compiling my files, but the problem with jpg is still there, and with mp3 also.
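The change the answer describes is the source.include_exts line in buildozer.spec. A sketch — the exact extension list is whatever the project actually ships, so treat the one below as an example:

```ini
[app]
# extensions buildozer packages into the APK; any extension missing here
# (e.g. wav, jpg, ttf) is silently left out and never reaches the device
source.include_exts = py,kv,atlas,png,jpg,wav,mp3,ogg,ttf,otf
```

After editing the spec, rebuild the APK so the newly listed files are actually packaged.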
{ "language": "en", "url": "https://stackoverflow.com/questions/72980383", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Rails variables not working in JavaScript code In one of my Rails partials, in which I have access to the Folder model, some JavaScript is strangely enough not working. The Folder model has instances of the Submission model which belong to it. Example JSON for a folder instance looks like this:

{"created_at":"2013-09-24T18:55:54Z","id":2,"parent_id":null,"title":"Tumblr Drafts","updated_at":"2013-09-24T18:55:54Z","user_id":1,"submissions":[{"content":"This is a test.","created_at":"2013-09-30T23:00:00Z","folder_id":2,"id":93,"parent_id":null,"title":null,"updated_at":"2013-09-30T23:00:00Z","user_id":1}],"children":[]}

As you can see, the submissions are included in the JSON. The controller action looks like this:

def show
  respond_to do |format|
    format.html
    format.js
    format.json { render :json => @folder.as_json(:include => [:submissions, :children]) }
  end
end

However, when I try to run this JavaScript, it doesn't produce a console.log:

<script type="text/javascript">
  var wordCount = 0;
  <% @folder.submissions.each do |submission| %>
    var words = <%= submission.content %>.split(' ');
    wordCount += words.length;
  <% end %>
  console.log(wordCount);
</script>

I use the @folder variable everywhere else in the partial in question, and it works. I can even output the titles of all the submissions into <p> tags. Is it maybe because the content field can be left empty, so sometimes it returns null?

A: Your problem is right here:

var words = <%= submission.content %>.split(' ');

That will dump your submission.content value into your JavaScript without any quoting, so you'll end up saying things like:

var words = blah blah blah.split(' ');

and that's not valid JavaScript. You need to quote that string and properly escape it:

// You may or may not want submission.content HTML encoded so
// you might want submission.content.html_safe instead.
var words = '<%=j submission.content %>'.split(' ');

Or better, just do it all in Ruby rather than sending a bunch of throw-away text to the browser:

<script type="text/javascript">
  var wordCount = <%= @folder.submissions.sum { |s| s.content.to_s.split.length } %>;
  console.log(wordCount);
</script>

That assumes that your JavaScript .split(' ') is really intended to split on /\s+/ rather than just single spaces, of course.
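The server-side word count in the answer above can be sketched as plain Ruby, outside ERB; Submission here is a stand-in struct, not the Rails model, and total_word_count is a hypothetical helper name:

```ruby
# Stand-in for the Rails Submission model, just enough for the word count.
Submission = Struct.new(:content)

def total_word_count(submissions)
  # to_s guards against null content, which the asker says can happen;
  # split with no argument splits on runs of whitespace, like /\s+/
  submissions.sum { |s| s.content.to_s.split.length }
end

puts total_word_count([Submission.new("This is a test."), Submission.new(nil)]) # => 4
```

Computing the count in Ruby and emitting only the final number keeps quoting and escaping problems out of the generated JavaScript entirely.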
{ "language": "en", "url": "https://stackoverflow.com/questions/19203667", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: rebar3 doesn't compile erlydtl file I have a problem where my rebar3 does not compile the erlydtl files (.dtl), and I have been looking around for a while but found no solution. Previously it worked, but after I upgraded Erlang to the newest version, the *.dtl files are not compiled, and sync:on_sync does not work either. Does anyone here have any idea about that? Here is my rebar.conf file:

{erl_opts, [debug_info]}.

{deps, [
    {sync, {git, "https://github.com/rustyio/sync.git", {branch, "master"}}},
    {erlydtl, {git, "https://github.com/erlydtl/erlydtl.git", {branch, "master"}}},
    {jsx, {git, "https://github.com/talentdeficit/jsx.git", {branch, "main"}}},
    {epgsql, {git, "https://github.com/epgsql/epgsql.git"}},
    {gen_smtp, {git, "https://github.com/gen-smtp/gen_smtp.git", {branch, "master"}}},
    {cowboy, {git, "https://github.com/ninenines/cowboy.git", {branch, "master"}}},
    poolboy
]}.

{erlydtl_opts, [
    {doc_root, "src/templates"}
    %{outdir, "ebin"},
    %{compiler_options, [report, return, debug_info]},
    %{source_ext, ".dtl"}
    %% {module_ext, "_view"}
]}.

{provider_hooks, [
    {pre_hooks, [{compile, {erlydtl, compile}}]}
]}.

{relx, [
    {release, {core, "0.1.0"}, [
        core,
        poolboy,
        erlydtl,
        account,
        realestate,
        realestate_admin,
        sync,
        sasl,
        epgsql,
        jsx,
        gen_smtp
    ]},
    {mode, dev},
    %% automatically picked up if the files
    %% exist but can be set manually, which
    %% is required if the names aren't exactly
    %% sys.config and vm.args
    {sys_config, "./config/sys.config"},
    {vm_args, "./config/vm.args"}
    %% the .src form of the configuration files do
    %% not require setting RELX_REPLACE_OS_VARS
    %% {sys_config_src, "./config/sys.config.src"},
    %% {vm_args_src, "./config/vm.args.src"}
]}.

{profiles, [
    {prod, [
        {relx, [
            %% prod is the default mode when prod
            %% profile is used, so does not have
            %% to be explicitly included like this
            {mode, prod}
            %% use minimal mode to exclude ERTS
            %% {mode, minimal}
        ]}
    ]}
]}.

{dialyzer, [
    %% Warns the undefined type or unknown function
    {warnings, [unknown]}
]}.
{xref_checks, [
    %% enable most checks, but avoid 'unused calls' which is often
    %% very verbose
    undefined_function_calls,
    undefined_functions,
    locals_not_used,
    deprecated_function_calls,
    deprecated_functions
]}.

{plugins, [
    % {rebar3_auto, {git, "https://github.com/xuchaoqian/rebar3-auto.git", {branch, "master"}}}
]}.

Thanks,
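One thing worth checking against the config above (an observation based on rebar3's documented hook format, not a confirmed fix for this project): rebar3's provider_hooks entries are keyed pre/post, not pre_hooks, and compiling .dtl templates under rebar3 is commonly done via an erlydtl plugin. A sketch:

```erlang
%% rebar.config (sketch) — the plugin source below is the commonly used
%% community plugin; verify it matches your toolchain before relying on it.
{plugins, [
    {rebar3_erlydtl_plugin, ".*",
     {git, "https://github.com/tsloughter/rebar3_erlydtl_plugin.git", {branch, "master"}}}
]}.

{provider_hooks, [
    {pre, [{compile, {erlydtl, compile}}]}   %% note: 'pre', not 'pre_hooks'
]}.
```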
{ "language": "en", "url": "https://stackoverflow.com/questions/68012519", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Converting .pb to coreml I'm new to coreml and I'm trying to convert my tiny-yolov3 model to a coreml model, but I keep getting this error:

[SSAConverter] [78/170] Converting op type: 'Conv2D', name: 'yolov3-tiny/convolutional1/Conv2D', output_shape: (-1, 16, 416, 416).
Traceback (most recent call last):
  File "/Users/ryanyang/Desktop/DW2TF/env/lib/python3.7/site-packages/coremltools/converters/tensorflow/_tf_converter.py", line 95, in convert
    optional_inputs=optional_inputs)
  File "/Users/ryanyang/Desktop/DW2TF/env/lib/python3.7/site-packages/coremltools/converters/nnssa/coreml/ssa_converter.py", line 149, in ssa_convert
    converter.convert()
  File "/Users/ryanyang/Desktop/DW2TF/env/lib/python3.7/site-packages/coremltools/converters/nnssa/coreml/ssa_converter.py", line 567, in convert
    convert_func(node)
  File "/Users/ryanyang/Desktop/DW2TF/env/lib/python3.7/site-packages/coremltools/converters/nnssa/coreml/ssa_converter.py", line 1493, in _convert_conv2d
    '[SSAConverter] Dynamic weights in convolution not implemented')
NotImplementedError: [SSAConverter] Dynamic weights in convolution not implemented

UPDATE: I have added minimum_ios_deployment_target="13" to the converter, but now it is giving me a different error.

My code is:

import tfcoreml as tf_converter

tf_converter.convert(tf_model_path='data/yolov3-tiny-one-class.pb',
                     mlmodel_path='TinyYOLO.mlmodel',
                     output_feature_names=['yolov3-tiny/convolutional13/BiasAdd'],
                     input_name_shape_dict={'yolov3-tiny/net1': [1, 416, 416, 3]},
                     image_input_names=['yolov3-tiny/net1'],
                     image_scale=1 / 255.0,
                     minimum_ios_deployment_target="13")

and my visualizer output for my pb model from tf-coreml is:

---------------------------------------------------------------------------------------------------------------------------------------------
0: op name = import/yolov3-tiny/net1, op type = ( Placeholder ), inputs = , outputs = import/yolov3-tiny/net1:0
@input shapes:
@output shapes:
name = import/yolov3-tiny/net1:0 : (?, 416, 416, 3)
---------------------------------------------------------------------------------------------------------------------------------------------
1: op name = import/yolov3-tiny/convolutional1/kernel/Initializer/random_uniform/shape, op type = ( Const ), inputs = , outputs = import/yolov3-tiny/convolutional1/kernel/Initializer/random_uniform/shape:0
@input shapes:
@output shapes:
name = import/yolov3-tiny/convolutional1/kernel/Initializer/random_uniform/shape:0 : (4,)
---------------------------------------------------------------------------------------------------------------------------------------------
2: op name = import/yolov3-tiny/convolutional1/kernel/Initializer/random_uniform/min, op type = ( Const ), inputs = , outputs = import/yolov3-tiny/convolutional1/kernel/Initializer/random_uniform/min:0
@input shapes:
@output shapes:
name = import/yolov3-tiny/convolutional1/kernel/Initializer/random_uniform/min:0 : ()
---------------------------------------------------------------------------------------------------------------------------------------------
3: op name = import/yolov3-tiny/convolutional1/kernel/Initializer/random_uniform/max, op type = ( Const ), inputs = , outputs = import/yolov3-tiny/convolutional1/kernel/Initializer/random_uniform/max:0
@input shapes:
@output shapes:
name = import/yolov3-tiny/convolutional1/kernel/Initializer/random_uniform/max:0 : ()
---------------------------------------------------------------------------------------------------------------------------------------------
4: op name = import/yolov3-tiny/convolutional1/kernel/Initializer/random_uniform/RandomUniform, op type = ( RandomUniform ), inputs = import/yolov3-tiny/convolutional1/kernel/Initializer/random_uniform/shape:0, outputs = import/yolov3-tiny/convolutional1/kernel/Initializer/random_uniform/RandomUniform:0
@input shapes:
name = import/yolov3-tiny/convolutional1/kernel/Initializer/random_uniform/shape:0 : (4,)
@output shapes:
name = import/yolov3-tiny/convolutional1/kernel/Initializer/random_uniform/RandomUniform:0 : (3, 3, 3, 16)
---------------------------------------------------------------------------------------------------------------------------------------------
......
......
358: op name = import/yolov3-tiny/convolutional12/dilation_rate, op type = ( Const ), inputs = , outputs = import/yolov3-tiny/convolutional12/dilation_rate:0
@input shapes:
@output shapes:
name = import/yolov3-tiny/convolutional12/dilation_rate:0 : (2,)
---------------------------------------------------------------------------------------------------------------------------------------------
359: op name = import/yolov3-tiny/convolutional12/Conv2D, op type = ( Conv2D ), inputs = import/yolov3-tiny/route2:0, import/yolov3-tiny/convolutional12/kernel/read:0, outputs = import/yolov3-tiny/convolutional12/Conv2D:0
@input shapes:
name = import/yolov3-tiny/route2:0 : (?, 26, 26, 384)
name = import/yolov3-tiny/convolutional12/kernel/read:0 : (3, 3, 384, 256)
@output shapes:
name = import/yolov3-tiny/convolutional12/Conv2D:0 : (?, 26, 26, 256)
---------------------------------------------------------------------------------------------------------------------------------------------
360: op name = import/yolov3-tiny/convolutional12/BatchNorm/gamma/Initializer/ones, op type = ( Const ), inputs = , outputs = import/yolov3-tiny/convolutional12/BatchNorm/gamma/Initializer/ones:0
@input shapes:
@output shapes:
name = import/yolov3-tiny/convolutional12/BatchNorm/gamma/Initializer/ones:0 : (256,)
---------------------------------------------------------------------------------------------------------------------------------------------
361: op name = import/yolov3-tiny/convolutional12/BatchNorm/gamma, op type = ( VariableV2 ), inputs = , outputs = import/yolov3-tiny/convolutional12/BatchNorm/gamma:0
@input shapes:
@output shapes:
name = import/yolov3-tiny/convolutional12/BatchNorm/gamma:0 : (256,)
---------------------------------------------------------------------------------------------------------------------------------------------
362: op name = import/yolov3-tiny/convolutional12/BatchNorm/gamma/Assign, op type = ( Assign ), inputs = import/yolov3-tiny/convolutional12/BatchNorm/gamma:0, import/yolov3-tiny/convolutional12/BatchNorm/gamma/Initializer/ones:0, outputs = import/yolov3-tiny/convolutional12/BatchNorm/gamma/Assign:0
@input shapes:
name = import/yolov3-tiny/convolutional12/BatchNorm/gamma:0 : (256,)
name = import/yolov3-tiny/convolutional12/BatchNorm/gamma/Initializer/ones:0 : (256,)
@output shapes:
name = import/yolov3-tiny/convolutional12/BatchNorm/gamma/Assign:0 : (256,)
---------------------------------------------------------------------------------------------------------------------------------------------
363: op name = import/yolov3-tiny/convolutional12/BatchNorm/gamma/read, op type = ( Identity ), inputs = import/yolov3-tiny/convolutional12/BatchNorm/gamma:0, outputs = import/yolov3-tiny/convolutional12/BatchNorm/gamma/read:0
@input shapes:
name = import/yolov3-tiny/convolutional12/BatchNorm/gamma:0 : (256,)
@output shapes:
name = import/yolov3-tiny/convolutional12/BatchNorm/gamma/read:0 : (256,)
---------------------------------------------------------------------------------------------------------------------------------------------
364: op name = import/yolov3-tiny/convolutional12/BatchNorm/beta/Initializer/zeros, op type = ( Const ), inputs = , outputs = import/yolov3-tiny/convolutional12/BatchNorm/beta/Initializer/zeros:0
@input shapes:
@output shapes:
name = import/yolov3-tiny/convolutional12/BatchNorm/beta/Initializer/zeros:0 : (256,)
---------------------------------------------------------------------------------------------------------------------------------------------
365: op name = import/yolov3-tiny/convolutional12/BatchNorm/beta, op type = ( VariableV2 ), inputs = , outputs = import/yolov3-tiny/convolutional12/BatchNorm/beta:0
@input shapes:
@output shapes:
name = import/yolov3-tiny/convolutional12/BatchNorm/beta:0 : (256,)
---------------------------------------------------------------------------------------------------------------------------------------------
366: op name = import/yolov3-tiny/convolutional12/BatchNorm/beta/Assign, op type = ( Assign ), inputs = import/yolov3-tiny/convolutional12/BatchNorm/beta:0, import/yolov3-tiny/convolutional12/BatchNorm/beta/Initializer/zeros:0, outputs = import/yolov3-tiny/convolutional12/BatchNorm/beta/Assign:0
@input shapes:
name = import/yolov3-tiny/convolutional12/BatchNorm/beta:0 : (256,)
name = import/yolov3-tiny/convolutional12/BatchNorm/beta/Initializer/zeros:0 : (256,)
@output shapes:
name = import/yolov3-tiny/convolutional12/BatchNorm/beta/Assign:0 : (256,)
---------------------------------------------------------------------------------------------------------------------------------------------
367: op name = import/yolov3-tiny/convolutional12/BatchNorm/beta/read, op type = ( Identity ), inputs = import/yolov3-tiny/convolutional12/BatchNorm/beta:0, outputs = import/yolov3-tiny/convolutional12/BatchNorm/beta/read:0
@input shapes:
name = import/yolov3-tiny/convolutional12/BatchNorm/beta:0 : (256,)
@output shapes:
name = import/yolov3-tiny/convolutional12/BatchNorm/beta/read:0 : (256,)
---------------------------------------------------------------------------------------------------------------------------------------------
368: op name = import/yolov3-tiny/convolutional12/BatchNorm/moving_mean/Initializer/zeros, op type = ( Const ), inputs = , outputs = import/yolov3-tiny/convolutional12/BatchNorm/moving_mean/Initializer/zeros:0
@input shapes:
@output shapes:
name = import/yolov3-tiny/convolutional12/BatchNorm/moving_mean/Initializer/zeros:0 : (256,)
---------------------------------------------------------------------------------------------------------------------------------------------
369: op name = import/yolov3-tiny/convolutional12/BatchNorm/moving_mean, op type = ( VariableV2 ), inputs = , outputs =
import/yolov3-tiny/convolutional12/BatchNorm/moving_mean:0 @input shapes: @output shapes: name = import/yolov3-tiny/convolutional12/BatchNorm/moving_mean:0 : (256,) --------------------------------------------------------------------------------------------------------------------------------------------- 370: op name = import/yolov3-tiny/convolutional12/BatchNorm/moving_mean/Assign, op type = ( Assign ), inputs = import/yolov3-tiny/convolutional12/BatchNorm/moving_mean:0, import/yolov3-tiny/convolutional12/BatchNorm/moving_mean/Initializer/zeros:0, outputs = import/yolov3-tiny/convolutional12/BatchNorm/moving_mean/Assign:0 @input shapes: name = import/yolov3-tiny/convolutional12/BatchNorm/moving_mean:0 : (256,) name = import/yolov3-tiny/convolutional12/BatchNorm/moving_mean/Initializer/zeros:0 : (256,) @output shapes: name = import/yolov3-tiny/convolutional12/BatchNorm/moving_mean/Assign:0 : (256,) --------------------------------------------------------------------------------------------------------------------------------------------- 371: op name = import/yolov3-tiny/convolutional12/BatchNorm/moving_mean/read, op type = ( Identity ), inputs = import/yolov3-tiny/convolutional12/BatchNorm/moving_mean:0, outputs = import/yolov3-tiny/convolutional12/BatchNorm/moving_mean/read:0 @input shapes: name = import/yolov3-tiny/convolutional12/BatchNorm/moving_mean:0 : (256,) @output shapes: name = import/yolov3-tiny/convolutional12/BatchNorm/moving_mean/read:0 : (256,) --------------------------------------------------------------------------------------------------------------------------------------------- 372: op name = import/yolov3-tiny/convolutional12/BatchNorm/moving_variance/Initializer/ones, op type = ( Const ), inputs = , outputs = import/yolov3-tiny/convolutional12/BatchNorm/moving_variance/Initializer/ones:0 @input shapes: @output shapes: name = import/yolov3-tiny/convolutional12/BatchNorm/moving_variance/Initializer/ones:0 : (256,) 
--------------------------------------------------------------------------------------------------------------------------------------------- 373: op name = import/yolov3-tiny/convolutional12/BatchNorm/moving_variance, op type = ( VariableV2 ), inputs = , outputs = import/yolov3-tiny/convolutional12/BatchNorm/moving_variance:0 @input shapes: @output shapes: name = import/yolov3-tiny/convolutional12/BatchNorm/moving_variance:0 : (256,) --------------------------------------------------------------------------------------------------------------------------------------------- 374: op name = import/yolov3-tiny/convolutional12/BatchNorm/moving_variance/Assign, op type = ( Assign ), inputs = import/yolov3-tiny/convolutional12/BatchNorm/moving_variance:0, import/yolov3-tiny/convolutional12/BatchNorm/moving_variance/Initializer/ones:0, outputs = import/yolov3-tiny/convolutional12/BatchNorm/moving_variance/Assign:0 @input shapes: name = import/yolov3-tiny/convolutional12/BatchNorm/moving_variance:0 : (256,) name = import/yolov3-tiny/convolutional12/BatchNorm/moving_variance/Initializer/ones:0 : (256,) @output shapes: name = import/yolov3-tiny/convolutional12/BatchNorm/moving_variance/Assign:0 : (256,) --------------------------------------------------------------------------------------------------------------------------------------------- 375: op name = import/yolov3-tiny/convolutional12/BatchNorm/moving_variance/read, op type = ( Identity ), inputs = import/yolov3-tiny/convolutional12/BatchNorm/moving_variance:0, outputs = import/yolov3-tiny/convolutional12/BatchNorm/moving_variance/read:0 @input shapes: name = import/yolov3-tiny/convolutional12/BatchNorm/moving_variance:0 : (256,) @output shapes: name = import/yolov3-tiny/convolutional12/BatchNorm/moving_variance/read:0 : (256,) --------------------------------------------------------------------------------------------------------------------------------------------- 376: op name = 
import/yolov3-tiny/convolutional12/BatchNorm/FusedBatchNorm, op type = ( FusedBatchNorm ), inputs = import/yolov3-tiny/convolutional12/Conv2D:0, import/yolov3-tiny/convolutional12/BatchNorm/gamma/read:0, import/yolov3-tiny/convolutional12/BatchNorm/beta/read:0, import/yolov3-tiny/convolutional12/BatchNorm/moving_mean/read:0, import/yolov3-tiny/convolutional12/BatchNorm/moving_variance/read:0, outputs = import/yolov3-tiny/convolutional12/BatchNorm/FusedBatchNorm:0, import/yolov3-tiny/convolutional12/BatchNorm/FusedBatchNorm:1, import/yolov3-tiny/convolutional12/BatchNorm/FusedBatchNorm:2, import/yolov3-tiny/convolutional12/BatchNorm/FusedBatchNorm:3, import/yolov3-tiny/convolutional12/BatchNorm/FusedBatchNorm:4 @input shapes: name = import/yolov3-tiny/convolutional12/Conv2D:0 : (?, 26, 26, 256) name = import/yolov3-tiny/convolutional12/BatchNorm/gamma/read:0 : (256,) name = import/yolov3-tiny/convolutional12/BatchNorm/beta/read:0 : (256,) name = import/yolov3-tiny/convolutional12/BatchNorm/moving_mean/read:0 : (256,) name = import/yolov3-tiny/convolutional12/BatchNorm/moving_variance/read:0 : (256,) @output shapes: name = import/yolov3-tiny/convolutional12/BatchNorm/FusedBatchNorm:0 : (?, 26, 26, 256) name = import/yolov3-tiny/convolutional12/BatchNorm/FusedBatchNorm:1 : (256,) name = import/yolov3-tiny/convolutional12/BatchNorm/FusedBatchNorm:2 : (256,) name = import/yolov3-tiny/convolutional12/BatchNorm/FusedBatchNorm:3 : (256,) name = import/yolov3-tiny/convolutional12/BatchNorm/FusedBatchNorm:4 : (256,) --------------------------------------------------------------------------------------------------------------------------------------------- 377: op name = import/yolov3-tiny/convolutional12/BatchNorm/Const, op type = ( Const ), inputs = , outputs = import/yolov3-tiny/convolutional12/BatchNorm/Const:0 @input shapes: @output shapes: name = import/yolov3-tiny/convolutional12/BatchNorm/Const:0 : () 
--------------------------------------------------------------------------------------------------------------------------------------------- 378: op name = import/yolov3-tiny/convolutional12/Activation, op type = ( LeakyRelu ), inputs = import/yolov3-tiny/convolutional12/BatchNorm/FusedBatchNorm:0, outputs = import/yolov3-tiny/convolutional12/Activation:0 @input shapes: name = import/yolov3-tiny/convolutional12/BatchNorm/FusedBatchNorm:0 : (?, 26, 26, 256) @output shapes: name = import/yolov3-tiny/convolutional12/Activation:0 : (?, 26, 26, 256) --------------------------------------------------------------------------------------------------------------------------------------------- 379: op name = import/yolov3-tiny/convolutional13/kernel/Initializer/random_uniform/shape, op type = ( Const ), inputs = , outputs = import/yolov3-tiny/convolutional13/kernel/Initializer/random_uniform/shape:0 @input shapes: @output shapes: name = import/yolov3-tiny/convolutional13/kernel/Initializer/random_uniform/shape:0 : (4,) --------------------------------------------------------------------------------------------------------------------------------------------- 380: op name = import/yolov3-tiny/convolutional13/kernel/Initializer/random_uniform/min, op type = ( Const ), inputs = , outputs = import/yolov3-tiny/convolutional13/kernel/Initializer/random_uniform/min:0 @input shapes: @output shapes: name = import/yolov3-tiny/convolutional13/kernel/Initializer/random_uniform/min:0 : () --------------------------------------------------------------------------------------------------------------------------------------------- 381: op name = import/yolov3-tiny/convolutional13/kernel/Initializer/random_uniform/max, op type = ( Const ), inputs = , outputs = import/yolov3-tiny/convolutional13/kernel/Initializer/random_uniform/max:0 @input shapes: @output shapes: name = import/yolov3-tiny/convolutional13/kernel/Initializer/random_uniform/max:0 : () 
--------------------------------------------------------------------------------------------------------------------------------------------- 382: op name = import/yolov3-tiny/convolutional13/kernel/Initializer/random_uniform/RandomUniform, op type = ( RandomUniform ), inputs = import/yolov3-tiny/convolutional13/kernel/Initializer/random_uniform/shape:0, outputs = import/yolov3-tiny/convolutional13/kernel/Initializer/random_uniform/RandomUniform:0 @input shapes: name = import/yolov3-tiny/convolutional13/kernel/Initializer/random_uniform/shape:0 : (4,) @output shapes: name = import/yolov3-tiny/convolutional13/kernel/Initializer/random_uniform/RandomUniform:0 : (1, 1, 256, 18) --------------------------------------------------------------------------------------------------------------------------------------------- 383: op name = import/yolov3-tiny/convolutional13/kernel/Initializer/random_uniform/sub, op type = ( Sub ), inputs = import/yolov3-tiny/convolutional13/kernel/Initializer/random_uniform/max:0, import/yolov3-tiny/convolutional13/kernel/Initializer/random_uniform/min:0, outputs = import/yolov3-tiny/convolutional13/kernel/Initializer/random_uniform/sub:0 @input shapes: name = import/yolov3-tiny/convolutional13/kernel/Initializer/random_uniform/max:0 : () name = import/yolov3-tiny/convolutional13/kernel/Initializer/random_uniform/min:0 : () @output shapes: name = import/yolov3-tiny/convolutional13/kernel/Initializer/random_uniform/sub:0 : () --------------------------------------------------------------------------------------------------------------------------------------------- 384: op name = import/yolov3-tiny/convolutional13/kernel/Initializer/random_uniform/mul, op type = ( Mul ), inputs = import/yolov3-tiny/convolutional13/kernel/Initializer/random_uniform/RandomUniform:0, import/yolov3-tiny/convolutional13/kernel/Initializer/random_uniform/sub:0, outputs = import/yolov3-tiny/convolutional13/kernel/Initializer/random_uniform/mul:0 @input shapes: name = 
import/yolov3-tiny/convolutional13/kernel/Initializer/random_uniform/RandomUniform:0 : (1, 1, 256, 18) name = import/yolov3-tiny/convolutional13/kernel/Initializer/random_uniform/sub:0 : () @output shapes: name = import/yolov3-tiny/convolutional13/kernel/Initializer/random_uniform/mul:0 : (1, 1, 256, 18) --------------------------------------------------------------------------------------------------------------------------------------------- 385: op name = import/yolov3-tiny/convolutional13/kernel/Initializer/random_uniform, op type = ( Add ), inputs = import/yolov3-tiny/convolutional13/kernel/Initializer/random_uniform/mul:0, import/yolov3-tiny/convolutional13/kernel/Initializer/random_uniform/min:0, outputs = import/yolov3-tiny/convolutional13/kernel/Initializer/random_uniform:0 @input shapes: name = import/yolov3-tiny/convolutional13/kernel/Initializer/random_uniform/mul:0 : (1, 1, 256, 18) name = import/yolov3-tiny/convolutional13/kernel/Initializer/random_uniform/min:0 : () @output shapes: name = import/yolov3-tiny/convolutional13/kernel/Initializer/random_uniform:0 : (1, 1, 256, 18) --------------------------------------------------------------------------------------------------------------------------------------------- 386: op name = import/yolov3-tiny/convolutional13/kernel, op type = ( VariableV2 ), inputs = , outputs = import/yolov3-tiny/convolutional13/kernel:0 @input shapes: @output shapes: name = import/yolov3-tiny/convolutional13/kernel:0 : (1, 1, 256, 18) --------------------------------------------------------------------------------------------------------------------------------------------- 387: op name = import/yolov3-tiny/convolutional13/kernel/Assign, op type = ( Assign ), inputs = import/yolov3-tiny/convolutional13/kernel:0, import/yolov3-tiny/convolutional13/kernel/Initializer/random_uniform:0, outputs = import/yolov3-tiny/convolutional13/kernel/Assign:0 @input shapes: name = import/yolov3-tiny/convolutional13/kernel:0 : (1, 1, 256, 18) 
name = import/yolov3-tiny/convolutional13/kernel/Initializer/random_uniform:0 : (1, 1, 256, 18) @output shapes: name = import/yolov3-tiny/convolutional13/kernel/Assign:0 : (1, 1, 256, 18) --------------------------------------------------------------------------------------------------------------------------------------------- 388: op name = import/yolov3-tiny/convolutional13/kernel/read, op type = ( Identity ), inputs = import/yolov3-tiny/convolutional13/kernel:0, outputs = import/yolov3-tiny/convolutional13/kernel/read:0 @input shapes: name = import/yolov3-tiny/convolutional13/kernel:0 : (1, 1, 256, 18) @output shapes: name = import/yolov3-tiny/convolutional13/kernel/read:0 : (1, 1, 256, 18) --------------------------------------------------------------------------------------------------------------------------------------------- 389: op name = import/yolov3-tiny/convolutional13/bias/Initializer/zeros, op type = ( Const ), inputs = , outputs = import/yolov3-tiny/convolutional13/bias/Initializer/zeros:0 @input shapes: @output shapes: name = import/yolov3-tiny/convolutional13/bias/Initializer/zeros:0 : (18,) --------------------------------------------------------------------------------------------------------------------------------------------- 390: op name = import/yolov3-tiny/convolutional13/bias, op type = ( VariableV2 ), inputs = , outputs = import/yolov3-tiny/convolutional13/bias:0 @input shapes: @output shapes: name = import/yolov3-tiny/convolutional13/bias:0 : (18,) --------------------------------------------------------------------------------------------------------------------------------------------- 391: op name = import/yolov3-tiny/convolutional13/bias/Assign, op type = ( Assign ), inputs = import/yolov3-tiny/convolutional13/bias:0, import/yolov3-tiny/convolutional13/bias/Initializer/zeros:0, outputs = import/yolov3-tiny/convolutional13/bias/Assign:0 @input shapes: name = import/yolov3-tiny/convolutional13/bias:0 : (18,) name = 
import/yolov3-tiny/convolutional13/bias/Initializer/zeros:0 : (18,) @output shapes: name = import/yolov3-tiny/convolutional13/bias/Assign:0 : (18,) --------------------------------------------------------------------------------------------------------------------------------------------- 392: op name = import/yolov3-tiny/convolutional13/bias/read, op type = ( Identity ), inputs = import/yolov3-tiny/convolutional13/bias:0, outputs = import/yolov3-tiny/convolutional13/bias/read:0 @input shapes: name = import/yolov3-tiny/convolutional13/bias:0 : (18,) @output shapes: name = import/yolov3-tiny/convolutional13/bias/read:0 : (18,) --------------------------------------------------------------------------------------------------------------------------------------------- 393: op name = import/yolov3-tiny/convolutional13/dilation_rate, op type = ( Const ), inputs = , outputs = import/yolov3-tiny/convolutional13/dilation_rate:0 @input shapes: @output shapes: name = import/yolov3-tiny/convolutional13/dilation_rate:0 : (2,) --------------------------------------------------------------------------------------------------------------------------------------------- 394: op name = import/yolov3-tiny/convolutional13/Conv2D, op type = ( Conv2D ), inputs = import/yolov3-tiny/convolutional12/Activation:0, import/yolov3-tiny/convolutional13/kernel/read:0, outputs = import/yolov3-tiny/convolutional13/Conv2D:0 @input shapes: name = import/yolov3-tiny/convolutional12/Activation:0 : (?, 26, 26, 256) name = import/yolov3-tiny/convolutional13/kernel/read:0 : (1, 1, 256, 18) @output shapes: name = import/yolov3-tiny/convolutional13/Conv2D:0 : (?, 26, 26, 18) --------------------------------------------------------------------------------------------------------------------------------------------- 395: op name = import/yolov3-tiny/convolutional13/BiasAdd, op type = ( BiasAdd ), inputs = import/yolov3-tiny/convolutional13/Conv2D:0, import/yolov3-tiny/convolutional13/bias/read:0, outputs = 
import/yolov3-tiny/convolutional13/BiasAdd:0 @input shapes: name = import/yolov3-tiny/convolutional13/Conv2D:0 : (?, 26, 26, 18) name = import/yolov3-tiny/convolutional13/bias/read:0 : (18,) @output shapes: name = import/yolov3-tiny/convolutional13/BiasAdd:0 : (?, 26, 26, 18) ---------------------------------------------------------------------------------------------------------------------------------------------
OPS counts:
Placeholder : 1
ResizeNearestNeighbor : 1
ConcatV2 : 1
BiasAdd : 2
Fill : 4
MaxPool : 6
FusedBatchNorm : 11
LeakyRelu : 11
RandomUniform : 13
Sub : 13
Mul : 13
Add : 13
Conv2D : 13
VariableV2 : 59
Assign : 59
Identity : 60
Const : 116

Any help would be greatly appreciated!!!
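The per-op-type tally at the end of a dump like this can be reproduced mechanically from the dump text itself. A rough sketch in JavaScript for brevity — `countOpTypes` is a hypothetical convenience helper, not part of the question's tooling:

```javascript
// Sketch: tally the "op type = ( X )" entries in a graph dump string.
// Hypothetical helper; assumes the dump uses the exact formatting shown above.
function countOpTypes(dump) {
  const counts = {};
  const re = /op type = \( (\w+) \)/g;
  let m;
  while ((m = re.exec(dump)) !== null) {
    counts[m[1]] = (counts[m[1]] || 0) + 1;
  }
  return counts;
}

// Two entries from the dump above, abbreviated:
const sample =
  '377: op name = import/yolov3-tiny/convolutional12/BatchNorm/Const, op type = ( Const ), inputs = ,\n' +
  '378: op name = import/yolov3-tiny/convolutional12/Activation, op type = ( LeakyRelu ), inputs = ...';
console.log(countOpTypes(sample)); // → { Const: 1, LeakyRelu: 1 }
```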
{ "language": "en", "url": "https://stackoverflow.com/questions/60388219", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: GoBack gets called 3 times in UWP If I press the back button on my phone, the GoBack method gets called 3 times and I go to the start page instead of the previous page (it works about 1 time in 20). But on the PC it works every time with only one call and always goes to the previous page. This is the constructor of the class: public DetailPageFavorites() { this.InitializeComponent(); // If on a phone device that has hardware buttons then we hide the app's back button. if (ApiInformation.IsTypePresent("Windows.Phone.UI.Input.HardwareButtons")) { this.BackButton.Visibility = Visibility.Collapsed; this.button_like.Margin = new Thickness(0, 0, 0, 0); } SystemNavigationManager.GetForCurrentView().BackRequested += SystemNavigationManager_BackRequested; } Method used if a hardware button is present: private void SystemNavigationManager_BackRequested(object sender, BackRequestedEventArgs e) { Frame frame = Window.Current.Content as Frame; e.Handled = true; if (frame.CanGoBack) { frame.GoBack(); } } Methods used on the PC: private void BackButton_Click(object sender, RoutedEventArgs e) { GoBack(); } private void GoBack() { Frame frame = Window.Current.Content as Frame; if (frame == null) { return; } if (frame.CanGoBack) { frame.GoBack(); } } A: Make sure you remove the SystemNavigationManager.GetForCurrentView().BackRequested event handler before navigating to another page, either in the Page.Unloaded event or in the OnNavigatedFrom method; otherwise each visit to the page adds another subscription, and a single back press then fires GoBack once per subscription. protected override void OnNavigatedFrom(NavigationEventArgs e) { base.OnNavigatedFrom(e); SystemNavigationManager.GetForCurrentView().BackRequested -= SystemNavigationManager_BackRequested; }
{ "language": "en", "url": "https://stackoverflow.com/questions/34615008", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: R: Substitute multiple unwanted variables with NAs across multiple columns Having a tough time with some data wrangling. I am trying to keep only certain author institutional affiliations in a bibliographic data frame and replace the many unwanted affiliations with NAs. I need to do this across multiple columns since papers have multiple authors (up to 68). Reproducible code: so <- data.frame(inst_1=c("FC1","FC2","Uni1","lab3","lab2"), inst_2=c("FC1","FC5","college4","laboratory1","lab2"), inst_3=c("FC2","FC2","University2","lab5","lab5"), inst_4=c("FC3","FC6","dept2","lab3.2","lab1"), inst_5=c("FC1","FC2","Uni3","labB","lab5")) Example data frame: inst_1 inst_2 inst_3 inst_4 inst_5 1 FC1 FC1 FC2 FC3 FC1 2 FC2 FC5 FC2 FC6 FC2 3 Uni1 college4 University2 dept2 Uni3 4 lab3 laboratory1 lab5 lab3.2 labB 5 lab2 lab2 lab5 lab1 lab5 I want to select every column that has the prefix "inst" (likely using str_detect), and in those selected columns replace any institution that does not contain the characters "FC" with NAs. This is necessary because this sheet has 68 institution columns (inst prefix) and hundreds of rows (individual scientific articles). I can't just select which institutions to replace with NAs because there are hundreds of different institutions, while I am just interested in keeping the institutions that contain "FC".
A: 1) dplyr Use mutate/across like this library(dplyr) so %>% mutate(across(starts_with("inst"), ~ replace(., !grepl("FC", .), NA))) giving: inst_1 inst_2 inst_3 inst_4 inst_5 1 FC1 FC1 FC2 FC3 FC1 2 FC2 FC5 FC2 FC6 FC2 3 <NA> <NA> <NA> <NA> <NA> 4 <NA> <NA> <NA> <NA> <NA> 5 <NA> <NA> <NA> <NA> <NA> 2) Base R or using only base R: ok <- startsWith(names(so), "inst") repl <- function(x) replace(x, !grepl("FC", x), NA) replace(so, ok, lapply(so[ok], repl)) 3) collapse or using the collapse package with repl from (2) library(collapse) tfmv(so, startsWith(names(so), "inst"), repl) 4) data.table With data.table we define a vector of inst names and use it with repl from (2). library(data.table) DT <- as.data.table(so) inst <- grep("^inst", names(so)) DT[, (inst) := lapply(.SD, repl), .SDcols = inst]
{ "language": "en", "url": "https://stackoverflow.com/questions/67505793", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to get part of the string? I have this string: var MapId ='Library://London/Maps/Main-Mobile.MapDefinition' From the string above I need to get this part: MapId ='Library://London/' How can I do it with the help of regex or jQuery? A: You can use the split() and slice() operations: var MapId ='Library://London/Maps/Main-Mobile.MapDefinition'; MapId = MapId.split(/\//).slice(0,3).join('/') + '/'; console.log(MapId);
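Since the question also asks about regex, the same prefix can be captured in a single match. This is a sketch, not from the original answer:

```javascript
var MapId = 'Library://London/Maps/Main-Mobile.MapDefinition';

// Capture the scheme part up to "//", the first path segment, and its trailing "/".
var match = MapId.match(/^[^\/]+\/\/[^\/]+\//);
var prefix = match ? match[0] : null;
console.log(prefix); // → "Library://London/"
```

The character class `[^\/]+` stops each piece at the next slash, so the match ends right after `London/`.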
{ "language": "en", "url": "https://stackoverflow.com/questions/51357834", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: Android Studio illegal state exception: cannot find method (view) for android:onClick I'm using radio buttons to switch the colors in a project. When the user clicks one of the radio buttons, I want the associated theme to be saved to SharedPreferences. The necessary themes are set up, but upon clicking a radio button, this error is thrown: java.lang.IllegalStateException: Could not find method onThemeRadio(View) in a parent or ancestor Context for android:onClick attribute defined on view class android.widget.RadioButton with id 'theme2' The relevant XML and Java blocks are shown below. Java: public void onThemeRadio(View view){ SharedPreferences themeStorage = PreferenceManager.getDefaultSharedPreferences(this); SharedPreferences.Editor themeStorageEditor = themeStorage.edit(); boolean checked = ((RadioButton)view).isChecked(); int themeId = themeStorage.getInt("themeId",1); switch(view.getId()) { case R.id.theme1: if (checked) themeId = 1; break; case R.id.theme2: if (checked) themeId = 2; break; case R.id.theme3: if (checked) themeId = 3; break; } themeStorageEditor.putInt("themeId",themeId); themeStorageEditor.apply(); } XML: <RadioGroup android:id="@+id/themeRadio" android:layout_width="match_parent" android:layout_height="wrap_content" android:checkedButton="@+id/theme1" android:orientation="horizontal"> <RadioButton android:id="@+id/theme1" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_weight="1" android:onClick="onThemeRadio" android:text="Default Theme" /> <RadioButton android:id="@+id/theme2" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_weight="1" android:onClick="onThemeRadio" android:text="Alternate Theme" /> <RadioButton android:id="@+id/theme3" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_weight="1" android:onClick="onThemeRadio" android:text="Combined Theme" /> </RadioGroup> I've read a few similar issues, but none of 
the fixes worked or applied. Thanks in advance! A: It's difficult to tell based on the limited code you provided. But based on the error message, it would seem that you do not have the onThemeRadio(View) method in the right place. It needs to be a method on the Activity class that uses that XML layout. Use Android Studio to help figure it out. For example, does Android Studio highlight onThemeRadio(View) as an unused method? Or does the android:onClick attribute in XML get flagged that the method is missing or has the wrong signature? These are signs that the method is in the wrong place. You can also use the light bulb that pops up next to a highlighted android:onClick attribute to automatically create the method in the right place.
{ "language": "en", "url": "https://stackoverflow.com/questions/53528635", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: How to make UITextView detect links for website, mail and phone number I have a UITextView object. The text in the UITextView has a phone number, a mail link, and a website link. I want to show them as links with the following functionality: When someone taps the URL - Safari should open the website. When someone taps the email link - Mail should open up with my address in the To field. When someone taps the phone number - the Phone application should call the number. Has anyone done this before or know how to handle it? Thanks, AJ A: Though the question is super old, if anyone faces the same issue, the solution below will do the job (and it can also be used as a UILabel). There isn't a need for any library. I've used MFMailComposeViewController and UITextView. [ Code is in Swift 3.0 - Xcode 8.3.2 ] 100% crash-proof, working code that handles all the corner cases. =D Step 1. import MessageUI Step 2. Add the delegate class ViewController: UITextViewDelegate, MFMailComposeViewControllerDelegate{ Step 3. Add the textView IBOutlet from the storyboard @IBOutlet weak var infoTextView: UITextView! Step 4. Call the below method in your viewDidLoad() func addInfoToTextView() { let attributedString = NSMutableAttributedString(string: "For further info call us on : \(phoneNumber)\nor mail us at : \(email)") attributedString.addAttribute(NSLinkAttributeName, value: "tel://", range: NSRange(location: 30, length: 10)) attributedString.addAttribute(NSLinkAttributeName, value: "mailto:", range: NSRange(location: 57, length: 18)) self.infoTextView.attributedText = attributedString self.infoTextView.linkTextAttributes = [NSForegroundColorAttributeName:UIColor.blue, NSUnderlineStyleAttributeName:NSNumber(value: 0)] self.infoTextView.textColor = .white self.infoTextView.textAlignment = .center self.infoTextView.isEditable = false self.infoTextView.dataDetectorTypes = UIDataDetectorTypes.all self.infoTextView.delegate = self } Step 5.
Implement delegate methods for TextView @available(iOS, deprecated: 10.0) func textView(_ textView: UITextView, shouldInteractWith url: URL, in characterRange: NSRange) -> Bool { if (url.scheme?.contains("mailto"))! && characterRange.location > 55{ openMFMail() } if (url.scheme?.contains("tel"))! && (characterRange.location > 29 && characterRange.location < 39){ callNumber() } return false } //For iOS 10 @available(iOS 10.0, *) func textView(_ textView: UITextView, shouldInteractWith url: URL, in characterRange: NSRange, interaction: UITextItemInteraction) -> Bool { if (url.scheme?.contains("mailto"))! && characterRange.location > 55{ openMFMail() } if (url.scheme?.contains("tel"))! && (characterRange.location > 29 && characterRange.location < 39){ callNumber() } return false } Step 6. Write the helper Methods to open MailComposer and Call App func callNumber() { if let phoneCallURL = URL(string: "tel://\(phoneNumber)") { let application:UIApplication = UIApplication.shared if (application.canOpenURL(phoneCallURL)) { let alert = UIAlertController(title: "Call", message: "\(phoneNumber)", preferredStyle: UIAlertControllerStyle.alert) if #available(iOS 10.0, *) { alert.addAction(UIAlertAction(title: "Call", style: .cancel, handler: { (UIAlertAction) in application.open(phoneCallURL, options: [:], completionHandler: nil) })) } else { alert.addAction(UIAlertAction(title: "Call", style: .cancel, handler: { (UIAlertAction) in application.openURL(phoneCallURL) })) } alert.addAction(UIAlertAction(title: "cancel", style: .default, handler: nil)) self.present(alert, animated: true, completion: nil) } } else { self.showAlert("Couldn't", message: "Call, cannot open Phone Screen") } } func openMFMail(){ let mailComposer = MFMailComposeViewController() mailComposer.mailComposeDelegate = self mailComposer.setToRecipients(["\(email)"]) mailComposer.setSubject("Subject..") mailComposer.setMessageBody("Please share your problem.", isHTML: false) present(mailComposer, animated: true, 
completion: nil) } Step 7. Write MFMailComposer's Delegate Method func mailComposeController(_ controller: MFMailComposeViewController, didFinishWith result: MFMailComposeResult, error: Error?) { switch result { case .cancelled: print("Mail cancelled") case .saved: print("Mail saved") case .sent: print("Mail sent") case .failed: print("Mail sent failure: \(String(describing: error?.localizedDescription))") default: break } controller.dismiss(animated: true, completion: nil) } That's it you're Done... =D Here is the swift file for the above code : textViewWithEmailAndPhone.swift Set the below properties to Use it as a UILabel A: A note on detecting email addresses: The Mail app must be installed (it's not on the iOS Simulator) for email links to open a message compose screen. A: Swift 3.0 + As of swift 3.0, use the following code if you want to do it programmatically. textview.isEditable = false textview.dataDetectorTypes = .all Or if you have a storyboard A: If you are using OS3.0 you can do it like the following textview.editable = NO; textview.dataDetectorTypes = UIDataDetectorTypeAll; A: Step 1. Create a subclass of UITextview and override the canBecomeFirstResponder function KDTextView.h Code: @interface KDTextView : UITextView @end KDTextView.m Code: #import "KDTextView.h" // Textview to disable the selection options @implementation KDTextView - (BOOL)canBecomeFirstResponder { return NO; } @end Step 2. 
Create the Textview using subclass KDTextView KDTextView*_textView = [[KDTextView alloc] initWithFrame:CGRectMake(0, 0, 100, 100)]; [_textView setScrollEnabled:false]; [_textView setEditable:false]; _textView.delegate = self; [_textView setDataDetectorTypes:UIDataDetectorTypeAll]; _textView.selectable = YES; _textView.delaysContentTouches = NO; _textView.userInteractionEnabled = YES; [self.view addSubview:_textView]; Step 3: Implement the delegate method - (BOOL)textView:(UITextView *)textView shouldInteractWithURL:(NSURL *)URL inRange:(NSRange)characterRange { return true; } A: I'm curious, do you have control over the text shown? If so, you should probably just stick it in a UIWebView and throw some links in there to do it "the right way". A: Swift 4.2 Xcode 10.1 func setupContactUsTextView() { let text = NSMutableAttributedString(string: "Love your App, but need more help? Text, Call (123) 456-1234 or email ") if let font = UIFont(name: "Calibri", size: 17) { text.addAttribute(NSAttributedStringKey.font, value: font, range: NSRange(location: 0, length: text.length)) } else { text.addAttribute(NSAttributedStringKey.font, value: UIFont.systemFont(ofSize: 17), range: NSRange(location: 0, length: text.length)) } text.addAttribute(NSAttributedStringKey.foregroundColor, value: UIColor.init(red: 112/255, green: 112/255, blue: 112/255, alpha: 1.0), range: NSRange(location: 0, length: text.length)) text.addAttribute(NSAttributedStringKey.link, value: "tel://", range: NSRange(location: 49, length: 15)) let interactableText = NSMutableAttributedString(string: "[email protected]") if let font = UIFont(name: "Calibri", size: 17) { interactableText.addAttribute(NSAttributedStringKey.font, value: font, range: NSRange(location: 0, length: interactableText.length)) } else { interactableText.addAttribute(NSAttributedStringKey.font, value: UIFont.systemFont(ofSize: 17), range: NSRange(location: 0, length: interactableText.length)) } 
interactableText.addAttribute(NSAttributedStringKey.link, value: "[email protected]", range: NSRange(location: 0, length: interactableText.length)) interactableText.addAttribute(NSAttributedStringKey.underlineStyle, value: NSUnderlineStyle.styleSingle.rawValue, range: NSRange(location: 0, length: interactableText.length)) text.append(interactableText) videoDescTextView.attributedText = text videoDescTextView.textAlignment = .center videoDescTextView.isEditable = false videoDescTextView.isSelectable = true videoDescTextView.delegate = self } func textView(_ textView: UITextView, shouldInteractWith URL: URL, in characterRange: NSRange) -> Bool { if (characterRange.location > 48 && characterRange.location < 65){ print("open phone") }else{ print("open gmail") } return false } Steps - 1. Set the delegate of your text view and don't forget to implement UITextViewDelegate 2. Take the textView outlet - @IBOutlet weak var videoDescTextView: UITextView! 3. Add the two functions given above. These functions show how to detect phone numbers and emails in a textView, how to underline your email id, how to give a custom color and font to your text, and how to call a function when tapping on the phone number or email, etc. Hope this will help someone save their valuable time. Happy Coding :) A: If you want to auto-detect links, emails, etc., please make sure "isSelectable" is set to true. textview.isSelectable = true textview.isEditable = false textview.dataDetectorTypes = .all
{ "language": "en", "url": "https://stackoverflow.com/questions/995219", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "61" }
Q: Why does Debug mode not guarantee the completion of a Task (.net core, C#) Hello everyone. I am a newbie here and to the programming world. The MS docs say a task is a single unit of work that is guaranteed to complete. I found this holds in a release build but not in a debug build. My testing code is very simple. static void Main(string[] args) { Console.WriteLine("1st Code executed"); Task task = new Task(() => { Console.WriteLine("2nd Code executed"); Console.WriteLine("3rd Code executed"); }); task.Start(); Console.WriteLine("4th Code executed"); Console.ReadLine(); } In debug mode, it results in one of the three outcomes below: /*Mostly * 1st Code executed * 4th Code executed * rarely * 1st Code executed * 4th Code executed * 2nd Code executed * rarely * 1st Code executed * 4th Code executed * 2nd Code executed * 3rd Code executed */ In a release build, all four lines are shown before the app finishes, without exception, although the lines from the task and the 4th line come in arbitrary order. My question is: what makes the app finish without completing the task in debug mode, and, in addition, how can I fix it through debug options, if any? A: Quick answer Your Main method and your task run in parallel, and the Main method does not wait for the task to finish. In release mode, the task is "lucky": it finishes before Main. Not in debug mode. In both cases the execution order is random. The fact that they run in parallel explains why you can't predict the order of the printed lines. Explanations A Task runs on a thread that comes from the thread pool, and thread-pool threads are background threads. The process running your code (which consists of all your threads) does not wait for background threads to finish before terminating. The process only waits for foreground threads to finish. You might then be tempted to use the Thread class, because its threads are foreground by default. But using Task is easier. 
So @John Wu's comment is totally relevant: A task is not guaranteed to finish unless you await it or call Wait or Result or do something else to wait for it. You simply want to add at the end of your code: task.Wait(); However, you'll never be able to predict the order of the printed lines, because the threads run in parallel.
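The same foreground/background distinction exists in other runtimes, and it is easy to see it in isolation. The sketch below is a rough cross-language analogy (this is Python, not C#, and the delays are illustrative): a daemon thread behaves like a .NET thread-pool task in that the process does not wait for it, while `join()` plays the role of `task.Wait()`.

```python
import subprocess
import sys
import textwrap

# Child script: starts a "background" (daemon) thread, like the OP's Task,
# then either waits for it (join) or just lets the process exit.
script = textwrap.dedent("""
    import sys, threading, time

    def work(delay):
        time.sleep(delay)
        print("2nd Code executed")

    t = threading.Thread(target=work, args=(float(sys.argv[2]),), daemon=True)
    t.start()
    print("4th Code executed")
    if sys.argv[1] == "wait":
        t.join()  # the moral equivalent of task.Wait()
""")

def run(mode, delay):
    result = subprocess.run([sys.executable, "-c", script, mode, str(delay)],
                            capture_output=True, text=True)
    return result.stdout

# Waiting guarantees the background work completes before the process exits.
print("2nd Code executed" in run("wait", "0.1"))    # True
# Without waiting, the process exits first and the work is abandoned.
print("2nd Code executed" in run("nowait", "2.0"))  # typically False
```

The point is the same as the accepted advice above: whether the background work finishes is a property of whether somebody waits for it, not of debug vs. release builds.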
{ "language": "en", "url": "https://stackoverflow.com/questions/59433111", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: BackgroundWorker's ProgressChanged not updating UI until end of work loop I am coding a WPF application that will grab email's off of an IMAP account, and then export them into a user-selected folder. I use a BackgroundWorker to download the emails. However, my UI isn't being updated until the loop is over. Any tips would be appreciated. Class MainWindow Public MailRepo As MailRepository Private bw_Connect As New BackgroundWorker Private bw_Save As New BackgroundWorker Public Sub New() InitializeComponent() bw_Connect.WorkerReportsProgress = True bw_Connect.WorkerSupportsCancellation = True AddHandler bw_Connect.DoWork, AddressOf bw_Connect_DoWork bw_Save.WorkerReportsProgress = True bw_Save.WorkerSupportsCancellation = True AddHandler bw_Save.DoWork, AddressOf bw_Save_DoWork AddHandler bw_Save.ProgressChanged, AddressOf bw_Save_ProgressChanged End Sub Private Sub bw_Save_DoWork(ByVal sender As Object, ByVal e As DoWorkEventArgs) Dim worker As BackgroundWorker = CType(sender, BackgroundWorker) If bw_Connect.CancellationPending = True Then e.Cancel = True Else SaveEmails() End If End Sub Private Sub SaveEmails() Dim allMails As IEnumerable(Of Message) 'Get All Emails in Mailbox Try Dim mailBox As String Dispatcher.Invoke(DirectCast(Sub() mailBox = comboBoxEmailFolders.SelectedValue End Sub, Action)) allMails = MailRepo.GetAllMails(mailBox) Catch i4e As Imap4Exception MsgBox("Error: Folder not found" & vbCrLf & i4e.Message) Return End Try Dim msg As Message Dim msgInt As Integer = 1 'Save each message For Each msg In allMails bw_Save.ReportProgress(100 / allMails.Count * msgInt, Nothing) SaveMessage(msg) msgInt += 1 Next End Sub Private Sub bw_Save_ProgressChanged(ByVal sender As Object, ByVal e As ProgressChangedEventArgs) Dim percentDone As String = e.ProgressPercentage.ToString() & "%" updateStatus("Saving Emails " & percentDone & " done.") progressBarStatus.Value = e.ProgressPercentage End Sub
{ "language": "en", "url": "https://stackoverflow.com/questions/32854602", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Nginx and Unicorn - (Connection refused) while connecting to upstream Trying to deploy my first rails app on a vps. I have followed the instructions in the following setup. https://www.digitalocean.com/community/articles/how-to-1-click-install-ruby-on-rails-on-ubuntu-12-10-with-digitalocean But my site gets a 504 Gateway Time-out. In the nginx log I get the following: 2013/10/16 03:10:45 [error] 19627#0: *82 connect() failed (111: Connection refused) while connecting to upstream, client: 121.218.167.90, server: _, request: "GET / HTTP/1.1", upstream: "http://162.243.39.196:8080/", host: "162.243.39.196" And when I try to run unicorn I get the following E, [2013-10-16T04:26:28.530019 #30087] ERROR -- : adding listener failed addr=0.0.0.0:8080 (in use) My nginx default file has the following server { listen 80; root /home/rails/public; server_name _; index index.htm index.html; location / { try_files $uri/index.html $uri.html $uri @app; } location ~* ^.+\.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|doc|xls|exe|pdf|ppt|txt|tar|mid|midi|wav|bmp|rtf|js|mp3|flv|mpeg|avi)$ { try_files $uri @app; } location @app { proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_redirect off; proxy_pass http://app_server; } } My /home/unicorn/unicorn.conf has listen "127.0.0.1:8080" worker_processes 2 user "rails" working_directory "/home/rails" pid "/home/unicorn/pids/unicorn.pid" stderr_path "/home/unicorn/log/unicorn.log" stdout_path "/home/unicorn/log/unicorn.log" Thanks for your help. A: You are missing an upstream block where you refer to in proxy_pass http://app_server;. You can put it above the server block like this. upstream app_server { server 127.0.0.1:8080 fail_timeout=0; } server { listen 80; root /home/rails/public; server_name _; index index.htm index.html; ...
{ "language": "en", "url": "https://stackoverflow.com/questions/19395451", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: how do I order by the results of multiple union statements? I can't use a common table expression: WITH cte AS (SELECT [StationID], [LastDistribution] FROM [DB1].[dbo].[ProcessingStations] UNION ALL SELECT [StationID], [LastDistribution] FROM [DB2].[dbo].[ProcessingStations] UNION ALL SELECT [StationID], [LastDistribution] FROM [DB3].[dbo].[ProcessingStations] UNION ALL SELECT [StationID], [LastDistribution] FROM [DB4].[dbo].[ProcessingStations] UNION ALL SELECT [StationID], [LastDistribution] FROM [DB5].[dbo].[ProcessingStations] ORDER BY [StationID] UNION ALL SELECT [StationID], [LastDistribution] FROM [DB6].[dbo].[ProcessingStations] UNION ALL SELECT [StationID], [LastDistribution] FROM [DB7].[dbo].[ProcessingStations] UNION ALL SELECT [StationID], [LastDistribution] FROM [DB8].[dbo].[ProcessingStations]) SELECT * FROM cte ORDER BY StationID How would I go about doing this? A: Just put the ORDER BY at the end of your chain of SELECT ... FROM ... UNION ALL statements, and drop the stray ORDER BY in the middle of the chain — it is only valid once, at the very end: SELECT [StationID], [LastDistribution] FROM [DB1].[dbo].[ProcessingStations] UNION ALL SELECT [StationID], [LastDistribution] FROM [DB2].[dbo].[ProcessingStations] UNION ALL SELECT [StationID], [LastDistribution] FROM [DB3].[dbo].[ProcessingStations] UNION ALL SELECT [StationID], [LastDistribution] FROM [DB4].[dbo].[ProcessingStations] UNION ALL SELECT [StationID], [LastDistribution] FROM [DB5].[dbo].[ProcessingStations] UNION ALL SELECT [StationID], [LastDistribution] FROM [DB6].[dbo].[ProcessingStations] UNION ALL SELECT [StationID], [LastDistribution] FROM [DB7].[dbo].[ProcessingStations] UNION ALL SELECT [StationID], [LastDistribution] FROM [DB8].[dbo].[ProcessingStations] ORDER BY StationID Here's a quick example I did in SSMS: DECLARE @a table (x int) DECLARE @b table (x int) DECLARE @c table (x int) insert into @a values (5) insert into @a values (4) insert into @a values (3) insert into @b values (0) insert into @b values (1) insert into @b values (2) insert into @c 
values (0) insert into @c values (1) insert into @c values (2) select * from @a union all select * from @b union all select * from @c order by x And here's the output: x ----- 0 0 1 1 2 2 3 4 5 As you can see, even though the SELECT * FROM @a came first, it still placed those last in the result set A: Why can't you use a CTE?. Anyway, you still can do the same using a derived table: SELECT * FROM ( SELECT [StationID], [LastDistribution] FROM [DB1].[dbo].[ProcessingStations] UNION ALL SELECT [StationID], [LastDistribution] FROM [DB2].[dbo].[ProcessingStations] UNION ALL SELECT [StationID], [LastDistribution] FROM [DB3].[dbo].[ProcessingStations] UNION ALL SELECT [StationID], [LastDistribution] FROM [DB4].[dbo].[ProcessingStations] UNION ALL SELECT [StationID], [LastDistribution] FROM [DB5].[dbo].[ProcessingStations] UNION ALL SELECT [StationID], [LastDistribution] FROM [DB6].[dbo].[ProcessingStations] UNION ALL SELECT [StationID], [LastDistribution] FROM [DB7].[dbo].[ProcessingStations] UNION ALL SELECT [StationID], [LastDistribution] FROM [DB8].[dbo].[ProcessingStations]) A ORDER BY StationID A: See the fiddle, just put the ORDER BY on the last set to be "unioned". It achieves the same result without the overhead of the CTE. The order will effect the entire unified set. SELECT [StationID], [LastDistribution] FROM [dbo].[ProcessingStations] UNION ALL SELECT [StationID], [LastDistribution] FROM [dbo].[ProcessingStations] UNION ALL SELECT [StationID], [LastDistribution] FROM [dbo].[ProcessingStations] ORDER BY [StationID]
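The behavior described in these answers — a single trailing ORDER BY sorting the entire unioned result, not just the last SELECT — is easy to verify outside SSMS too. Here is a quick sanity check of the same pattern in SQLite via Python's sqlite3 module (tables a, b, c mirror the table variables from the first answer's example, not the OP's schema):

```python
import sqlite3

# Build three small tables with the same values as the SSMS example.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE a(x); CREATE TABLE b(x); CREATE TABLE c(x);
    INSERT INTO a VALUES (5),(4),(3);
    INSERT INTO b VALUES (0),(1),(2);
    INSERT INTO c VALUES (0),(1),(2);
""")

rows = con.execute("""
    SELECT x FROM a
    UNION ALL SELECT x FROM b
    UNION ALL SELECT x FROM c
    ORDER BY x  -- applies to the whole unioned result, not just table c
""").fetchall()

print([r[0] for r in rows])  # [0, 0, 1, 1, 2, 2, 3, 4, 5]
```

Even though table a (with the largest values) comes first in the chain, its rows sort to the end of the combined result, matching the SSMS output shown above.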
{ "language": "en", "url": "https://stackoverflow.com/questions/15927687", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Is there a way to get the resulting Makefile after includes? In a (GNU) Makefile, one can do: include foo.mk all: @echo "Hello, world!" Is there a way to get make to spit out the Makefile after foo.mk has been included? Akin to after the C preprocessor has expanded macros etc.? Update: a commenter below asked me to specify which problem I'm trying to solve. I wish to trigger a rebuild when/if there's changes to the Makefile and/or the underlying, included makefiles. I've found a recipe that goes something like: .PHONY: bar SHAFILE := foobar.sha256 $(SHAFILE): bar @sha256sum Makefile | cmp -s - $@ || sha256sum Makefile > $@ ... %.o: %.c $(SHAFILE) $(CC) -c -o $@ $< $(CFLAGS) (Obviously, the checksum should be of the Makefile including includes, but this is a start.) A: Caveat: I haven't tried this, only read the manual. After all your include lines (including the transitive set of nested includes, and even lines created by text substitution or built by functions), the Make variable MAKEFILE_LIST will reference all your makefiles. So it should be sufficient to add a dependency to the end of your file such as %.o: $(MAKEFILE_LIST) You don't actually need the contents of the effective Makefile, just the list of files that comprise it. A: Is there a way to get make to spit out the Makefile after foo.mk has been included? No. There's gmake -d with a ton of debug output, some of which indicating which makefiles are being read: $ gmake -d|grep Reading Reading makefiles... Reading makefile `GNUmakefile'... Reading makefile `foo.mk' (search path) (no ~ expansion)... This might be helpful if there are recursive include directives or those under conditionals. Maybe you could tell us your actual problem you want to solve?
{ "language": "en", "url": "https://stackoverflow.com/questions/51036882", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Convert entire dom4j Element's namespace If I have an XML element as such: <First id="" name=""> <Second id="" name=""> </Second> </First> How can I use dom4j to convert the namespace to something like below? Is there a simple way? <test:First test:id="" test:name=""> <test:Second test:id="" test:name=""> </test:Second> </test:First> A: If you prefer a Java-centric solution, DOM4J has support for traversing a document tree: Document doc = DocumentHelper.parseText(XML); final Namespace ns = Namespace.get("test", "urn:foo:bar"); doc.accept(new VisitorSupport() { @Override public void visit(Element node) { node.setQName(QName.get(node.getName(), ns)); // Attribute QNames are read-only, so need to create new List<Attribute> attributes = new ArrayList<Attribute>(); while(node.attributes().size() > 0) attributes.add(node.attributes().remove(0)); for(Attribute a: attributes) { node.addAttribute(QName.get(a.getName(), ns), a.getValue()); } } }); A: You could run an XSLT transformation: <xsl:transform xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="3.0"> <xsl:template match="*"> <xsl:element name="test:{local-name()}" namespace="http://test-namepace/ns"> <xsl:apply-templates select="@* | node()"/> </xsl:element> </xsl:template> <xsl:template match="@*"> <xsl:attribute name="test:{local-name()}" namespace="http://test-namepace/ns"> <xsl:value-of select="."/> </xsl:attribute> </xsl:template> </xsl:transform> I think DOM4J has methods to apply an XSLT 1.0 transformation directly, but you also have the option to use Saxon, which handles DOM4J as input and/or output alongside many other tree models. Incidentally, (a) in your requirements example, the result document is ill-formed because it doesn't declare the namespace, and (b) it's not generally considered good practice to put all the attributes in the same namespace as the containing elements; I've given you a solution on the assumption that you have good reasons for this rather strange design.
{ "language": "en", "url": "https://stackoverflow.com/questions/61357602", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Interrrupt-able ScheduledExecutorService in Android Currently, I am designing a stock market app with the following requirement * *1) Scan stock price repeatedly, with a fixed sleep time. *2) Able to interrupt the sleep any-time. This is because when user adds a new stock, we need to wake up from sleep, and scan immediately. Previously, I'm using bare bone Thread, to full-fill the above 2 requirements. private class StockMonitor extends Thread { @Override public void run() { final Thread thisThread = Thread.currentThread(); while (thisThread == thread) { // Fetch stock prices... try { Thread.sleep(MIN_DELAY); } catch (java.lang.InterruptedException exp) { if (false == refreshed()) { /* Exit the primary fail safe loop. */ thread = null; break; } } } } public synchronized void refresh() { isRefresh = true; interrupt(); } private synchronized boolean refreshed() { if (isRefresh) { isRefresh = false; // Interrupted status of the thread is cleared. interrupted(); return true; } return false; } } When I want to perform requirement (2), I will call refresh. Thread will be waked up, and perform job immediately. However, I feel it is difficult to maintain such bare bone Thread code, and can make mistake easily. I prefer to use ScheduledExecutorService. However, it lack of ability for me to wake up the Thread from sleeping state, and perform job immediately. I was wondering, is there any classes in Android, which enables me to perform periodically task as in ScheduledExecutorService? Yet has the ability to wake up the Thread from sleeping state, and perform job immediately. A: Here is my solution with ScheduledThreadPoolExecutor. To cancel existing tasks, issue future.cancel() on Future object returned from scheduleAtFixedRate(). Then call scheduleAtFixedRate() again with initial delay set to 0. 
class HiTask implements Runnable { @Override public void run() { System.out.println("Say Hi!"); } } // periodically execute the task every 100 ms, without an initial delay ScheduledThreadPoolExecutor exec = new ScheduledThreadPoolExecutor(1); long initialDelay = 0; long period = 100; ScheduledFuture<?> future1 = exec.scheduleAtFixedRate(new HiTask(), initialDelay, period, TimeUnit.MILLISECONDS); // to trigger task execution immediately boolean success = future1.cancel(true); // mayInterruptIfRunning = true: interrupt the thread even if the task has already started ScheduledFuture<?> future2 = exec.scheduleAtFixedRate(new HiTask(), initialDelay, period, TimeUnit.MILLISECONDS); A: Options you have with ScheduledExecutorService: * *submit() the same task again in the moment you need the data to be refreshed. It will be immediately executed (and only once). Possible drawback: If this happens very close to a scheduled execution, you have two scans with very little time in between. *cancel() the task in the moment you need the data to be refreshed, submit() it once for immediate execution and schedule() it again for repeated execution. Be aware that any exceptions that occur in the task that are unhandled cancel future executions. How to solve that is answered in How to reschedule a task using a ScheduledExecutorService?. ExecutorService is not persistent (i.e. does not survive device reboot). I don't know if that's a problem for you. If it is, there are libraries specialized in background task execution on Android: * *smart-scheduler-android *android-priority-jobqueue *android-job You would have to apply the same workarounds mentioned above.
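For readers who want the shape of the "periodic, but wake me up on demand" loop independent of the Android APIs, here is a language-neutral sketch in Python (not Java; class and method names like StockMonitor and refresh() just mirror the OP's code). An event with a timed wait plays the role of the fixed-rate sleep, and refresh() wakes the worker immediately — the same effect as cancelling the future and rescheduling with an initial delay of 0.

```python
import threading
import time

class StockMonitor:
    def __init__(self, period):
        self.period = period
        self._wake = threading.Event()   # set() == "scan again right now"
        self._stop = threading.Event()
        self.scans = 0
        self._thread = threading.Thread(target=self._run, daemon=True)

    def start(self):
        self._thread.start()

    def _run(self):
        while not self._stop.is_set():
            self.scans += 1                      # placeholder for "fetch stock prices"
            if self._wake.wait(self.period):     # True => woken before the period elapsed
                self._wake.clear()

    def refresh(self):
        # Wake the worker immediately instead of waiting out the period.
        self._wake.set()

    def shutdown(self):
        self._stop.set()
        self._wake.set()
        self._thread.join()

monitor = StockMonitor(period=10.0)
monitor.start()       # first scan happens immediately
time.sleep(0.5)
monitor.refresh()     # second scan, long before the 10 s period elapses
time.sleep(0.5)
monitor.shutdown()
print(monitor.scans)  # 2
```

This avoids the interrupt-and-clear bookkeeping of the original bare-bones Thread version: there is no interrupted-status flag to reset, just one event that is either set or not.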
{ "language": "en", "url": "https://stackoverflow.com/questions/42501670", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: get the child key for specific value when append table using Jquery i created a table and used Jquery to put all the user database from firebase info in it. The problem is: i have a button in the table that be should when you click the button console print the element key from database. var userDataRef = firebase.database().ref("User/3N2f2rJSSAZmFOdZEeJdlsuEZam2").orderByKey(); userDataRef.on('child_added', function(childSnapshot) { var key = childSnapshot.key; var childData = childSnapshot.val(); var title_val = childData.Title; var url_val = childData.Url; // Append data $("#data").append("<tr><td>" + title_val + "</td><td><a href=" + url_val + " target='_blank'> <button class='box'>GO</button></a></td><td><button id='del' class='box'>Delete</button></a></td><</tr>"); $('#data').on('click', '#del', function(){ console.log(key) }); <table border="0" style="height: 63px; width: 100%;"> <thead> <tr> <div class="column middle" style="background-color:#f8f8f8;"> <td>Title</td> <td style="width: 40%;" >Link</td> <td style="width: 10%;" >Delete</td> </div> </tr> </thead> <tbody id="data" > </tbody> </table> when this function perform i gut all the keys not just the one from the exact raw $('#data').on('click', '#del', function(){ console.log(key) }); output : -LkT_afLi9nfn65OS2QJ database.js:17:17 -LkTciKQVEa2bsbtSwkW database.js:17:17 -LkTclDO8dYSBfgiBjAZ A: ID's should never be repeated, use the class of the buttons instead as a common attribute. A: You are binding to all the elements and you are duplicating ids. Ids are singular. You would be better off with either binding to the one button alone or event delegation with a data attribute. 
userDataRef.on('child_added', function(childSnapshot) { /* cut out the vars */ var myRow = $("<tr><td>" + title_val + "</td><td><a href=" + url_val + " target='_blank'> <button class='box'>GO</button></a></td><td><button class='box'>Delete</button></td></tr>"); myRow.find("button").on("click", function(){ console.log(key) }); $("#data").append(myRow); }); or event delegation with a data attribute $("#data").on("click", "button[data-key]", function(evt){ var btn = $(this); console.log(btn.data("key")) }); userDataRef.on('child_added', function(childSnapshot) { /* cut out the vars */ var myRow = $("<tr><td>" + title_val + "</td><td><a href=" + url_val + " target='_blank'> <button class='box'>GO</button></a></td><td><button class='box' data-key='" + key + "'>Delete</button></td></tr>"); $("#data").append(myRow); });
{ "language": "en", "url": "https://stackoverflow.com/questions/57164817", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to query native html elements when testing a React application with Jest? When I render React elements, for example a Button, an equivalent HTML button element is created in the DOM. But when I query the DOM, I'm still getting a wrapped element. HTMLButtonElement { '__reactInternalInstance$gt1haeiqrhv': FiberNode { tag: 5, key: null, ... } Why don't I get the native element, as if I had rendered a normal div using JSDOM? Is there a way to get it? A: I suppose you have to query with getDOMNode: const button = wrapper.find('button').getDOMNode(); // const button = wrapper.getDOMNode(); // <--- if wrapper is the button element It will give you the underlying DOM element of the component.
{ "language": "en", "url": "https://stackoverflow.com/questions/60075433", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: What is the best way to control a variable with radio buttons? I'm very new to Android programming, and I'm unable to find out what the accepted way to reach my goal is: I have a variable in my main activity (sortOrder), which I want to set using radio buttons in a different activity. At the moment, I just change MainActivity.sortOrder from the other activity with a big switch case, so I'm wondering if there is a more elegant way. Also, at the moment the radio buttons "forget" their choice when I return to MainActivity. Is is possible to save the selected radio button, and set the variable from that in MainActivity? Here's some code: in MainActivity: public static String sortOrder; public static int checkedId; These are given values in onCreate: sortOrder = ShipContract.ShipEntry.COLUMN_NAME_NAME + " ASC"; checkedId = R.id.sort_name; And here is OptionsSort, which is started from MainActivity: public class OptionsSort extends AppCompatActivity { @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_options_sort); Toolbar toolbar = (Toolbar) findViewById(R.id.toolbar); setSupportActionBar(toolbar); getSupportActionBar().setDisplayHomeAsUpEnabled(true); RadioButton checkedButton = (RadioButton) findViewById(MainActivity.checkedId); checkedButton.setChecked(true); } public void onRadioButtonClicked(View view) { boolean checked = ((RadioButton) view).isChecked(); switch(view.getId()) { case R.id.sort_name: if (checked) MainActivity.sortOrder = ShipContract.ShipEntry.COLUMN_NAME_NAME + " ASC"; break; case R.id.sort_date: if (checked) MainActivity.sortOrder = ShipContract.ShipEntry.COLUMN_NAME_DATE + " ASC"; break; case R.id.sort_capacity: if (checked) MainActivity.sortOrder = ShipContract.ShipEntry.COLUMN_NAME_CAPACITY + " ASC"; break; case R.id.sort_line: if (checked) MainActivity.sortOrder = ShipContract.ShipEntry.COLUMN_NAME_LINE + " ASC"; break; case R.id.sort_displacement: if (checked) 
MainActivity.sortOrder = ShipContract.ShipEntry.COLUMN_NAME_DISPLACEMENT + " ASC"; break; case R.id.sort_speed: if (checked) MainActivity.sortOrder = ShipContract.ShipEntry.COLUMN_NAME_SPEED + " ASC"; break; case R.id.sort_power: if (checked) MainActivity.sortOrder = ShipContract.ShipEntry.COLUMN_NAME_POWER + " ASC"; break; } RadioGroup radiogroup = (RadioGroup) findViewById(R.id.sort_group); MainActivity.checkedId = radiogroup.getCheckedRadioButtonId(); finish(); } } A: As far as I understand your problem, you want to take the sortOrder value from activity B and bring it to activity A. This can be achieved with startActivityForResult: in activity B, when you are done with everything, just call the setResult method, give it the resulting intent, and finish activity B. In activity A you have to override onActivityResult and handle the result from activity B, like this: startActivityForResult(new Intent(this, ActivityB.class), 123); @Override public void onActivityResult(int requestCode, int resultCode, Intent data) { super.onActivityResult(requestCode, resultCode, data); if (resultCode == Activity.RESULT_OK) { if (requestCode == 123) { String sortData = data.getStringExtra("sortOrder"); sortYourData(sortData); } } } and in activity B, when the sort order is selected, just store the value in a variable, sortOrder = "byDate"; and in onBackPressed, or whenever you want to go back, just call this: setResult(Activity.RESULT_OK, new Intent().putExtra("sortOrder", sortOrder)); finish();
{ "language": "en", "url": "https://stackoverflow.com/questions/38037284", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to make flashlight Blink I am trying to make the camera flashlight blink. I already have written a code for switching the flashlight on and off. I am trying to create a method which could blink the flashlight on the click of a button. How can i achieve this. public void flash_effect() throws InterruptedException { camera = Camera.open(); params = camera.getParameters(); params.setFlashMode(Parameters.FLASH_MODE_TORCH); Thread a = new Thread() { public void run() { for(int i =0; i < 10; i++) { camera.setParameters(params); camera.startPreview(); try { Thread.sleep(50); } catch (InterruptedException e) { // TODO Auto-generated catch block e.printStackTrace(); } camera.stopPreview(); try { Thread.sleep(50); } catch (InterruptedException e) { // TODO Auto-generated catch block e.printStackTrace(); } } } }; a.start(); } This code is not working.What am i doing wrong. Waiting for help. EDITED Manifest <?xml version="1.0" encoding="utf-8"?> <manifest xmlns:android="http://schemas.android.com/apk/res/android" package="com.example.testlight" android:versionCode="1" android:versionName="1.0" > <uses-sdk android:minSdkVersion="8" android:targetSdkVersion="21" /> <uses-permission android:name="android.permission.CAMERA" /> <application android:allowBackup="true" android:icon="@drawable/ic_launcher" android:label="@string/app_name" android:theme="@style/AppTheme" > <activity android:name=".MainActivity" android:label="@string/app_name" > <intent-filter> <action android:name="android.intent.action.MAIN" /> <category android:name="android.intent.category.LAUNCHER" /> </intent-filter> </activity> </application> </manifest> LOGCAT 06-16 14:03:40.579: E/AndroidRuntime(20302): FATAL EXCEPTION: main 06-16 14:03:40.579: E/AndroidRuntime(20302): java.lang.IllegalStateException: Could not execute method of the activity 06-16 14:03:40.579: E/AndroidRuntime(20302): at android.view.View$1.onClick(View.java:3626) 06-16 14:03:40.579: E/AndroidRuntime(20302): at 
android.view.View.performClick(View.java:4231) 06-16 14:03:40.579: E/AndroidRuntime(20302): at android.view.View$PerformClick.run(View.java:17537) 06-16 14:03:40.579: E/AndroidRuntime(20302): at android.os.Handler.handleCallback(Handler.java:725) 06-16 14:03:40.579: E/AndroidRuntime(20302): at android.os.Handler.dispatchMessage(Handler.java:92) 06-16 14:03:40.579: E/AndroidRuntime(20302): at android.os.Looper.loop(Looper.java:158) 06-16 14:03:40.579: E/AndroidRuntime(20302): at android.app.ActivityThread.main(ActivityThread.java:5751) 06-16 14:03:40.579: E/AndroidRuntime(20302): at java.lang.reflect.Method.invokeNative(Native Method) 06-16 14:03:40.579: E/AndroidRuntime(20302): at java.lang.reflect.Method.invoke(Method.java:511) 06-16 14:03:40.579: E/AndroidRuntime(20302): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:1083) 06-16 14:03:40.579: E/AndroidRuntime(20302): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:850) 06-16 14:03:40.579: E/AndroidRuntime(20302): at dalvik.system.NativeStart.main(Native Method) 06-16 14:03:40.579: E/AndroidRuntime(20302): Caused by: java.lang.reflect.InvocationTargetException 06-16 14:03:40.579: E/AndroidRuntime(20302): at java.lang.reflect.Method.invokeNative(Native Method) 06-16 14:03:40.579: E/AndroidRuntime(20302): at java.lang.reflect.Method.invoke(Method.java:511) 06-16 14:03:40.579: E/AndroidRuntime(20302): at android.view.View$1.onClick(View.java:3621) 06-16 14:03:40.579: E/AndroidRuntime(20302): ... 
11 more 06-16 14:03:40.579: E/AndroidRuntime(20302): Caused by: java.lang.RuntimeException: Fail to connect to camera service 06-16 14:03:40.579: E/AndroidRuntime(20302): at android.hardware.Camera.native_setup(Native Method) 06-16 14:03:40.579: E/AndroidRuntime(20302): at android.hardware.Camera.(Camera.java:362) 06-16 14:03:40.579: E/AndroidRuntime(20302): at android.hardware.Camera.open(Camera.java:336) 06-16 14:03:40.579: E/AndroidRuntime(20302): at com.example.testlight.MainActivity.flash_effect(MainActivity.java:185)

A: I've been working on your problem, and my solution can blink the flashlight. I used the same logic as yours, except I used a Handler instead of a Thread to delay the blink.

public void flash_effect(View view) {
    long delay = 50;
    camera = Camera.open();
    params = camera.getParameters();
    params.setFlashMode(Camera.Parameters.FLASH_MODE_TORCH);
    camera.setParameters(params);
    camera.startPreview();
    handler.postDelayed(new Runnable() {
        @Override
        public void run() {
            params.setFlashMode(Camera.Parameters.FLASH_MODE_OFF);
            camera.setParameters(params);
            camera.stopPreview();
            camera.release();
        }
    }, delay);
}

You have to initialise the handler before using it; do it in the onCreate() method:

Handler handler = new Handler();

This method is the onClick() of a button, so each click produces one on/off cycle; to blink repeatedly, re-post the Runnable. Note that releasing the camera when you are done, as above, also matters for your "Fail to connect to camera service" error: Camera.open() fails if the camera is still held from a previous call.
{ "language": "en", "url": "https://stackoverflow.com/questions/30860541", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Google Drive API IOS Permissions of GTLRDriveService I am playing around with the Google Drive API and trying to build a simple app that uploads a picture to my Google Drive. The app is supposed to upload a picture once the user is signed in; however, it gives this error:

2017-09-14 00:55:20.342237-0400 driveTest[6705:1647551] An error occurred: Error Domain=com.google.GTLRErrorObjectDomain Code=403 "Insufficient Permission" UserInfo={GTLRStructuredError=GTLRErrorObject 0x1c4251d30: {message:"Insufficient Permission" errors:[1] code:403}, NSLocalizedDescription=Insufficient Permission}

I have tried to pass the service, which is of type GTLRDriveService, to the initSetup() function of the userSetUp class, but to no avail. Could someone please point me in the right direction as to why my permissions are not working even though I have logged in correctly? The part where I pass in the GTLRDriveService is in the code that runs after a successful login. I instantiate a userSetUp object:

let setUpUser = userSetUp()
setUpUser.initSetup(service)

I have userSetUp written in Objective-C as follows, and it is bridged correctly, as I am able to instantiate it in my view controller file, which is written in Swift.
UserSetUp::::::: #import "userSetUp.h" #import <GoogleSignIn/GoogleSignIn.h> @import GoogleAPIClientForREST; @implementation userSetUp - (void) initSetup:(GTLRDriveService *) driveService { printf("heloooooaiosuoiadoidauoalo"); //GTLRDriveService *driveService = [GTLRDriveService new]; //NSData *fileData = [[NSFileManager defaultManager] contentsAtPath:@"files/apple.jpg"]; NSString *filePath = [[NSBundle mainBundle] pathForResource:@"apple" ofType:@"jpg"]; NSData *fileData = [NSData dataWithContentsOfFile:filePath]; GTLRDrive_File *metadata = [GTLRDrive_File object]; metadata.name = @"apple.jpg"; //metadata.mimeType = @"application/vnd.google-apps.document"; GTLRUploadParameters *uploadParameters = [GTLRUploadParameters uploadParametersWithData:fileData MIMEType:@"image/jpeg"]; uploadParameters.shouldUploadWithSingleRequest = TRUE; GTLRDriveQuery_FilesCreate *query = [GTLRDriveQuery_FilesCreate queryWithObject:metadata uploadParameters:uploadParameters]; query.fields = @"id"; [driveService executeQuery:query completionHandler:^(GTLRServiceTicket *ticket, GTLRDrive_File *file, NSError *error) { if (error == nil) { //NSLog(@"File ID %@", file.identifier); printf("it worked"); } else { NSLog(@"An error occurred: %@", error); } }]; printf("upload complete!"); } @end And Viewcontroller. swift import GoogleAPIClientForREST import GoogleSignIn import UIKit class ViewController: UIViewController, GIDSignInDelegate, GIDSignInUIDelegate { // If modifying these scopes, delete your previously saved credentials by // resetting the iOS simulator or uninstall the app. private let scopes = [kGTLRAuthScopeDriveReadonly] let service = GTLRDriveService() let signInButton = GIDSignInButton() let output = UITextView() override func viewDidLoad() { super.viewDidLoad() // Configure Google Sign-in. 
GIDSignIn.sharedInstance().delegate = self GIDSignIn.sharedInstance().uiDelegate = self GIDSignIn.sharedInstance().scopes = scopes GIDSignIn.sharedInstance().signInSilently() signInButton.frame = CGRect(x: view.frame.width/2 - signInButton.frame.width , y: view.frame.height/2, width: signInButton.frame.width, height: signInButton.frame.height) // Add the sign-in button. view.addSubview(signInButton) // Add a UITextView to display output. output.frame = view.bounds output.isEditable = false output.contentInset = UIEdgeInsets(top: 20, left: 0, bottom: 20, right: 0) output.autoresizingMask = [.flexibleHeight, .flexibleWidth] output.isHidden = true view.addSubview(output); //let itsASetup() } func sign(_ signIn: GIDSignIn!, didSignInFor user: GIDGoogleUser!, withError error: Error!) { if let error = error { showAlert(title: "Authentication Error", message: error.localizedDescription) self.service.authorizer = nil } else { self.signInButton.isHidden = true self.output.isHidden = false self.service.authorizer = user.authentication.fetcherAuthorizer() listFiles() } } // List up to 10 files in Drive func listFiles() { let query = GTLRDriveQuery_FilesList.query() query.pageSize = 10 service.executeQuery(query, delegate: self, didFinish: #selector(displayResultWithTicket(ticket:finishedWithObject:error:)) ) } // Process the response and display output @objc func displayResultWithTicket(ticket: GTLRServiceTicket, finishedWithObject result : GTLRDrive_FileList, error : NSError?) { if let error = error { showAlert(title: "Error", message: error.localizedDescription) return } var text = ""; if let files = result.files, !files.isEmpty { text += "Files:\n" for file in files { text += "\(file.name!) (\(file.identifier!))\n" } } else { text += "No files found." 
} output.text = text let setUpUser = userSetUp() setUpUser.initSetup(service) } // Helper for showing an alert func showAlert(title : String, message: String) { let alert = UIAlertController( title: title, message: message, preferredStyle: UIAlertControllerStyle.alert ) let ok = UIAlertAction( title: "OK", style: UIAlertActionStyle.default, handler: nil ) alert.addAction(ok) present(alert, animated: true, completion: nil) } }

A: Try changing your scope. You request kGTLRAuthScopeDriveReadonly, which only grants read access, so creating a file is rejected with 403 "Insufficient Permission". Request a write-capable scope instead:

class ViewController: UIViewController, GIDSignInDelegate, GIDSignInUIDelegate {
    // If modifying these scopes, delete your previously saved credentials by
    // resetting the iOS simulator or uninstalling the app.
    private let scopes = ["https://www.googleapis.com/auth/drive"]
    ...
}
{ "language": "en", "url": "https://stackoverflow.com/questions/46210827", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Prevent triggering multiple triggers at same time on table update In SQL Server, I have a table X and a table Y. The primary key of both is the same, say Z. I created a trigger so that if anything is created/deleted/updated in X then
* If the entry doesn't exist in Y for Z, then create a new entry.
* If an entry exists in Y for Z, then update that row.

I ran a query like this:

Delete FROM Table2 where TId = 1;

This query deleted 10 rows and 10 triggers ran simultaneously. As all ran in parallel, every trigger executed the else block, because initially there was no entry in Y for Z, and while all are running they won't find the row in Y. Because of this, 10 rows get created in the Y table. I want only 1 entry to be created, and the other triggers should update that entry. As an example, I have these tables:

Table1(TId PRIMARY KEY, C12, C13);
Table2(C21 PRIMARY KEY, TId FOREIGN KEY(Table1, C11), C23);
Table3(TId PRIMARY KEY, C32, C33);

SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER TRIGGER [dbo].[trg1] ON [dbo].[TABLE2]
AFTER DELETE
AS
BEGIN
    SET NOCOUNT ON
    IF EXISTS (SELECT * FROM Deleted D)
    BEGIN
        IF EXISTS (Select * from [dbo].[Table3] where TId in (Select TId from Deleted D))
        BEGIN
            Update [dbo].[Table3] SET C32 = 1 where TId in (Select TId from Deleted D);
        END
        ElSE
        BEGIN
            INSERT INTO [dbo].[Table3] (TId, C32, C33)
            SELECT TId, 3, GETUTCDATE() FROM Deleted D WHERE TId is not null
            SET NOCOUNT OFF
        END
    END
    SET NOCOUNT OFF
END
PRINT '';
PRINT 'End of script';
PRINT ' --- // ---';

A: Your entire trigger's code should be something like this:

MERGE INTO Table3 t
USING (SELECT TId, CASE WHEN COUNT(*) > 1 THEN 1 ELSE 3 END as C32
       FROM deleted GROUP BY TId) u
ON t.TId = u.TId
WHEN MATCHED THEN
    UPDATE SET C32 = 1
WHEN NOT MATCHED AND u.TId is not null THEN
    INSERT (TId, C32, C33) VALUES (u.TId, u.C32, GETUTCDATE());

Which will insert a new row but decide whether to set C32 to 1 or 3, depending on how many rows in DELETED are for the same TId, and just update C32 to 1 if a row
already existed. This is all about thinking in sets. You weren't accounting for the fact that deleted could contain multiple rows, some or all of which may have the same TId value. You don't write IF/ELSE blocks that can only make a single decision for all rows in deleted¹.

¹ E.g. if deleted had contained 4 rows total, 2 for TId 6 and 2 for TId 8, and Table3 had contained a row for TId 6 but no row for TId 8, your trigger would have found some matching rows in Table3 and just performed the UPDATE.
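The set-based logic of the MERGE (group the deleted rows per key, then decide insert vs. update from the counts) can also be illustrated outside SQL. This is a hypothetical Python sketch of that logic, with invented names; it is not the trigger itself:

```python
from collections import Counter

def merge_deleted(table3, deleted_tids):
    """Mimic the MERGE: count deleted rows per TId, then update existing
    Table3 rows to C32 = 1 and insert missing ones with C32 = 1 or 3."""
    counts = Counter(tid for tid in deleted_tids if tid is not None)
    for tid, n in counts.items():
        if tid in table3:
            table3[tid] = 1                  # WHEN MATCHED: update C32
        else:
            table3[tid] = 1 if n > 1 else 3  # WHEN NOT MATCHED: insert
    return table3

# deleted has 4 rows, 2 for TId 6 and 2 for TId 8; Table3 already has TId 6
print(merge_deleted({6: 3}, [6, 6, 8, 8]))  # {6: 1, 8: 1}
```

The one pass over grouped counts is what makes the decision per key rather than once for the whole statement.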
{ "language": "en", "url": "https://stackoverflow.com/questions/69113357", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: android - how to find incompatible apis with older versions I have built an app with api 23/22/21 sdk with minTargetVersion = 16. I use Android Studio. Sometimes during the development process, I saw Android giving hints that a newer API is used when minTarget is 16. I have fixed most of those, but maybe not all. Now I have written a lot of code and I am looking for a way to find all usages of newer APIs that are not fully compatible with older versions. I am looking for the same in layout files as well. Is there an easy way I can find out? The harder way is to look at every single line of code in the source files again.

A: In Android Studio you can click Analyze in the toolbar at the top > Inspect Code > Whole Project. After AS is finished, you will have a list of lint errors you can go through.
{ "language": "en", "url": "https://stackoverflow.com/questions/38404506", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Gulp clean task being called from other tasks

gulp.task('watch', gulp.series('clean', gulp.parallel('css')),
    gulp.watch(['./app/scss/**/*.scss'], gulp.series('clean', gulp.parallel('css')))
);

When using any other task in my gulpfile, for some reason gulp.watch is being called. Does anyone know what is going wrong with this task?

A: gulp.task('watch', gulp.series('clean', gulp.parallel('css')), () => {
    gulp.watch(['./app/scss/**/*.scss'], gulp.series('clean', gulp.parallel('css')))
});

It appears the issue is calling the gulp.watch function directly rather than from inside of a function: the direct call executes as soon as the gulpfile is loaded, not when the watch task runs. The above code fixes the issue.
{ "language": "en", "url": "https://stackoverflow.com/questions/50259721", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: TwinCAT 3 user control Is it possible to create a library with HMI user controls? https://infosys.beckhoff.com/english.php?content=../content/1033/te2000_tc3_hmi_engineering/18014401986701963.html&id= I have created a library for the PLC with function blocks and data structs. My user controls are matched with my plcLib, and it would be nice if I could use this set-up in every project.

A: Your question is a bit unclear; however, if you are asking about user access controls, whereby you can give certain users certain permissions, TwinCAT does have a built-in user management system. However, it relies on Beckhoff's HMI software products (either HMI or HMI Web). If you are using a different HMI solution, you would need to implement this functionality yourself.
{ "language": "en", "url": "https://stackoverflow.com/questions/54639316", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Optimizing pairwise subtraction I have to compute the pairwise difference of particles' velocities for around 1e6 objects, each with approx. 1e4 particles. Right now I am using itertools.combinations to loop over the particles, but just for one object my code already takes more than 30 minutes. I was wondering what else I can do to speed it up to a feasible speed, since parallelising doesn't seem to add much in Python. Is Cython the way to go? Here is my code just for one of the objects: def pairwisevel(hist,velj,velk, xj, xk): vlos = (velj - velk) if( (xj - xk) < 0.): vlos = - vlos hist.add_from_value(vlos) for i in itertools.combinations(np.arange(0,int(particles_per_group[0]),1),2): pairwisevel(hist, pvel[i[0]], pvel[i[1]],\ pcoords[i[0]], pcoords[i[1]]) A: I hope I understood your question. In this example I calculated the histogram of one particle object. But if you want to do this for all 1e6 groups (1e4*1e4*1e6=1e14 comparisons), this would still take a few days. In this example I used Numba to accomplish the task. Code import numpy as np import numba as nb import time #From Numba source #Copyright (c) 2012, Anaconda, Inc. #All rights reserved. @nb.njit(fastmath=True) def digitize(x, bins, right=False): # bins are monotonically-increasing n = len(bins) lo = 0 hi = n if right: if np.isnan(x): # Find the first nan (i.e.
the last from the end of bins, # since there shouldn't be many of them in practice) for i in range(n, 0, -1): if not np.isnan(bins[i - 1]): return i return 0 while hi > lo: mid = (lo + hi) >> 1 if bins[mid] < x: # mid is too low => narrow to upper bins lo = mid + 1 else: # mid is too high, or is a NaN => narrow to lower bins hi = mid else: if np.isnan(x): # NaNs end up in the last bin return n while hi > lo: mid = (lo + hi) >> 1 if bins[mid] <= x: # mid is too low => narrow to upper bins lo = mid + 1 else: # mid is too high, or is a NaN => narrow to lower bins hi = mid return lo #Variant_1 @nb.njit(fastmath=True,parallel=True) def bincount_comb_1(pvel,pcoords,bins): vlos_binned=np.zeros(bins.shape[0]+1,dtype=np.uint64) for i in nb.prange(pvel.shape[0]): for j in range(pvel.shape[0]): if( (pcoords[i] - pcoords[j]) < 0.): vlos = 0. else: vlos = (pvel[i] - pvel[j]) dig_vlos=digitize(vlos, bins, right=False) vlos_binned[dig_vlos]+=1 return vlos_binned #Variant_2 #Is this also working? @nb.njit(fastmath=True,parallel=True) def bincount_comb_2(pvel,pcoords,bins): vlos_binned=np.zeros(bins.shape[0]+1,dtype=np.uint64) for i in nb.prange(pvel.shape[0]): for j in range(pvel.shape[0]): #only particles which fulfill this condition are counted? if( (pcoords[i] - pcoords[j]) < 0.): vlos = (pvel[i] - pvel[j]) dig_vlos=digitize(vlos, bins, right=False) vlos_binned[dig_vlos]+=1 return vlos_binned #Variant_3 #Only counting once @nb.njit(fastmath=True,parallel=True) def bincount_comb_3(pvel,pcoords,bins): vlos_binned=np.zeros(bins.shape[0]+1,dtype=np.uint64) for i in nb.prange(pvel.shape[0]): for j in range(i,pvel.shape[0]): #only particles, where this condition is met are counted? 
if( (pcoords[i] - pcoords[j]) < 0.): vlos = (pvel[i] - pvel[j]) dig_vlos=digitize(vlos, bins, right=False) vlos_binned[dig_vlos]+=1 return vlos_binned #Create some data to test bins=np.arange(2,32) pvel=np.random.rand(10_000)*35 pcoords=np.random.rand(10_000)*35 #first call has compilation overhead, we don't measure this res_1=bincount_comb_1(pvel,pcoords,bins) res_2=bincount_comb_2(pvel,pcoords,bins) t1=time.time() res=bincount_comb_1(pvel,pcoords,bins) print(time.time()-t1) t1=time.time() res=bincount_comb_2(pvel,pcoords,bins) print(time.time()-t1) t1=time.time() res=bincount_comb_3(pvel,pcoords,bins) print(time.time()-t1) Performance #Variant_1: 0.5s 5.78d for 1e6 groups of points #Variant_2: 0.3s 3.25d for 1e6 groups of points #Variant_3: 0.22s 2.54d for 1e6 groups of points
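If Numba is not available, the pairwise binning itself needs nothing beyond the standard library. This is a minimal sketch of the same O(n²) loop with integer bins of width 1 (function and parameter names are hypothetical); it is far slower than the Numba variants, but useful for checking results on small inputs:

```python
from collections import Counter
from itertools import combinations

def pairwise_vlos_hist(pvel, pcoords, bin_width=1.0):
    """Histogram of line-of-sight velocity differences over all particle
    pairs, flipping the sign when the coordinate difference is negative."""
    hist = Counter()
    for j, k in combinations(range(len(pvel)), 2):
        vlos = pvel[j] - pvel[k]
        if pcoords[j] - pcoords[k] < 0.0:
            vlos = -vlos
        hist[int(vlos // bin_width)] += 1  # integer bin index
    return hist

print(pairwise_vlos_hist([1.0, 2.0, 4.0], [0.0, 1.0, 2.0]))
```

Because it visits each unordered pair exactly once, the total count over all bins is n*(n-1)/2, the same pair count the itertools.combinations loop in the question produces.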
{ "language": "en", "url": "https://stackoverflow.com/questions/50787624", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Matlab, finding common values in two arrays I want to find the common values in two arrays (for example, if A=[1 2 3 4 5 6] and B=[9 8 7 6 3 1 2], the result is ans=[1 2 3 6]). Is there any method without using a loop? Thanks

A: Use intersect(A,B) to get the answer. Another option is to use ismember, for example A(ismember(A,B)).
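The same two idioms exist in most languages. For comparison, here is a rough Python sketch of intersect (sorted unique common values) versus the ismember approach (which keeps A's order); the function names are just illustrative:

```python
def intersect(a, b):
    """Sorted unique common values, like MATLAB's intersect(A,B)."""
    return sorted(set(a) & set(b))

def members(a, b):
    """Elements of a that appear in b, in a's order, like A(ismember(A,B))."""
    bs = set(b)
    return [x for x in a if x in bs]

A = [1, 2, 3, 4, 5, 6]
B = [9, 8, 7, 6, 3, 1, 2]
print(intersect(A, B))  # [1, 2, 3, 6]
print(members(A, B))    # [1, 2, 3, 6]
```

The two differ when A is unsorted or contains duplicates: intersect deduplicates and sorts, while the membership filter preserves A exactly.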
{ "language": "en", "url": "https://stackoverflow.com/questions/16494045", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Possible to save and list queries in MySQL? I have a database and have saved the code for a number of queries in a separate document. Is there any way of saving them as part of the database, as some kind of list, so that they can be clicked on to run rather than having to keep pasting the code in? I realise that there are various reporting systems available to purchase online, but this is a small database and a one-off, and they look complicated and not worth the trouble. Is there any way of streamlining this?

A: If I understand correctly, you can write those SQL queries to a script file (or a bunch of script files) and run them directly against MySQL without copy/paste:

mysql -u user -ppass < script.sql
{ "language": "en", "url": "https://stackoverflow.com/questions/28010537", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Generating repeated iterations of data across two data frames in R I'm trying to calculate distance between two sets of phonetic variables in R using an established matrix of phonetic measurements. For example, I want to take the measurements for /p/ and the measurements for /b/, and then compare the distance between these two phonetic units. I can do this using the following matrix of distinctive phonetic features: library(tibble) distinctive.feature.matrix <- tribble(~Symbol, ~Sonorant, ~Consonantal, ~Voice, ~Nasal, ~Degree, ~Labial, ~Palatal, ~Pharyngeal, ~Round, ~Tongue, ~Radical, "p", -1, 1, -1, -1, 1, 1, 0, -1, 1, 0, 0, "b", -1, 1, 0, -1, 1, 1, 0, -1, 1, 0, 0, "t", -1, 1, -1, -1, 1, -1, 1, -1, -1, 1, 0, "d", -1, 1, 0, -1, 1, -1, 1, -1, -1, 1, 0, "k", -1, 1, -1, -1, 1, -1, -1, -1, -1, -1, 0, "g", -1, 1, 0, -1, 1, -1, -1, -1, -1, -1, 0, "f", -0.5, 1, -1, -1, 0, -1, 1, -1, 1, 0, 0, "v", -0.5, 1, 0, -1, 0, -1, 1, -1, 1, 0, 0, "θ", -0.5, 1, -1, -1, 0, -1, 1, -1, -1, 0, 0, "ð", -0.5, 1, 0, -1, 0, -1, 1, -1, -1, 0, 0, "s", -0.5, 1, -1, -1, 0, -1, 1, -1, -1, 1, 0, "z", -0.5, 1, 0, -1, 0, -1, 1, -1, -1, 1, 0, "h", -0.5, 1, 0, -1, 0, -1, -1, 1, -1, -1, -1, "ʃ", -0.5, 1, -1, -1, 0, -1, 0, -1, -1, 0, 0, "ʒ", -0.5, 1, 0, -1, 0, -1, 0, -1, -1, 0, 0, "tʃ", -0.8, 1, -1, -1, 1, -1, 0, -1, -1, 0, 0, "dʒ", -0.8, 1, 0, -1, 1, -1, 0, -1, -1, 0, 0, "m", 0, 0, 1, 1, 1, 1, 0, -1, 1, 0, 0, "n", 0, 0, 1, 1, 1, -1, 1, -1, -1, 1, 0, "ŋ", 0, 0, 1, 1, 1, -1, -1, -1, -1, -1, 0, "r", 0.5, 0, 1, 0, -1, -1, -1, 1, 1, -1, -1, "l", 0.5, 0, 1, 0, -1, -1, 1, -1, -1, 1, 0, "w", 0.8, 0, 1, 0, 0, 1, -1, -1, 1, -1, 0, "j", 0.8, 0, 1, 0, 0, -1, 0, -1, -1, 0, 1) I have another set of data showing child productions of different words, and I want to calculate the distance between the child's production and that of the target, for each consonant in the target word. 
The production data looks a bit like this: library(tibble) production.data <- tribble(~Subject, ~Age, ~Target, ~C1_target, ~C1_actual, "subj1", "001126", "teddy", "t", "d", "subj1", "001126", "teddy", "t", "t", "subj1", "001126", "daddy", "d", "d", "subj1", "001126", "daddy", "d", "d", "subj1", "001126", "daddy", "d", "t", "subj1", "001126", "baby", "b", "p", "subj1", "001126", "Tigger", "t", "d", "subj1", "001126", "doggy", "d", "d", "subj1", "001126", "milk", "m", "m") So, for each instance of production.data$C1_target I want to take the values of the corresponding consonant in distinctive.feature.matrix and compare them with the values of productiondata$C1_actual. Once I have these values, I will subtract the C1_actual value from the C1_target value, across each of the 11 distinctive features. By way of example, for the first instance of 'teddy', I want to compare /t/ with /d/, which means subtracting -1 from -1 (distinctive.feature.matrix$Sonorant), 1 from 1 (distinctive.feature.matrix$Consonantal), 0 from -1 (distinctive.feature.matrix$Voice), etc. I'll then do some further calculations on these, but I'll leave it as this for now as it's already quite complicated. I think I can use for loops to do this but I haven't used this function before and no amount of searching brings up a usable example. A: The dplyr package provides join functions that can help. For every row of the production.data, you can bring in the corresponding features for each C1_target, C1_actual to create a large tibble: library(dplyr) x <- production.data %>% inner_join(distinctive.feature.matrix, by = c("C1_target"="Symbol")) %>% inner_join(distinctive.feature.matrix, by = c("C1_actual"="Symbol")) Note that there are two calls to inner_join: one to get features corresponding to C1_target, the second for features for C1_actual The new tibble has column names such as Sonorant.x and Sonorant.y, the first corresponding to C1_target and the second corresponding to C1_actual. 
You can create a list of feature names by taking the column names, excluding Symbol: features <- colnames(distinctive.feature.matrix)[-1] Now you can do your 'for-loop' to calculate the difference between the x and y values, and then combine the lot into a new data frame: diffs <- do.call(cbind, sapply(features, function(f) x[paste0(f, '.x')] - x[paste0(f, '.y')])) colnames(diffs) <- features The paste0 function concatenates each feature name with .x or .y; the sapply is the equivalent of the for-loop; and the cbind bashes the result of each computation into a new table. You will end up with a data frame whose column names are the features, and as many rows as your production.data had. To be fair, the above code is an ugly confection of dplyr-like syntax and base R...
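The join-then-subtract pattern is not tied to dplyr. As a language-neutral illustration, here is a small pure-Python sketch of the same computation on a toy two-symbol slice of the feature matrix (values copied from the tribble above, names hypothetical):

```python
# toy slice of the distinctive-feature matrix: symbol -> feature values
features = ["Sonorant", "Consonantal", "Voice"]
matrix = {
    "t": {"Sonorant": -1, "Consonantal": 1, "Voice": -1},
    "d": {"Sonorant": -1, "Consonantal": 1, "Voice": 0},
}

productions = [("t", "d"), ("t", "t")]  # (C1_target, C1_actual) rows

def feature_diffs(rows):
    """For each production row, look up target and actual in the feature
    matrix and subtract actual from target, feature by feature."""
    out = []
    for target, actual in rows:
        t, a = matrix[target], matrix[actual]
        out.append({f: t[f] - a[f] for f in features})
    return out

print(feature_diffs(productions))
```

Each dictionary lookup plays the role of one inner_join, and the comprehension over features mirrors the sapply over `.x`/`.y` column pairs.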
{ "language": "en", "url": "https://stackoverflow.com/questions/57729231", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: System.ArgumentOutOfRangeException' occurred in mscorlib.dll but was not handled in user code Index was out of range Getting: "An exception of type 'System.ArgumentOutOfRangeException' occurred in mscorlib.dll but was not handled in user code. Index was out of range. Must be non-negative and less than the size of the collection." in

if (resD.Tables.Contains(ReDo.TableName))
{
    _result = resD.Tables[ReDo.TableName];
}

When I try to execute this code repeatedly, I am getting the above exception, even though resD.Tables contains the table name specified in the condition. Any help much appreciated.
{ "language": "en", "url": "https://stackoverflow.com/questions/50917930", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Android SDK problem setting ringtone I am trying to add and set the default ringtone on the emulator/phone. The ringtone has been downloaded and is stored in the application folder /dada/dada/com.xxx/ringtones. the ringtones are ogg files. I use the following code to add and set the ringtone: public void setRingtone() { aajoAsset asset = null; asset = mXXXX.getAssetManager().getCurrentRingtoneAsset(); if (asset != null && asset.isSaved()/* && !asset.getName().equals(mLastAssetName)*/) { String filepath = asset.getDirectoryPath() + asset.getFilename(); File ringtoneFile = new File(filepath); if (LOG) { Log.i(TAG, "Sending Intent : " + Intent.ACTION_MEDIA_SCANNER_SCAN_FILE); } mContext.sendBroadcast(new Intent(Intent.ACTION_MEDIA_SCANNER_SCAN_FILE, Uri.parse("file://" + asset.getDirectoryPath() + asset.getFilename()))); ContentValues content = new ContentValues(); content.put(MediaStore.MediaColumns.DATA,ringtoneFile.getAbsolutePath()); content.put(MediaStore.MediaColumns.TITLE, "1234"); content.put(MediaStore.MediaColumns.SIZE, asset.getSize()); content.put(MediaStore.MediaColumns.MIME_TYPE, "audio/ogg"); content.put(MediaStore.Audio.Media.ARTIST, "1234"); content.put(MediaStore.Audio.Media.DURATION, 4800); content.put(MediaStore.Audio.Media.IS_RINGTONE, true); content.put(MediaStore.Audio.Media.IS_NOTIFICATION, false); content.put(MediaStore.Audio.Media.IS_ALARM, false); content.put(MediaStore.Audio.Media.IS_MUSIC, false); if (LOG) { Log.i(TAG, "the absolute path of the file is : " + ringtoneFile.getAbsolutePath()); } Uri uri = MediaStore.Audio.Media.getContentUriForPath( ringtoneFile.getAbsolutePath()); Uri newUri = mContext.getContentResolver().insert(uri, content); if (LOG) { Log.i(TAG,"the ringtone uri is : " + newUri); } RingtoneManager.setActualDefaultRingtoneUri(mContext, RingtoneManager.TYPE_RINGTONE, newUri); mLastAssetName = asset.getName(); } The code executes fine and the ringtone shows in the ringtone list but when I click on it to test it or when I 
simulate an incoming call I get the following errors in LogCat:

DEBUG/MediaPlayer(1230): Couldn't open file on client side, trying server side
ERROR/MediaPlayerService(33): Couldn't open fd for content://media/internal/audio/media/1
ERROR/MediaPlayer(1230): Unable to to create media player
ERROR/RingtoneManager(1230): Failed to open ringtone content://media/internal/audio/media/1

I have searched for solutions in many forums and cannot find any. Any help is most welcome. Thanks
{ "language": "en", "url": "https://stackoverflow.com/questions/6710286", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How to turn a column of numbers into a list of strings? I don't know why I can't figure this out, but I have a column of numbers that I would like to turn into a list of strings. I should have mentioned this when I initially posted: this isn't a DataFrame, nor did it come from a file; it is the result of some code. Sorry, I wasn't trying to waste anybody's time, I just didn't want to add a bunch of clutter. This is exactly how it prints out. Here is my column of numbers.

3,1,3
3,1,3
3,1,3
3,3,3
3,1,1

And I would like them to look like this.

['3,1,3', '3,1,3', '3,1,3', '3,3,3', '3,1,1']

I'm trying to find a way that is not dependent on how many numbers are in each row or how many sets of numbers are in the column. Thanks, really appreciate it.

A: Assume you start with a DataFrame:

df = pd.DataFrame([[3, 1, 3], [3, 1, 3], [3, 1, 3], [3, 3, 3], [3, 1, 1]])
df.astype(str).apply(lambda x: ','.join(x.values), axis=1).values.tolist()

Looks like:

['3,1,3', '3,1,3', '3,1,3', '3,3,3', '3,1,1']

A: def foo():
    l = []
    with open("file.asd", "r") as f:
        for line in f:
            l.append(line)
    return l

A: To turn your dataframe into strings, use the astype function:

df = pd.DataFrame([[3, 1, 3], [3, 1, 3], [3, 1, 3], [3, 3, 3], [3, 1, 1]])
df = df.astype('str')

Then manipulating your columns becomes easy; you can for instance create a new column:

In [29]: df['temp'] = df[0] + ',' + df[1] + ',' + df[2]
df
Out[29]:
   0  1  2   temp
0  3  1  3  3,1,3
1  3  1  3  3,1,3
2  3  1  3  3,1,3
3  3  3  3  3,3,3
4  3  1  1  3,1,1

And then compact it into a list:

In [30]: list(df['temp'])
Out[30]: ['3,1,3', '3,1,3', '3,1,3', '3,3,3', '3,1,1']

A: # Done in Jupyter notebook
# add three quotes on each side of your column.
# The advantage to dataframe is the minimal number of operations for
# reformatting your column of numbers or column of text strings into
# a single string
a = """3,1,3
3,1,3
3,1,3
3,3,3
3,1,1"""
b = f'"{a}"'
print('String created with triple quotes:')
print(b)
c = a.split('\n')
print("Use split() function on the string. Split on newline character:")
print(c)
print("Use splitlines() function on the string:")
print(a.splitlines())
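If the numbers are still held as a list of rows (rather than printed text or a DataFrame), a plain comprehension handles any row length, with no pandas required. A minimal sketch, with the example data hard-coded:

```python
data = [[3, 1, 3], [3, 1, 3], [3, 1, 3], [3, 3, 3], [3, 1, 1]]

# join each row's numbers with commas, whatever the row length
strings = [",".join(map(str, row)) for row in data]
print(strings)  # ['3,1,3', '3,1,3', '3,1,3', '3,3,3', '3,1,1']
```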
{ "language": "en", "url": "https://stackoverflow.com/questions/37474207", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: How to compare text strings in a table column matlab If I have an N-by-1 table column, how is it possible to detect whether any of the rows are identical?

A: If you simply want to determine if there are duplicate rows, you can use unique to do this. You can check whether the number of unique values in the column is less than the total number of elements (numel) in the same column:

tf = numel(unique(t.Column)) < numel(t.Column);  % true if duplicates exist

If you want to determine which rows are duplicates, you can again use unique but use the third output, and then use accumarray to count the number of occurrences of each value and select those values which appear more than once.

[vals, ~, inds] = unique(t.Column, 'stable');
repeats = vals(accumarray(inds, 1) > 1);
% And to print them out:
fprintf('Duplicate value: %s\n', repeats{:})

If you want a logical vector of true/false for where the duplicates exist, you can do something similar to that above:

[vals, ~, inds] = unique(t.Column, 'stable');
result = ismember(inds, find(accumarray(inds, 1) > 1));

Or

[vals, ~, inds] = unique(t.Column, 'stable');
result = sum(bsxfun(@eq, inds, inds.'), 2) > 1;

Update

You can combine the two approaches above to accomplish what you want.

[vals, ~, inds] = unique(t.Column, 'stable');
repeats = vals(accumarray(inds, 1) > 1);
hasDupes = numel(repeats) > 0;
if hasDupes
    for k = 1:numel(repeats)
        fprintf('Duplicate value: %s\n', repeats{k});
        fprintf('    Found at: ');
        fprintf('%d ', find(strcmp(repeats{k}, t.Column)));
        fprintf('\n');
    end
end
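The unique/accumarray pattern corresponds to a hash-based count in most languages. For comparison, here is a small Python sketch of the same duplicate report (function name hypothetical):

```python
from collections import Counter

def duplicate_report(column):
    """Return (has_duplicates, {value: positions}) for repeated values,
    mirroring the unique/accumarray approach: count each value once,
    then keep those whose count exceeds 1."""
    counts = Counter(column)
    repeats = {v: [i for i, x in enumerate(column) if x == v]
               for v, n in counts.items() if n > 1}
    return bool(repeats), repeats

has_dupes, repeats = duplicate_report(["a", "b", "a", "c"])
print(has_dupes, repeats)  # True {'a': [0, 2]}
```

Here `counts` plays the role of accumarray(inds, 1), and the position lists correspond to the find(strcmp(...)) step in the Update section.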
{ "language": "en", "url": "https://stackoverflow.com/questions/43370507", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Find newest message from each sender? I've got a Messages menu in my app, and I'm completely at a loss on how to show the newest message received/sent from EACH user, without loading ALL messages received/sent. I'm using Parse as my backend, so I've gotta believe there's some clever subquery, or NSPredicate, that I could use for this. The current solution I have in mind is another Boolean attribute for my messages, where all but the most recent get augmented, so I can know what to load with a simple predicate. This seems sloppy though, save me from myself!

EDIT: Right now I'm getting the messages where the user is the sender or receiver (depending on a segmented control), and then displaying each user who was sent to/received from with a sample of their newest message. Right now I'm using internal logic to do this, and still in the process of figuring it out (the logic is a little backwards right now...)

let query = PFQuery(className: "Messages")
if recievedOrSent {
    query.whereKey("other", equalTo: userName)
    print("called OTHER query")
} else {
    query.whereKey("sender", equalTo: userName)
    print("called SENDER query")
}
query.addDescendingOrder("createdAt")
query.findObjectsInBackgroundWithBlock { (objects, error) -> Void in
    if error == nil {
        if thisSender == userName || self.recievedUsers.contains(thisSender) && self.lesserDate(thisTime, rhs: self.recievedUsersTimes[self.recievedUsers.indexOf(thisSender)!]) {} else {
            self.recievedUsers.append(thisSender)
            self.recievedUsersMsg.append(thisMessage)
            self.recievedUsersTimes.append(thisTime)
            self.recievedUsersMsgRead.append(thisRead)
        }
        if thisOther == userName || self.sentUsers.contains(thisOther) && self.lesserDate(thisTime, rhs: self.sentUsersTimes[self.sentUsers.indexOf(thisOther)!]) {} else {
            self.sentUsers.append(thisOther)
            self.sentUsersMsg.append(thisMessage)
            self.sentUsersTimes.append(thisTime)
            self.sentUsersMsgRead.append(thisRead)
        }
    }

A: If you want to get only the most recent message from each user that you
are having a conversation with, create a new class called RecentMessage and update it each time using an afterSave cloud function on the Message class. In the afterSave hook, maintain a pointer in the RecentMessage class to the latest Message in the conversation for each user. Then all you have to do is query for all of the current user's RecentMessage objects and use includeKey on the Message pointer. This lets you abstract away more logic from the client side and streamline your queries where performance really counts :)
{ "language": "en", "url": "https://stackoverflow.com/questions/33223661", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Get the alternate rows in SQL I have a table PERSON with a single column GENDER, and the values in 6 rows are like: GENDER M M M F F F The output should be like GENDER M F M F M F What should be the SQL query to get such output? I believe ROW_NUMBER() must be used. A: SELECT GENDER, R = ROW_NUMBER() OVER (PARTITION BY GENDER ORDER BY GENDER) FROM PERSON ORDER BY R, GENDER DESC
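The answer's trick can be simulated outside SQL to see why it interleaves: number each row within its own gender group (what ROW_NUMBER with PARTITION BY does), then sort by that per-group number first and by gender descending second. A sketch in JavaScript (illustrative only, not the SQL itself):

```javascript
// Emulate ROW_NUMBER() OVER (PARTITION BY gender): number each row
// within its gender group, then sort by that per-group position so
// the groups interleave (M F M F ...).
function interleaveByGender(genders) {
  const counts = {};
  return genders
    .map(g => ({ g, r: (counts[g] = (counts[g] || 0) + 1) }))
    .sort((a, b) => a.r - b.r || b.g.localeCompare(a.g)) // R ASC, GENDER DESC
    .map(x => x.g);
}
```

With uneven group sizes the extra rows of the larger group simply trail at the end, which is also how the SQL version behaves.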
{ "language": "en", "url": "https://stackoverflow.com/questions/36738548", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Unity Photon PUN - unable to get IPunObservable callback to operate I am trying to understand the Photon Pun architecture. I cannot get the callback OnPhotonSerializeView() in 'IPunObservable' to be called. What I have in my scene is * *a GameObject called DataHandler at the top of the hierarchy *DataHandler has a PhotonView and PhotonTransformView attached *DataHandler also has a script attached called DataHandler *The DataHandler script component has been dragged to the first observable in the PhotonView component (so it shows 'DataHandler (DataHandler)' as the observed component). *The DataHandler script implements IPunObservable, and the one method in that is public class DataHandler : MonoBehaviourPunCallbacks, IPunObservable { .... public void OnPhotonSerializeView(PhotonStream stream, PhotonMessageInfo info) { Debug.Log("OnPhotonSerializeView(): " + (stream.IsWriting ? "writing" : "reading")); } .... } Nothing shows up in the log when it is run. However, when built and run as a Windows EXE, connected with a room in place, it comes up with NullReferenceExceptions.
The log shows: NullReferenceException: Object reference not set to an instance of an object at Photon.Pun.PhotonView.SerializeComponent (UnityEngine.Component component, Photon.Pun.PhotonStream stream, Photon.Pun.PhotonMessageInfo info) [0x0001e] in C:\Users\nick\Documents\Unity3D\PhotonPunTest\Assets\Photon\PhotonUnityNetworking\Code\PhotonView.cs:368 at Photon.Pun.PhotonView.SerializeView (Photon.Pun.PhotonStream stream, Photon.Pun.PhotonMessageInfo info) [0x00024] in C:\Users\nick\Documents\Unity3D\PhotonPunTest\Assets\Photon\PhotonUnityNetworking\Code\PhotonView.cs:330 at Photon.Pun.PhotonNetwork.OnSerializeWrite (Photon.Pun.PhotonView view) [0x00089] in C:\Users\nick\Documents\Unity3D\PhotonPunTest\Assets\Photon\PhotonUnityNetworking\Code\PhotonNetworkPart.cs:1593 at Photon.Pun.PhotonNetwork.RunViewUpdate () [0x000b3] in C:\Users\nick\Documents\Unity3D\PhotonPunTest\Assets\Photon\PhotonUnityNetworking\Code\PhotonNetworkPart.cs:1522 at Photon.Pun.PhotonHandler.LateUpdate () [0x00042] in C:\Users\nick\Documents\Unity3D\PhotonPunTest\Assets\Photon\PhotonUnityNetworking\Code\PhotonHandler.cs:155 (Filename: C:/Users/nick/Documents/Unity3D/PhotonPunTest/Assets/Photon/PhotonUnityNetworking/Code/PhotonView.cs Line: 368) and the code around line 368 in PhotonView.cs is protected internal void SerializeComponent(Component component, PhotonStream stream, PhotonMessageInfo info) { IPunObservable observable = component as IPunObservable; if (observable != null) { observable.OnPhotonSerializeView(stream, info); } else { Debug.LogError("Observed scripts have to implement IPunObservable. "+ component + " does not. It is Type: " + component.GetType(), component.gameObject); } } (The Debug.LogError line is 368). Don't see what is wrong, though I assume component is probably null. The script is implementing IPunObservable. Can anyone help? A: The problem here is the photon logic about local player and master client. The rules about who can send are bound up in this. 
In the end I found it much easier to go below the higher-level 'helper' components (PhotonView etc.) and send everything with the lower-level RaiseEvent() / OnEvent() mechanism. See answer to previous question.
{ "language": "en", "url": "https://stackoverflow.com/questions/62551763", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Django: Cannot assign "'2'": "Issues.reporter" must be a "User" instance I am working on a simple "issue tracking" web application as a way to learn more about Django. I am using Django 4.1.4 and Python 3.9.2. I have the following classes in models.py (which may look familiar to people familiar with JIRA): * *Components *Issues *IssueStates *IssueTypes *Priorities *Projects *Releases *Sprints Originally I also had a Users class in models.py but now am trying to switch to using the Django User model. (The User class no longer exists in my models.py) I have been studying the following pages to learn how best to migrate to using the Django Users model. * *Django Best Practices: Referencing the User Model *Referencing the User Model All of my List/Detail/Create/Delete view classes worked fine with all of the above models until I started working on using the Django User class. -- models.py -- from django.conf import settings class Issues(models.Model): id = models.BigAutoField(primary_key=True) project = models.ForeignKey( to=Projects, on_delete=models.RESTRICT, blank=True, null=True ) summary = models.CharField(max_length=80, blank=False, null=False, default="") issue_type = models.ForeignKey( to=IssueTypes, on_delete=models.RESTRICT, blank=True, null=True ) issue_state = models.ForeignKey( to=IssueStates, on_delete=models.RESTRICT, blank=True, null=True, default="New" ) # https://learndjango.com/tutorials/django-best-practices-referencing-user-model # https://docs.djangoproject.com/en/4.0/topics/auth/customizing/#referencing-the-user-model reporter = models.ForeignKey( settings.AUTH_USER_MODEL, on_delete=models.RESTRICT, related_name="reporter_id", ) priority = models.ForeignKey( to=Priorities, on_delete=models.RESTRICT, blank=True, null=True ) component = models.ForeignKey( to=Components, on_delete=models.RESTRICT, blank=True, null=True ) description = models.TextField(blank=True, null=True) planned_release = models.ForeignKey( to=Releases,
on_delete=models.RESTRICT, blank=True, null=True ) # https://learndjango.com/tutorials/django-best-practices-referencing-user-model # https://docs.djangoproject.com/en/4.0/topics/auth/customizing/#referencing-the-user-model assignee = models.ForeignKey( settings.AUTH_USER_MODEL, on_delete=models.RESTRICT, related_name="assignee_id", ) slug = models.ForeignKey( to="IssueSlugs", on_delete=models.RESTRICT, blank=True, null=True ) sprint = models.ForeignKey( to=Sprints, on_delete=models.RESTRICT, blank=True, null=True ) def save(self, *args, **kwargs): if not self.slug: # generate slug for this new Issue slug = IssueSlugs() slug.project_id = self.project.id slug.save() self.slug = slug super().save(*args, **kwargs) def __str__(self): return self.slug.__str__() + " - " + self.summary.__str__() class Meta: managed = True db_table = "issues" class IssueSlugs(models.Model): """ This table is used to generate unique identifiers for records in the Issues table. My goal was to model the default behavior found in JIRA where each Issue has a unique identifier that is a combination of: 1) the project abbreviation 2) a sequential number for the project So here when creating a new Issue record, if it is the first record for a particular project, the sequential number starts at 100, otherwise it is the next sequential number for the project. 
""" id = models.BigAutoField(primary_key=True) project = models.ForeignKey( to=Projects, on_delete=models.RESTRICT, blank=True, null=True ) slug_id = models.IntegerField(default=100) slug = models.CharField( max_length=80, blank=False, null=False, unique=True, ) def __str__(self): return self.slug.__str__() def save(self, *args, **kwargs): if not self.slug: result = IssueSlugs.objects.filter( project_id__exact=self.project.id ).aggregate(Max("slug_id")) # The first issue being created for the project # {'slug_id__max': None} if not result["slug_id__max"]: self.slug_id = 100 self.slug = self.project.abbreviation + "-" + str(100) else: logging.debug(result) next_slug_id = result["slug_id__max"] + 1 self.slug_id = next_slug_id self.slug = self.project.abbreviation + "-" + str(next_slug_id) super().save(*args, **kwargs) class Meta: managed = True db_table = "issue_slugs" -- issues.py -- class CreateUpdateIssueForm(forms.ModelForm): def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) # save for IssueCreateView.form_valid() self.kwargs = kwargs font_size = "12pt" for field_name in self.fields: if field_name in ("summary", "description"): self.fields[field_name].widget.attrs.update( { "size": self.fields[field_name].max_length, "style": "font-size: {0}".format(font_size), } ) elif field_name in ("reporter", "assignee"): # https://docs.djangoproject.com/en/4.0/topics/auth/customizing/#referencing-the-user-model User = get_user_model() choices = list() choices.append(("", "")) for element in [ { "id": getattr(row, "id"), "display": row.get_full_name(), } for row in User.objects.exclude(is_superuser__exact="t") ]: choices.append((element["id"], element["display"])) self.fields[field_name] = forms.fields.ChoiceField( choices=choices, # I had to specify required=False here to eliminate a very # strange error: # An invalid form control with name='assignee' is not focusable. required=False, ) else: # all the <select> fields ... 
self.fields[field_name].widget.attrs.update( { "class": ".my-select", } ) class Meta: model = Issues fields = [ "project", "summary", "component", "description", "issue_type", "issue_state", "reporter", "priority", "planned_release", "assignee", "sprint", ] class IssueCreateView(LoginRequiredMixin, PermissionRequiredMixin, generic.CreateView): """ A view that displays a form for creating an object, redisplaying the form with validation errors (if there are any) and saving the object. https://docs.djangoproject.com/en/4.1/ref/class-based-views/generic-editing/#createview """ model = Issues permission_required = "ui.add_{0}".format(model.__name__.lower()) template_name = "ui/issues/issue_create.html" success_url = "/ui/issue_list" form_class = CreateUpdateIssueForm def form_valid(self, form): User = get_user_model() if "reporter" in self.kwargs: form.instance.reporter = User.objects.get(id__exact=self.kwargs["reporter"]) if not form.is_valid(): messages.add_message( self.request, messages.ERROR, "ERROR: '{0}'.".format(form.errors) ) return super().form_valid(form) action = self.request.POST["action"] if action == "Cancel": # https://docs.djangoproject.com/en/4.1/topics/http/shortcuts/#django.shortcuts.redirect return redirect("/ui/issue_list") return super().form_valid(form) def get_initial(self): """ When creating a new Issue I'm setting default values for a few fields on the Create Issue page. 
""" # https://docs.djangoproject.com/en/4.0/topics/auth/customizing/#referencing-the-user-model User = get_user_model() from ui.models import IssueStates, Priorities, IssueTypes issue_state = IssueStates.objects.get(state__exact="New") priority = Priorities.objects.get(priority__exact="Medium") issue_type = IssueTypes.objects.get(issue_type__exact="Task") reporter = User.objects.get(username__exact=self.request.user) return { "issue_state": issue_state.id, "priority": priority.id, "issue_type": issue_type.id, "reporter": reporter.id, } When I try to create a new Issue, the "new Issue" form is displayed normally, but when I save the form I get a Django error with a stack trace I don't understand because it does not have a reference to any of my code, so I have no idea where to start debugging. 16:22:48 ERROR Internal Server Error: /ui/issue/create Traceback (most recent call last): File "/Users/a0r470/git/issue_tracker/env/lib/python3.9/site-packages/django/core/handlers/exception.py", line 55, in inner response = get_response(request) File "/Users/a0r470/git/issue_tracker/env/lib/python3.9/site-packages/django/core/handlers/base.py", line 197, in _get_response response = wrapped_callback(request, *callback_args, **callback_kwargs) File "/Users/a0r470/git/issue_tracker/env/lib/python3.9/site-packages/django/views/generic/base.py", line 103, in view return self.dispatch(request, *args, **kwargs) File "/Users/a0r470/git/issue_tracker/env/lib/python3.9/site-packages/django/contrib/auth/mixins.py", line 73, in dispatch return super().dispatch(request, *args, **kwargs) File "/Users/a0r470/git/issue_tracker/env/lib/python3.9/site-packages/django/contrib/auth/mixins.py", line 109, in dispatch return super().dispatch(request, *args, **kwargs) File "/Users/a0r470/git/issue_tracker/env/lib/python3.9/site-packages/django/views/generic/base.py", line 142, in dispatch return handler(request, *args, **kwargs) File 
"/Users/a0r470/git/issue_tracker/env/lib/python3.9/site-packages/django/views/generic/edit.py", line 184, in post return super().post(request, *args, **kwargs) File "/Users/a0r470/git/issue_tracker/env/lib/python3.9/site-packages/django/views/generic/edit.py", line 152, in post if form.is_valid(): File "/Users/a0r470/git/issue_tracker/env/lib/python3.9/site-packages/django/forms/forms.py", line 205, in is_valid return self.is_bound and not self.errors File "/Users/a0r470/git/issue_tracker/env/lib/python3.9/site-packages/django/forms/forms.py", line 200, in errors self.full_clean() File "/Users/a0r470/git/issue_tracker/env/lib/python3.9/site-packages/django/forms/forms.py", line 439, in full_clean self._post_clean() File "/Users/a0r470/git/issue_tracker/env/lib/python3.9/site-packages/django/forms/models.py", line 485, in _post_clean self.instance = construct_instance( File "/Users/a0r470/git/issue_tracker/env/lib/python3.9/site-packages/django/forms/models.py", line 82, in construct_instance f.save_form_data(instance, cleaned_data[f.name]) File "/Users/a0r470/git/issue_tracker/env/lib/python3.9/site-packages/django/db/models/fields/__init__.py", line 1006, in save_form_data setattr(instance, self.name, data) File "/Users/a0r470/git/issue_tracker/env/lib/python3.9/site-packages/django/db/models/fields/related_descriptors.py", line 237, in __set__ raise ValueError( ValueError: Cannot assign "'2'": "Issues.reporter" must be a "User" instance. [27/Dec/2022 16:22:48] "POST /ui/issue/create HTTP/1.1" 500 120153 Generally I understand that under the covers, Django creates two fields in the Issues model for me: * *reporter *reporter_id and I understand that the reporter field needs to contain a User instance instead of an integer (2). BUT I don't know WHERE in my code I should do this assignment. I have tried overriding a few methods in my CreateUpdateIssueForm and IssueCreateView as a way to try to find where my code is causing problems - no luck so far. 
In my IssueCreateView(generic.CreateView) class, I added the following to my form_valid() method, intending to retrieve the correct User record and assign it to form.instance.reporter, but the code appears to be failing before it gets to my form_valid() method. def form_valid(self, form): User = get_user_model() if "reporter" in self.kwargs: form.instance.reporter = User.objects.get(id__exact=self.kwargs["reporter"]) Clearly I do not fully understand the flow of control in these Generic View classes. Thank you for any help you can provide! A: I discovered that trying to migrate my own Users model to a CustomUser model is a non-trivial undertaking! I learned this from Will Vincent and his excellent post on this very topic! Django Best Practices: Custom User Model The Django documentation also states that migrating to the Django User in the midst of an existing project is non-trivial. Changing to a custom user model mid-project So, to solve my problem I started with a new empty project with only the CustomUser in my models.py as Mr. Vincent described, which worked perfectly. After that, I setup the rest of my model classes in models.py, referencing the CustomUser model as needed. assignee = models.ForeignKey( to=CustomUser, on_delete=models.RESTRICT, blank=True, null=True, ) And copied the rest of my template files, view source files, static files, etc. from my original project into this new project. My codebase is now working as expected using the Django User model. Huge Thanks to Mr. Will Vincent's excellent article on this issue!
{ "language": "en", "url": "https://stackoverflow.com/questions/74934892", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Where to draw the line - is it possible to love LINQ too much? I recently found LINQ and love it. I find lots of occasions where use of it is so much more expressive than the longhand version but a colleague passed a comment about me abusing this technology which now has me second-guessing myself. It is my perspective that if a technology works efficiently and the code is elegant then why not use it? Is that wrong? I could spend extra time writing out processes "longhand" and while the resulting code may be a few ms faster, it's 2-3 times more code and therefore 2-3 times more chance that there may be bugs. Is my view wrong? Should I be writing my code out longhand rather than using LINQ? Isn't this what LINQ was designed for? Edit: I was speaking about LINQ to objects, I don't use LINQ to XML so much and I have used LINQ to SQL but I'm not so enamoured with those flavours as LINQ to objects. A: Yes you can love LINQ too much - Single Statement LINQ RayTracer Where do you draw the line? I'd say use LINQ as much as it makes the code simpler and easier to read. The moment the LINQ version becomes more difficult to understand than the non-LINQ version it's time to swap, and vice versa. EDIT: This mainly applies to LINQ-To-Objects as the other LINQ flavours have their own benefits. A: It's not possible to love Linq to Objects too much, it's a freaking awesome technology! But seriously, anything that makes your code simple to read, simple to maintain and does the job it was intended for, then you would be silly not to use it as much as you can. A: LINQ's supposed to be used to make filtering, sorting, aggregating and manipulating data from various sources as intuitive and expressive as possible. I'd say, use it wherever you feel it's the tidiest, most expressive and most natural syntax for doing what it is you're trying to do, and don't feel guilty about it. If you start humping the documentation, then it may be time to reconsider your position.
A: It's cases like these where it's important to remember the golden rules of optimization: * *Don't Do It *For Experts: Don't do it yet You should absolutely not worry about "abusing" linq unless you can identify it explicitly as the cause of a performance problem A: Like anything, it can be abused. As long as you stay away from obvious poor decisions such as var v = List.Where(...); for(int i = 0; i < v.Count(); i++) {...} and understand how deferred execution works, then it is most likely not going to be much slower than the longhand way. According to Anders Hejlsberg (C# architect), the C# compiler is not particularly good at optimizing loops, however it is getting much better at optimizing and parallelizing expression trees. In time, it may be more effective than a loop. The List<>'s ForEach version is actually as fast as a for loop, although I can't find the link that proves that. P.S. My personal favorite is ForEach<>'s lesser known cousin IndexedForEach (utilizing extension methods) List.IndexedForEach( (p,i) => { if(i != 3) p.DoSomething(i); }; A: LINQ can be like art. Keep using it to make the code beautiful. A: I have to agree with your view - if it's more efficient to write and elegant then what's a few milliseconds. Writing extra code gives more room for bugs to creep in and it's extra code that needs to be tested and most of all it's extra code to maintain. Think about the guy who's going to come in behind you and maintain your code - they'll thank you for writing elegant easy to read code long before they thank you for writing code that's a few ms faster! Beware though, this cost of a few ms could be significant when you take the bigger picture into account. If those few milliseconds are part of a loop of thousands of repetitions, then the milliseconds add up fast. A: You're answering your own question by talking about writing 2-3 times more code for a few ms of performance.
I mean, if your problem domain requires that speedup then yes, if not probably not. However, is it really only a few ms of performance or is it > 5% or > 10%? This is a value judgement based on the individual case. A: Where to draw the line? Well, we already know that it is a bad idea to implement your own quicksort in linq, at least compared to just using linq's orderby. A: I've found that using LINQ has sped up my development and made it easier to avoid stupid mistakes that loops can introduce. I have had instances where the performance of LINQ was poor, but that was when I was using it to do things like fetch data for an Excel file from a tree structure that had millions of nodes. A: While I see how there is a point of view that LINQ might make a statement harder to read, I think it is far outweighed by the fact that my methods are now strictly related to the problems that they are solving and not spending time either including lookup loops or cluttering classes with dedicated lookup functions. It took a little while to get used to doing things with LINQ, since looping lookups, and the like, have been the main option for so long. I look at LINQ as just being another type of syntactic sugar that can do the same task in a more elegant way. Right now, I am still avoiding it in processing-heavy mission critical code - but that is just until the performance improves as LINQ evolves. A: My only concern about LINQ is with its implementation of joins. As I determined when trying to answer this question (and it's confirmed here), the code LINQ generates to perform joins is (necessarily, I guess) naive: for each item in the list, the join performs a linear search through the joined list to find matches. Adding a join to a LINQ query essentially turns a linear-time algorithm into a quadratic-time algorithm. Even if you think premature optimization is the root of all evil, the jump from O(n) to O(n^2) should give you pause.
(It's O(n^3) if you join through a joined item to another collection, too.) It's relatively easy to work around this. For instance, this query: var list = from pr in parentTable.AsEnumerable() join cr in childTable.AsEnumerable() on cr.Field<int>("ParentID") equals pr.Field<int>("ID") where pr.Field<string>("Value") == "foo" select cr; is analogous to how you'd join two tables in SQL Server. But it's terribly inefficient in LINQ: for every parent row that the where clause returns, the query scans the entire child table. (Even if you're joining on an unindexed field, SQL Server will build a hashtable to speed up the join if it can. That's a little outside LINQ's pay grade.) This query, however: string fk = "FK_ChildTable_ParentTable"; var list = from cr in childTable.AsEnumerable() where cr.GetParentRow(fk).Field<string>("Value") == "foo" select cr; produces the same result, but it scans the child table once only. If you're using LINQ to objects, the same issues apply: if you want to join two collections of any significant size, you're probably going to need to consider implementing a more efficient method to find the joined object, e.g.: Dictionary<Foo, Bar> map = buildMap(foos, bars); var list = from Foo f in foos where map[f].baz == "bat" select f;
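The complexity point in this answer can be demonstrated in any language: a nested-loop join examines every parent against every child, while building a lookup table first makes the join roughly linear. A small illustrative sketch (hypothetical data shape, not LINQ itself):

```javascript
// Nested-loop join: for every parent, scan all children -- O(n * m).
function joinNaive(parents, children) {
  return parents.flatMap(p =>
    children.filter(c => c.parentId === p.id).map(c => [p, c]));
}

// Hash join: index children by parentId once, then probe -- O(n + m).
function joinHashed(parents, children) {
  const byParent = new Map();
  for (const c of children) {
    if (!byParent.has(c.parentId)) byParent.set(c.parentId, []);
    byParent.get(c.parentId).push(c);
  }
  return parents.flatMap(p =>
    (byParent.get(p.id) || []).map(c => [p, c]));
}
```

Both functions produce the same pairs; only the amount of scanning differs, which is exactly the gap between LINQ's naive join and the dictionary-based workaround shown above.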
{ "language": "en", "url": "https://stackoverflow.com/questions/426889", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22" }
Q: Spring batch : Job instances run sequentially when using annotations I have a simple annotation configuration for a Spring batch job as follows: @Configuration @EnableBatchProcessing public abstract class AbstractFileLoader<T> { private static final String FILE_PATTERN = "*.dat"; @Bean @StepScope @Value("#{stepExecutionContext['fileName']}") public FlatFileItemReader<T> reader(String file) { FlatFileItemReader<T> reader = new FlatFileItemReader<T>(); String path = file.substring(file.indexOf(":") + 1, file.length()); FileSystemResource resource = new FileSystemResource(path); reader.setResource(resource); DefaultLineMapper<T> lineMapper = new DefaultLineMapper<T>(); lineMapper.setFieldSetMapper(getFieldSetMapper()); DelimitedLineTokenizer tokenizer = new DelimitedLineTokenizer(","); tokenizer.setNames(getColumnNames()); lineMapper.setLineTokenizer(tokenizer); reader.setLineMapper(lineMapper); reader.setLinesToSkip(1); return reader; } @Bean public ItemProcessor<T, T> processor() { // TODO add transformations here return null; } //Exception when using JobScope for the writer @Bean public ItemWriter<T> writer() { ListItemWriter<T> writer = new ListItemWriter<T>(); return writer; } @Bean public Job loaderJob(JobBuilderFactory jobs, Step s1, JobExecutionListener listener) { return jobs.get(getLoaderName()).incrementer(new RunIdIncrementer()) .listener(listener).start(s1).build(); } @Bean public Step readStep(StepBuilderFactory stepBuilderFactory, ItemReader<T> reader, ItemWriter<T> writer, ItemProcessor<T, T> processor, TaskExecutor taskExecutor, ResourcePatternResolver resolver) { final Step readerStep = stepBuilderFactory .get(getLoaderName() + " ReadStep:slave").<T, T> chunk(25254) .reader(reader).processor(processor).writer(writer) .taskExecutor(taskExecutor).throttleLimit(16).build(); final Step partitionedStep = stepBuilderFactory .get(getLoaderName() + " ReadStep:master") .partitioner(readerStep) .partitioner(getLoaderName() + " ReadStep:slave",
partitioner(resolver)).taskExecutor(taskExecutor) .build(); return partitionedStep; } @Bean public TaskExecutor taskExecutor() { return new SimpleAsyncTaskExecutor(); } @Bean public Partitioner partitioner( ResourcePatternResolver resourcePatternResolver) { MultiResourcePartitioner partitioner = new MultiResourcePartitioner(); Resource[] resources; try { resources = resourcePatternResolver.getResources("file:" + getFilesPath() + FILE_PATTERN); } catch (IOException e) { throw new RuntimeException( "I/O problems when resolving the input file pattern.", e); } partitioner.setResources(resources); return partitioner; } @Bean public JobExecutionListener listener(ItemWriter<T> writer) { /* org.springframework.batch.core.scope.StepScope scope; */ return new JobCompletionNotificationListener<T>(writer); } public abstract FieldSetMapper<T> getFieldSetMapper(); public abstract String getFilesPath(); public abstract String getLoaderName(); public abstract String[] getColumnNames(); } When I run the same instance of the job with two different job parameters, both instances run sequentially instead of running in parallel. I have a SimpleAysncTaskExecutor bean configured which I assume should cause the jobs to be triggered asynchronously. Do I need to add any more configuration to this class to have the job instances execute in parallel? A: You have to configure the jobLauncher that you're using to launch jobs to use your TaskExecutor (or a separate pool). The simplest way is to override the bean: @Bean JobLauncher jobLauncher(JobRepository jobRepository) { new SimpleJobLauncher( taskExecutor: taskExecutor(), jobRepository: jobRepository) } Don't be confused by the warning that will be logged saying that a synchronous task executor will be used. 
This is due to an extra instance that is created owing to the very awkward way Spring Batch uses to configure the beans it provides in SimpleBatchConfiguration (long story short, if you want to get rid of the warning you'll need to provide a BatchConfigurer bean and specify how 4 other beans are to be created, even if you want to change just one). Note that it being the same job is irrelevant here. The problem is that by default the job launcher will launch the job on the same thread.
{ "language": "en", "url": "https://stackoverflow.com/questions/35608765", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Connection phonegap/cordova I took this example from the PhoneGap site, and yet it does not issue an alert showing the connection status. I want the InAppBrowser I'm creating to open if there is an internet connection via 3G or Wi-Fi, and if not, to issue an alert warning the user. Example: Example Connection Phonegap site <!DOCTYPE html> <html> <head> <title>navigator.connection.type Example</title> <script type="text/javascript" charset="utf-8" src="cordova.js"></script> <script type="text/javascript" charset="utf-8"> // Wait for device API libraries to load // document.addEventListener("deviceready", onDeviceReady, false); // device APIs are available // function onDeviceReady() { checkConnection(); } function checkConnection() { var networkState = navigator.connection.type; var states = {}; states[Connection.UNKNOWN] = 'Unknown connection'; states[Connection.ETHERNET] = 'Ethernet connection'; states[Connection.WIFI] = 'WiFi connection'; states[Connection.CELL_2G] = 'Cell 2G connection'; states[Connection.CELL_3G] = 'Cell 3G connection'; states[Connection.CELL_4G] = 'Cell 4G connection'; states[Connection.CELL] = 'Cell generic connection'; states[Connection.NONE] = 'No network connection'; alert('Connection type: ' + states[networkState]); } </script> </head> <body> <p>A dialog box will report the network state.</p> </body> </html>
{ "language": "en", "url": "https://stackoverflow.com/questions/23367004", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Creating a writeable DocumentFile from URI I'm trying to adapt a File-based document system to something using DocumentFile in order to allow external storage read/write access on API >= 29. I get the user to select the SD card root using Intent.ACTION_OPEN_DOCUMENT_TREE, and I get back a Uri as expected, which I can then handle using: getContentResolver().takePersistableUriPermission(resultData.getData(), Intent.FLAG_GRANT_READ_URI_PERMISSION | Intent.FLAG_GRANT_WRITE_URI_PERMISSION); I can browse successfully through the external storage contents up to the selected root. All good. But what I need to be able to do is write an arbitrary file in the chosen (sub)folder, and that's where I'm running into problems. DocumentFile file = DocumentFile.fromSingleUri(mContext, Uri.parse(toPath)); Uri uri = file.getUri(); FileOutputStream output = mContext.getContentResolver().openOutputStream(uri); Except on the openOutputStream() call I get: java.io.FileNotFoundException: Failed to open for writing: java.io.FileNotFoundException: open failed: EISDIR (Is a directory) That's slightly confusing to me, but the "file not found" part suggests I might need to create the blank output file first, so I try that, like: DocumentFile file = DocumentFile.fromSingleUri(mContext, Uri.parse(toPath)); Uri uri = file.getUri(); if (file == null) { return false; } if (file.exists()) { file.delete(); } DocumentFile.fromTreeUri(mContext, Uri.parse(getParentPath(toPath))).createFile("", uri.getLastPathSegment()); FileOutputStream output = mContext.getContentResolver().openOutputStream(uri); I get a java.io.IOException: java.lang.IllegalStateException: Failed to touch /mnt/media_rw/0B07-1910/Testing.tmp: java.io.IOException: Read-only file system at android.os.Parcel.createException(Parcel.java:2079) at android.os.Parcel.readException(Parcel.java:2039) at android.database.DatabaseUtils.readExceptionFromParcel(DatabaseUtils.java:188) at 
android.database.DatabaseUtils.readExceptionFromParcel(DatabaseUtils.java:140) at android.content.ContentProviderProxy.call(ContentProviderNative.java:658) at android.content.ContentResolver.call(ContentResolver.java:2042) at android.provider.DocumentsContract.createDocument(DocumentsContract.java:1327) at androidx.documentfile.provider.TreeDocumentFile.createFile(TreeDocumentFile.java:53) at androidx.documentfile.provider.TreeDocumentFile.createFile(TreeDocumentFile.java:45) Which doesn't make sense to me, since the tree should be writeable. For what it's worth, the Uri I get back from Intent.ACTION_OPEN_DOCUMENT_TREE looks like this: content://com.android.externalstorage.documents/tree/0B07-1910%3A Interestingly, when I use that Uri to create a DocumentFile object to browse, using documentFile = DocumentFile.fromTreeUri(context, uri), then documentFile.getURI().toString() looks like: content://com.android.externalstorage.documents/tree/0B07-1910%3A/document/0B07-1910%3A i.e., it's had something appended to the end of it. Then, I descend into what should be a writeable folder (like "Download"), and try creating a writeable file as described above. The "Download" folder gets the Uri: content://com.android.externalstorage.documents/tree/0B07-1910%3A/document/0B07-1910%3ADownload and the Uri I'm using for toPath, above, is then: content://com.android.externalstorage.documents/tree/0B07-1910%3A/document/0B07-1910%3ADownload/Testing.tmp which leads to the problems described previously trying to create it. I haven't actually found any decent information about writing an arbitrary file under Storage Access Framework restrictions. What am I doing wrong? Thanks. :) A: Uri uri = uri obtained from ACTION_OPEN_DOCUMENT_TREE String folderName = "questions.59189631"; DocumentFile documentDir = DocumentFile.fromTreeUri(context, uri); DocumentFile folder = documentDir.createDirectory(folderName); return folder.getUri(); Use createFile() for a writable file.
{ "language": "en", "url": "https://stackoverflow.com/questions/59189631", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Rails - show item price and add to total price I have a form where it displays the items in a select2 drop down box but I would like to also display the price for the item and enter a quantity amount that will show the total price. It would be nice to use the select2 gem with these added features but I am not sure how to do this. I have to know which item is selected and then pull the price from the table for it. Form <%= simple_form_for(@order) do |f| %> <%= f.error_notification %> <%= f.input :code %> <%= f.association :items, collection: Item.all, label_method: :name, value_method: :id, prompt: "Choose an item", input_html: { id: 'item-select2' } %> <%= f.submit %> <% end %> Table create_table "items", force: true do |t| t.string "name" t.decimal "price" t.integer "quantity" t.decimal "discount" end create_table "orders", force: true do |t| t.string "code" t.integer "user_id" end create_table "order_items", force: true do |t| t.integer "item_id" t.integer "order_id" t.integer "quantity" t.decimal "price" end A: You have to use AJAX. Let's say we have a form (order) where we have a table (order_items). We will add rows with some goodies (items), their price and quantity. Let's assume that a user has already opened a new order and added a new row. In the row we put the select with items, a span price and a text field quantity. Under the table we have a span total * *The user selects an item. When an item is selected we handle the 'on_change' JavaScript event for this select. *Inside that handler we send an AJAX request to the items controller (method 'show') with the id of the selected item. Inside the controller#show we find our item and return it to the client as JSON. *On the client we have a JavaScript object item. Using JavaScript we place item.price inside the span price. *We must attach another 'on_change' handler to the quantity text field. When the quantity changes we iterate over all rows, calculate the sum for each row and accumulate it in a result variable. Then we put this result in the total span.
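The arithmetic in the last two steps of the answer is plain client-side code. Here is a minimal, framework-free sketch; the function and field names are invented for illustration and are not part of select2 or Rails. In the real page these functions would be wired to the select2 'change' event and the quantity field's 'input' event, with the result written into the total span.

```javascript
// Each row is a plain object: { price, quantity } (price already fetched
// from the items controller as a number).
function rowSubtotal(row) {
  return row.price * row.quantity;
}

function orderTotal(rows) {
  // Accumulate every row's subtotal, mirroring the last step of the answer.
  return rows.reduce(function (sum, row) {
    return sum + rowSubtotal(row);
  }, 0);
}
```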
{ "language": "en", "url": "https://stackoverflow.com/questions/23878405", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Expression Engine Advanced Search - Problems searching for keyword I'm having a little trouble with expression engine searching! I've got a drop down form set-up that works great, however I need to add an OPTIONAL keyword field into this search form. Any ideas how i'll do this with my current code: Main form code: <form method="post" action="/properties/search-results/"> <p>Keywords:</p> <input id="keywords" type="text" name="keywords"/> <p>Town:</p> <select id="town" name="cat[]" multiple="multiple"> <option value="" selected="selected">Any</option> {exp:channel:categories channel="property" style="linear" category_group="1"} <option value="{category_id}">{category_name}</option> {/exp:channel:categories} </select> <p>Property Type:</p> <select id="propertyType" name="cat[]"> <option value="" selected="selected">Any</option> {exp:channel:categories channel="property" style="linear" category_group="2"} <option value="{category_id}">{category_name}</option> {/exp:channel:categories} </select> <input style="margin-top:20px; width: 100px;" type="submit" name="submit" value="Search"/> </form> Search-results template: <?php // Grab the categories selected from the $_POST // join them with an ampersand - we are searching for AND matches $cats = ""; foreach($_POST['cat'] as $cat){ // check we are working with a number if(is_numeric($cat)){ $cats .= $cat."&"; } } // strip the last & off the category string $cats = substr($cats,0,-1); ?> {exp:channel:entries channel="property" dynamic="on" category="<?php echo($cats);?>" orderby="date" sort="asc"} I need the keyword field to search in the {title} of my entries! Thanks for any help! A: Try this: first install the Search Fields plugin. (You need this because the native EE "search:field_name" parameter only works on custom fields, not entry titles.) 
Then use this revised code: <?php // Grab the categories selected from the $_POST // join them with an ampersand - we are searching for AND matches $cats = array(); foreach($_POST['cat'] as $cat) { // check we are working with a number if(is_numeric($cat)) { $cats[] = $cat; } } $cats = implode('&', $cats); // default to false so the checks below work when no keyword was submitted $keywords = false; if(!empty($_POST['keywords'])) { $keywords = trim($_POST['keywords']); } ?> <?php if($keywords) : ?> {exp:search_fields search:title="<?php echo($keywords);?>" channel="property" parse="inward"} <?php endif; ?> {exp:channel:entries channel="property" <?php if($keywords) : ?>entry_id="{search_results}"<?php endif; ?> category="<?php echo($cats);?>" orderby="date" sort="asc"} {!-- do stuff --} {/exp:channel:entries} <?php if($keywords) : ?> {/exp:search_fields} <?php endif; ?>
{ "language": "en", "url": "https://stackoverflow.com/questions/8124687", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Tail recursion and call by name / value Learning Scala and functional programming in general. In the following tail-recursive factorial implementation: def factorialTailRec(n: Int) : Int = { @tailrec def factorialRec(n: Int, f: => Int): Int = { if (n == 0) f else factorialRec(n - 1, n * f) } factorialRec(n, 1) } I wonder whether there is any benefit to having the second parameter called by value vs called by name (as I have done). In the first case, every stack frame is burdened with a product. In the second case, if my understanding is correct, the entire chain of products will be carried over to the case if ( n== 0) at the nth stack frame, so we will still have to perform the same number of multiplications. Unfortunately, this is not a product of form a^n, which can be calculated in log_2n steps through repeated squaring, but a product of terms that differ by 1 every time. So I can't see any possible way of optimizing the final product: it will still require the multiplication of O(n) terms. Is this correct? Is call by value equivalent to call by name here, in terms of complexity? A: Let me just expand a little bit what you've already been told in comments. That's how by-name parameters are desugared by the compiler: @tailrec def factorialTailRec(n: Int, f: => Int): Int = { if (n == 0) { val fEvaluated = f fEvaluated } else { val fEvaluated = f // <-- here we are going deeper into stack. factorialTailRec(n - 1, n * fEvaluated) } } A: Through experimentation I found out that with the call by name formalism, the method becomes... non-tail recursive! 
I made this example code to compare factorial tail-recursively, and factorial non-tail-recursively: package example import scala.annotation.tailrec object Factorial extends App { val ITERS = 100000 def factorialTailRec(n: Int) : Int = { @tailrec def factorialTailRec(n: Int, f: => Int): Int = { if (n == 0) f else factorialTailRec(n - 1, n * f) } factorialTailRec(n, 1) } for(i <-1 to ITERS) println("factorialTailRec(" + i + ") = " + factorialTailRec(i)) def factorial(n:Int) : Int = { if(n == 0) 1 else n * factorial(n-1) } for(i <-1 to ITERS) println("factorial(" + i + ") = " + factorial(i)) } Observe that the inner factorialTailRec function takes its second argument by name, yet the @tailrec annotation still does NOT produce a compile-time error! I've been playing around with different values for the ITERS variable, and for a value of 100,000, I receive a... StackOverflowError! (The result of zero is there because of overflow of Int.) So I went ahead and changed the signature of factorialTailRec/2 to: def factorialTailRec(n: Int, f: Int): Int i.e. call by value for the argument f. This time, the portion of main that runs factorialTailRec finishes absolutely fine, whereas, of course, factorial/1 crashes at the exact same integer. Very, very interesting. It seems as if call by name in this situation maintains the stack frames because the products themselves still need to be computed all the way back along the call chain.
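The effect discussed above can be reproduced outside Scala by modelling a by-name parameter as a zero-argument function (a thunk): each "tail call" then wraps the previous thunk instead of multiplying, so forcing the result at the end still walks a chain whose depth grows with n. This is a Python sketch of that idea only, not the Scala compiler's actual desugaring:

```python
def factorial_by_value(n, acc=1):
    # acc is evaluated at every step: constant-size state, no deferred work.
    while n > 0:
        acc = n * acc
        n -= 1
    return acc

def factorial_by_name(n):
    # Emulate a by-name accumulator: defer every multiplication in a thunk.
    thunk = lambda: 1
    while n > 0:
        # Capture the current n and the previous thunk; note that nothing
        # is multiplied yet, we only build a longer chain of closures.
        thunk = (lambda k, prev: lambda: k * prev())(n, thunk)
        n -= 1
    # Forcing the thunk now performs n nested calls: the deferred work
    # that, in the Scala version, shows up as stack growth.
    return thunk()
```

For large n, `factorial_by_name` hits Python's recursion limit while `factorial_by_value` does not, which mirrors the StackOverflowError observed in the experiment.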
{ "language": "en", "url": "https://stackoverflow.com/questions/56435969", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: filing in TRUE/FALSE in column of dataframe A based on information in dataframe B I searched for other topics, but didn't find anything which was really matching what I was looking for. My dataframes: UN_match excerpt: country country_code emissions sector_code year Austria AT 65779.1172 1.AA 2005 Austria AT 62430.4336 1.AA 2006 Austria AT 59108.4180 1.AA 2007 Austria AT 58656.6719 1.AA 2008 Austria AT 55252.9922 1.AA 2009 Austria AT 58317.9570 1.AA 2010 Austria AT 55898.7344 1.AA 2011 Austria AT 53886.8242 1.AA 2012 Austria AT 53923.7578 1.AA 2013 Austria AT 50087.0000 1.AA 2014 Austria AT 51978.9609 1.AA 2015 Austria AT 52990.2305 1.AA 2016 Belgium BE 103917.1484 1.AA 2005 Belgium BE 102263.9297 1.AA 2006 Belgium BE 100104.8906 1.AA 2007 Belgium BE 99960.6328 1.AA 2008 Belgium BE 92900.2188 1.AA 2009 Belgium BE 96538.8047 1.AA 2010 Belgium BE 87202.2188 1.AA 2011 Belgium BE 86242.7656 1.AA 2012 Belgium BE 86289.1562 1.AA 2013 Belgium BE 80720.1406 1.AA 2014 Belgium BE 84283.8438 1.AA 2015 Belgium BE 84081.2031 1.AA 2016 ETS_match country country_code value year smaller1.AA Austria AT 16539.659 2005 0 Austria AT 15275.065 2006 0 Austria AT 14124.646 2007 0 Austria AT 14572.511 2008 0 Austria AT 12767.555 2009 0 Austria AT 15506.112 2010 0 Austria AT 15131.551 2011 0 Austria AT 13121.434 2012 0 Austria AT 8074.514 2013 0 Austria AT 6426.135 2014 0 Austria AT 7514.263 2015 0 Austria AT 7142.937 2016 0 Belgium BE 25460.856 2005 0 Belgium BE 24099.282 2006 0 Belgium BE 23706.084 2007 0 Belgium BE 23166.180 2008 0 Belgium BE 21185.552 2009 0 Belgium BE 22073.616 2010 0 Belgium BE 18950.876 2011 0 Belgium BE 17463.388 2012 0 Belgium BE 16728.267 2013 0 Belgium BE 15230.243 2014 0 Belgium BE 16053.800 2015 0 Belgium BE 15027.777 2016 0 I want to add TRUE or FALSE into ETS_match$smaller1.AA based on whether ETS_match$value < UN_match$emissions is true or false. This should be done for every row. 
I tried some things myself with mutate and if_else, but didn't manage to finish it. It should look like this: country country_code value year smaller1.AA Austria AT 16539.659 2005 TRUE Austria AT 15275.065 2006 TRUE Austria AT 14124.646 2007 TRUE I know this is probably very basic, but I would be happy about any kind of help. Best wishes, nordsee A: Something like this, with dplyr, maybe: library(dplyr) UN_match %>% left_join(ETS_match) %>% # join the data mutate(smaller1.AA = value < emissions) %>% # add the true/false column select(country, country_code, value, year, smaller1.AA) # only useful columns But all your values are < emissions, so the rows are all TRUE in this case. Improved by removing if_else, thanks @Rui Barradas. A: If the datasets have the same number of rows and all the rows align correctly, you can just do: ETS_match$smaller1.AA <- ETS_match$value < UN_match$emissions A: Because the data you're working with are from two different data frames, you have two options: * *Compare them independently *Combine the data into one data frame. For 1, you can do something like ETS_match$smaller1.AA <- ETS_match$value < UN_match$emissions For 2, you could use merge to bring the data frames together. How to join (merge) data frames (inner, outer, left, right)?
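The direct-comparison answers rely on the two tables lining up row by row; joining on (country, year) makes the result independent of row order. That join-then-compare idea is language-agnostic, sketched here in plain Python (not R) with two tiny rows copied from the question's data:

```python
un_match = [
    {"country": "Austria", "year": 2005, "emissions": 65779.1172},
    {"country": "Austria", "year": 2006, "emissions": 62430.4336},
]
ets_match = [
    {"country": "Austria", "year": 2005, "value": 16539.659},
    {"country": "Austria", "year": 2006, "value": 15275.065},
]

# Index emissions by the merge key (country, year), so row order in the
# second table no longer matters; this mirrors the left_join answer.
emissions_by_key = {(r["country"], r["year"]): r["emissions"] for r in un_match}

for row in ets_match:
    key = (row["country"], row["year"])
    row["smaller1.AA"] = row["value"] < emissions_by_key[key]
```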
{ "language": "en", "url": "https://stackoverflow.com/questions/53430068", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: TFS 2008 Merge/Resolve Conflict Tutorial I am just wondering if anyone know of a good resource/tutorial/video for explaining the "Resolve Conflict" and the "Merge Tool" in TFS 2008. I just need to know how the comparison between files is done (I think it's comparing version number to version number), but it's not very easy to explain. thanks!! A: I am not sure what you mean when you say "files in drawn." Did you mean to say "files is drawn" as in "how does TFS know how to compare files? Resolve conflict tool is used when TFS cannot resolve the conflict on its own. This MS Article will walk you through how to get more detailed information and explain how the tool works. There are a few "buckets" for conflicts (see below). As for wanting video tutorials, there are a few that simply show you how to use the tool and some cursory conflicts but there are no videos that I have found that go over each conflict case type. Conflicts are always difficult when they can't be automatically managed. I would consider swapping out your merge tool to a better one. I hope that helps you. Version Conflict Version conflicts can occur in Team Foundation version control with a check-in, get, or merge operation. In each case, the evolution of an item along divergent paths results in a conflict. Check-in Two users check out the latest version of a file. The first user checks in changes; this creates a new version of the file. When the second user tries a check-in, there is a version conflict because the second user's changes were not made against the latest version of the file. * Get Two users check out the latest version of a file. The first user checks in changes; this creates a new version of the file. When the second user performs a get latest operation, there is a version conflict because the get latest operation is trying to update the checked-out file in the workspace. * Merge A branched file has been modified in both branches. 
A user tries to merge changes from one branch to the other. There is a version conflict because the file has been modified on both branches. File Name Collision Conflict File name collisions can occur in Team Foundation version control with a check-in, get, or merge operation. In all three cases, the conflict results when two or more items try to occupy the same path in the source control server. Check-in Two users each add a file to the same application. Coincidentally, the two users choose the same name for the new files. One user checks in his or her file. When the second user tries a check-in, there is a file name collision. * Get Two users add files with identical names to an application. One user checks in the file. When the second user tries a get latest operation, there is a file name collision. This is because the first user's file cannot be retrieved where the second user has added a file. * Merge An application has been branched and has then been worked on in both branches. In both branches, a file that has the same name has been added. A user tries to merge changes from one branch to the other. There is a file name collision because the file added to the source branch can not be branched where a file has already been added to the target branch. Local overwrite conflict Local overwrite conflicts only occur in Team Foundation version control during a get operation. These conflicts occur when a get operation tries to write over a writable file in your workspace. By default, the get operation will only replace files that are read-only. Resolving local overwrite conflicts involves either overwriting the file or checking out the file and merging changes.
{ "language": "en", "url": "https://stackoverflow.com/questions/6192260", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Java code Optimization and Safe class Testing{ private static int counter; private static int[] intArray; public static ReturnClassName className(File f){ ReturnClassName returnCN= new ReturnClassName(); byte[] b; try{ DataInputStream dataIStream= new DataInputStream(new FileInputStream(f)); intArray= new int[dataIStream.available()]; b= new byte[dataIStream.available()]; dataIStream.read(b); intArray= b; // setting methods for ReturnClassName // counter increment returnCN.setNumber(someMethod(5)); }//catch() block return returnCN; } private static int[] someMethod(int l){ return Arrays.copyOfRange(intArray, counter, counter + l); } Or class Testing{ private static int counter; public static ReturnClassName className(File f){ ReturnClassName returnCN= new ReturnClassName(); byte[] b; try{ DataInputStream dataIStream= new DataInputStream(new FileInputStream(f)); intArray= new int[dataIStream.available()]; b= new byte[dataIStream.available()]; dataIStream.read(b); intArray= b; // setting methods for ReturnClassName // counter increment returnCN.setNumber(someMethod(intArray,5)); }//catch() block return returnCN; } private static int[] someMethod(int[] iArray, int l){ return Arrays.copyOfRange(iArray, counter, counter + l); } I want to know which one is more optimized and safe of the above two codes. Also while passing the array in the 2nd code, is it passing the whole array or just the address of that array. Like both intArray and iArray are pointing to the same integer array? A: Arrays are passed by reference so both snippets are equivalent concerning efficiency except for the fact that if you are not using intArray for some other purpose: The second version will unreference the array and make it a candidate for garbage collection. This is, in the second case, the array will be a candidate to be collected as soon as someMethod execution returns whereas the first version will keep the array referenced until the program ends since it is static. 
From your comments I understand that you will call className once per file for different files and for each file you will call 'someMethod' many times. Then I like a solution similar to the first one at some points, but different from both the first and the second one. That solution is to have an instance of Testing for each file you load data from: * *Force each instance to be associated with a concrete file. *Make methods and attributes not static. This way each Testing element has its own data loaded from its file. *Change className so it will load data from its file only once. *Make the right use of Testing and its instances. class Testing{ public Testing(File f) { this.f = f; } private File f; private int counter; private int[] intArray; public ReturnClassName className(){ ReturnClassName returnCN= new ReturnClassName(); byte[] b; if(intArray == null || intArray.length == 0) //Only load the file if it was not loaded before. { try{ DataInputStream dataIStream= new DataInputStream(new FileInputStream(f)); intArray= new int[dataIStream.available()]; b = new byte[dataIStream.available()]; dataIStream.read(b); intArray= b; // setting methods for ReturnClassName // counter increment } catch(Exception e) { ... ... } } returnCN.setNumber(someMethod(5)); return returnCN; } private int[] someMethod(int l){ return Arrays.copyOfRange(intArray, counter, counter + l); } } Example of use: Testing forFile1 = new Testing(fileObj01); ReturnClassName x = forFile1.className(); ReturnClassName y = forFile1.className(); Testing forFile2 = new Testing(fileObj02); ReturnClassName z = forFile2.className(); ReturnClassName w = forFile2.className(); You could, on the other hand, implement a better solution where you have a map of integer arrays indexed by the input file (like a cache) and you keep a copy of their bytes on it. Having thus a single instance of Testing, keeping File f as an input parameter for 'className'.
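The claim that both original snippets cost the same can be checked directly: Java passes the reference to an array by value, so the callee sees the same underlying storage, not a copy of the elements. A small standalone demonstration, unrelated to the Testing class itself (the class and method names here are invented for the demo):

```java
public class ArrayAliasDemo {
    // The parameter holds a copy of the reference, not of the elements,
    // so a write through it is visible to the caller afterwards.
    static void zeroFirst(int[] a) {
        a[0] = 0;
    }

    public static void main(String[] args) {
        int[] data = {7, 8, 9};
        zeroFirst(data);
        if (data[0] != 0) {
            throw new AssertionError("expected the caller's array to change");
        }
        System.out.println("caller sees the write: data[0] == " + data[0]);
    }
}
```

This is why passing `iArray` in the second snippet is as cheap as reading the static field in the first: only a reference crosses the call boundary.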
{ "language": "en", "url": "https://stackoverflow.com/questions/23103812", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Web Development, Domain and Site Hosting I'm not 100% sure if I'm on the correct section, but I'll transfer the post once I know where it's supposed to be. I am looking for advice regarding web development, setting up a domain and hosting it. I'm currently working on a personal project using Cloud9, which is an online development environment where I can develop and host. The reason why I chose this is because it has MySQL usability, as I retrieve and store data in the project I am working on. I am now at the point where I want to get a domain and host a site, to transfer my project over to a live website with my own domain which also has SQL/MySQL usability. Any advice, or suggestions on where to look for methods of pursuing this? Edit: I develop using PHP and JavaScript with MySQL A: To start I would suggest heroku; they have a free option and some nice guides, depending on which server side language you use. This way you can get used to hosting some apps and doing deployments, seeing logs etc. The database doesn't have to be on the same hosting necessarily; you can use mongolab for example. For domain names it's a different thing: you will have to use the likes of godaddy A: Why not set up your own VPS (Virtual Private Server)? There are many providers. A: Design the software application with portability and avoid vendor lock-in to cloud services. For file transfer, there are FTP/FXP, zip/gzip, & version control standards like Git, CVS, SVN, etc. Use phpMyAdmin for the database export & import process to change web hosts. Otherwise, build a staging subdomain or a local development environment with copies of the original web app & use those to transfer files+DB to a new host for production.
{ "language": "en", "url": "https://stackoverflow.com/questions/44412333", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-2" }
Q: EPiServer XForm editor reverts default value on radio buttons on save When editing an EPiServer XForm (in the CMS) it seems that the XForm editor does not save the default settings state when working with radio buttons. What I do: I begin editing an XForm and, in the XForm editor preview window, I select a radio button collection which has two options ("Private" and "Corporate") with "Private" checked as default. I uncheck "Private" as default (since neither of the two options is to be pre-selected) and press "Save" to save the radio button collection field. The XForm preview updates correctly and shows that no radio button is checked. But when I try to save the entire form and the XForm preview is reloaded, the radio button collection I just edited reverts back and "Private" is again pre-checked! Any idea why this happens? The form is implemented in a Block container and runs in an EPiServer CMS 7.5 MVC site. A: Found the answer myself; the error occurs if the value-fields in the radio button collection are left blank. So make sure your value-fields have some kind of value entered, where appropriate.
{ "language": "en", "url": "https://stackoverflow.com/questions/29652602", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: MySQL Column 'id' in IN/ALL/ANY subquery is ambiguous I am trying to build a search functionality that involves three tables: searching for users and returning whether user id 1 is a friend of the returned users. The returned users are also filtered through a third table which checks the tags of those users. So I can say, "Return users who have the tag 'Programming' or 'Php' in the userinterests table, and also whether each returned user is a friend of user id 1 or not." I am trying to use the below query but am getting Column 'id' in IN/ALL/ANY subquery is ambiguous If I remove the left join then it works. SELECT n.id, n.firstName, n.lastName, t.id, t.tag, t.user_id, if(id in ( SELECT u.id as id from friends f, users u WHERE CASE WHEN f.following_id=1 THEN f.follower_id = u.id WHEN f.follower_id=1 THEN f.following_id = u.id END AND f.status= 2 ), "Yes", "No") as isFriend FROM users n LEFT JOIN userinterests t on n.id = t.id WHERE t.tag in ('Programming', 'Php') Thank you for your time :) A: Qualify all your column names. You seem to know this, because all other column names are qualified. I'm not sure if your logic is correct, but you can fix the error by qualifying the column name: SELECT . . . (CASE WHEN n.id IN (SELECT u.id as id FROM friends f CROSS JOIN users u WHERE CASE WHEN f.following_id=1 THEN f.follower_id = u.id WHEN f.follower_id=1 THEN f.following_id = u.id END AND f.status= 2 ) THEN 'Yes' ELSE 'No' END) as isFriend . . . A: This is the way I would go for your approach: 1) I used INNER JOIN instead of LEFT JOIN to skip users that are not related to the tags Programming and Php. 2) I replaced the logic to find the set of friends related to the user with id equal to 1.
SELECT n.id, n.firstName, n.lastName, t.id, t.tag, t.user_id, IF( n.id IN (SELECT follower_id FROM friends WHERE status = 2 AND following_id = 1 UNION SELECT following_id FROM friends WHERE status = 2 AND follower_id = 1), "Yes", "No" ) AS isFriend FROM users n INNER JOIN userinterests t ON n.id = t.id AND t.tag IN ('Programming', 'Php') Just curious, what is the meaning of status = 2?
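The UNION rewrite in this answer is easy to sanity-check outside MySQL. The sketch below rebuilds a miniature friends table in SQLite (close enough to MySQL for this query); the rows are invented for the test, and it assumes status = 2 marks an accepted friendship, which is exactly the open question above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE friends (follower_id INT, following_id INT, status INT);
    -- user 1 follows 2 (accepted), 3 follows user 1 (accepted),
    -- user 4's request toward user 1 is still pending (status 1).
    INSERT INTO friends VALUES (1, 2, 2), (3, 1, 2), (4, 1, 1);
""")

def friends_of_user_1(conn):
    # Same shape as the UNION subquery in the answer: collect ids from
    # both directions of the relationship, accepted rows only.
    rows = conn.execute("""
        SELECT follower_id FROM friends WHERE status = 2 AND following_id = 1
        UNION
        SELECT following_id FROM friends WHERE status = 2 AND follower_id = 1
    """).fetchall()
    return sorted(r[0] for r in rows)
```

With the rows above, `friends_of_user_1(conn)` returns `[2, 3]`; user 4 is excluded by the status filter.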
{ "language": "en", "url": "https://stackoverflow.com/questions/53914440", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Admob closes app after closing ad? I have one Activity in my app. Now I wanted to add an Admob interstitial banner. Unfortunately the old activity is also closed when the user closes the ad. I added something like this: interstitial = new InterstitialAd(this, "*************"); // Create ad request AdRequest adRequest = new AdRequest(); // and begin loading your interstitial interstitial.loadAd(adRequest); interstitial.setAdListener(this); And started the ad by using if (interstitial.isReady()){ interstitial.show(); } The representation of the app is working fine. What can I do to solve the problem? A: There is an attribute in your AndroidManifest.xml file, like android:noHistory="true" You must delete this; it solves the problem. A: Do it like this: private boolean isAddShown = false; Set isAddShown = true when the ad is visible: @Override public void onBackPressed() { // TODO Auto-generated method stub if(!isAddShown){ super.onBackPressed(); }else{ isAddShown = false; } }
{ "language": "en", "url": "https://stackoverflow.com/questions/19047084", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Matlab: Saving the filenames of files with specific extension, contained in a folder I'm working on a function to plot the data of many .csv files contained in the same folder. To do it automatically, I want to save the filenames as a string array, but the only function I know I can use to get the list of files is dir, and by doing x = dir('MyFolder') I get a struct array, not a string or char or whatever array instead. Then, I tried to save in another variable only the first column (from the 3rd row to the end) of the struct array, because the filenames lie there, but I get the same struct without the first two rows. How would you solve it? Thank you in advance. A: files = dir('*.csv') ; % this gives all csv files present in folder N = length(files) ; % total number of files in the folder for i = 1:N thisfile = files(i).name ; end In the above, files is a structure; it has all the information of your csv files. You can extract the names of the files using files(i).name where i = 1,2,...N. If you want all the file names in one variable, use filenames = {files.name}' ; The above line gives you the names of all csv files in the folder as a cell array.
{ "language": "en", "url": "https://stackoverflow.com/questions/50360717", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Reshaping a Numpy Array into lexicographical list of cubes of shape (n, n, n) In order to understand what I'm trying to achieve let's imagine an ndarray a with shape (8,8,8) from which I lexicographically take blocks of shape (4,4,4). So while iterating through such blocks the indexes would look as follows: 0: a[0:4, 0:4, 0:4] 1: a[0:4, 0:4, 4:8] 2: a[0:4, 4:8, 0:4] 3: a[0:4, 4:8, 4:8] 4: a[4:8, 0:4, 0:4] 5: a[4:8, 0:4, 4:8] 6: a[4:8, 4:8, 0:4] 7: a[4:8, 4:8, 4:8] It is these blocks of data which I'm trying to access. Obviously, this can be described by using an expression which converts the current iteration to the corresponding indexes. An example of that is given below. a = np.ones((8,8,8)) f = 4 length = round(a.shape[0] * a.shape[1] * a.shape[2] / f**3) x = a.shape[0] / f y = a.shape[1] / f z = a.shape[2] / f for i in range(length): print(f"{i}: {round((int(i/(z*y))%x)*f)}:{round(f+(int(i/(z*y))%x)*f)}, {round((int(i/z)%y)*f)}:{round(f+(int(i/z)%y)*f)}, {round((i%z)*f)}:{round(f+(i%z)*f)}") My apologies for having to do that to your eyes but it generates the following output: 0: 0:4, 0:4, 0:4 1: 0:4, 0:4, 4:8 2: 0:4, 4:8, 0:4 3: 0:4, 4:8, 4:8 4: 4:8, 0:4, 0:4 5: 4:8, 0:4, 4:8 6: 4:8, 4:8, 0:4 7: 4:8, 4:8, 4:8 So this does actually generate the right indexes, but it only allows you to access multiple blocks at once if they have the same index in the 0th and 1st axis, so no wrapping around. Ideally I would reshape this whole ndarray into an ndarray b with shape (4, 4, 32) and be ordered in such a way that b[:, :, :4] would return a[0:4, 0:4, 0:4], b[:, :, 4:12] returns an ndarray of shape (4, 4, 8) which contain a[0:4, 0:4, 4:8] and a[0:4, 4:8, 0:4] etc. I want this to be as fast as possible, so ideally, I keep the memory layout and just change the view on the array. Lastly, if it helps to think about this conceptually, this is basically a variant of the ndarray.flatten() method but using blocks of shape (4, 4, 4) as "atomic size" if you will. 
Hope this makes it clear enough! A: It is a bit unclear what you want as output. Are you looking for this: from skimage.util.shape import view_as_windows b = view_as_windows(a,(f,f,f),f).reshape(-1,f,f,f).transpose(1,2,3,0).reshape(f,f,-1) suggested by @Paul with similar result (I prefer this answer in fact): N = 8 b = a.reshape(2,N//2,2,N//2,N).transpose(1,3,0,2,4).reshape(N//2,N//2,N*4) output: print(np.array_equal(b[:, :, 4:8],a[0:4, 0:4, 4:8])) #True print(np.array_equal(b[:, :, 8:12],a[0:4, 4:8, 0:4])) #True print(np.array_equal(b[:, :, 12:16],a[0:4, 4:8, 4:8])) #True A: def flatten_by(arr, atomic_size): a, b, c = arr.shape x, y, z = atomic_size r = arr.reshape([a//x, x, b//y, y, c//z, z]) r = r.transpose([0, 2, 4, 1, 3, 5]) r = r.reshape([-1, x, y, z]) return r flatten_by(arr, [4,4,4]).shape >>> (8, 4, 4, 4) EDIT: the function applies C-style flattening to the array, as shown below NOTE: this method and @Ehsan's method both produce "copies" NOT "views"; I'm looking into it and will update the answer if I find a solution flattened = flatten_by(arr, [4,4,4]) required = np.array([ arr[0:4, 0:4, 0:4], arr[0:4, 0:4, 4:8], arr[0:4, 4:8, 0:4], arr[0:4, 4:8, 4:8], arr[4:8, 0:4, 0:4], arr[4:8, 0:4, 4:8], arr[4:8, 4:8, 0:4], arr[4:8, 4:8, 4:8], ]) np.array_equal(required, flattened) >>> True
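The slice bookkeeping from the question's print loop can also be packaged into a small helper and checked against the expected table of blocks. This is a plain-Python sketch of the index arithmetic only (no NumPy), independent of either answer:

```python
def block_slices(i, shape, f):
    """Return the ((x0, x1), (y0, y1), (z0, z1)) bounds of the i-th
    f*f*f block, taken in lexicographic order from an array of `shape`."""
    ny, nz = shape[1] // f, shape[2] // f
    bx, rem = divmod(i, ny * nz)   # block coordinate along axis 0
    by, bz = divmod(rem, nz)       # block coordinates along axes 1 and 2
    return tuple((b * f, b * f + f) for b in (bx, by, bz))
```

For example, `block_slices(5, (8, 8, 8), 4)` gives `((4, 8), (0, 4), (4, 8))`, matching entry 5 of the question's listing, `a[4:8, 0:4, 4:8]`.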
{ "language": "en", "url": "https://stackoverflow.com/questions/62822637", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: CSS fixed nav on top, when opened covers full screen, can't scroll I have a mobile navigation bar set to fixed using CSS at the top of my page so it sticks upon scrolling. When clicked, the full navigation menu is revealed and on many devices it covers the entire screen (which is what I want), but the last 25% or so of it is below the fold. Since the menu is set to fixed, it will not allow scrolling down (when open) to view the hidden content below the fold. Basically, how do I make a fixed element scrollable once it is open and part of it is being hidden below the fold? I know the problem is that the top of the element is still attached to the top of the screen, just not how to make it temporarily "switch" over to an absolute position (which will allow the scrolling down) when the full menu opens. Is this possible? Any help would be greatly appreciated! This is driving me nuts...
{ "language": "en", "url": "https://stackoverflow.com/questions/26458604", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Cypress test runs before Vercel deployment in Github Actions I have successfully created a script to test Vercel test deployments using Cypress integrated with Github Actions. Although the test works and the desired result is achieved, there is a slightly annoying issue: the Cypress test runs (and is skipped) before the Vercel deployment attempt. I am employing a conditional in the GA workflow yml so that the Cypress tests run after a successful test deployment, so it ends up running after the deployment. I would like to be able to omit the first skipped attempt at the Cypress test. I have tried incorporating other Github Actions to fix this, but they block the test from being run at all if the deployment is not finished. I have also tried playing with the settings of the repo, to no avail. Below is my GA yml: name: Cypress Testing on: [deployment_status] jobs: e2e: if: github.event.deployment_status.state == 'success' runs-on: ubuntu-latest steps: - name: Checkout uses: actions/checkout@v2 - name: Setup npmrc run: echo "//registry.npmjs.org/:_authToken=${{secrets.NPM_AUTH_TOKEN}}" > .npmrc - name: Setup npm package run: npm init -y && npm install - name: Setup node 12 uses: actions/setup-node@v1 with: node-version: 12.x - name: Run Cypress uses: cypress-io/github-action@v2 env: CYPRESS_BASE_URL: ${{ github.event.deployment_status.target_url }} Our Vercel project is integrated with Git, so it deploys automatically with every push. Has anyone ever had this issue, where Cypress tries to run first before Vercel deployment? A: What happens in your deploy: * *you push changes to your repo *vercel watches your repo, sees there is a new commit *it sends the status pending to your repo and builds the stuff on vercel servers. So now your repo knows vercel is doing something and you can see that e.g. in your PR in the checks.
=> with this "Status update" your workflow is automatically triggered (but skipped because the condition is false), because the status changed from nothing to "pending" *vercel has finished the deployment and sends another status update to GitHub. => your workflow is triggered again. But this time the if condition in the workflow is true and the rest will be executed. So it is not possible to avoid the skipped workflow with the deployment_status as a trigger for the workflow.
{ "language": "en", "url": "https://stackoverflow.com/questions/69275058", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Passing arguments - interface implementation vs object of concrete class which implements the interface Considering OnCompleteListener is an interface and OnCompleteListenerImpl is a concrete class as follows ### OnCompleteListener interface #### public interface OnCompleteListener { public void onComplete(); } #### OnCompleteListenerImpl #### class OnCompleteListenerImpl implements OnCompleteListener { public void onComplete() { System.out.println("Yeah, the long running task has been completed!"); } } How is this #### Snippet A ##### longRunningTask.setOnCompleteListener(new OnCompleteListener() { @Override public void onComplete() { System.out.println("Yeah, the long running task has been completed!"); } } ); different from #### Snippet B ###### OnCompleteListenerImpl obj = new OnCompleteListenerImpl(); longRunningTask.setOnCompleteListener(obj); A: Both approaches are similar, but there is a big difference in usability * *Class OnCompleteListenerImpl is reusable; you need not write the same override in multiple places *You need to be careful while initializing the OnCompleteListenerImpl object, as you can initialize it in multiple ways; you have to be careful with its scope. *You need not create a new implementation class if you are going with the anonymous approach. In summary, there are different use cases for both approaches; it is up to you which one you want to follow. For clean code and reusability, we go with an implementation class, but for secure, single-instance use you can go with the anonymous class approach.
{ "language": "en", "url": "https://stackoverflow.com/questions/58656938", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to fix this Loss error in pytorch multi_class model I am training a multi_class model for sentiment analysis and when I start the training I get an error msg regarding the Loss function. I am really lost here. from sklearn.metrics import f1_score, roc_auc_score, accuracy_score from transformers import EvalPrediction import torch import numpy as np def multi_class_metrics(predictions, labels, threshold=0.5): softmax = torch.nn.Softmax() probs = softmax(torch.Tensor(predictions)) y_pred = np.zeros(probs.shape) y_pred[np.where(probs >= threshold)] = 1 y_true = labels f1_micro_average = f1_score(y_true=y_true, y_pred=y_pred, average='micro') roc_auc = roc_auc_score(y_true, y_pred, average = 'micro') accuracy = accuracy_score(y_true, y_pred) metrics = {'f1': f1_micro_average, 'roc_auc': roc_auc, 'accuracy': accuracy} return metrics def compute_metrics(p: EvalPrediction): preds = p.predictions[0] if isinstance(p.predictions, tuple) else p.predictions result = multi_class_metrics( predictions=preds, labels=p.label_ids) return result And when I start the training I get this error: Loss trainer = Trainer( model, args, train_dataset=encoded_dataset["train"], eval_dataset=encoded_dataset["validation"], tokenizer=tokenizer, compute_metrics=compute_metrics ) trainer.train() 218 if isinstance(k, str): 219 inner_dict = {k: v for (k, v) in self.items()} --> 220 return inner_dict[k] 221 else: 222 return self.to_tuple()[k]
{ "language": "en", "url": "https://stackoverflow.com/questions/73115638", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Can't continue deploying after a while via capistrano My environment is as follows. Environment Version Rails 7.0.0 Ruby 3.0.0 capistrano 3.16.0 Production environment Amazon EC2 Linux A while into the deploy, I got these logs. Then deploying stopped and did not work anymore. INFO [41bfeeb4] Running /usr/bin/env sudo /bin/systemctl stop puma_〇〇_production as deploy@〇〇 DEBUG [41bfeeb4] Command: ( export RBENV_ROOT="/usr/local/src/rbenv" RBENV_VERSION="3.0.0" ; /usr/bin/env sudo /bin/systemctl stop puma_〇〇_production ) DEBUG [41bfeeb4] We trust you have received the usual lecture from the local System Administrator. It usually boils down to these three things: #1) Respect the privacy of others. #2) Think before you type. #3) With great power comes great responsibility. DEBUG [41bfeeb4] [sudo] deploy password: I think it's related to authorization. I have no idea how to fix it. What can I do? deploy.rb set :application, '〇〇' set :repo_url, '〇〇' set :deploy_to, '/var/www/〇〇' set :puma_threads, [4, 16] set :puma_workers, 0 set :pty, true set :use_sudo, false set :stage, :staging set :deploy_via, :remote_cache set :deploy_to, "/var/www/#{fetch(:application)}" set :puma_bind, "unix://#{shared_path}/tmp/sockets/#{fetch(:application)}-puma.sock" set :puma_state, "#{shared_path}/tmp/pids/puma.state" set :puma_pid, "#{shared_path}/tmp/pids/puma.pid" set :puma_access_log, "#{release_path}/log/puma.access.log" set :puma_error_log, "#{release_path}/log/puma.error.log" set :puma_preload_app, true set :puma_worker_timeout, nil set :puma_init_active_record, true set :puma_restart_command, 'bundle exec puma' set :rbenv_type, :system set :rbenv_path, '/usr/local/src/rbenv' set :rbenv_ruby, '3.0.0' set :linked_dirs, fetch(:linked_dirs, []).push( 'log', 'tmp/pids', 'tmp/cache', 'tmp/sockets', 'vendor/bundle', 'public/system', 'public/uploads', ) set :linked_files, fetch(:linked_files, []).push( 'config/database.yml', 'config/secrets.yml', 'config/puma.rb', '.env', ) namespace :puma do Rake::Task[:restart].clear_actions desc 'Overwritten puma:restart task' task :restart do puts 'Overwriting 
puma:restart to ensure that puma is running. Effectively, we are just starting Puma.' puts 'A solution to this should be found.' invoke 'puma:stop' invoke 'puma:start' end desc 'Create Directories for Puma Pids and Socket' task :make_dirs do on roles(:app) do execute "mkdir #{shared_path}/tmp/sockets -p" execute "mkdir #{shared_path}/tmp/pids -p" end end before :start, :make_dirs end namespace :deploy do desc 'Make sure local git is in sync with remote.' task :check_revision do on roles(:app) do unless `git rev-parse HEAD` == `git rev-parse origin/master` puts 'WARNING: HEAD is not the same as origin/master' puts 'Run `git push` to sync changes.' exit end end end desc 'Restart application' task :restart do on roles(:app), in: :sequence, wait: 5 do invoke 'puma:restart' end end before :starting, :check_revision after :finishing, :compile_assets after :finishing, :cleanup end after 'deploy', 'sitemap:refresh' Capfile require 'capistrano/setup' require 'capistrano/deploy' require 'capistrano/scm/git' install_plugin Capistrano::SCM::Git require 'capistrano/rails' require 'capistrano/rbenv' require 'capistrano/rails/assets' require 'capistrano/rails/migrations' require 'capistrano/bundler' require 'capistrano/puma' require 'capistrano/sitemap_generator' require 'whenever/capistrano' require 'dotenv' Dotenv.load install_plugin Capistrano::Puma install_plugin Capistrano::Puma::Systemd # Loads custom tasks from `lib/capistrano/tasks' if you have any defined. Dir.glob('lib/capistrano/tasks/*.cap').each { |r| import r } A: The deploy user needs to be a sudo user in order to restart the puma systemctl service. You can fix the issue by creating new file inside /etc/sudoers.d directory and add this line deploy ALL=(ALL) NOPASSWD:/bin/systemctl
{ "language": "en", "url": "https://stackoverflow.com/questions/70622422", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Is there a ScriptEngine or eval()-like function in Swift? In Java, we can build up expressions to be called using ScriptEngine. This is nice for building up frameworks based on a common naming convention. In JavaScript, there is of course eval(). Does Swift have some sort of mechanism for evaluating a string which contains a Swift expression? I'm aware that this could be potentially abused; however, it would simplify my present development. A: No. Swift is a compiled language, and the runtime doesn't include the compiler. The iOS SDK doesn't provide a way to evaluate run-time Swift code. You can execute JavaScript using JavaScriptCore, and JavaScriptCore makes it pretty easy to expose Swift objects and functions to the script. Maybe that will help you.
{ "language": "en", "url": "https://stackoverflow.com/questions/26244069", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Distribution of objects in equal size groups Let there be a large set of objects, each providing a getSize() method that returns a number representing a "size" of some kind. This set is (or can be) sorted according to size. For simplicity, also assume that: * *The group size is larger than the largest object size, so that each group contains at least one object *There is no limitation to the total number of groups Given the above, what would be an efficient way of distributing those objects into groups, so that the total size of all objects in a group is (approximately) the same as that of all other groups? Obviously one can walk the set linearly and put an object into a group if it fits, or create another group and put it there if it does not, but this approach does not achieve equal size distribution. The problem is independent of programming languages, but an example implementation in Java would also be interesting.
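Since the question asks for an example implementation: here is a minimal sketch of one common heuristic, greedy "longest processing time" (sort descending, always put the next object into the currently lightest group), shown in Python rather than Java for brevity. The `distribute` helper and the sample sizes are made up for illustration, and this is only an approximation, not an optimal partition.

```python
import math

def distribute(sizes, group_size):
    # Pick a group count from the total size, then always drop the
    # next-largest object into the currently lightest group (greedy LPT).
    k = max(1, math.ceil(sum(sizes) / group_size))
    groups = [[] for _ in range(k)]
    totals = [0] * k
    for s in sorted(sizes, reverse=True):
        i = totals.index(min(totals))  # lightest group so far
        groups[i].append(s)
        totals[i] += s
    return groups

print(distribute([7, 5, 4, 3, 2, 1], 12))  # → [[7, 3, 1], [5, 4, 2]] (totals 11 and 11)
```

Sorting descending matters: placing large objects first leaves the small ones free to even out the totals at the end, which is exactly what the linear first-fit walk in the question fails to do.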
{ "language": "en", "url": "https://stackoverflow.com/questions/17077708", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Elasticsearch Python - No viable nodes were discovered on the initial sniff attempt I have a cluster of Elasticsearch nodes running on different AWS EC2 instances. They internally connect via a network within AWS, so their network and discovery addresses are set up within this internal network. I want to use the python elasticsearch library to connect to these nodes from the outside. The EC2 instances have static public IP addresses attached, and the elastic instances allow https connections from anywhere. The connection works fine, i.e. I can connect to the instances via browser and via the Python elasticsearch library. However, I now want to set up sniffing, so I set up my Python code as follows: self.es = Elasticsearch([f'https://{elastic_host}:{elastic_port}' for elastic_host in elastic_hosts], sniff_on_start=True, sniff_on_connection_fail=True, sniffer_timeout=60, sniff_timeout=10, ca_certs=ca_location, verify_certs=True, http_auth=(elastic_user, elastic_password)) If I remove the sniffing parameters, I can connect to the instances just fine. However, with sniffing, I immediately get elastic_transport.SniffingError: No viable nodes were discovered on the initial sniff attempt upon startup. http.publish_host in the elasticsearch.yml configuration is set to the public IP address of my EC2 machines, and the /_nodes/_all/http endpoint returns the public IPs as the publish_address (i.e. x.x.x.x:9200). We have localized this problem to the elasticsearch-py library after further testing with our other microservices, which could perform sniffing with no problem. A: After testing with our other microservices, we found out that this problem was related to the elasticsearch-py library rather than our elasticsearch configuration, as our other microservice, which is golang based, could perform sniffing with no problem. 
After further investigation we linked the problem to this open issue on the elasticsearch-py library: https://github.com/elastic/elasticsearch-py/issues/2005. The problem is that the authorization headers are not properly passed to the request made to Elasticsearch to discover the nodes. To my knowledge, there is currently no fix that does not involve altering the library itself. However, the error message is clearly misleading.
{ "language": "en", "url": "https://stackoverflow.com/questions/75112032", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Drupal Page View not being added to Main Menu Drupal 7 Views has a "Menu" button that should allow one to add a Page View to the Main Menu. I have cloned a view, modified the "Path" and "Menu" parameter. It seems simple enough, because when I create a View via "Add View" I can add that view to the main menu. But, cloning a view and then modifying the "Menu" params does not cause a menu item to show up in the main menu. I have cleared cache a number of times but still nothing appears in the main menu. Thanks A: I know this may be silly, but do you happen to save the view after cloning? Also, make sure that the path is not already existing within your Drupal site. A view creates a menu entry, check within your menu items listing that this entry is enabled.
{ "language": "en", "url": "https://stackoverflow.com/questions/9854673", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: how to normalize the output of neural network in tensorflow 1 I have created a neural network using tensorflow 1.x as below: def coder(x): layer_1 = tf.nn.relu(tf.add(tf.matmul(x, weights['encoder_h1']),biases['encoder_b1'])) layer_2 = tf.nn.relu(tf.add(tf.matmul(layer_1, weights['encoder_h2']), biases['encoder_b2'])) layer_3 = tf.nn.tanh(tf.add(tf.matmul(layer_2, weights['encoder_h3']),biases['encoder_b3'])) return layer_3 What I need is to normalize the output of that neural network. I have tried to do that manually by: layer_4 = (layer_3 * tf.sqrt(tf.cast(tf.shape(layer_3)[0], dtype=tf.float32))) / tf.norm(layer_3) return layer_4 But I think this way is not correct. Is there a way I can normalize the output of the neural network, such as by adding a normalization layer or something like that?
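For what it's worth, TF1 ships a ready-made op for this, `tf.nn.l2_normalize` (the axis argument is called `dim` in older 1.x releases), which rescales each row of a tensor to unit L2 norm inside the graph. The arithmetic it performs is just division by the row's norm; a framework-free sketch of that math (the `l2_normalize` helper name here is made up for illustration):

```python
import math

def l2_normalize(vec, eps=1e-12):
    # Divide each component by the vector's L2 norm (guarding against a
    # zero vector), which is what an L2 normalization layer does per row.
    norm = math.sqrt(sum(v * v for v in vec))
    return [v / max(norm, eps) for v in vec]

print(l2_normalize([3.0, 4.0]))  # → [0.6, 0.8]
```

Note this is per-row normalization; the manual expression in the question instead rescales the whole batch by sqrt(batch_size)/‖batch‖, which mixes rows together.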
{ "language": "en", "url": "https://stackoverflow.com/questions/67556001", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Vagrant and symfony2 I'm having kind of a bizarre issue related to installing Symfony2 from within a vagrant environment. The environment is set up correctly and is running a web server that is serving files from a folder that is shared with the vagrant environment that is located in the base directory of vagrant. Basically, vagrant is initiated in directory foo and then within foo, there is a directory called webroot. Vagrant automagically shares the foo directory. An apache server is set up to run so that webroot is the base http directory. This all works fine and I am able to serve basic HTML, PHP and the MySQL connection is tested to be fine. I used composer to install Symfony the recommended way, into a directory inside /webroot/ called Symfony. All of the files now exist within the correct directory. The configuration is correct and there are no items that Symfony claims need to be changed in /config.php. The issue comes when I attempt to load /app_dev.php. It throws an exception claiming that it cannot create a file named cache in the /app directory. As chmod +a is not supported within the vagrant box I am using, I elected to set permissions by uncommenting umask(0000) in app_dev. Assuming it was a permission problem, I tried using chmod to adjust the permissions both within the vagrant environment and within osx to 777 for everything. What's strange is that when I chmod a file or directory inside the vagrant environment, it claims to set 777 correctly but then when I ls -l, the permissions have not changed. However, when I chmod a file or directory from OUTSIDE the vagrant environment within the webroot folder, the permissions persist. As symfony does not have r/w permissions within the environment, it cannot create the necessary cache and log files. When I run symfony from the command line on osx, everything works fine. 
Does anyone have any insight as to how to change the permissions for the /webroot directory so things within the vagrant environment can actually read and write to it as chmod doesn't appear to work? A: I think it's a question of user rights. Your apache + php is probably launched by root. You have to set rights with root. Two possibilities : sudo su chmod -R 777 app/cache or sudo chown -v app/cache sudo chmod -R 777 app/cache You will probably have to do the same thing with the log file. My vagrant file if you need it : # -*- mode: ruby -*- # vi: set ft=ruby : Vagrant.configure("2") do |config| config.vm.box = "precise64" #Box Name config.vm.box_url = "http://files.vagrantup.com/precise64.box" #Box Location config.vm.provider :virtualbox do |virtualbox| virtualbox.customize ["modifyvm", :id, "--memory", "2048"] end config.vm.synced_folder ".", "/home/vagrant/synced/", :nfs => true #config.vm.network :forwarded_port, guest: 80, host: 8080 # Forward 8080 rquest to vagrant 80 port config.vm.network :private_network, ip: "1.2.3.4" config.vm.network :public_network config.vm.provision :shell, :path => "vagrant.sh" end vagrant.sh #!/usr/bin/env bash #VM Global Config apt-get update #Linux requirement apt-get install -y vim git #Apache Install apt-get install -y apache2 #Apache Configuration rm -rf /var/www ln -fs /home/vagrant/synced/web /var/www chmod -R 755 /home/vagrant/synced #Php Install apt-get install -y python-software-properties add-apt-repository -y ppa:ondrej/php5 apt-get update apt-get install -y php5 libapache2-mod-php5 #Php Divers apt-get install -y php5-intl php-apc php5-gd php5-curl #PhpUnit apt-get install -y phpunit pear upgrade pear pear channel-discover pear.phpunit.de pear channel-discover components.ez.no pear channel-discover pear.symfony.com pear install --alldeps phpunit/PHPUnit #Php Configuration sed -i "s/upload_max_filesize = 2M/upload_max_filesize = 10M/" /etc/php5/apache2/php.ini sed -i "s/short_open_tag = On/short_open_tag = Off/" 
/etc/php5/apache2/php.ini sed -i "s/;date.timezone =/date.timezone = Europe\/London/" /etc/php5/apache2/php.ini sed -i "s/memory_limit = 128M/memory_limit = 1024M/" /etc/php5/apache2/php.ini sed -i "s/_errors = Off/_errors = On/" /etc/php5/apache2/php.ini #Reload apache configuration /etc/init.d/apache2 reload #Composer php -r "eval('?>'.file_get_contents('https://getcomposer.org/installer'));" mv -f composer.phar /usr/local/bin/composer.phar alias composer='/usr/local/bin/composer.phar' #Postgres apt-get install -y postgresql postgresql-client postgresql-client php5-pgsql su - postgres -c "psql -U postgres -d postgres -c \"alter user postgres with password 'vagrant';\"" A: An updated answer for nfs: config.vm.synced_folder "www", "/var/www", type:nfs, :nfs => { :mount_options => ["dmode=777","fmode=777"] } A: Update as of 15th Jan 2016. Instructions for Vagrant 1.7.4+ and Symfony 3. This works. On a fresh Ubuntu 14.04 install, ACL was installed but I couldn't use +a or setfacl to fix the permissions issues, and of course, as soon as you change any permissions in terminal in vagrant, they're reset to vagrant:vagrant again. I added the following to my vagrant file: # Symfony needs to be able to write to it's cache, logs and sessions directory in var/ config.vm.synced_folder "./var", "/vagrant/var", :owner => 'vagrant', :group => 'www-data', :mount_options => ["dmode=775","fmode=666"] This tells Vagrant to sync var/logs and var/cache (not to be confused with /var/, these are in the root Symfony directory) and have them owned by vagrant:www-data. This is the same as doing a sudo chown vagrant:www-data var/, except Vagrant now does it for you and enforces that instead of enforcing vagrant:vagrant. Note there are no 777 'hacks' here. As soon as I added that, I didn't get any more permissions errors in the apache log and I got a nice Symfony welcome screen. I hope that helps someone! 
A: Nothing worked for me other than changing the location of the cache and logs folders to /tmp AppKernel.php public function getCacheDir() { if (in_array($this->getEnvironment(), ['test','dev'])) { return '/tmp/sfcache/'.$this->getEnvironment(); } return parent::getCacheDir(); } public function getLogDir() { if (in_array($this->getEnvironment(), ['test','dev'])) { return '/tmp/sflogs/'.$this->getEnvironment(); } return parent::getLogDir(); }
{ "language": "en", "url": "https://stackoverflow.com/questions/18029973", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: Duplicating items already in array Every time I run the app, and then re-run it, it saves the same items into the NSUserDefaults even if they are already there. I tried to fix that with contains code, but it hasn't worked. What am I missing? for days in results! { let nD = DayClass() nD.dayOfTheWeek = days["D"] as! String let defaults = NSUserDefaults.standardUserDefaults() if var existingArr = defaults.arrayForKey("D") as? [String] { if existingArr.contains(days["D"] as! String) == false { existingArr.append(nD.dayOfTheWeek) } } else { defaults.setObject([nD.dayOfTheWeek], forKey: "D") } } A: Every time I run the app, and then re-run it, it saves the same items into the NSUserDefaults even if they are already there. Yes, because that's exactly what your code does: defaults.setObject(existingArr, forKey: "D") But what is existingArr? It's the defaults you've just loaded before: if var existingArr = NSUserDefaults.standardUserDefaults().arrayForKey("D") as? [String] So what happens is that when you enter the .contains branch, you always do the same operation: you save the existing array. Contrary to what your comment in the code states, you're not appending anything to any array in that block, you're saving the existing one.
{ "language": "en", "url": "https://stackoverflow.com/questions/35998773", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Single postgres transaction locks xmin for the whole cluster One of my applications performs a lock with the following request SELECT … FROM locks WHERE id = 1 FOR UPDATE SKIP LOCKED; This query lasts as long as the application's lifetime. The table itself contains only one column with a single row id = 1. I know it's not the best approach to acquire locks and I expect xmin for that database to be stuck with this SELECT … FOR UPDATE … xid. But in fact I'm getting a situation where all transactions in my cluster, even in other databases, are stuck with that xmin, including the autovacuum process. At first I was thinking that streaming replication with enabled hot_standby_feedback may be involved somehow, but it continues to happen even with replication turned off entirely. Server version is 11.3. There are no long-running prepared transactions in pg_prepared_xacts or any replication running. Xmin blocks cluster-wide exactly on the SELECT FOR UPDATE xid.
{ "language": "en", "url": "https://stackoverflow.com/questions/73072493", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Vim: no highlighting at startup I am using vim together with NERDTree and MiniBufExplorer. My colorscheme is peaksea. If I start vim, no syntax is highlighted in the first buffer. The other buffers have syntax highlighted. I have "syntax enable" in my vimrc. If I type :edit the syntax gets highlighted. So I tried autocmd VimEnter * edit but still nothing is highlighted. Did anyone encounter a similar problem or has anyone an idea how to fix this? A: I remapped F12 to redo my syntax highlighting in case it gets messed up: nnoremap <F12> :syntax sync fromstart<cr> Maybe you can just run it as an autocmd event in your vimrc based on reading a new buffer? autocmd BufReadPost * :syntax sync fromstart
{ "language": "en", "url": "https://stackoverflow.com/questions/14143858", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: AWK Replace strings by the same strings with underscores I have three files. One is the original file, another contains parts of the lines of the original, and a third contains the modified parts that have to replace them in the original. And I do not even know where to start. Can you help me? Match file: a demandé de montrer grandes vacances de faire a montré a remis bien posé n ' quand il l ' arrière du véhicule modèle essence Replace file: a_demandé_de_montrer grandes_vacances de_faire a_montré a_remis bien_posé n_' quand_il_l_' arrière_du_véhicule modèle_essence Original file: A 120km/h, la consommation tourne autour de 7.5l/100km si le vent est dans le dos... A ce jour, je suis totalement satisfaite A ce moment-là aux grandes vacances on m'a demandé de montrer le bon. A chacun son choix A chaque fois c'est moi qui dois les recontacter. A eux de faire leurs avis.... A l'achat, le vendeur m'a montré comment rabattre le siège arrière, mais quand il l'a remis en place, ce n'était pas bien posé. A l'arrière du véhicule, il était inscrit qu'il s'agissait d'une diesel, alors que c'est un modèle essence. A la décharge du garage nous avons constaté un changement de personnel (nouveau directeur nouveau préposé a l accueil) laissons leur un temps d adaptation ... A la limite, chacun son garagiste. Desired output: A 120km/h, la consommation tourne autour de 7.5l/100km si le vent est dans le dos... A ce jour, je suis totalement satisfaite A ce moment-là aux grandes vacances on m'a_demandé_de_montrer le bon. A chacun son choix J'avais droit aux grandes_vacances à l'entretien kit vacances. A chaque fois c'est moi qui dois les recontacter. A eux de_faire leurs avis.... A l'achat, le vendeur m'a_montré comment rabattre le siège arrière, mais quand il l'a_remis en place, ce n'était pas **bien_posé**. A l'arrière_du_véhicule, il était inscrit qu'il s'agissait d'une diesel, alors que c'est un modèle_essence. 
A la décharge du garage nous avons constaté un changement de personnel (nouveau directeur nouveau préposé a l accueil) laissons leur un temps d adaptation ... A la limite, chacun son garagiste. A: You didn't show any effort on the implementation but this should solve your problem. awk -F"\t" 'NR==FNR{a[$1]=$2;next} {for(k in a) gsub(k,a[k])}1' <(paste search replace) text create a lookup table, do the replacement based on lookup. A: There may be a better way to do this, but if you would like to use AWK, you can assign a variable for each file you read, build some arrays with the find and replace strings, and then loop through each find/replace value: awk ' file == 1 { source[++s] = $0 } file == 2 { replace[++r] = $0 } file == 3 { for (i = 1; i < s; i++) { gsub (source[i], replace[i], $0) } print } ' file=1 match_file \ file=2 replace_file \ file=3 original_file I don't claim this is the most efficient way to do it, but I think it will do what you describe. A: Here is one approach to solving this using awk: #!/usr/bin/awk -f FILENAME == ARGV[1] { m[FNR]=$0 } # Store the match word in an array FILENAME == ARGV[2] { r[FNR]=$0 } # Store the replacement word in a second array FILENAME == ARGV[3] { for (i in m) gsub(m[i],r[i]); print } # Do the replacement for every line in file3 Run it like this: ./script.awk match_file replace_file original_file A: This is the output of the first two codes. Perhaps the encoding? 
a_demand▒_de_montreraa_demand▒_de_montreria_demand▒_de_montrersa_demand▒_de_montrersa_demand▒_de_montreroa_demand▒_de_montrerna_demand▒_de_montrersa_demand▒_de_montrer a_demand▒_de_montrerla_demand▒_de_montrerea_demand▒_de_montrerua_demand▒_de_montrerra_demand▒_de_montrer a_demand▒_de_montrerua_demand▒_de_montrerna_demand▒_de_montrer a_demand▒_de_montrerta_demand▒_de_montrerea_demand▒_de_montrerma_demand▒_de_montrerpa_demand▒_de_montrersa_demand▒_de_montrer a_demand▒_de_montrerda_demand▒_de_montrer a_demand▒_de_montreraa_demand▒_de_montrerda_demand▒_de_montreraa_demand▒_de_montrerpa_demand▒_de_montrerta_demand▒_de_montreraa_demand▒_de_montrerta_deman d▒_de_montreria_demand▒_de_montreroa_demand▒_de_montrerna_demand▒_de_montrer a_demand▒_de_montrer.a_demand▒_de_montrer.a_demand▒_de_montrer.a_demand▒_de_montrer a_demand▒_de_montrerAa_demand▒_de_montrer a_demand▒_de_montrerla_demand▒_de_montreraa_demand▒_de_montrer a_demand▒_de_montrerla_demand▒_de_montreria_demand▒_de_montrerma_demand▒_de_montreria_demand▒_de_montrerta_demand▒_de_montrerea_demand▒_de_montrer,a_demand▒_de_montrer a_demand▒_de_montrerca_demand▒_de_montrerha_demand▒_de_montreraa_demand▒_de_montrerca_demand▒_de_montrerua_demand▒_de_montrerna_demand▒ I tried this.. BEGIN { while ((getline ln1 < mt) > 0) { source[++s] = ln1; } while ((getline ln2 < rp) > 0) { replace[++r] = ln2; } } { for ( i = 1; i < s; i++) gsub (source[i], replace[i], $0) print; } A: With GNU awk for ARGIND (with other awks just add FNR==1{ARGIND++}): $ cat tst.awk ARGIND==1 { a[FNR] = $0; next } ARGIND==2 { map[a[FNR]] = $0; next } { for (m in map) { gsub(m,map[m]) } print } $ awk -f tst.awk match.txt replace.txt original.txt A 120km/h, la consommation tourne autour de 7.5l/100km si le vent est dans le dos... A ce jour, je suis totalement satisfaite A ce moment-là aux grandes_vacances on m'a_demandé_de_montrer le bon. A chacun son choix A chaque fois c'est moi qui dois les recontacter. A eux de_faire leurs avis.... 
A l'achat, le vendeur m'a_montré comment rabattre le siège arrière, mais quand il l'a_remis en place, ce n'était pas bien_posé. A l'arrière_du_véhicule, il était inscrit qu'il s'agissait d'une diesel, alors que c'est un modèle_essence. A la décharge du garage nous avons constaté un changement de personnel (nouveau directeur nouveau préposé a l accueil) laissons leur un temps d adaptation ... A la limite, chacun son garagiste. Like all other solutions posted so far, the above will behave undesirably if you have regexp metachars in your match file or regexp capture group identifiers in your replace file. If that can happen use index() and substr() instead of gsub().
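As a side note to the caveat above about regexp metacharacters: the same mapping can be done with literal (non-regexp) substring replacement, sketched here in Python with a hypothetical `apply_replacements` helper; the pairs are applied longest-match-first so that when one match phrase contains another, the longer phrase is handled first.

```python
def apply_replacements(text, match_lines, replace_lines):
    # Pair line i of the match file with line i of the replace file;
    # apply longest match first so a phrase that contains another
    # match phrase is handled before its shorter substring.
    pairs = sorted(zip(match_lines, replace_lines), key=lambda p: -len(p[0]))
    for src, dst in pairs:
        text = text.replace(src, dst)  # literal replace, no regexp semantics
    return text

line = "A eux de faire leurs avis...."
print(apply_replacements(line, ["de faire"], ["de_faire"]))  # → A eux de_faire leurs avis....
```

Because `str.replace` treats its arguments literally, characters like `'`, `(` or `.` in the match file need no escaping, unlike `gsub`.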
{ "language": "fr", "url": "https://stackoverflow.com/questions/33373710", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: GStreamer - Sample Plugin I am a newbie in gstreamer and trying to develop a sample plugin for captions decoding. I have downloaded the gStreamer Plugin template: based on this information. When I launched the plugin from command line, it is working fine. I wrote a sample application to verify the plugin. But now, I am facing a problem in setting pipeline state to PLAYING. Below is the code snippet Any inputs would be of great help. Thanks in advance, Kranti gst_init(NULL, NULL); loop = g_main_loop_new (NULL, TRUE); g_print("\n Gstreamer is Initialized and Created the loop "); pipeline = gst_pipeline_new ("pipeline"); source = gst_element_factory_make ("filesrc", "source"); filter = gst_element_factory_make ("myfilter", "testfilter"); sink = gst_element_factory_make ("fakesink", "sink"); if((NULL != pipeline) && (NULL != source) && (NULL != filter) && (NULL != sink)) { g_print("\n Successfully created the factory elements "); g_object_set(G_OBJECT (source), "location", fileName, NULL); g_print("\n Set the file name \n"); g_object_set(G_OBJECT (filter), "silent", 1, NULL); g_print("\n Set the silent type \n"); /* we add a message handler */ bus = gst_pipeline_get_bus (GST_PIPELINE (pipeline)); bus_watch_id = gst_bus_add_watch (bus, bus_call, loop); gst_object_unref (bus); g_print("\n Created bus and a monitor to watch it"); gst_bin_add_many(GST_BIN(pipeline), source, filter, sink, NULL); gst_element_link_many(source, filter, sink); g_print("\n Added and Linked the factory elements"); g_signal_connect (filter, "pad-added", G_CALLBACK (on_pad_added), filter); g_print ("Now reading: %s\n", "test.txt"); g_print ("Setting the pipeline state to PLAYING "); ret = gst_element_set_state (pipeline, GST_STATE_PLAYING); if(ret == GST_STATE_CHANGE_FAILURE) { g_print("\n Failure in setting pipeline state to PLAYING \n"); } else { g_print("\n Successfully set the pipeline state to playing \n"); } } else { g_print("\n Failure in creating factory elements"); } A: After trying 
a few examples with GStreamer elements, I found the problem. In addition to filesrc, filter, and fakesink: if I also add a 'decoder' element to the pipeline, then I am able to change the state to PLAYING. Why that is required, I am still trying to figure out. And sometimes the name used to create the pipeline also causes problems: it is better to use a unique name rather than "pipeline" in gst_pipeline_new ("pipeline");
{ "language": "en", "url": "https://stackoverflow.com/questions/15652035", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: how to decompose a nested list? I have a nested list and want to decompose it, exactly the inverse of merging. Suppose I have the list below: f=[[1,2,3],[1,2,3],[1,2,3],[1,2,3]] I am trying to obtain: a1=[1,1,1,1] a2=[2,2,2,2] a3=[3,3,3,3] I have tried these commands: a1=f[:][0:1] a2=f[:][1:2] a3=f[:][2:3] but they do not work correctly. Do you know what I am doing wrong? A: Try this: f=[[1,2,3],[1,2,3],[1,2,3],[1,2,3]] for i in zip(*f): print(i) Output: (1, 1, 1, 1) (2, 2, 2, 2) (3, 3, 3, 3) A: zip() in conjunction with the * operator can be used to unzip a list; it returns an iterator of tuples. Use map() to apply list to each tuple that comes out of zip (map itself also returns an iterator), and then pass that iterator to list(). a = list(map(list, zip(*f))) Or a = [] for i in map(list, zip(*f)): a.append(i) A: What you can do here is use list comprehensions. f=[[1,2,3],[1,2,3],[1,2,3],[1,2,3]] a1=[x[0] for x in f] a2=[x[1] for x in f] a3=[x[2] for x in f] print(a1,a2,a3) >>>[1, 1, 1, 1] [2, 2, 2, 2] [3, 3, 3, 3] A: You may do so like the following: a = [list(i) for i in zip(*f)] Now: a[0] will be [1, 1, 1, 1] a[1] will be [2, 2, 2, 2] a[2] will be [3, 3, 3, 3]
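As an aside on why the original slicing attempt fails: f[:] is only a shallow copy of the whole outer list, so slicing that copy again still selects rows, not columns. A minimal runnable sketch:

```python
f = [[1, 2, 3], [1, 2, 3], [1, 2, 3], [1, 2, 3]]

# f[:] copies the outer list, so a second slice still picks whole rows.
attempt = f[:][0:1]
assert attempt == [[1, 2, 3]]  # the first row wrapped in a list, not a column

# zip(*f) transposes the rows into columns.
a1, a2, a3 = (list(col) for col in zip(*f))
assert a1 == [1, 1, 1, 1]
assert a2 == [2, 2, 2, 2]
assert a3 == [3, 3, 3, 3]
```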
{ "language": "en", "url": "https://stackoverflow.com/questions/53442024", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: Building the computation graph from an expression I'm trying to implement different differentiation algorithms. I'm having trouble breaking up an expression like this: x = Variable() y = Variable() F_x = (x**2) * y + y + 2 I want to build the graph for F_x. Edit: To clarify, the graph of F_x would look something like

         Add
        /   \
    Mult     Add
    /  \     / \
  EXP   y   y   2
  / \
 x   2

A: You can use a Python library called scipy which has functions that can produce graphs
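One common way to get such a graph, sketched here with hand-rolled Node/Variable classes that are not from any particular library (and with names added to Variable so the tree can be printed), is to overload the arithmetic operators so that every operation records its operands:

```python
# Hypothetical Node/Variable classes: operator overloading makes Python
# build the expression tree for us while evaluating (x**2) * y + y + 2.
class Node:
    def __init__(self, op=None, left=None, right=None, value=None, name=None):
        # op is None for leaves (variables and constants)
        self.op, self.left, self.right = op, left, right
        self.value, self.name = value, name

    @staticmethod
    def _wrap(x):
        # Turn plain numbers into constant leaf nodes
        return x if isinstance(x, Node) else Node(value=x)

    def __add__(self, other):
        return Node("add", self, Node._wrap(other))

    def __radd__(self, other):
        return Node("add", Node._wrap(other), self)

    def __mul__(self, other):
        return Node("mul", self, Node._wrap(other))

    def __rmul__(self, other):
        return Node("mul", Node._wrap(other), self)

    def __pow__(self, other):
        return Node("pow", self, Node._wrap(other))

    def __repr__(self):
        if self.op is None:
            return self.name if self.name is not None else str(self.value)
        return f"({self.op} {self.left!r} {self.right!r})"

class Variable(Node):
    def __init__(self, name):
        super().__init__(name=name)

x = Variable("x")
y = Variable("y")
F_x = (x**2) * y + y + 2
print(F_x)  # (add (add (mul (pow x 2) y) y) 2)
```

Each differentiation algorithm can then walk this tree; for reverse-mode differentiation you would traverse it from the root down, for forward-mode from the leaves up.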
{ "language": "en", "url": "https://stackoverflow.com/questions/40964042", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-2" }
Q: Generate 6 digit random number I just want to generate a 6 digit random number, and the range should start from 000000 to 999999. new Random().nextInt(999999) returns a number, but it is not always 6 digits. A: It's as simple as that; you can use your code and just do one thing extra here: String.format("%06d", number); This returns your number as a string padded with leading zeros to 6 characters, so 0 becomes "000000". Here is the code. public static String getRandomNumberString() { // It will generate a 6 digit random number // from 0 to 999999 Random rnd = new Random(); int number = rnd.nextInt(1000000); // this will convert any number sequence into 6 characters return String.format("%06d", number); } A: If you need a six digit number without a leading zero, it has to start at 100000: int i = new Random().nextInt(900000) + 100000; Leading zeros have no effect on an int value; 000000 is the same as 0. You can further simplify it with ThreadLocalRandom if you are on Java 7+: int i = ThreadLocalRandom.current().nextInt(100000, 1000000) A: 1 + r.nextInt(9) always gives a leading digit between 1 and 9. You then multiply it by 100000 to satisfy the six-digit requirement and add a number in [0..99999]: public int gen() { Random r = new Random( System.currentTimeMillis() ); return ((1 + r.nextInt(9)) * 100000 + r.nextInt(100000)); } A: I know it's a roundabout way, but you can do something like this: create a class for BinaryNumber; create a constructor that generates a char[] of 6 characters, where every one is generated with a randomiser from 0 to 1; override the toString() method so that it returns the digits char[] as a string if you want to display it.
then create a method toInt() that examines the string char by char with a for loop and turns it into a decimal-base number by multiplying the current digit by 10 to the power of i: char[] digits = {'1', '0', '1', '1', '0', '1'}; // random int result = 0; for( int i = 0; i < digits.length; i++) { result += Character.getNumericValue(digits[i]) * Math.pow(10, i); } return result; A: This is code in Java which generates a 6 digit random number. import java.util.*; public class HelloWorld { public static void main(String[] args) { Random r = new Random(); HashSet<Integer> set = new HashSet<Integer>(); while(set.size() < 1) { int ran = r.nextInt(900000) + 100000; set.add(ran); } for(int random1 : set) { System.out.println(random1); } } }
{ "language": "en", "url": "https://stackoverflow.com/questions/51322750", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17" }
Q: How could I open an individual Python file (or any code file) in an editor from the command (os.system(Path)) or (os.startfile(path/Filename))? I am creating a script that opens an external file through two methods: os.system(Path) or os.startfile(Path) This works for text files; however, code files such as Python scripts get executed instead of opened. I would like the option to open them in a text editor. How would I do this in a Python 2.7 script? The text editor I use is VS Code. A: You can use it like this: os.system('code test_01.py') A: You can use either one, but first, read the docs on what os.system and os.startfile do. os.system(command) Execute the command (a string) in a subshell. This is implemented by calling the Standard C function system(), and has the same limitations. Changes to sys.stdin, etc. are not reflected in the environment of the executed command. So this basically runs the command string you pass to it. If your intention is to open a file in VS Code, then you need to check if you can use the VS Code command for opening files/folders from the command line: code myfile.py If that works on your terminal, then your Python script would basically just be: os.system("code myfile.py") os.startfile(path[, operation]) Start a file with its associated application. When operation is not specified or 'open', this acts like double-clicking the file in Windows Explorer, or giving the file name as an argument to the start command from the interactive command shell: the file is opened with whatever application (if any) its extension is associated. I assume you are on Windows, because startfile is only available on Windows. The main thing here is that startfile has the same behavior as double-clicking the file in Windows Explorer. So, first make sure that when you double-click on a file, it opens in VS Code. If it doesn't, then you need to associate that file type with VS Code first. This is usually done by right-click > "Open with.."
then selecting VS Code from the list. Once double-clicking on a file opens it in VS Code, then your Python script would simply be: os.startfile("myfile.py", "open") The "open" here is optional, but I prefer to be explicit.
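Not part of the answers above, but a hedged alternative sketch: subprocess passes the editor and path as separate arguments, which avoids os.system's shell-quoting problems with spaces in paths. The default editor command name is an assumption (VS Code's CLI is `code` only if it is on PATH):

```python
import subprocess

def open_in_editor(path, editor="code"):
    """Open `path` in a text editor instead of executing the file.

    `editor` is assumed to be a CLI command on PATH (e.g. VS Code's `code`).
    Returns True if the editor launched and exited with status 0.
    """
    try:
        # A list argument means no shell parsing, so spaces in `path` are safe.
        return subprocess.call([editor, path]) == 0
    except OSError:
        # Raised when the editor executable cannot be found.
        return False
```

With shell quoting out of the picture, a path like C:\My Files\test.py no longer needs manual escaping.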
{ "language": "en", "url": "https://stackoverflow.com/questions/61391514", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: Setting a select option default to a previously set option In my modal window, I have a select where a user can select a breakfast meal from my database. By selecting a meal and submitting it, a record is saved in the database. However, I would like to achieve that when the user enters the same modal window again, whatever meal they selected beforehand appears as the default option. I have created a query that checks to see if a meal has already been selected (optionquery1). Any ideas on how to achieve this? Thanks. On a side note, I'm not sure that I'm setting the date PHP variable correctly. The day, month and year are stored in the HTML hidden inputs, but I'm not sure how to extract the values. <div id="myModal" class="modal"> <!-- Modal content --> <div class="modal-content"> <span class="close">&times;</span> <div class="modal-body"> <form action="my_planner.php" method="post"> <!--<span class="test5"></span>--> <!--<span class="test"><input type="text" name="id_disabled" value="" disabled/>--> <input class="test" type="text" name="id1" value="" style="text-align:center; border:0px; font-weight:bold; font-size: 25px;"/> <hr class="my-4"> <input type="hidden" name="id" value=""/> <input type="hidden" name="month" value=""/> <input type="hidden" name="year" value=""/> <br> <p id="breakfastTag"><br>Breakfast:</p> <select id="breakfast" name="breakfast"> <option value="" disabled selected>Select your breakast</option> <?
$meal='breakfast'; $date='<input type="hidden" name="id" value=""/>'.'<input type="hidden" name="month" value=""/>'.'<input type="hidden" name="year" value=""/>'; $query = $db->prepare("select * from recipe_category where categoryID=1"); $query->execute(); while ($results = $query->fetch()) { $recipeID=$results["recipeID"]; $query3 = $db->prepare("select * from recipe where id=:recipeID"); $dbParams3=array('recipeID'=>$recipeID); $query3->execute($dbParams3); $breakfast = $query3->fetchAll(); $optionquery = $db->prepare("select * from user_planner where userID=:userID and date=:date and meal=:meal"); $optionParams= array ('userID'=>$thisUser, 'date'=>$date,'meal'=>$meal); $optionquery->execute($optionParams); while ($results12 = $optionquery->fetch()) { $selectedmeal = $results12['recipeID']; } $optionquery1 = $db->prepare("select * from recipe where id=:recipeID"); $optionParams1= array ('recipeID'=>$selectedmeal); $optionquery1->execute($optionParams1); while ($results123 = $optionquery1->fetch()) { $selectedrecipe = $results123['name']; } foreach($breakfast as $breakfast): ?> <option value="<?= $breakfast['id']; ?>"><?= $breakfast['name']; ?></option> <?php endforeach;} ?> </select> A: Of course you can, you need to use the selected attribute on the <option> you want to show. First remove the selected attribute from the first <option>: <option value="" disabled>Select your breakast</option> Then in the foreach loop you need an if condition: $results123 = $optionquery1->fetch(); $selectedrecipe = $results123['id']; foreach($breakfast as $breakfast): ?> <option value="<?= $breakfast['id']; ?>" <?php if ($breakfast['id'] == $selectedrecipe) echo 'selected' ?>><?= $breakfast['name']; ?></option> <?php endforeach;} ?> Notice I have removed the while loop since we are sure Mysql will return only one record (WHERE ID = <ID>).
{ "language": "en", "url": "https://stackoverflow.com/questions/49720336", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How do I test code that uses `requestAnimationFrame` in Jest? I want to write a Jest unit test for a module that uses requestAnimationFrame and cancelAnimationFrame. I tried overriding window.requestAnimationFrame with my own mock (as suggested in this answer), but the module keeps on using the implementation provided by jsdom. My current approach is to use the (somehow) built-in requestAnimationFrame implementation from jsdom, which seems to use setTimeout under the hood, which should be mockable by using jest.useFakeTimers(). jest.useFakeTimers(); describe("fakeTimers", () => { test.only("setTimeout and trigger", () => { const order: number[] = []; expect(order).toEqual([]); setTimeout(t => order.push(1)); expect(order).toEqual([]); jest.runAllTimers(); expect(order).toEqual([1]); }); test.only("requestAnimationFrame and runAllTimers", () => { const order: number[] = []; expect(order).toEqual([]); requestAnimationFrame(t => order.push(1)); expect(order).toEqual([]); jest.runAllTimers(); expect(order).toEqual([1]); }); }); The first test is successful, while the second fails, because order is empty. What is the correct way to test code that relies on requestAnimationFrame()? Especially if I need to test conditions where a frame was cancelled? A: I'm not sure this solution is perfect, but it works for my case. There are two key principles at work here. 1) Create a delay that is based on requestAnimationFrame: const waitRAF = () => new Promise(resolve => requestAnimationFrame(resolve)); 2) Make the animation I am testing run very fast: In my case the animation I was waiting on has a configurable duration which is set to 1 in my props data. Another solution could potentially be running the waitRAF method multiple times, but this will slow down tests.
You may also need to mock requestAnimationFrame, but that is dependent on your setup, testing framework and implementation. My example test file (Vue app with Jest): import { mount } from '@vue/test-utils'; import AnimatedCount from '@/components/AnimatedCount.vue'; const waitRAF = () => new Promise(resolve => requestAnimationFrame(resolve)); let wrapper; describe('AnimatedCount.vue', () => { beforeEach(() => { wrapper = mount(AnimatedCount, { propsData: { value: 9, duration: 1, formatDisplayFn: (val) => "£" + val } }); }); it('renders a vue instance', () => { expect(wrapper.isVueInstance()).toBe(true); }); describe('When a value is passed in', () => { it('should render the correct amount', async () => { const valueOutputElement = wrapper.get("span"); wrapper.setProps({ value: 10 }); await wrapper.vm.$nextTick(); await waitRAF(); expect(valueOutputElement.text()).toBe("£10"); }) }) }); A: So, I found the solution myself. I really needed to override window.requestAnimationFrame and window.cancelAnimationFrame. The problem was that I did not include the mock module properly.
// mock_requestAnimationFrame.js class RequestAnimationFrameMockSession { handleCounter = 0; queue = new Map(); requestAnimationFrame(callback) { const handle = this.handleCounter++; this.queue.set(handle, callback); return handle; } cancelAnimationFrame(handle) { this.queue.delete(handle); } triggerNextAnimationFrame(time=performance.now()) { const nextEntry = this.queue.entries().next().value; if(nextEntry === undefined) return; const [nextHandle, nextCallback] = nextEntry; nextCallback(time); this.queue.delete(nextHandle); } triggerAllAnimationFrames(time=performance.now()) { while(this.queue.size > 0) this.triggerNextAnimationFrame(time); } reset() { this.queue.clear(); this.handleCounter = 0; } }; export const requestAnimationFrameMock = new RequestAnimationFrameMockSession(); window.requestAnimationFrame = requestAnimationFrameMock.requestAnimationFrame.bind(requestAnimationFrameMock); window.cancelAnimationFrame = requestAnimationFrameMock.cancelAnimationFrame.bind(requestAnimationFrameMock); The mock must be imported BEFORE any module is imported that might call requestAnimationFrame. 
// mock_requestAnimationFrame.test.js import { requestAnimationFrameMock } from "./mock_requestAnimationFrame"; describe("mock_requestAnimationFrame", () => { beforeEach(() => { requestAnimationFrameMock.reset(); }) test("reqest -> trigger", () => { const order = []; expect(requestAnimationFrameMock.queue.size).toBe(0); expect(order).toEqual([]); requestAnimationFrame(t => order.push(1)); expect(requestAnimationFrameMock.queue.size).toBe(1); expect(order).toEqual([]); requestAnimationFrameMock.triggerNextAnimationFrame(); expect(requestAnimationFrameMock.queue.size).toBe(0); expect(order).toEqual([1]); }); test("reqest -> request -> trigger -> trigger", () => { const order = []; expect(requestAnimationFrameMock.queue.size).toBe(0); expect(order).toEqual([]); requestAnimationFrame(t => order.push(1)); requestAnimationFrame(t => order.push(2)); expect(requestAnimationFrameMock.queue.size).toBe(2); expect(order).toEqual([]); requestAnimationFrameMock.triggerNextAnimationFrame(); expect(requestAnimationFrameMock.queue.size).toBe(1); expect(order).toEqual([1]); requestAnimationFrameMock.triggerNextAnimationFrame(); expect(requestAnimationFrameMock.queue.size).toBe(0); expect(order).toEqual([1, 2]); }); test("reqest -> cancel", () => { const order = []; expect(requestAnimationFrameMock.queue.size).toBe(0); expect(order).toEqual([]); const handle = requestAnimationFrame(t => order.push(1)); expect(requestAnimationFrameMock.queue.size).toBe(1); expect(order).toEqual([]); cancelAnimationFrame(handle); expect(requestAnimationFrameMock.queue.size).toBe(0); expect(order).toEqual([]); }); test("reqest -> request -> cancel(1) -> trigger", () => { const order = []; expect(requestAnimationFrameMock.queue.size).toBe(0); expect(order).toEqual([]); const handle = requestAnimationFrame(t => order.push(1)); requestAnimationFrame(t => order.push(2)); expect(requestAnimationFrameMock.queue.size).toBe(2); expect(order).toEqual([]); cancelAnimationFrame(handle); 
expect(requestAnimationFrameMock.queue.size).toBe(1); expect(order).toEqual([]); requestAnimationFrameMock.triggerNextAnimationFrame(); expect(requestAnimationFrameMock.queue.size).toBe(0); expect(order).toEqual([2]); }); test("reqest -> request -> cancel(2) -> trigger", () => { const order = []; expect(requestAnimationFrameMock.queue.size).toBe(0); expect(order).toEqual([]); requestAnimationFrame(t => order.push(1)); const handle = requestAnimationFrame(t => order.push(2)); expect(requestAnimationFrameMock.queue.size).toBe(2); expect(order).toEqual([]); cancelAnimationFrame(handle); expect(requestAnimationFrameMock.queue.size).toBe(1); expect(order).toEqual([]); requestAnimationFrameMock.triggerNextAnimationFrame(); expect(requestAnimationFrameMock.queue.size).toBe(0); expect(order).toEqual([1]); }); test("triggerAllAnimationFrames", () => { const order = []; expect(requestAnimationFrameMock.queue.size).toBe(0); expect(order).toEqual([]); requestAnimationFrame(t => order.push(1)); requestAnimationFrame(t => order.push(2)); requestAnimationFrameMock.triggerAllAnimationFrames(); expect(order).toEqual([1,2]); }); test("does not fail if triggerNextAnimationFrame() is called with an empty queue.", () => { requestAnimationFrameMock.triggerNextAnimationFrame(); }) }); A: Here solution from the jest issue: beforeEach(() => { jest.spyOn(window, 'requestAnimationFrame').mockImplementation(cb => cb()); }); afterEach(() => { window.requestAnimationFrame.mockRestore(); }); A: Here is my solution inspired by the first answer. beforeEach(() => { jest.useFakeTimers(); let count = 0; jest.spyOn(window, 'requestAnimationFrame').mockImplementation(cb => setTimeout(() => cb(100*(++count)), 100)); }); afterEach(() => { window.requestAnimationFrame.mockRestore(); jest.clearAllTimers(); }); Then in test mock the timer: act(() => { jest.advanceTimersByTime(200); }); Directly call cb in mockImplementation will produce infinite call loop. 
So I make use of the Jest timer mocks to get it under control. A: My solution in TypeScript. I figured that by making time advance very quickly each frame, the animations would run very fast (basically instantly). This might not be the right solution in certain cases, but I'd say it will help many. let requestAnimationFrameSpy: jest.SpyInstance<number, [callback: FrameRequestCallback]>; beforeEach(() => { let time = 0; requestAnimationFrameSpy = jest.spyOn(window, 'requestAnimationFrame') .mockImplementation((callback: FrameRequestCallback): number => { callback(time+=1000000); return 0; }); }); afterEach(() => { requestAnimationFrameSpy.mockRestore(); }); A: The problem with the previous versions is that the callbacks are called directly, which does not reflect the asynchronous nature of requestAnimationFrame. Here is a mock which uses jest.useFakeTimers() to achieve this while giving you control over when the code is executed: beforeAll(() => { jest.useFakeTimers() let time = 0 jest.spyOn(window, 'requestAnimationFrame').mockImplementation( // @ts-expect-error (cb) => { // we can then use fake timers to preserve the async nature of this call setTimeout(() => { time = time + 16 // 16 ms cb(time) }, 0) }) }) afterAll(() => { jest.useRealTimers() // @ts-expect-error window.requestAnimationFrame.mockRestore() }) in your test you can then use: yourFunction() // will schedule a requestAnimation jest.runAllTimers() // execute the callback expect(....) // check that it happened This helps to contain Zalgo.
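Independent of Jest, the queue-based mock from the accepted answer can be exercised in plain Node. This condensed, hypothetical version shows the request/cancel/trigger ordering the mock is meant to guarantee:

```javascript
// Minimal stand-alone version of a queue-based requestAnimationFrame mock.
class RafMock {
  constructor() { this.queue = new Map(); this.nextHandle = 0; }
  request(cb) { const h = this.nextHandle++; this.queue.set(h, cb); return h; }
  cancel(h) { this.queue.delete(h); }
  triggerNext(time = 0) {
    const entry = this.queue.entries().next().value;
    if (entry === undefined) return; // empty queue is a safe no-op
    const [handle, cb] = entry;
    this.queue.delete(handle); // delete first so a re-request inside cb is kept
    cb(time);
  }
}

const mock = new RafMock();
const order = [];
mock.request(() => order.push(1));
const h = mock.request(() => order.push(2));
mock.request(() => order.push(3));
mock.cancel(h);        // cancelled callbacks never fire
mock.triggerNext();
mock.triggerNext();
mock.triggerNext();    // queue is empty by now; nothing happens
console.log(order);    // [1, 3]
```

A Map preserves insertion order, which is what makes triggerNext fire callbacks in the order they were requested.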
{ "language": "en", "url": "https://stackoverflow.com/questions/61593774", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: How can I print information from a database (retrieved using java code) into html? I'm doing a web-development project in Java Eclipse IDE. I was wondering if there is a way I can print information from a database and show it on html. In my code below, I have a login function which searches the database table in mySQL for employee_id and password and then validates the login. In that table, there is a column called FirstName, which is the first name of the employee. How can I print the first name of the employee on the home page when they log in? (employeeHome.html) Validate.java public static boolean userIsAdmin(String employee_id, String password) { boolean st = false; try { Class.forName("com.mysql.jdbc.Driver").newInstance(); Connection con = DriverManager.getConnection("jdbc:mysql://localhost:3306/payroll_system", "root", ""); PreparedStatement ps = con.prepareStatement("select * from employee_login where employeeID = ? and pwd = ? and Admin = 1 "); ps.setString(1, employee_id); ps.setString(2, password); ResultSet rs =ps.executeQuery(); st = rs.next(); Login.java if(Validate.userIsNotAdmin(employee_id, password)) { RequestDispatcher rs = request.getRequestDispatcher("employeeHome.html"); rs.forward(request, response); employeeHome.html <!DOCTYPE html> <html> <head> <meta charset="UTF-8"> <title>Employee Home Page</title> </head> <body> <form action="Welcome" method="post"> <h3> Employee Home Page </h3> <input type="submit" value="View Personal Information" name="vpi"> <br> <br> <input type="submit" value="View Expense Claims" name="vec"> <br> <br> <input type="submit" value="View Payslips" name="vps"> <br> <br> <input type="submit" value="Change Password" name="cp"> </form> </body> </html> A: We use Apache Wicket. We have been using it for 9 years or so and have built large scale enterprise applications for financial institutions. It works well with many technologies and it also has a unit testing framework which we use quite a bit. I highly recommend it. 
Here's a link. http://wicket.apache.org/ I've never used JSF so I can't vouch for it, but a lot of people use it and it is a Java standard. Either one of these technologies are built for solving the problem that you need to solve in an efficient way. Now, if you want to do everything yourself and you don't want any frameworks in the way you could write a Java Servlet and output the HTML yourself. Here's a tutorial. There are many on the web. http://www.tutorialspoint.com/servlets/servlets-first-example I recommend using a framework. Hopefully my answer will help you make an educated decision for your needs.
{ "language": "en", "url": "https://stackoverflow.com/questions/34583090", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Git commands in Visual Studio Code Terminal are not working, yet they work on the cmd prompt Error when trying a Git command in the Visual Studio Code integrated terminal: The term 'git' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. I need this to work in the integrated terminal on Visual Studio Code! I've already tried editing the path variables and the settings in Visual Studio Code. Nothing works. A: "Not recognized" is normally the way terminals politely tell you they don't know what you typed means. If you can use Git from a command line, then it's installed properly. You can use where git or which git, depending on your command line, to find the path of the functioning Git (if those don't work, please specify your terminal type in the question). Once done, open Visual Studio Code, hit Ctrl + , to open settings and type git path in the search. Add this path, and you should be able to use Git in Visual Studio Code.
{ "language": "en", "url": "https://stackoverflow.com/questions/58189196", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Is it possible to store PART of the URL into a variable using Cypress and use it later? I need to store the URL from my website into a variable and use part of it for an API call. A: ...coming from the comments, I leave an answer here: With version 6.0.0, Cypress introduced a new command: .intercept(). .intercept() allows you to manage the behavior of network requests. It supports fetch, it can intercept both the request and the response of your app's API calls, and much more (official docs here). Related to your case, make sure you declare your intercept command before the corresponding fetch action, let's say at the top of the test. Then simply call it below in the body of the test by its alias: cy.intercept('GET', '/project/*').as('myObject') // ... cy.wait('@myObject').its('response.body.data.id').then(objUUID => { // here is your uuid cy.log(objUUID) }) Note that any other action related to this uuid should be done inside the above callback: .then(objUUID => {}). In case you need it shared between different tests, intercept your request inside a .beforeEach() hook and assign the uuid to a context-shared variable; however, this is a bad practice (more about aliases and variables here)
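If the goal is literally "part of the current URL" rather than an intercepted response, the same extraction can be done with plain string/URL parsing on whatever cy.url() yields. The URL and its path layout below are made-up examples:

```javascript
// Plain-JS illustration of pulling one path segment out of a URL --
// the same parsing you would do inside cy.url().then(url => { ... }).
const url = "https://example.com/project/123e4567-e89b-12d3-a456-426614174000/settings";
const { pathname } = new URL(url);

// Split the path and drop the empty leading segment.
const segments = pathname.split("/").filter(Boolean); // ["project", "<uuid>", "settings"]
const uuid = segments[1];
console.log(uuid); // "123e4567-e89b-12d3-a456-426614174000"
```

The extracted value can then go straight into a cy.request() call for the API.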
{ "language": "en", "url": "https://stackoverflow.com/questions/70426190", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Call function when PyQt QWebEngineView is finished rendering QWebEngineView and QWebEnginePage have loadFinished; however, per the docs, "This signal is independent of script execution or page rendering." I want to call QWidget.render() (QWebEngineView inherits from QWidget) when the website is finished RENDERING. The only solution I've seen to this problem is to just give up and use QTimer to wait 1000 ms, which is a very bad solution in my opinion. If it takes less than 1000 ms to render, time is wasted just waiting. If it takes more than 1000 ms, then it's not finished rendering and even bigger problems emerge. I need to know exactly when QWebEnginePage completes its rendering so I can save it as an image as fast as possible.
{ "language": "en", "url": "https://stackoverflow.com/questions/65798724", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Cleanup after exception I have a bit of code that resembles the following: try: fn() except ErrorA as e: ... do something unique ... cleanup() except ErrorB as e: ... do something unique ... cleanup() except ErrorC as e: ... do something unique ... cleanup() Is there any mechanism in Python that would allow me to call cleanup just once, only if an exception is raised? Basically the opposite of else. The best I can think of is: error = True try: fn() error = False except ErrorA as e: ... do something unique ... except ErrorB as e: ... do something unique ... except ErrorC as e: ... do something unique ... if error: cleanup() A: def _cleanup(): # clean it up return cleanup = _cleanup try: # stuff except: # handle it else: cleanup = lambda: None cleanup() A: The clearest way I can think of is to do exactly the opposite of else: do_cleanup = True try: fn() except ErrorA as e: ... do something unique ... except ErrorB as e: ... do something unique ... except ErrorC as e: ... do something unique ... else: do_cleanup = False if do_cleanup: cleanup() If the code lives in a function or a loop, you can simplify this by returning or breaking in the else block. A: How about catching all the exceptions with one except clause and dividing up the different parts of your handling with if/elif blocks: try: fn() except (ErrorA, ErrorB, ErrorC) as e: if isinstance(e, ErrorA): ... do something unique ... elif isinstance(e, ErrorB): ... do something unique ... else: # isinstance(e, ErrorC) ... do something unique ... cleanup()
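A runnable sketch of the flag pattern from the question, using a finally block so the flag check cannot be skipped; the names risky and attempt are made up for illustration:

```python
def risky(x):
    # Stand-in for fn(): fails in two distinct ways, or succeeds.
    if x == "a":
        raise ValueError("a failed")
    if x == "b":
        raise KeyError("b failed")
    return "ok"

def attempt(x, log):
    error = True
    try:
        result = risky(x)
        error = False
        return result
    except ValueError:
        log.append("unique handling for ValueError")
    except KeyError:
        log.append("unique handling for KeyError")
    finally:
        # Runs exactly once, and only when the try body did not complete
        # (note: unlike a trailing `if error:` check, finally also fires
        # if an unhandled exception propagates out).
        if error:
            log.append("cleanup")

log = []
assert attempt("fine", log) == "ok" and log == []       # success: no cleanup
attempt("a", log)
assert log == ["unique handling for ValueError", "cleanup"]
attempt("b", log)
assert log[-2:] == ["unique handling for KeyError", "cleanup"]
```

The success path returns before the flag check, so cleanup runs only on the error paths, once each.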
{ "language": "en", "url": "https://stackoverflow.com/questions/24174116", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Why gradle plugin 3.3.0 doesn't want to build google-services? I'm trying to Sync my project with gradle:3.3.0 , but in result file values.xml for google-services is not generated in folder: D:\Android\workspace\myproject\app\build\generated\res\google-services\debug\values\values.xml Top level build file build.gradle: apply plugin: 'kotlin' buildscript { ext.kotlin_version = '1.3.11' repositories { google() jcenter() } dependencies { classpath 'com.android.tools.build:gradle:3.3.0' classpath 'com.google.gms:google-services:4.1.0' classpath "org.jetbrains.kotlin:kotlin-gradle-plugin:$kotlin_version" } } allprojects { repositories { mavenLocal() google() jcenter() mavenCentral() maven { url "https://jitpack.io" } } configurations.all { exclude group: 'org.jetbrains.kotlin', module: 'kotlin-stdlib-jre7' } } dependencies { implementation "org.jetbrains.kotlin:kotlin-stdlib:$kotlin_version" } compileKotlin { kotlinOptions { jvmTarget = "1.8" } } compileTestKotlin { kotlinOptions { jvmTarget = "1.8" } } app/build.gradle: apply plugin: 'com.android.application' apply plugin: 'kotlin-android' apply plugin: 'kotlin-android-extensions' apply plugin: 'kotlin-kapt' android { ... } dependencies { implementation fileTree(dir: 'libs', include: ['*.jar']) implementation "com.google.firebase:firebase-database:16.0.5" implementation "com.google.firebase:firebase-messaging:17.3.4" implementation "com.google.firebase:firebase-auth:16.1.0" implementation 'com.google.android.gms:play-services-auth:16.0.1' implementation 'com.google.firebase:firebase-storage:16.0.5' } apply plugin: 'com.google.gms.google-services' google-services.json File google-services.json is located in D:\Android\workspace\myproject\app\google-services.json In result after run of app the error occurs: 01-23 10:31:31.578 30044-30073/E/FA: GoogleService failed to initialize, status: 10, Missing google app id value from from string resources with name google_app_id. 
01-23 10:31:31.578 30044-30073/E/FA: Missing google_app_id. Firebase Analytics disabled. See 01-23 10:31:33.758 30044-30044/ E/AndroidRuntime: FATAL EXCEPTION: main Process: , PID: 30044 java.lang.IllegalStateException: Default FirebaseApp is not initialized in this process . Make sure to call FirebaseApp.initializeApp(Context) first. at com.google.firebase.FirebaseApp.getInstance(com.google.firebase:firebase-common@@16.0.4:240) at com.google.firebase.auth.FirebaseAuth.getInstance(Unknown Source) A: Remove dependencies block from top-level gradle file: dependencies { implementation "org.jetbrains.kotlin:kotlin-stdlib:$kotlin_version" } And update to classpath 'com.android.tools.build:gradle:3.3.0' and classpath 'com.google.gms:google-services:4.2.0' In result your Gradle Files should be Like This For Android Studio 3.3 (stable channel) build.gradle(project:yourProject) // Top-level build file where you can add configuration options common to all sub-projects/modules. buildscript { ext.kotlin_version = '1.3.11' repositories { google() jcenter() // Add repository maven { url 'https://maven.fabric.io/public' } } dependencies { classpath 'com.android.tools.build:gradle:3.3.0' classpath "org.jetbrains.kotlin:kotlin-gradle-plugin:$kotlin_version" classpath 'com.google.gms:google-services:4.2.0' classpath 'io.fabric.tools:gradle:1.26.1' // NOTE: Do not place your application dependencies here; they belong // in the individual module build.gradle files } } allprojects { repositories { google() jcenter() // Add repository maven { url 'https://maven.google.com/' } } } task clean(type: Delete) { delete rootProject.buildDir } build.gradle(Module:app) apply plugin: 'com.android.application' apply plugin: 'kotlin-android' apply plugin: 'kotlin-android-extensions' apply plugin: 'kotlin-kapt' apply plugin: 'com.google.gms.google-services' apply plugin: 'io.fabric' repositories { maven { url 'https://maven.fabric.io/public' } } android { compileSdkVersion 28 defaultConfig { 
applicationId "sanaebadi.info.teacherhandler" minSdkVersion 15 targetSdkVersion 28 versionCode 1 versionName "1.0" testInstrumentationRunner "androidx.test.runner.AndroidJUnitRunner" } buildTypes { release { minifyEnabled false proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro' } } packagingOptions { exclude 'META-INF/LICENSE' exclude 'META-INF/LICENSE-FIREBASE.txt' exclude 'META-INF/NOTICE' } dataBinding { enabled = true } } dependencies { def room_version = "2.1.0-alpha03" def lifecycle_version = "2.0.0" implementation fileTree(dir: 'libs', include: ['*.jar']) implementation "org.jetbrains.kotlin:kotlin-stdlib-jdk7:$kotlin_version" implementation 'androidx.appcompat:appcompat:1.1.0-alpha01' implementation 'androidx.constraintlayout:constraintlayout:2.0.0-alpha3' implementation 'androidx.legacy:legacy-support-v4:1.0.0' testImplementation 'junit:junit:4.12' androidTestImplementation 'androidx.test:runner:1.1.1' androidTestImplementation 'androidx.test.espresso:espresso-core:3.1.1' implementation 'androidx.cardview:cardview:1.0.0' implementation 'com.google.android.material:material:1.1.0-alpha02' implementation 'androidx.recyclerview:recyclerview:1.0.0' //Firebase implementation 'com.google.firebase:firebase-core:16.0.6' implementation 'com.crashlytics.sdk.android:crashlytics:2.9.8' implementation 'com.google.firebase:firebase-messaging:17.3.4' implementation 'com.google.firebase:firebase-storage:16.0.5' implementation 'com.google.firebase:firebase-auth:16.1.0' implementation 'com.google.android.gms:play-services-auth:16.0.1' implementation 'com.google.firebase:firebase-database:16.0.5' } gradle-wrapper.properties #Tue Jan 15 07:24:23 EST 2019 distributionBase=GRADLE_USER_HOME distributionPath=wrapper/dists zipStoreBase=GRADLE_USER_HOME zipStorePath=wrapper/dists distributionUrl=https\://services.gradle.org/distributions/gradle-4.10.1-all.zip A: update to classpath 'com.google.gms:google-services:4.2.0' in project level build.gradle.
{ "language": "en", "url": "https://stackoverflow.com/questions/54322007", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: ASP.NET Error with Database I have been learning ASP.NET and building a little customer portal. I used the built-in CreateUserWizard. I wanted to look inside the ASPNETDB.MDF file to see how it was storing the users and maybe add some rows of my own. I opened the file in SQL Server Management Studio and viewed the file. I closed it without saving. Now when I try to run the program I get this new error: The database 'C:\PROJECTS\PORTAL\PORTAL\APP_DATA\ASPNETDB.MDF' cannot be opened because it is version 706. This server supports version 662 and earlier. A downgrade path is not supported. So I assume opening the database in SQL Server upgraded it to version 706. How can I either delete this database and create a new one, or change the version of the database to a supported version? Thanks A: The following blog entry will help you http://conceptdev.blogspot.com/2009/04/mdf-cannot-be-opened-because-it-is.html A: As soon as you attached it to SQL Server 2012, the database was upgraded to version 706. As the error message suggests, there is no way to downgrade the file back to version 662 (SQL Server 2008 R2). You can run the tool found in your .NET Framework folder - [drive:]\%windir%\Microsoft.NET\Framework\version\aspnet_regsql. It'll display a UI for you to select the server to install a new copy on. Here's an MSDN article about it.
{ "language": "en", "url": "https://stackoverflow.com/questions/14247269", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: css - vertical nested menu - can't get the nested styles right, it makes a bigger rectangle I am trying to make a vertical menu, but every time I show the sub menu on hover, it expands the previous element, making a 'bigger box'. I have no idea how to style that. I don't want to use a jQuery plugin if there is a CSS solution. I also have Bootstrap 3, but there is no support for nested dropdowns (dropdowns inside dropdowns); the nested ones did not open. JSFiddle link: http://jsfiddle.net/WqW5j/ index.html <div class="nav"> <ul class="main"> <li><a href="">1</a></li> <li><a href="">2</a></li> <li> <a href="">3</a> <ul class="sub"> <li><a href="">3-1</a></li> <li><a href="">3-2</a> <ul class="sub"> <li><a href="">3-2-1</a></li> <li><a href="">3-2-2</a> <ul class="sub"> <li><a href="">3-2-2-1</a></li> <li><a href="">3-2-2-2</a></li> </ul> </li> </ul> </li> </ul> </li> </ul> css .main{ list-style: none; padding:0px; margin: 0px; } .main li{ background-color:#f1f1f1; padding: 10px; margin: 5px; float:left; clear:both; } .main li:hover{ background-color:#d8d8d8; } .main .sub{ display: none; } .sub > li > .sub{ display: none; } .main > li:hover > .sub:nth-of-type(1){ display: block; position: relative; left: 20px; top:-30px; list-style: none; float:left; width: 100px; clear: both; } .sub > li:hover > .sub{ display: block; position: relative; left: 20px; top:-30px; list-style: none; float:left; width: 100px; } A: To get the nested menu to work, make all the li items position:relative and make the ul that is displayed on hover position:absolute. 
Check this fiddle HTML: <div class="nav"> <ul class="main"> <li><a href="">1</a> <ul class="sub"> <li><a href="">1-1</a> <ul class="sub"> <li><a href="">1-1-1</a></li> <li><a href="">1-2-1</a></li> </ul> </li> <li><a href="">1-2</a> <ul class="sub"> <li><a href="">1-2-1</a></li> <li><a href="">1-2-2</a></li> </ul> </li> </ul> </li> <li><a href="">2</a></li> </ul> </div> CSS .main{ list-style: none; padding:0px; margin: 0px; } .main li{ background-color:#f1f1f1; padding: 10px; margin: 5px; float:left; clear:both; position:relative; } .main li:hover{ background-color:#d8d8d8; } .main .sub{ display: none; list-style:none; padding-left:0; width:auto; } .main .sub li{ float:none; } .main > li:hover > .sub{ display:block; position:absolute; top:0; left:100%; } .sub li:hover .sub{ display:block; position:absolute; top:0; left:100%; }
{ "language": "en", "url": "https://stackoverflow.com/questions/22683151", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Best Practices consuming a Soap request over SSL Just have a quick question regarding best practices. I am connecting to a 3rd-party API via SOAP over HTTPS. To do this I have simply added a service reference to my application by entering the 3rd party's WSDL address. In my C# application I have simply instantiated the service, passed in my API key, and can successfully retrieve data. I am aware that this would have failed if the server's SSL certificate didn't meet the following requirements: * the URL in the certificate matches the URL I'm posting to, * the certificate is valid and trusted, * the certificate has not expired. However, is this secure enough? Is it possible for the SSL certificate to be hijacked or faked? Do I need to store a copy of the valid certificate locally and check the server certificate against the local copy? Thanks Ant
{ "language": "en", "url": "https://stackoverflow.com/questions/31625115", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: UWP SerialDevice works only if I read its properties So, I have some code for setting up my serial port in UWP (which is used with a USB-to-Serial adapter): public async Task Init() { string qFilter = SerialDevice.GetDeviceSelector("COM5"); DeviceInformationCollection devices = await DeviceInformation.FindAllAsync(qFilter); deviceId = devices.First().Id; var serialPort = await SerialDevice.FromIdAsync(deviceId); serialPort.WriteTimeout = TimeSpan.FromMilliseconds(1000); serialPort.ReadTimeout = TimeSpan.FromMilliseconds(1000); serialPort.BaudRate = 4800; serialPort.Parity = SerialParity.None; serialPort.StopBits = SerialStopBitCount.One; serialPort.DataBits = 8; serialPort.Handshake = SerialHandshake.None; } Now, somewhere later in the code, I read a stream from the SerialDevice: public async Task<string> ReadValueFromSerial() { var dataReaderObject = new DataReader(serialPort.InputStream); uint ReadBufferLength = 1024; dataReaderObject.InputStreamOptions = InputStreamOptions.Partial; CancellationTokenSource cts = new CancellationTokenSource(); //set timeout here. cts.CancelAfter(10000); var bytesRead = await dataReaderObject.LoadAsync(ReadBufferLength).AsTask(cts.Token); (...) } Now the above code hangs on the last line, although I know that the serial device is sending bytes to the serial port. The funny thing is that it works when I put a breakpoint on the set-up part (the first snippet). But without a breakpoint and stepping through it, it just hangs or times out. By trial and error I found out that reading the serial device properties which I set in the set-up code makes it magically work. So after the code in the first snippet, I have added: public async Task Init() { (...) var s1 = " " + serialPort.BaudRate; var s2 = " " + serialPort.DataBits; var s3 = " " + serialPort.Parity.ToString(); } And now the var bytesRead = await dataReaderObject.LoadAsync(ReadBufferLength).AsTask(cts.Token); doesn't hang anymore and the bytes are loaded. 
It works, which is great, but why?
{ "language": "en", "url": "https://stackoverflow.com/questions/41388425", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Is it necessary to build another project to fit mobile devices when using reactjs plus redux? As a simple project, we built the website using reactjs+redux. But now, as the business grows, we need to build native apps for mobile devices, not only the mobile browser. I did some reading about React Native; it seems it is not possible to use the current JS code directly to build native apps for mobile. Some of my colleagues suggested building another project for mobile. But I'm wondering: how can the JS stack show its power for fast iteration if I have to build another project just to do the same thing on mobile? I know Cordova and did some work on it, but not very much. So, is there some guideline for integrating the current JS code into Cordova? Or is there any other suggestion? Thanks.
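A common way to keep fast iteration across web and native with react+redux is to put the Redux logic (action creators, reducers) in a plain JS module with no DOM or native APIs, so the web project and a React Native project share it and only the view layer differs. A minimal sketch of such a shared module (the file and names here are illustrative, not from the question):

```javascript
// counter.js - platform-agnostic Redux-style logic that both a web app and a
// React Native app could import unchanged (no DOM, no native APIs used here).
// In a real project this would be exported via module.exports / ES exports.

// Action creator
const increment = (amount) => ({ type: 'INCREMENT', amount: amount });

// Pure reducer: the part of a react+redux codebase that ports as-is
function counterReducer(state, action) {
  if (state === undefined) {
    state = { count: 0 };
  }
  switch (action.type) {
    case 'INCREMENT':
      return { count: state.count + action.amount };
    default:
      return state;
  }
}
```

The web project would wire counterReducer into its store with react-redux and react-dom, while the native project would do the same with react-redux and react-native components; only the rendering code is written twice.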
{ "language": "en", "url": "https://stackoverflow.com/questions/37914024", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Updating displayed results after modifying Firestore doc React Native I have a list of games that I'm able to add to without issue using UseEffect and onSnapshot. I can modify an item in the list without issue, and return one set of results (with the updated data properly displaying). When I try to modify another item (or the item same again), I get this error: Could not update game: TypeError: undefined is not an object (evaluating '_doc.data().numPlayers') because the results/list of games are null. I'm sure I have something wrong with my code, but I can't figure it out. Thanks in advance! Here is my code: useEffect(() => { setIsLoading(true) let results = []; const unsubscribe = db .collection('games') .onSnapshot( (querySnapshot) => { querySnapshot.docChanges().forEach(change => { const id = change.doc.id; if (change.type === 'added') { const gameData = change.doc.data(); gameData.id = id; results.push(gameData); } if (change.type === 'modified') { console.log('Modified game: ', id); results = results.map(game => { if (game.id === id) { return change.doc.data() } return game }) console.log(results) } if (change.type === 'removed') { console.log('Removed game: ', id); } }); setIsLoading(false); setGame(results); return () => unsubscribe }, (err) => { setIsLoading(false); console.log("Data could not be fetched", err); } ); }, []); A: I forgot to add the doc ID to the gameData before adding it to the results. I did that in the "added" section, but not in the "modified" section (thinking that it was already included), forgetting that I hadn't added it as an actual field in the database (it just exists as the doc id).
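The fix described in the answer, attaching the doc id in the 'modified' branch just as in the 'added' branch, can be sketched as a standalone helper. The helper name and the simplified change objects below are illustrative, not the Firestore API; in the question this logic lives inline in the onSnapshot callback:

```javascript
// Sketch of the docChanges handler with the fix applied: the doc id is
// attached to the data in BOTH the 'added' and the 'modified' branch,
// so later lookups by game.id keep working after an update.
function applyChanges(results, docChanges) {
  docChanges.forEach((change) => {
    const id = change.doc.id;
    if (change.type === 'added') {
      const gameData = change.doc.data();
      gameData.id = id; // id lives on the doc ref, not inside data()
      results.push(gameData);
    }
    if (change.type === 'modified') {
      results = results.map((game) => {
        if (game.id === id) {
          const updated = change.doc.data();
          updated.id = id; // the line that was missing in the question's code
          return updated;
        }
        return game;
      });
    }
    if (change.type === 'removed') {
      results = results.filter((game) => game.id !== id);
    }
  });
  return results;
}
```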
{ "language": "en", "url": "https://stackoverflow.com/questions/71990276", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to make HTML element square I would like to have square tiles in my react calendar. Currently the width is calculated based on the width of the whole calendar, and I would like the height to be the same. HTML <button class="react-calendar__tile react-calendar__tile--active react-calendar__tile--range react-calendar__tile--rangeStart react-calendar__tile--rangeEnd react-calendar__tile--rangeBothEnds react-calendar__month-view__days__day" type="button" style="flex: 0 0 14.2857%; overflow: hidden;"> <abbr aria-label="July 14, 2022">14</abbr> </button> I have tried this: .booking-calendar .react-calendar__tile { flex: 0 0 14.2857%; overflow: hidden; } .booking-calendar .react-calendar__tile--active abbr { outline: 2px solid #444444; box-sizing: border-box; width: 100%; height: 100%; border-radius: 40px; font-weight: 700; } .booking-calendar .react-calendar__month-view__days__day abbr { padding: 10px; border-radius: 40px; display: inline-block; vertical-align: middle; } A: You can use the aspect-ratio CSS property https://developer.mozilla.org/en-US/docs/Web/CSS/aspect-ratio .booking-calendar { width: 400px; display: flex; } .booking-calendar .react-calendar__tile { width: 100%; aspect-ratio: 1 / 1; margin: 10px; } .booking-calendar .react-calendar__tile--active abbr { outline: 2px solid #444444; box-sizing: border-box; width: 100%; height: 100%; border-radius: 40px; font-weight: 700; } .booking-calendar .react-calendar__month-view__days__day abbr { padding: 10px; border-radius: 40px; display: flex; justify-content: center; align-items: center; } <div class="booking-calendar"> <button class="react-calendar__tile react-calendar__tile--active react-calendar__tile--range react-calendar__tile--rangeStart react-calendar__tile--rangeEnd react-calendar__tile--rangeBothEnds react-calendar__month-view__days__day" type="button" style="overflow: hidden;"> <abbr aria-label="July 11, 2022">11</abbr> </button> <button class="react-calendar__tile react-calendar__tile--active 
react-calendar__tile--range react-calendar__tile--rangeStart react-calendar__tile--rangeEnd react-calendar__tile--rangeBothEnds react-calendar__month-view__days__day" type="button" style="overflow: hidden;"> <abbr aria-label="July 12, 2022">12</abbr> </button> <button class="react-calendar__tile react-calendar__tile--active react-calendar__tile--range react-calendar__tile--rangeStart react-calendar__tile--rangeEnd react-calendar__tile--rangeBothEnds react-calendar__month-view__days__day" type="button" style="overflow: hidden;"> <abbr aria-label="July 13, 2022">13</abbr> </button> <button class="react-calendar__tile react-calendar__tile--active react-calendar__tile--range react-calendar__tile--rangeStart react-calendar__tile--rangeEnd react-calendar__tile--rangeBothEnds react-calendar__month-view__days__day" type="button" style="overflow: hidden;"> <abbr aria-label="July 14, 2022">14</abbr> </button> </div> You will need to remove the flex inline style that collides with the aspect-ratio effect.
{ "language": "en", "url": "https://stackoverflow.com/questions/72391099", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: ng-csv - Trouble in Safari and IE browsers I am using AngularJS to load a table in my page. Now I want to export the data to Excel. For that I found a solution using ng-csv. It is described as being able to easily convert a JSON array to a .csv file. I tried it and succeeded in Firefox and Chrome. But in Safari and IE it is not working. I tried the following fiddle example given on the internet `http://jsfiddle.net/asafdav/dR6Nb/` and it is also not working in Safari and IE. Is there any workaround for this? A: No, there is no workaround as such, unless you decide to write your own library. On the GitHub page of ng-csv: https://github.com/asafdav/ng-csv it is clearly stated that Safari is not supported and only IE 10+ is supported.
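If a fallback is still needed, one option is to build the CSV string from the JSON array without ng-csv and then hand it to whatever download mechanism the target browser supports. A minimal, library-free sketch of the string-building step (the function name is mine; field quoting follows RFC 4180):

```javascript
// Convert an array of flat objects to a CSV string without any library.
// Fields containing commas, double quotes or newlines are quoted, and
// embedded double quotes are doubled, per RFC 4180.
function toCsv(rows) {
  if (rows.length === 0) return '';
  const headers = Object.keys(rows[0]);
  const escape = (value) => {
    const s = String(value == null ? '' : value);
    return /[",\n]/.test(s) ? '"' + s.replace(/"/g, '""') + '"' : s;
  };
  const lines = [headers.map(escape).join(',')];
  rows.forEach((row) => {
    lines.push(headers.map((h) => escape(row[h])).join(','));
  });
  return lines.join('\n');
}
```

The download half remains browser-specific (e.g. an anchor with a data URI, or navigator.msSaveBlob on old IE), which is exactly the part ng-csv does not cover on Safari/IE.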
{ "language": "en", "url": "https://stackoverflow.com/questions/25789090", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Reverse order of glDatePicker selectable date range Specs changed apparently and I can't get my head around why this won't work. I have a form with a #StartDate and #EndDate fields <ul> <li> <label for="StartDate">Start Date</label> <input type="text" id="StartDate" name="StartDate"> </li> <li> <label for="EndDate">End Date</label> <input type="text" id="EndDate" name="EndDate"> <input type="submit" id="Button6" value="foo" /> </li> <li> <label for="AllDates"><input type="checkbox" name="AllDates" id="AllDates" value="YES">All Dates</label> </li> </ul> Previously when the date range was from today - forward this worked: $("#bottomContent").on('focus', "#StartDate", function(){ var today = new Date(); var datelimit = new Date(today); datelimit.setDate(today.getDate() + 31); $(this).glDatePicker({ showAlways: false, allowMonthSelect: true, allowYearSelect: true, prevArrow:'<', nextArrow:'>', selectedDate:today, selectableDateRange: [{ from: today, to: datelimit }, ], onClick: function (target, cell, date, data) { target.val((date.getMonth() +1) + '/' + date.getDate() + '/' + date.getFullYear()); $('#AllDates').prop('checked', false); if (data != null) { alert(data.message + '\n' + date); } } }).glDatePicker(true); var to = $('#EndDate').glDatePicker({ showAlways: false, onClick: function (target, cell, date, data) { target.val((date.getMonth() +1) + '/' + date.getDate() + '/' + date.getFullYear()); $('#AllDates').prop('checked', false); if (data != null) { alert(data.message + '\n' + date); } } }).glDatePicker(true); $("#bottomContent").on('focus', "#EndDate", function(){ var dateFrom = new Date($("#StartDate").val()); var toLimit = new Date(); toLimit.setDate(dateFrom.getDate() + 31); to.options.selectableDateRange = [{ from: dateFrom, to: toLimit }, ], to.options.showAlways = false; to.render(); });}); I need to make it start from today = newDate() and have it go BACKWARDS (-365 days), but simply switching all the "+ 31" which makes it go forward a month to "- 
365" is not working, any ideas? A: This did it, if anyone ever runs across the same problem. $("#bottomContent").on('focus', "#StartDate", function(){ var today = new Date("January 1, 2013"); var datelimit = new Date(); datelimit.setDate(today.getDate() +14); $(this).glDatePicker({ showAlways: false, allowMonthSelect: true, allowYearSelect: true, prevArrow:'<', nextArrow:'>', selectedDate:today, selectableDateRange: [{ from: today, to: datelimit }, ], onClick: function (target, cell, date, data) { target.val((date.getMonth() +1) + '/' + date.getDate() + '/' + date.getFullYear()); $('#AllDates').prop('checked', false); if (data != null) { alert(data.message + '\n' + date); } } }).glDatePicker(true); var to = $('#EndDate').glDatePicker({ showAlways: false, prevArrow:'<', nextArrow:'>', selectedDate:today, onClick: function (target, cell, date, data) { target.val((date.getMonth() +1) + '/' + date.getDate() + '/' + date.getFullYear()); $('#AllDates').prop('checked', false); if (data != null) { alert(data.message + '\n' + date); } } }).glDatePicker(true); $("#bottomContent").on('focus', "#EndDate", function(){ var dateFrom = new Date($("#StartDate").val()); var toLimit = new Date(); //toLimit.setDate(dateFrom.getDate()); to.options.selectableDateRange = [{ from: dateFrom, to: toLimit }, ], to.options.showAlways = false; to.render(); }); });
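Independent of glDatePicker's options, the backwards window itself is plain date arithmetic: the selectable range should run from today minus 365 days up to today, with the earlier date as from. A small sketch of that computation (the helper name is hypothetical, not part of glDatePicker):

```javascript
// Compute a selectable date range going BACKWARDS from an end date.
// Returns { from, to } in the shape glDatePicker's selectableDateRange
// expects, where "from" must be the earlier of the two dates.
function backwardsRange(endDate, days) {
  const from = new Date(endDate.getTime());
  from.setDate(from.getDate() - days); // Date normalizes month/year rollover
  return { from: from, to: new Date(endDate.getTime()) };
}
```

Usage would then be var range = backwardsRange(new Date(), 365); and passing [range] as selectableDateRange, instead of adding 31 days forward as in the original code.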
{ "language": "en", "url": "https://stackoverflow.com/questions/15388751", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Matlab face alignment code I am attempting to do some face recognition and hallucination experiments, and in order to get the best results I first need to ensure all the facial images are aligned. I am using several thousand images for experimenting. I have been scouring the Internet for the past few days and have found many different programs which claim to do this; however, due to Matlab's poor backwards compatibility, many of the programs no longer work. I have tried several different programs which don't run, as they call Matlab functions which have since been removed. The closest I found was using the SIFT algorithm, with code found here http://people.csail.mit.edu/celiu/ECCV2008/ which does help align the images, but unfortunately it also downsamples the image, so the result ends up looking quite blurry, which would have a negative effect on any experiments I ran. Does anyone have any Matlab code samples, or can anyone point me in the right direction to code that actually aligns faces in a database? Any help would be much appreciated. A: You can find this recent work on Face Detection, Pose Estimation and Landmark Localization in the Wild. It has a working Matlab implementation and it is quite a good method. Once you identify keypoints on all your faces you can morph them into a single reference and work from there. A: The easiest way is with PCA and the eigenvectors, to find the most representative X and Y data; this gives you the direction of the face. You can find an explanation in this document: PCA Alignment A: Do you need to detect the faces first, or are they already cropped? If you need to detect the faces, you can use the vision.CascadeObjectDetector object in the Computer Vision System Toolbox. To align the faces you can try the imregister function in the Image Processing Toolbox. Alternatively, you can use a feature-based approach. 
The Computer Vision System Toolbox includes a number of interest point detectors, feature descriptors, and a matchFeatures function to match the descriptors between a pair of images. You can then use the estimateGeometricTransform function to estimate an affine or even a projective transformation between two images. See this example for details.
{ "language": "en", "url": "https://stackoverflow.com/questions/20242826", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How to sed output of a command and not suppress output I have the example script stored in test.sh: echo 'Hello world 123' echo 'some other text' With the following command in a bash script: word123=$(./test.sh |sed -nr 's/Hello world (.*)/\1/p' ) This works correctly and outputs: 123 However, this does not output: Hello world 123 some other text Is there a way to capture the 123 and also output everything else from the script? A: With Linux, bash and tee: word123=$( ./test.sh | tee >&255 >(sed -nr 's/Hello world (.*)/\1/p') ) File descriptor 255 is a non-redirected copy of stdout. See: What is the use of file descriptor 255 in bash process
{ "language": "en", "url": "https://stackoverflow.com/questions/69577891", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: multiple dex files define landroid/support/annotation/AnimRes The moment I added the android support annotations to my dependencies compile 'com.android.support:support-annotations:20.0.0' I got this error: Error Code: 2 Output: UNEXPECTED TOP-LEVEL EXCEPTION: com.android.dex.DexException: Multiple dex files define Landroid/support/annotation/AnimRes; at com.android.dx.merge.DexMerger.readSortableTypes(DexMerger.java:594) at com.android.dx.merge.DexMerger.getSortedTypes(DexMerger.java:552) at com.android.dx.merge.DexMerger.mergeClassDefs(DexMerger.java:533) at com.android.dx.merge.DexMerger.mergeDexes(DexMerger.java:170) at com.android.dx.merge.DexMerger.merge(DexMerger.java:188) at com.android.dx.command.dexer.Main.mergeLibraryDexBuffers(Main.java:439) at com.android.dx.command.dexer.Main.runMonoDex(Main.java:287) at com.android.dx.command.dexer.Main.run(Main.java:230) at com.android.dx.command.dexer.Main.main(Main.java:199) at com.android.dx.command.Main.main(Main.java:103) build.gradle android { compileSdkVersion 19 buildToolsVersion '20.0.0' defaultConfig { minSdkVersion 10 targetSdkVersion 19 } } dependencies { compile 'com.android.support:support-v4:19.0.0' compile 'com.crashlytics.android:crashlytics:1.+' compile 'com.android.support:support-annotations:20.0.0' } Anybody else experienced this issue? I have tried the solutions from here. A: The problem is that android-support-annotations.jar used to be a separate library containing the android annotations, but for some reason these annotations are already included in recent versions of the android-support-v4.jar file. Deleting the annotations jar solved the issue. A: Solved this exact issue in a Cordova project that used the facebook plugin. 
I was able to successfully build by commenting out this line from platforms\android\project.properties, as shown: # cordova.system.library.1=com.android.support:support-v4:+ And by commenting out this line from platforms\android\build.gradle, as shown: // compile "com.android.support:support-v4:+" Then doing the build. The problem started when I installed (katzer/cordova-plugin-local-notifications) which added these lines, but it created a conflict since the library it was adding to the build was already part of the facebook plugin build. A: Build -> Clean Project, and it worked A: I had the same problem, but I deleted the build files from the build folder projectname/app/build and it removed all the related errors:
"can't clean the project" and also "dex error with $anim" A: If this is a Cordova / Ionic project, this worked for me: add these lines to build.gradle under platforms/android after line number 22, i.e. after apply plugin: 'android' configurations { all*.exclude group: 'com.android.support', module: 'support-v4' } A: I managed to fix this issue. The reason was that I included the android support library 19.0.0 as a dependency, but 19.1.0 is required. See here for more information. So it has to be dependencies { compile 'com.android.support:support-v4:19.1.0' compile 'com.crashlytics.android:crashlytics:1.+' compile 'com.android.support:support-annotations:20.0.0' } A: If you import AppCompat as a library project and you also have android-support-annotations.jar in libs elsewhere, make sure to import the AppCompat library only everywhere (it already includes this annotations lib). Then delete all copies of android-support-annotations.jar to avoid merging multiple versions of this library. A: Updating Android SDK Tools fixed it for me; now it just sees the copy in android-support-v4.jar. I had the same problem when using ant, and the annotations library was being included automatically by an outdated sdk.dir/tools/ant/build.xml. A: Clean project works as a temporary fix, but the issue will reappear on the next compilation error. To fix it more reliably, I had to update the android support-v4 dependency to com.android.support:support-v4:22.2.0. A: Put the support-annotations dependency in your build.gradle according to your compileSdkVersion. For instance, in a project with compileSdkVersion 25 you can put the following dependency: compile 'com.android.support:support-annotations:25.0.1' This will solve your problem. A: In my case I had a file called cache.xml under /build/intermediates/dex-cache/cache.xml in the root project folder. I deleted this file, rebuilt the project and it worked for me. A: I deleted the android-support-v4.jar and it worked. 
To explain: android-support-v4.jar was conflicting with the other .jar files in project\libs, especially when running with Java 8 on Android Studio. A: Put android-support-v4.jar in your libs folder in Eclipse. Clean and build the project. It will resolve the issue. A: Another reason that messages such as these can come up in Android Studio when building and launching can be the cause of application tags in your libraries. If you have several Android Library projects that you imported as modules, go into those projects and remove the <application> ... </application> tags and everything between them. These can cause issues in the build process along with the support library issues already mentioned. A: From /platforms/android/libs/ delete android-support-v4.jar. It works for me.
{ "language": "en", "url": "https://stackoverflow.com/questions/26342444", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "59" }
Q: xcode 7.2.1+ os 10.11.9 + opencv3.1.0 Undefined symbols for architecture x86_64? I have a cpp file which worked fine with g++ before upgrading this stuff. Currently, when running: g++ -I/usr/local/Cellar/opencv3/3.1.0_1/include -stdlib=libc++ bs.cpp -o bs the results are the following: Undefined symbols for architecture x86_64: "cv::namedWindow(cv::String const&, int)", referenced from: _main in bs-df9f1a.o "cv::VideoCapture::read(cv::_OutputArray const&)", referenced from: processVideo(char*) in bs-df9f1a.o "cv::VideoCapture::release()", referenced from: processVideo(char*) in bs-df9f1a.o "cv::VideoCapture::VideoCapture(cv::String const&)", referenced from: processVideo(char*) in bs-df9f1a.o "cv::VideoCapture::~VideoCapture()", referenced from: processVideo(char*) in bs-df9f1a.o "cv::destroyAllWindows()", referenced from: _main in bs-df9f1a.o "cv::createBackgroundSubtractorMOG2(int, double, bool)", referenced from: _main in bs-df9f1a.o "cv::Mat::deallocate()", referenced from: cv::Mat::release() in bs-df9f1a.o "cv::Mat::copySize(cv::Mat const&)", referenced from: cv::Mat::operator=(cv::Mat const&) in bs-df9f1a.o "cv::String::deallocate()", referenced from: cv::String::~String() in bs-df9f1a.o "cv::String::allocate(unsigned long)", referenced from: cv::String::String(char const*) in bs-df9f1a.o cv::String::String(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) in bs-df9f1a.o "cv::imread(cv::String const&, int)", referenced from: processImages(char*) in bs-df9f1a.o "cv::imshow(cv::String const&, cv::_InputArray const&)", referenced from: processVideo(char*) in bs-df9f1a.o processImages(char*) in bs-df9f1a.o "cv::putText(cv::_InputOutputArray const&, cv::String const&, cv::Point_<int>, int, double, cv::Scalar_<double>, int, int, bool)", referenced from: processVideo(char*) in bs-df9f1a.o processImages(char*) in bs-df9f1a.o "cv::waitKey(int)", referenced from: processVideo(char*) in 
bs-df9f1a.o "cv::fastFree(void*)", referenced from: cv::Mat::~Mat() in bs-df9f1a.o "cv::rectangle(cv::_InputOutputArray const&, cv::Point_<int>, cv::Point_<int>, cv::Scalar_<double> const&, int, int, int)", referenced from: processVideo(char*) in bs-df9f1a.o processImages(char*) in bs-df9f1a.o "cv::VideoCapture::get(int) const", referenced from: processVideo(char*) in bs-df9f1a.o "cv::VideoCapture::isOpened() const", referenced from: processVideo(char*) in bs-df9f1a.o ld: symbol(s) not found for architecture x86_64 clang: error: linker command failed with exit code 1 (use -v to see invocation) I do not know how to solve this problem; it's not related to the source code or the includes, it's related to the linker. I see the post C++ linking error after upgrading to Mac OS X 10.9 / Xcode 5.0.1, which really sounds reasonable, but I do not know how to add -stdlib=libstdc++ to the linking command. Hope somebody can help. Thanks in advance.
{ "language": "en", "url": "https://stackoverflow.com/questions/35856056", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }