Q: Parallel.For System.OutOfMemoryException We have a fairly simple program that's used for creating backups. I'm attempting to parallelize it but am getting an OutOfMemoryException within an AggregateException. Some of the source folders are quite large, and the program doesn't crash until about 40 minutes after it starts. I don't know where to start looking, so the code below is a near-exact dump of all the code, sans directory structure and exception-logging code. Any advice as to where to start looking? using System; using System.Diagnostics; using System.IO; using System.Threading.Tasks; namespace SelfBackup { class Program { static readonly string[] saSrc = { "\\src\\dir1\\", //... "\\src\\dirN\\", //this folder is over 6 GB }; static readonly string[] saDest = { "\\dest\\dir1\\", //... "\\dest\\dirN\\", }; static void Main(string[] args) { Parallel.For(0, saDest.Length, i => { string sDest = saDest[i]; try { if (Directory.Exists(sDest)) { //Delete directory first so old stuff gets cleaned up Directory.Delete(sDest, true); } //recursive function clsCopyDirectory.copyDirectory(saSrc[i], sDest); } catch (Exception e) { //standard error logging CL.EmailError(); } }); } } /////////////////////////////////////// using System.IO; using System.Threading.Tasks; namespace SelfBackup { static class clsCopyDirectory { static public void copyDirectory(string Src, string Dst) { Directory.CreateDirectory(Dst); /* Copy all the files in the folder If and when .NET 4.0 is installed, change Directory.GetFiles to Directory.EnumerateFiles for slightly better performance.*/ Parallel.ForEach<string>(Directory.GetFiles(Src), file => { /* An exception thrown here may be arbitrarily deep into this recursive function there's also a good chance that if one copy fails here, so too will other files in the same directory, so we don't want to spam out hundreds of error e-mails but we don't want to abort altogether. 
Instead, the best solution is probably to throw back up to the original caller of copyDirectory and move on to the next Src/Dst pair by not catching any possible exception here.*/ File.Copy(file, //src Path.Combine(Dst, Path.GetFileName(file)), //dest true);//bool overwrite }); //Call this function again for every directory in the folder. Parallel.ForEach(Directory.GetDirectories(Src), dir => { copyDirectory(dir, Path.Combine(Dst, Path.GetFileName(dir))); }); } } The Threads debug window shows 417 worker threads at the time of the exception. EDIT: The copying is from one server to another. I'm now trying to run the code with the last Parallel.ForEach changed to a regular foreach. A: Making a few guesses here, as I haven't yet had feedback from the comment to your question. I am guessing that the large number of worker threads is happening because actions (an action being the unit of work carried out by the parallel foreach) are taking longer than a specified amount of time, so the underlying ThreadPool is growing the number of threads. This happens because the ThreadPool follows an algorithm of growing the pool so that new tasks are not blocked by existing long-running tasks, e.g. if all my current threads have been busy for half a second, I'll start adding more threads to the pool. However, you are going to get into trouble if all tasks are long-running and new tasks that you add make existing tasks run even longer. This is why you are probably seeing a large number of worker threads - possibly because of disk thrashing or slow network IO (if networked drives are involved). I am also guessing that files are being copied from one disk to another, or from one location to another on the same disk. In this case, adding threads to the problem is not going to help out much. 
The source and destination disks only have one set of heads, so trying to make them do multiple things at once is likely to actually slow things down: the disk heads will be lurching all over the place, and your disk/OS caches may be frequently invalidated. This may not be a problem that is well suited to parallelization. Update In answer to your comment: if you are getting a speed-up using multiple threads on smaller datasets, then you could experiment with lowering the maximum number of threads used in your parallel foreach, e.g. ParallelOptions options = new ParallelOptions { MaxDegreeOfParallelism = 2 }; Parallel.ForEach(Directory.GetFiles(Src), options, file => { //Do stuff }); But please do bear in mind that disk thrashing may negate any benefits from parallelization in the general case. Play about with it and measure your results.
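The MaxDegreeOfParallelism advice above is language-agnostic: cap how many copy operations run at once instead of letting the pool grow unbounded. As a hedged sketch of the same idea in Python (used here only because it is easy to run; the directory layout and the worker count of 2 are assumptions, not part of the original program):

```python
import shutil
import tempfile
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def copy_tree_bounded(src: Path, dst: Path, max_workers: int = 2) -> None:
    """Copy every file under src to dst, running at most max_workers
    copies at once (the analogue of ParallelOptions.MaxDegreeOfParallelism)."""
    dst.mkdir(parents=True, exist_ok=True)
    files = [p for p in src.rglob("*") if p.is_file()]
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = []
        for f in files:
            target = dst / f.relative_to(src)
            target.parent.mkdir(parents=True, exist_ok=True)
            futures.append(pool.submit(shutil.copy2, f, target))
        for fut in futures:
            fut.result()  # re-raise any copy error here, per source/dest pair

if __name__ == "__main__":
    src = Path(tempfile.mkdtemp())
    dst = Path(tempfile.mkdtemp()) / "backup"
    for i in range(5):
        (src / f"file{i}.txt").write_text(f"data {i}")
    copy_tree_bounded(src, dst, max_workers=2)
```

Raising max_workers past the number of spindles rarely helps disk-bound copies, which is exactly the thrashing argument made above.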
{ "pile_set_name": "StackExchange" }
Q: Create Dynamic Controls using retrieved data [Asp - Vb .Net] I'm trying to create multiple controls using data retrieved from a query, while preventing them from disappearing on postback, allowing me to get and maintain their values. The problem I have is that I cannot create them on Init, because the number of controls, their IDs, and other properties are only known after the user selects an item on the menu. Page loads a Menu with all its items and values (data dependent); plus, a button is loaded too. User clicks an item on the menu. The selected value is used to perform a query by using a Dataset (this happens inside a function which is called from the Menu_ItemClick event). Data retrieved from the query is used to determine how many controls must be created (2, 4, 6, etc.). Each control has its unique ID, which is given according to the data. Controls are created and placed into a Panel (named p). Now controls are visible and available for editing (RadioButtons, TextAreas). User clicks on the button to save information from the dynamic controls into a database. Problems I'm facing: Controls disappear on postback when clicking the button, because they weren't created on Init. Placing the button in an UpdatePanel to prevent a whole-page postback makes the dynamic controls not accessible when trying this: For Each c In p.Controls ... Next The only control it gets is a single Literal control (Controls count is 1), as if the other controls didn't exist. Thank you in advance. A: When you wrote "Controls disappear on postback when clicking button, because they weren't created on Init", did you mean to say that "Controls disappear on postback when clicking button, because they weren't re-created on Init"? If not, then that is likely a root cause of your problem - dynamically-created controls must always be recreated in response to a PostBack (cf. ASP.NET dynamically created controls and Postback). 
There may be other issues as well, as dynamic controls in Web Forms can provide a lot of challenges as your scenario gets more involved - here's one article that lays out many of them under various scenarios http://www.singingeels.com/Articles/Dynamically_Created_Controls_in_ASPNET.aspx (e.g., if the user can re-select from the DropDownList to generate a different set of dynamic controls). The canonical reference on all of this is http://msdn.microsoft.com/en-us/library/ms178472.aspx. Now, on PostBack you'll need some way to ascertain which controls were dynamically created so they can be dynamically re-created. As such, you'll need to store somewhere whatever information allowed you to dynamically create the controls. Since ViewState isn't available in Page_Init and there can be other issues introduced when using sessions, my suggestion is to simply declare a HiddenField that contains that state information. In Page_Init, you'll then need to get the HiddenField's value from Request.Form (since the value of your HiddenField won't be loaded until after Page_Init from ViewState) and go from there to re-create your controls. My final suggestion: try getting everything working with a regular Panel first and then try and introduce the UpdatePanel - no need to over-complicate the problem at first.
Q: Text goes off screen after using .aspectRatio(contentMode: .fill) I am trying to make an app with a background image using SwiftUI. However, the image is not the same aspect ratio as the screen, making me use .aspectRatio(contentMode: .fill) to fill the entire screen with it. This works completely fine until I start adding text. When adding text, it now goes off the screen instead of wrapping like it normally should. Here's my code: struct FeaturesView: View { var body: some View { ZStack { Image("background") .resizable() .aspectRatio(contentMode: .fill) .edgesIgnoringSafeArea(.all) VStack(alignment: .leading) { Text("Hello this is some sample text that i am writing to show that this text goes off the screen.") } .foregroundColor(.white) } } } And this is the preview: As you can see, the text goes off the screen. I have tried using .frame() and specifying a width and height to fix it, but that causes issues when using the view inside other views. I am using the Xcode 12 beta. I'm new to Swift and SwiftUI, so all help is appreciated :) A: Of course it does, because the image expands the frame of its container, i.e. the ZStack, wider than the screen width. Here is a solution: make the image live its own life as a real background, so it does not affect the other views. Tested with Xcode 12 / iOS 14. var body: some View { ZStack { // ... other content VStack(alignment: .leading) { Text("Hello this is some sample text that i am writing to show that this text goes off the screen.") } .foregroundColor(.white) } .frame(maxWidth: .infinity, maxHeight: .infinity) .background( Image("background") .resizable() .aspectRatio(contentMode: .fill) .edgesIgnoringSafeArea(.all) ) }
Q: Get the POST data back on Web API I have a Web API server that runs in C#. The HttpGet methods work perfectly, but I am too new to Web API to get the POST to work, and the searches I've done have been a bit fruitless. This is my HttpPost in the ApiController [HttpPost] public bool UploadLogs(UploadLogsIn logs) { return true; } This is the model public class UploadLogsIn { public byte[] logData { get; set; } public int aLogs { get; set; } } In a C++ application I try to post data to this method. I'm using curl to do the POST CURL *curl; CURLcode res; curl = curl_easy_init(); if (curl) { curl_easy_setopt(curl, CURLOPT_URL, "http://192.168.56.109:9615/api/WebApiService/UploadLogs"); curl_easy_setopt(curl, CURLOPT_POSTFIELDS, R"({"UploadLogsIn": [{"aLogs: 10}]})"); curl_easy_setopt(curl, CURLOPT_POST, 1L); curl_easy_setopt(curl, CURLOPT_TIMEOUT, 20); curl_easy_perform(curl); } curl_easy_cleanup(curl); When I debug the Web API, the method gets hit, but the parameter does not contain any data. UPDATE With Wireshark, this is the information that is sent POST /api/WebApiService/UploadLogs HTTP/1.1 Host: 192.168.56.109:9615 Accept: */* Content-Length: 32 Content-Type: application/x-www-form-urlencoded Form item: "{"UploadLogsIn": [{"aLogs: 10}]}" = "" END UPDATE If I add a header struct curl_slist *headers = NULL; headers = curl_slist_append(headers, "Accept: application/json"); headers = curl_slist_append(headers, "Content-Type: application/json"); headers = curl_slist_append(headers, "charsets: utf-8"); then my parameter is null. The Wireshark dump for this is POST /api/WebApiService/UploadLogs HTTP/1.1 Host: 192.168.56.109:9615 Accept: application/json Content-Type: application/json charsets: utf-8 Content-Length: 32 Line-based text data: application/json {"UploadLogsIn": [{"aLogs: 10}]} I'm sure there is something dumb that I'm not doing right. Any help will be appreciated. A: There is a mismatch between the class you want and the data being sent. 
Your desired class looks like this public class UploadLogsIn { public byte[] logData { get; set; } public int aLogs { get; set; } } But the data being sent looks like this {"UploadLogsIn": [{"aLogs": 10}]} Which when deserialized looks like this public class UploadLogsIn { public int aLogs { get; set; } } public class RootObject { public IList<UploadLogsIn> UploadLogsIn { get; set; } } See the difference? To get the model to bind to this action [HttpPost] public bool UploadLogs([FromBody]UploadLogsIn logs) { return true; } The data sent would have to look like this {"aLogs": 10}
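The shape mismatch described above is easy to check with any JSON library; here is a quick sketch in Python (used purely as a neutral JSON tool, not because the server is Python). Note also that the payload the question actually sends is not even valid JSON: the key aLogs is missing its closing quote, which by itself would make deserialization fail.

```python
import json

# The wrapper shape the C++ client sent (with the quoting bug fixed):
wire = '{"UploadLogsIn": [{"aLogs": 10}]}'
sent = json.loads(wire)

# It deserializes to an object holding a *list* of objects, which
# matches the RootObject / IList<UploadLogsIn> pair in the answer...
assert isinstance(sent["UploadLogsIn"], list)
assert sent["UploadLogsIn"][0]["aLogs"] == 10

# ...whereas the [FromBody]UploadLogsIn parameter wants the flat shape:
flat = json.loads('{"aLogs": 10}')
assert flat == {"aLogs": 10}
```

The same check applies to any strongly typed binder: the top-level JSON value must mirror the parameter's class, not wrap it in a named envelope.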
Q: Planetary nitrification: could it work? Imagine an alien species arrived on Earth, to find that the nitrogen in the air was toxic to them. Imagine they wanted to terraform our planet, and decided to fix this problem by nitrifying a lot of our atmosphere. They wouldn't want to completely halt the nitrogen cycle; they'd just want to convert enough nitrogen into nitrates so that they wouldn't die the minute they left their spaceship. So my question is, if they took this course of action, and, say, the nitrogen concentration in the atmosphere was halved, would that create some kind of ecological crisis that would only make their living here more problematic? And if it is so that converting so much nitrogen into nitrates would be pointless because denitrifying bacteria will only return the nitrogen to the atmosphere, could they convert the nitrogen into another unreactive compound without affecting all the plants that need nitrates for growth and protein production? Note: I know that there are actually two steps to convert nitrogen into nitrates, but I've just used 'nitrification' as a shortcut. A: I'm going to have to argue against your selected answer. The problem is, just how are yon aliens going to get rid of the nitrogen? You referred to "converting so much nitrogen into nitrates" which suggests that this is what you had in mind. So the N2 molecules get converted to NO3 ions, right? Mmm, no. In fixing the nitrogen you deplete the atmosphere of much more oxygen than you do nitrogen, specifically 3 times as much. So, since the earth's atmosphere is (roughly) 80% nitrogen and 20% oxygen, when about 8.3% of the 80% has been converted to NO3, ALL of the oxygen will be used up. The other alternative, ammonia, is much worse. Ammonia, NH3, takes 3 times as much hydrogen as nitrogen, and since the total free hydrogen in the atmosphere is only 55 parts per million (by volume, not weight), there's just not a lot of leverage you can get. 
Of course, if the aliens are examples of Clarke's Law ("Any sufficiently advanced technology is indistinguishable from magic"), they can get around this by transmuting about 3/8 of the nitrogen to oxygen, binding it to 1/8 of the nitrogen, and thereby nitrifying 1/2 of the total nitrogen. Is this a good idea? I'd suggest not. Let's ignore the energy release caused by converting nitrogen to oxygen - those aliens are really good at the "indistinguishable from magic" part. If we're not willing to overlook this, the surface of the earth turns into crispy critters, and this is generally considered to be A Bad Thing. NO3 is the standard measure of nitrogen in fertilizers, and dumping this much nitrate into the soil would constitute (essentially) massive overfertilization and would quickly kill most plants. The NO3 produced accounts for about half of the total atmosphere, and atmospheric pressure is 15 lb/sq in, so the total NO3 will amount to about 7 pounds per square inch, which is far beyond anything we have experience with. For a somewhat specialized perspective, see http://www.growweedeasy.com/nitrogen-toxicity-cannabis. About 70% of this nitrate would be directly deposited in the oceans, and water runoff would contain extraordinarily high levels of nitrogen, which would quickly produce algal blooms in the world's oceans, depleting the oxygen levels and killing off all surface fish, followed more slowly by the deeper species as the oxygen depletion worked its way downwards. The local, surface version of this phenomenon is already seen in places like the Gulf of Mexico, where fertilizer runoff from farms along the Mississippi River causes "dead zones". Of course, this assumes that the nitrate levels would not directly kill the existing algae (not a good assumption, IMO, but I'm not an expert in the field). With all due respect, this is a good example of the principle that you should wait at least 24 hours before selecting an answer. A: It's fine-ish. 
It happens already; the aliens have friends here on Earth. Rhizobia and other bacteria convert atmospheric nitrogen for us higher-order species on a regular basis, since we can't use atmospheric nitrogen anyway. Atmospheric nitrogen constitutes a large portion of the atmosphere, so you will have a few years of climatic effects, but on the ground, not a huge difference (other than the effects of climate change), because the vast majority of the atmosphere is in the 5 km to 16 km range, not the 1% or so that we live in. So brace yourselves for weather issues and very fertile growth. I hope the aliens come hungry.
Q: How can I save multiple images to a temporary folder to preview before confirming the upload? I have a site where I would like to let users select multiple images to upload to the site, show them a preview of the images they uploaded and then give them the option to either change any of the images or to go ahead and upload them. I would like for them to: Select the images Click a submit button and have the images save to a temporary location Reload the page showing the images they uploaded with a "Change Photo" button next each photo The user can click on the "Change Photo" button to have the page reload with an upload form so that the user can pick and upload a new photo to the temporary location and have it replace the photo in question Or the user can click on the "Upload Photos" button to confirm the upload and have it save it in a permanent location I am using php, but I don't mind a solution that involves javascript with php, as long as the above criteria is met. How can I save multiple images to a temporary folder to preview before confirming the upload? A: Just a suggestion, you can use jQuery uploadify to upload multiple images through ajax and save in some temp folder, and again display these uploaded image which when confirmed by user to be upload then you can move those images to your destination folder deleting it from temp folder.
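The suggested flow (stage uploads in a temp folder, let "Change Photo" overwrite a staged file, move everything to permanent storage on confirm) can be sketched server-side in a few functions. This is a hedged illustration in Python using only the standard library; the folder layout and function names are invented, and a real PHP implementation would also need per-session temp folders and cleanup of abandoned previews.

```python
import shutil
import tempfile
from pathlib import Path

TEMP_ROOT = Path(tempfile.mkdtemp(prefix="previews_"))   # step 2: preview area
FINAL_ROOT = Path(tempfile.mkdtemp(prefix="uploads_"))   # step 5: permanent area

def stage_upload(name: str, data: bytes) -> Path:
    """Save an uploaded image to the temporary location for previewing."""
    staged = TEMP_ROOT / name
    staged.write_bytes(data)
    return staged

def replace_staged(name: str, data: bytes) -> Path:
    """'Change Photo': overwrite the staged copy with a new upload."""
    return stage_upload(name, data)

def confirm_uploads() -> list:
    """'Upload Photos': move every staged file to permanent storage."""
    moved = []
    for staged in sorted(TEMP_ROOT.iterdir()):
        final = FINAL_ROOT / staged.name
        shutil.move(str(staged), str(final))
        moved.append(final)
    return moved
```

The page between steps simply renders `<img>` tags pointing at the staged files, so the preview costs nothing beyond the temp copy.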
Q: The 'select' call keeps finding that the client socket can be read It's hard for me to describe my question; the title above may not be clear. I have an ordinary TCP server which has a listening socket and can accept up to 32 clients. I have created sockets for each client, and I use the select system call to monitor which clients can be read. The following code is a snippet of my program. Here, rset is a member variable of my class and its type is fd_set; I have zeroed it using FD_ZERO in the constructor. timeo is also a member variable, of type struct timeval, initialized to 10 seconds. this->sock is again a member variable, used for listening and accepting new clients; I have called FD_SET(this->sock, &rset) before. print_trace is just a macro which prints the message and appends a '\n'. while(1) { int count = select(FD_SETSIZE, &rset, /*&wset*/ NULL, NULL, &timeo); printf("%d fds\n", count); if(count) { if(FD_ISSET(this->sock, &rset)) { // new connection comes and now this line will not block if((csocks[sock_count] = accept(this->sock, NULL, NULL))) { FD_SET(csocks[sock_count], &rset); ++sock_count; } } else { print_trace("there are clients can be read."); for(int i = 0 ; i < sock_count ; ++i) { if(FD_ISSET(csocks[i], &rset)) { char buffer[512] = {0}; recv(csocks[i], buffer, 512, 0); printf("here client socket number: %d, i=%d, message: %s\n", csocks[i], i, buffer); } } } } timeo.tv_sec = 10; timeo.tv_usec = 0; } I know I am not re-enabling this->sock using FD_SET (select will clear all the bits on timeout), but that has nothing to do with my problem. My problem is: when I run the server program and, within 10 seconds, run a client program to connect to this server, select returns 1 normally, so the client socket is created and added to rset, and the server goes to the next loop. Caution! Now, right now, terminate the client program immediately; don't wait for select to return. 
Okay, now the trouble appears: the server program keeps printing the following info: 1 fds there are clients can be read. here client socket number: 6, i=2, message: 1 fds there are clients can be read. here client socket number: 6, i=2, message: 1 fds there are clients can be read. here client socket number: 6, i=2, message: 1 fds there are clients can be read. here client socket number: 6, i=2, message: ... ... I used tcpdump to monitor the connection: when the client terminates, it just sends a FIN packet and the server just sends an ACK packet; there is no other data flowing over the connection. Why does select keep finding that the client socket can be read, when it just reads an empty message (as the printed output shows)? Any help will be appreciated. Update: I thought I might not be using select properly, so I spent about an hour studying the method in order to solve my problem, but still found nothing. A: You're ignoring the result returned by recv(), which is itself an error, and specifically you're ignoring the possibility that it's zero, which is end of stream, on which you should close the socket, so you don't just select on EOS again.
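The answer's point, that recv() returning zero bytes signals end of stream and that the socket stays "readable" until it is closed, can be reproduced in a few lines. Here is a hedged sketch in Python, whose socket/select calls mirror the C API; socketpair stands in for the accepted client connection.

```python
import select
import socket

server, client = socket.socketpair()
client.close()  # the peer sends FIN, like terminating the client program

# The closed connection is *readable*: that is how EOS is delivered.
readable, _, _ = select.select([server], [], [], 1.0)
assert server in readable

data = server.recv(512)
assert data == b""   # zero bytes read means end of stream, not "empty message"

# Until the socket is closed, select keeps reporting it readable --
# exactly the busy loop seen in the question.
readable, _, _ = select.select([server], [], [], 1.0)
assert server in readable

server.close()  # remove it from consideration; the loop goes quiet
```

In the question's loop, the fix is to check the return value of recv(): when it is 0, close the socket and stop selecting on it.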
Q: Unable to get provider com.facebook.internal.FacebookInitProvider: java.lang.ClassNotFoundException I am trying to share a photo from my application to facebook. I have added the Facebook SDK and done the initial setup. But when I run the application its crashing and I am getting the following exception. Here is my logcat: java.lang.RuntimeException: Unable to get provider com.facebook.internal.FacebookInitProvider: java.lang.ClassNotFoundException: Didn't find class "com.facebook.internal.FacebookInitProvider" on path: DexPathList[[zip file "/data/app/com.ignite.a01hw909350.kolamdemo-2/base.apk", zip file "/data/app/com.ignite.a01hw909350.kolamdemo-2/split_lib_slice_1_apk.apk"],nativeLibraryDirectories=[/data/app/com.ignite.a01hw909350.kolamdemo-2/lib/arm64, /data/app/com.ignite.a01hw909350.kolamdemo-2/base.apk!/lib/arm64-v8a, /data/app/com.ignite.a01hw909350.kolamdemo-2/split_lib_slice_1_apk.apk!/lib/arm64-v8a, /vendor/lib64, /system/lib64]] at android.app.ActivityThread.installProvider(ActivityThread.java:5267) at android.app.ActivityThread.installContentProviders(ActivityThread.java:4859) at android.app.ActivityThread.handleBindApplication(ActivityThread.java:4799) at android.app.ActivityThread.access$1600(ActivityThread.java:168) at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1434) at android.os.Handler.dispatchMessage(Handler.java:102) at android.os.Looper.loop(Looper.java:148) at android.app.ActivityThread.main(ActivityThread.java:5609) at java.lang.reflect.Method.invoke(Native Method) at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:797) at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:687) Caused by: java.lang.ClassNotFoundException: Didn't find class "com.facebook.internal.FacebookInitProvider" on path: DexPathList[[zip file "/data/app/com.ignite.a01hw909350.kolamdemo-2/base.apk", zip file 
"/data/app/com.ignite.a01hw909350.kolamdemo-2/split_lib_slice_1_apk.apk"],nativeLibraryDirectories=[/data/app/com.ignite.a01hw909350.kolamdemo-2/lib/arm64, /data/app/com.ignite.a01hw909350.kolamdemo-2/base.apk!/lib/arm64-v8a, /data/app/com.ignite.a01hw909350.kolamdemo-2/split_lib_slice_1_apk.apk!/lib/arm64-v8a, /vendor/lib64, /system/lib64]] at dalvik.system.BaseDexClassLoader.findClass(BaseDexClassLoader.java:56) at java.lang.ClassLoader.loadClass(ClassLoader.java:511) at java.lang.ClassLoader.loadClass(ClassLoader.java:469) at android.app.ActivityThread.installProvider(ActivityThread.java:5252) at android.app.ActivityThread.installContentProviders(ActivityThread.java:4859)  at android.app.ActivityThread.handleBindApplication(ActivityThread.java:4799)  at android.app.ActivityThread.access$1600(ActivityThread.java:168)  at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1434)  at android.os.Handler.dispatchMessage(Handler.java:102)  at android.os.Looper.loop(Looper.java:148)  at android.app.ActivityThread.main(ActivityThread.java:5609)  at java.lang.reflect.Method.invoke(Native Method)  at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:797)  at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:687)  Suppressed: java.lang.ClassNotFoundException: com.facebook.internal.FacebookInitProvider at java.lang.Class.classForName(Native Method) at java.lang.BootClassLoader.findClass(ClassLoader.java:781) at java.lang.BootClassLoader.loadClass(ClassLoader.java:841) at java.lang.ClassLoader.loadClass(ClassLoader.java:504) ... 
12 more Caused by: java.lang.NoClassDefFoundError: Class not found using the boot class loader; no stack trace available Here is my Manifest.xml <application android:name=".AppController" android:allowBackup="true" android:icon="@mipmap/ic_launcher" android:label="@string/app_name" android:largeHeap="true" android:roundIcon="@mipmap/ic_launcher_round" android:supportsRtl="true" android:theme="@style/AppTheme"> <meta-data android:name="com.facebook.sdk.ApplicationId" android:value="@string/facebook_app_id"/> <activity android:name=".MainActivity" /> <activity android:name=".ARCameraActivity" android:configChanges="orientation|screenSize" android:screenOrientation="fullSensor" /> <activity android:name=".RegistrationActivity" android:screenOrientation="portrait" /> <activity android:name=".LoginActivity" android:screenOrientation="portrait" /> <activity android:name=".SplashActivity" android:screenOrientation="portrait"> <intent-filter> <action android:name="android.intent.action.MAIN" /> <category android:name="android.intent.category.LAUNCHER" /> </intent-filter> </activity> <activity android:name=".MenuActivity" android:screenOrientation="portrait" /> <activity android:name=".ScanAndDrawActivity" android:screenOrientation="portrait" /> <activity android:name=".GalleryActivity" android:screenOrientation="portrait" /> <activity android:name=".PdfKolamActivity" android:screenOrientation="portrait" /> <service android:name=".BluetoothService" android:enabled="true" android:exported="true"> <intent-filter> <action android:name="com.ignite.a01hw909350.kolamdemo.BluetoothService" /> </intent-filter> </service> <receiver android:name=".MyScheduleReceiver" android:enabled="true"> <intent-filter> <action android:name="android.bluetooth.adapter.action.STATE_CHANGED" /> </intent-filter> </receiver> <activity android:name=".BotDialogActivity" android:launchMode="singleInstance" android:noHistory="true" android:theme="@style/Theme.AppCompat.Light.Translucent" /> <activity 
android:name=".ModelActivity" /> <activity android:name=".PanchangActivity" /> <receiver android:name=".MyStartServiceReceiver" android:exported="true"/> <service android:name=".services.AlarmService" android:enabled="true"> <intent-filter> <action android:name="NOTIFICATION_SERVICE" /> </intent-filter> </service> <receiver android:name=".BootReceiver" android:enabled="true"> <intent-filter> <action android:name="android.intent.action.BOOT_COMPLETED" /> <action android:name="android.intent.action.QUICKBOOT_POWERON" /> </intent-filter> </receiver> <provider android:authorities="com.facebook.app.FacebookContentProvider43234236033829" android:name="com.facebook.FacebookContentProvider" android:exported="true"/> </application> Here is my build.gradle: apply plugin: 'com.android.application' android { compileSdkVersion 25 buildToolsVersion "25.0.2" defaultConfig { applicationId "com.ignite.a01hw909350.kolamdemo" minSdkVersion 17 targetSdkVersion 25 versionCode 1 versionName "1.0" testInstrumentationRunner "android.support.test.runner.AndroidJUnitRunner" } buildTypes { release { minifyEnabled false proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro' } } aaptOptions { noCompress 'KARMarker' noCompress 'armodel' } repositories { jcenter() maven() } } dependencies { compile fileTree(include: ['*.jar'], dir: 'libs') androidTestCompile('com.android.support.test.espresso:espresso-core:2.2.2', { exclude group: 'com.android.support', module: 'support-annotations' }) compile project(':KudanAR1') compile project(':eventbus-3.0.0') compile 'com.android.support:appcompat-v7:25.3.1' compile 'com.android.support.constraint:constraint-layout:1.0.2' compile 'com.android.volley:volley:1.0.0' compile 'com.scottyab:secure-preferences-lib:0.1.4' compile 'com.jrummyapps:animated-svg-view:1.0.1' compile 'me.zhanghai.android.materialprogressbar:library:1.3.0' compile 'io.palaima:smoothbluetooth:0.1.0' compile 'com.android.support:recyclerview-v7:25.3.1' compile 
'com.afollestad.material-dialogs:core:0.9.3.0' compile 'com.flurgle:camerakit:0.9.13' compile 'com.github.zhukic:sectioned-recyclerview:1.0.0' compile 'com.android.support:support-vector-drawable:25.3.1' compile 'com.android.support:cardview-v7:25.3.1' compile 'com.prolificinteractive:material-calendarview:1.4.3' compile 'com.github.bumptech.glide:glide:3.7.0' compile 'com.android.support:design:25.3.1' compile 'com.github.barteksc:android-pdf-viewer:2.4.0' compile 'org.rajawali3d:rajawali:1.1.668@aar' compile 'com.tapadoo.android:alerter:1.0.8' compile 'com.google.android.gms:play-services-location:10.0.1' compile 'uk.co.chrisjenx:calligraphy:2.3.0' compile 'com.facebook.android:facebook-android-sdk:[4,5)' testCompile 'junit:junit:4.12' } Where is the problem coming from? A: I had the same error because of a badly configured multidex setup. The problem occurred on a device with Android 4.4.2. This is what I had in build.gradle: defaultConfig { multiDexEnabled true } And I had to add this to build.gradle: dependencies { compile 'com.android.support:multidex:1.0.2' } And this method to my application class: protected void attachBaseContext(Context base) { super.attachBaseContext(base); MultiDex.install(this); } The answer originally comes from here; it's a problem I had to solve after I had worked around the Facebook problem by removing the Facebook SDK. Maybe another tip from there can help.
Q: Show Button 5 seconds after program start Does anyone know how to show a button (JavaFX) 5 seconds after the program starts? It is a button that allows going to the next page. A: Just use PauseTransition and set an event handler for when the transition finishes. Start your transition in the primary stage's onShown event handler. Button delayedButton = new Button("Next"); delayedButton.setVisible(false); primaryStage.setOnShown(ev -> { PauseTransition pt = new PauseTransition(Duration.seconds(5)); pt.setOnFinished(e -> { delayedButton.setVisible(true); }); pt.play(); });
Q: cloning a private git repo in jenkins, bitbucket I am having errors cloning a private Bitbucket repo with Jenkins. I've followed the debug steps from here: https://wiki.jenkins-ci.org/display/JENKINS/Git+Plugin Create ssh keys Added ssh key to Bitbucket as a deployment key Successfully cloned that repo with that ssh key in my user account on the server Copied keys and known hosts into C:\Windows\SysWOW64\config\systemprofile\.ssh Checked that the Jenkins service was running under the local system account Start build and then -> Error What am I doing wrong? ERROR: Error cloning remote repo 'origin' : Could not clone ssh:///[email protected]:myUsername/myRepo.git hudson.plugins.git.GitException: Could not clone ssh:///[email protected]:myUsername/myRepo.git Caused by: hudson.plugins.git.GitException: Command "git.exe clone --progress -o origin ssh:///[email protected]:myUsername/myRepo.git C:\Program Files (x86)\Jenkins\workspace\myProject" returned status code 128: stdout: Cloning into 'C:\Program Files (x86)\Jenkins\workspace\myProject'... stderr: ssh: connect to host port 22: Bad file number fatal: The remote end hung up unexpectedly A: Ok, dumb fix. In Jenkins I was putting in the repository URL in the project configuration like their example ssh://[email protected]:me/project.git Which was incorrect; it should be [email protected]:me/project.git
{ "pile_set_name": "StackExchange" }
Q: Microsoft Unit Testing for ASP.NET with VS2012

I am ripping apart and putting back together a large website and want to take this opportunity to do some test-driven development as the site is recreated. The issue that I am running into is how. A lot of the items that I need to test deal with session variables (or other variables) that are set when a user logs in. But if I am testing an individual page, I don't ever log in. For example:

[TestMethod()]
[HostType("ASP.NET")]
[UrlToTest("http://localhost:64769/UsersDetail.aspx")]
public void GetCompanyId_Test()
{
    var testID = GetCompanyID();
    Assert.AreEqual("123456789", testID);
}

Now, the problem is that in order for GetCompanyID to work, it has to have variables available that are set at login. Is this possible? Do I have to mock up the data in some way? Thanks

A: You'll probably have to change the GetCompanyID function to use some variables that are filled by the session in the live system, and by your test setup in the unit test system. Alas, you can't mock HttpSessionState in ASP.NET - see here: How do I mock/fake the session object in ASP.Net Web forms? Another idea would be to actually perform the login action in the test setup.
{ "pile_set_name": "StackExchange" }
Q: Figure out smallest needed rotation to target rotation from current rotation

I have the current rotation (where the smartphone is looking) and a target angle I want the user to look at. Both are in the range of 0 to 360.

int current = 340;
int target = 45;

How can I figure out the smallest needed rotation, either left or right, to the target angle? Simply subtracting the values makes for an inefficient rotation. A rotation to the right should be a positive value; a rotation to the left should be negative.

A: Solved it with the following one-liner:

int neededRotation = (int) (-1 * ((currentDirection - calculateAngle(x, y) + 540) % 360 - 180));
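The one-liner generalizes to a small function. A Python transcription (names changed: `current` stands for currentDirection and `target` for the calculateAngle result), checked against the figures from the question:

```python
def needed_rotation(current, target):
    """Smallest signed rotation in degrees from `current` to `target`.

    Positive = turn right, negative = turn left; result lies in (-180, 180].
    Adding 540 before the modulo keeps the intermediate value positive for
    any pair of headings in [0, 360).
    """
    return -(((current - target + 540) % 360) - 180)

if __name__ == "__main__":
    print(needed_rotation(340, 45))   # 65  (turn right, not -295)
    print(needed_rotation(10, 350))   # -20 (turn left)
```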
{ "pile_set_name": "StackExchange" }
Q: Dynamically display posts by one author AND from one category on the same page?

I'm trying to edit my author.php WordPress template so that it shows posts by any one author, but only from one particular category. So far I've been trying the query_posts function, which fetches the category okay, but not the author. Depending on which way I do it, the posts either don't display at all or all posts in that category appear regardless of the author. This is the appropriate code, which I've seen quoted by a wordpress.org admin, but it doesn't work for me and I can't find any other examples. Any ideas why that doesn't work? Thanks for your help in advance.

//Gets author info to display on page and for use in query
<?php $curauth = (get_query_var('author_name')) ? get_user_by('slug', get_query_var('author_name')) : get_userdata(get_query_var('author')); ?>

//Queries by category and author and starts the loop
<?php query_posts('category_name=blog&author=$curauth->ID;'); if ( have_posts() ) : while ( have_posts() ) : the_post(); ?>

//HTML for each post

<?php endwhile; else: ?>
<?php echo "<p>". $curauth->display_name ."hasn't written any articles yet.</p>"; ?>
<?php endif; ?>

============ ALSO TRIED ============

<?php new WP_Query( array( 'category_name' => 'blog', 'author' => $curauth->ID ) ); ?>

This doesn't work either; however, it does filter the posts by author, just not by category! What am I doing wrong? Thanks!

A: This task can be done using the pre_get_posts filter. This way it's also possible to filter by author in addition to by category:

// functions.php
add_action( 'pre_get_posts', 'wpcf_filter_author_posts' );

function wpcf_filter_author_posts( $query ){
    // We're not on the admin panel and this is the main query
    if ( !is_admin() && $query->is_main_query() ) {
        // We're displaying an author post list
        // Here you can also set a specific author by id or slug
        if( $query->is_author() ){
            // Here only the category ID or IDs from which to retrieve the posts
            $query->set( 'category__in', array ( 2 ) );
        }
    }
}
{ "pile_set_name": "StackExchange" }
Q: How to evaluate $\int\frac1{t^2}\exp(\int\frac1t\,\mathrm dt)\,\mathrm dt$?

I want to evaluate the following integral: $$x(t)=\int\frac1{t^2}\exp\left(\int\frac1t\,\mathrm dt\right)\,\mathrm dt,$$ where $t$ is any value in $\mathbb{R}\setminus\{0\}$. The next step is $$x(t)=\int\frac1{t^2}\exp(\ln|t|)\,\mathrm dt=\int\frac{|t|}{t^2}\,\mathrm dt.$$ I am not sure how to continue from this step. The "correct solution" is $\ln(t)$, but no step is given. I do not see how to arrive at this solution. I am not given any additional information. Can I make some assumption?

A: Let $\mathbb{R^*}=\mathbb{R}\setminus\{0\}$ and $f:\mathbb{R^*}\to\mathbb{R}$ such that $$f(t)=\frac{\exp(\int\frac{1}{t}\,dt)}{t^2}=\frac{\exp(\ln|t|+c_0)}{t^2}=\frac{c_1|t|}{t^2}\qquad(c_0,c_1\in\mathbb{R}).$$ We will evaluate $\int f(t)\,dt$.

Remember: $t^2>0$ for all $t\in\mathbb{R}\setminus\{0\}$. Similarly, we know that $|t|=\sqrt{t^2}>0$ for all $t\in\mathbb{R}\setminus\{0\}$. From those two facts, we will show that $0<\frac{|t|}{t^2}=\frac{1}{|t|}$ for all $t\in\mathbb{R}\setminus\{0\}$: $$\frac{|t|}{t^2}=\frac{\sqrt{t^2}}{t^2}=\frac{\sqrt{t^2}\times\sqrt{t^2}}{t^2\times\sqrt{t^2}}=\frac{t^2}{t^2\times|t|}=\frac{1}{|t|}.$$

Note: $\frac{|t|}{t^2}=\frac{1}{t}$ only if we restrict our original domain to $\mathbb{R}_{>0}$, the set of positive real numbers.

We will use the fact that for any given $t$ in our domain, $\frac{t}{|t|},\frac{|t|}{t}\in\{-1,1\}$ and $\frac{t}{|t|}\times\frac{|t|}{t}=1$: \begin{align} \int f(t)\,dt &= \int\frac{c_1|t|}{t^2}\,dt \\ &= c_1\int\frac{|t|}{t^2}\,dt \\ &= c_1\int\frac{1}{|t|}\,dt \\ &= c_1\frac{t}{|t|}\int\frac{1}{|t|}\cdot\frac{|t|}{t}\,dt \\ &= c_1\frac{t}{|t|}\int\frac{1}{t}\,dt \\ &= c_1\frac{t}{|t|}\left(\ln|t|+c_2\right) \end{align}

If we assume $c_0=0$, then $c_1=1$ and we arrive at $\frac{t}{|t|}\ln(|t|)+C$.
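The final answer is easy to sanity-check numerically. The sketch below differentiates $\frac{t}{|t|}\ln|t|$ by central differences and compares against the integrand $\frac{|t|}{t^2}$ on both sides of zero (test points are arbitrary):

```python
import math

def F(t):
    # Antiderivative found above: (t/|t|) * ln|t|, with the constants dropped.
    return (t / abs(t)) * math.log(abs(t))

def integrand(t):
    return abs(t) / t**2

# Central-difference derivative of F should match the integrand away from 0.
for t in (-3.0, -0.5, 0.7, 2.5):
    h = 1e-6
    dF = (F(t + h) - F(t - h)) / (2 * h)
    assert abs(dF - integrand(t)) < 1e-5, (t, dF, integrand(t))

print("d/dt [ (t/|t|) ln|t| ] matches |t|/t^2 on both sides of 0")
```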
{ "pile_set_name": "StackExchange" }
Q: Can we use requirejs in an angular application to manage/modularize only a part of the application?

I have an existing angular web app which doesn't use require.js. I have to create a new business module in the existing application. Can I use require.js for the new module only, so that I don't have to touch the existing code? The existing index.html looks like this:

<html>
<head> ... </head>
<body>
...
<script src="http://cdn.gse.site/angular/1.2.9/angular.js"></script>
<script src="js/angular-ui-router.js"></script>
<script src="js/services/angDashboardService.js"></script>
<script src="js/controllers/angDashboardController.js"></script>
<--- More custom scripts here --->
</body>
</html>

I tried including require-main.js in the existing index.html file without removing any of the existing script tags. The require-main.js looks like this:

require.config({
    baseUrl: 'js',
    paths: {
        'angular': '...'
    },
    shim: {
        'angular': { export: 'angular' },
        'new-module': { deps: ['angular'], export: 'new-module' }
    }
});
require(['new-module'], function(){});

I am getting the following error:

Uncaught Error: [ng:btstrpd] App Already Bootstrapped with this Element '<body class="preload ng-scope" ng-app="angDashboard">'

A: Can we use requirejs in an angular application to manage/modularize only a part of the application? Yes, you can (but why...?). You will run into some serious headaches if you are not aware of what you are doing. Anyway, to be able to do this you must fully understand the concepts of both AngularJS and RequireJS.

<script src="http://cdn.gse.site/angular/1.2.9/angular.js"></script>
<script src="js/angular-ui-router.js"></script>
<script src="js/services/angDashboardService.js"></script>
<script src="js/controllers/angDashboardController.js"></script>

This means you already have an Angular app running, so you will not (should not) configure or load Angular anymore with RequireJS:

require.config({
    baseUrl: 'js',
    paths: {
        'async-module': '...'
    }
    // You won't use 'shim' with this structure
    // shim: {}
});

async-module.js:

// assuming somewhere you have done this
// var app = angular.module([...]);
//
// NOW you need to convert this 'app' to a global variable,
// so you can use it in requirejs/define blocks
window.app = angular.module([...]);

define(function(){
    window.app.controller('asyncCtrl', function($scope){
        // controller code goes here
    });
});

Then somewhere inside your app, when you want to load this async-module:

requirejs(['async-module'], function(){
    console.log('asyncCtrl is loaded!');
});

SUMMARY >> It is possible to do what you asked, but it is not very effective and will be hell for code management. If this answer took you less than 5 minutes to understand, you can give it a try. If it took you longer than 5 minutes, I highly recommend that you not do this. Using RequireJS with AngularJS in the common way (everything loaded by RequireJS) is already complicated and tricky, and this use case goes even beyond that.
{ "pile_set_name": "StackExchange" }
Q: Android, "more details" UI best practice? I need to show some data to my user, in my Android app, which fit on one page; and, I want to give her the possibility to show "more details", i.e. more data. In my situation I have some textviews inside a vertical scrollview inside a relative layout. Is there a "best practice", or a recommendation, or just something you like - to implement the "more details" user control? To show an icon, a button, some text, ...? A: Yes, a button saying "More Details" would be fine, then just hide the button and show more details where the button was, at the end of the scroll view.
{ "pile_set_name": "StackExchange" }
Q: Twig role checking redirects instead of returning a boolean

When I try to check user roles in Twig with is_granted(), it does not return a boolean; it just redirects to the login path.

{% if is_granted('ROLE_SUPER_ADMIN') == true %} # tested without == true as well
    <a href="{{ path('foo_bar') }}">Foo Bar Link</a>
{% endif %}

Symfony: 4.1

A: My problem was solved when I changed the Authenticator. Before the change:

$isPasswordValid = $this->encoder->isPasswordValid($user, $token->getCredentials());
if ($isPasswordValid) {
    return new UsernamePasswordToken($user, $user->getPassword(), $providerKey, $user->getRoles());
}

and after changing it to:

$isPasswordValid = $this->encoder->isPasswordValid($user, $token->getCredentials());
if ($isPasswordValid or $token->getUser() instanceof User) {
    return new UsernamePasswordToken($user, $user->getPassword(), $providerKey, $user->getRoles());
}

I appended $token->getUser() instanceof User to the condition.
{ "pile_set_name": "StackExchange" }
Q: please help me with a C++ for loop

I can't get my C++ for loops to produce the output I want. I want it to be:

1
21
321
4321

But I cannot write it that way.

#include <iostream>
using namespace std;

int main()
{
    int num;
    cin >> num;
    for (int i = 1; i <= num; i++)
    {
        for (int j = 1; j <= i; j++)
        {
            cout << j;
        }
        cout << endl;
    }
    cin.get();
}

It outputs:

1
12
123
1234
12345
123456
1234567
12345678
123456789

A: Just change your second loop like this:

for (int j = i; j >= 1; j--)

and it will work. DEMO
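The fix is simply to run the inner loop downwards, from the row number to 1. A compact Python sketch of the same row-building logic, which makes the countdown explicit:

```python
def pattern(n):
    # Row i counts down from i to 1, giving "1", "21", "321", "4321", ...
    return ["".join(str(j) for j in range(i, 0, -1)) for i in range(1, n + 1)]

if __name__ == "__main__":
    for row in pattern(4):
        print(row)
# prints:
# 1
# 21
# 321
# 4321
```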
{ "pile_set_name": "StackExchange" }
Q: Conditional probability: under which conditions are $E$ and $F$ independent?

In a village where there are $M$ women and $H$ men, $m$ women smoke and $h$ men smoke. A person is chosen at random. Let $E$ be the event "the chosen person is female" and $F$ the event "the chosen person smokes". Under which conditions are $E$ and $F$ independent?

My work: We know that $E$ and $F$ are independent if $P(E|F)=\frac{P(E\cap F)}{P(F)}=P(E)$. We need to calculate these probabilities: $P(E\cap F)=\frac{m}{M}$, $P(F)=\frac{mh}{M+H}$, $P(E)=\frac{M}{M+H}$, then $P(E|F)=\frac{P(E\cap F)}{P(F)}=\frac{m(M+H)}{M(mh)}\not=\frac{M}{M+H}$, so the events are dependent. Here I'm stuck. Can someone help me?

A: The events are independent if and only if $P(E\cap F)=P(E)P(F)$. Thus it has to satisfy the following: $$\frac{m}{M+H}=\frac{M}{M+H}\cdot\frac{m+h}{M+H} \iff m=\frac{M(m+h)}{M+H}$$
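The condition $m(M+H)=M(m+h)$ simplifies to $mH=hM$, i.e. the proportion of smokers must be the same among women and men ($\frac{m}{M}=\frac{h}{H}$). A brute-force check over small populations, using exact fractions to avoid floating-point noise:

```python
from fractions import Fraction

def independent(M, H, m, h):
    """True iff 'female' and 'smokes' are independent for these counts."""
    total = M + H
    p_e_and_f = Fraction(m, total)       # chosen person is a woman who smokes
    p_e = Fraction(M, total)             # chosen person is a woman
    p_f = Fraction(m + h, total)         # chosen person smokes
    return p_e_and_f == p_e * p_f

# Independence holds exactly when the smoking rates agree: m*H == h*M.
for M in range(1, 8):
    for H in range(1, 8):
        for m in range(M + 1):
            for h in range(H + 1):
                assert independent(M, H, m, h) == (m * H == h * M)

print("independent  <=>  m*H == h*M  (equal smoking proportions)")
```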
{ "pile_set_name": "StackExchange" }
Q: Symfony2 change default validator messages Is there a way to change the default messages of base constraints (NotBlank, MinLength, etc.) without translation? Thanks. A: No, this is done via the translator.
{ "pile_set_name": "StackExchange" }
Q: Turning on wifi tethering programmatically

Is it possible to turn on the wifi hotspot programmatically, to enable tethering? I've tried the code here and here. Both examples execute without exception, but when I look in the "Tethering & portable hotspot" section in the wifi settings, tethering is still disabled. Is this only possible for internal Google apps?

EDIT: I'm using Android 5.1 and I'm trying to do this without having to root the phone.

A: Try the code below to turn on wifi tethering programmatically. I have tested it and it's working in my application.

public class WifiAccessManager {
    private static final String SSID = "1234567890abcdef";

    public static boolean setWifiApState(Context context, boolean enabled) {
        //config = Preconditions.checkNotNull(config);
        try {
            WifiManager mWifiManager = (WifiManager) context.getSystemService(Context.WIFI_SERVICE);
            if (enabled) {
                mWifiManager.setWifiEnabled(false);
            }
            WifiConfiguration conf = getWifiApConfiguration();
            mWifiManager.addNetwork(conf);
            return (Boolean) mWifiManager.getClass()
                .getMethod("setWifiApEnabled", WifiConfiguration.class, boolean.class)
                .invoke(mWifiManager, conf, enabled);
        } catch (Exception e) {
            e.printStackTrace();
            return false;
        }
    }

    public static WifiConfiguration getWifiApConfiguration() {
        WifiConfiguration conf = new WifiConfiguration();
        conf.SSID = SSID;
        conf.allowedKeyManagement.set(WifiConfiguration.KeyMgmt.NONE);
        return conf;
    }
}

Usage:

WifiAccessManager.setWifiApState(context, true);

Required permission:

<uses-permission android:name="android.permission.CHANGE_WIFI_STATE" />
{ "pile_set_name": "StackExchange" }
Q: Calculating upload speed issue in Delphi

I use Delphi 2010 and the Clever Internet Suite component. I upload a file and want to calculate the upload speed. I tried this code, but it gives me "INF" in the label, plus the wrong speed! What's wrong with this code?

private
  FBytesProceed : Int64;
  FTimeStamp : TDateTime;
  FSpeed : double;
end;

procedure TForm2.clHttp1SendProgress(Sender: TObject; ABytesProceed, ATotalBytes: Int64);
var
  LTimeStamp : TDateTime;
begin
  LTimeStamp := Now;
  if FBytesProceed < ABytesProceed then
  begin
    // calculating bytes per second
    FSpeed := ( ABytesProceed - FBytesProceed ) {bytes}
      / ( ( LTimeStamp - FTimeStamp ) {days} * 24 {hours} * 60 {minutes} * 60 {seconds} );
  end;
  FBytesProceed := ABytesProceed;
  FTimeStamp := LTimeStamp;
  label1.Caption := Format(' speed %n Kbps',[FSpeed / 1024]);
end;

A: As you encountered, the resolution of the system timer isn't very good; I seem to recall that it can be as low as 50ms. Here are two ways to get around this; some of it depends on how your program is structured.

One, you can use a regular TTimer set to a 2-second (or whatever) interval. Each time it fires, you get the byte count, compare it to the count from the last time the timer event fired, and set the caption with the upload rate. This would obviously only work if you're dealing with non-blocking uploads. If you don't want to use a TTimer, you can also do this in a separate thread and have it check the upload every couple of seconds.

Another way is to keep doing what you are doing, but only update the upload rate after a second. What I'd recommend is using GetTickCount() instead of Now() (since you don't actually need the date, just a counter). GetTickCount() brings back an integer representing milliseconds, not a floating-point value. Start a byte count at 0. For every chunk that gets uploaded, add that amount to the byte count. Then check the tick count. If a second has passed since the last caption update, update the caption, set the byte count back to zero, and record what the tick count was for the next time a chunk is uploaded.

(just some pseudo-code to illustrate what I'm talking about in the 2nd option)

t := GetTickCount();
n := t - LastTick;
if (n > 2000) then // 2 seconds
begin
  rate := ByteCount / n; // note: n is in milliseconds, so this is bytes/ms
  caption := Format(....);
  LastTick := t;
  ByteCount := 0;
end;
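The same throttled-update idea can be sketched language-neutrally. A minimal Python version (an illustration, not the Clever Internet Suite API); since tick counts are in milliseconds, the helper converts the rate to KB/s explicitly to avoid the bytes-per-millisecond pitfall:

```python
class RateMeter:
    """Accumulates byte counts and reports KB/s at most once per interval."""

    def __init__(self, interval_ms=2000):
        self.interval_ms = interval_ms
        self.byte_count = 0
        self.last_tick = None

    def update(self, nbytes, tick_ms):
        """Feed one uploaded chunk; returns KB/s when an interval has elapsed,
        otherwise None (meaning: don't repaint the caption yet)."""
        if self.last_tick is None:
            self.last_tick = tick_ms
        self.byte_count += nbytes
        elapsed = tick_ms - self.last_tick
        if elapsed < self.interval_ms:
            return None
        # elapsed is in ms: bytes -> KB, ms -> s
        rate_kb_per_s = (self.byte_count / 1024) / (elapsed / 1000)
        self.byte_count = 0
        self.last_tick = tick_ms
        return rate_kb_per_s
```

In the progress callback you would call `update(chunk_size, tick)` and only refresh the label when it returns a number.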
{ "pile_set_name": "StackExchange" }
Q: Docker-compose does not work in docker-machine I have a cookiecutter-django application with docker that works fine if I run it locally with docker compose -f local.yml up. Now I am trying to deploy it so first I created a docker-machine in my computer (using macOS Catalina) and activated it. Now, inside the docker-machine, the docker-compose build works fine, but when I run it, the application crashes. Any idea what can be happening? I have been trying to solve this for almost a week now... This are my logs when I do docker-compose up in the docker-machine: Creating network "innovacion_innsai_default" with the default driver Creating innovacion_innsai_postgres_1 ... done Creating innovacion_innsai_django_1 ... done Creating innovacion_innsai_node_1 ... done Attaching to innovacion_innsai_postgres_1, innovacion_innsai_django_1, innovacion_innsai_node_1 postgres_1 | 2020-03-16 08:41:12.472 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432 postgres_1 | 2020-03-16 08:41:12.472 UTC [1] LOG: listening on IPv6 address "::", port 5432 postgres_1 | 2020-03-16 08:41:12.473 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432" postgres_1 | 2020-03-16 08:41:12.494 UTC [21] LOG: database system was shut down at 2020-03-16 08:31:09 UTC postgres_1 | 2020-03-16 08:41:12.511 UTC [1] LOG: database system is ready to accept connections django_1 | PostgreSQL is available django_1 | Traceback (most recent call last): django_1 | File "/usr/local/lib/python3.7/site-packages/django/db/backends/utils.py", line 84, in _execute django_1 | return self.cursor.execute(sql, params) django_1 | psycopg2.errors.UndefinedTable: relation "innovation_sector" does not exist django_1 | LINE 1: ...n_sector"."id", "innovation_sector"."sector" FROM "innovatio... 
django_1 | ^ django_1 | django_1 | django_1 | The above exception was the direct cause of the following exception: django_1 | django_1 | Traceback (most recent call last): django_1 | File "manage.py", line 30, in <module> django_1 | execute_from_command_line(sys.argv) django_1 | File "/usr/local/lib/python3.7/site-packages/django/core/management/__init__.py", line 381, in execute_from_command_line django_1 | utility.execute() django_1 | File "/usr/local/lib/python3.7/site-packages/django/core/management/__init__.py", line 375, in execute django_1 | self.fetch_command(subcommand).run_from_argv(self.argv) django_1 | File "/usr/local/lib/python3.7/site-packages/django/core/management/base.py", line 323, in run_from_argv django_1 | self.execute(*args, **cmd_options) django_1 | File "/usr/local/lib/python3.7/site-packages/django/core/management/base.py", line 361, in execute django_1 | self.check() django_1 | File "/usr/local/lib/python3.7/site-packages/django/core/management/base.py", line 390, in check django_1 | include_deployment_checks=include_deployment_checks, django_1 | File "/usr/local/lib/python3.7/site-packages/django/core/management/commands/migrate.py", line 65, in _run_checks django_1 | issues.extend(super()._run_checks(**kwargs)) django_1 | File "/usr/local/lib/python3.7/site-packages/django/core/management/base.py", line 377, in _run_checks django_1 | return checks.run_checks(**kwargs) django_1 | File "/usr/local/lib/python3.7/site-packages/django/core/checks/registry.py", line 72, in run_checks django_1 | new_errors = check(app_configs=app_configs) django_1 | File "/usr/local/lib/python3.7/site-packages/django/core/checks/urls.py", line 40, in check_url_namespaces_unique django_1 | all_namespaces = _load_all_namespaces(resolver) django_1 | File "/usr/local/lib/python3.7/site-packages/django/core/checks/urls.py", line 57, in _load_all_namespaces django_1 | url_patterns = getattr(resolver, 'url_patterns', []) django_1 | File 
"/usr/local/lib/python3.7/site-packages/django/utils/functional.py", line 80, in __get__ django_1 | res = instance.__dict__[self.name] = self.func(instance) django_1 | File "/usr/local/lib/python3.7/site-packages/django/urls/resolvers.py", line 584, in url_patterns django_1 | patterns = getattr(self.urlconf_module, "urlpatterns", self.urlconf_module) django_1 | File "/usr/local/lib/python3.7/site-packages/django/utils/functional.py", line 80, in __get__ django_1 | res = instance.__dict__[self.name] = self.func(instance) django_1 | File "/usr/local/lib/python3.7/site-packages/django/urls/resolvers.py", line 577, in urlconf_module django_1 | return import_module(self.urlconf_name) django_1 | File "/usr/local/lib/python3.7/importlib/__init__.py", line 127, in import_module django_1 | return _bootstrap._gcd_import(name[level:], package, level) django_1 | File "<frozen importlib._bootstrap>", line 1006, in _gcd_import django_1 | File "<frozen importlib._bootstrap>", line 983, in _find_and_load django_1 | File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked django_1 | File "<frozen importlib._bootstrap>", line 677, in _load_unlocked django_1 | File "<frozen importlib._bootstrap_external>", line 728, in exec_module django_1 | File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed django_1 | File "/app/config/urls.py", line 18, in <module> django_1 | path("", include("innovacion_innsai.innovation.urls", namespace="innovation")), django_1 | File "/usr/local/lib/python3.7/site-packages/django/urls/conf.py", line 34, in include django_1 | urlconf_module = import_module(urlconf_module) django_1 | File "/usr/local/lib/python3.7/importlib/__init__.py", line 127, in import_module django_1 | return _bootstrap._gcd_import(name[level:], package, level) django_1 | File "<frozen importlib._bootstrap>", line 1006, in _gcd_import django_1 | File "<frozen importlib._bootstrap>", line 983, in _find_and_load django_1 | File "<frozen 
importlib._bootstrap>", line 967, in _find_and_load_unlocked django_1 | File "<frozen importlib._bootstrap>", line 677, in _load_unlocked django_1 | File "<frozen importlib._bootstrap_external>", line 728, in exec_module django_1 | File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed django_1 | File "/app/innovacion_innsai/innovation/urls.py", line 2, in <module> django_1 | from innovacion_innsai.innovation import views django_1 | File "/app/innovacion_innsai/innovation/views.py", line 9, in <module> django_1 | from .analytics import alimentacion_cases, agro_cases, turismo_cases, movilidad_cases django_1 | File "/app/innovacion_innsai/innovation/analytics.py", line 17, in <module> django_1 | for case in Case.objects.filter(sector__sector=sectors[0]): django_1 | File "/usr/local/lib/python3.7/site-packages/django/db/models/query.py", line 308, in __getitem__ django_1 | qs._fetch_all() django_1 | File "/usr/local/lib/python3.7/site-packages/django/db/models/query.py", line 1242, in _fetch_all django_1 | self._result_cache = list(self._iterable_class(self)) django_1 | File "/usr/local/lib/python3.7/site-packages/django/db/models/query.py", line 55, in __iter__ django_1 | results = compiler.execute_sql(chunked_fetch=self.chunked_fetch, chunk_size=self.chunk_size) django_1 | File "/usr/local/lib/python3.7/site-packages/django/db/models/sql/compiler.py", line 1133, in execute_sql django_1 | cursor.execute(sql, params) django_1 | File "/usr/local/lib/python3.7/site-packages/django/db/backends/utils.py", line 99, in execute django_1 | return super().execute(sql, params) django_1 | File "/usr/local/lib/python3.7/site-packages/django/db/backends/utils.py", line 67, in execute django_1 | return self._execute_with_wrappers(sql, params, many=False, executor=self._execute) django_1 | File "/usr/local/lib/python3.7/site-packages/django/db/backends/utils.py", line 76, in _execute_with_wrappers django_1 | return executor(sql, params, many, context) django_1 
| File "/usr/local/lib/python3.7/site-packages/django/db/backends/utils.py", line 84, in _execute django_1 | return self.cursor.execute(sql, params) django_1 | File "/usr/local/lib/python3.7/site-packages/django/db/utils.py", line 89, in __exit__ django_1 | raise dj_exc_value.with_traceback(traceback) from exc_value django_1 | File "/usr/local/lib/python3.7/site-packages/django/db/backends/utils.py", line 84, in _execute django_1 | return self.cursor.execute(sql, params) django_1 | django.db.utils.ProgrammingError: relation "innovation_sector" does not exist django_1 | LINE 1: ...n_sector"."id", "innovation_sector"."sector" FROM "innovatio... django_1 | ^ django_1 | innovacion_innsai_django_1 exited with code 1 node_1 | node_1 | > [email protected] dev /app node_1 | > gulp node_1 | node_1 | [08:41:22] Using gulpfile /app/gulpfile.js node_1 | [08:41:22] Starting 'default'... node_1 | [08:41:22] Starting 'styles'... node_1 | [08:41:22] Starting 'scripts'... node_1 | [08:41:22] Starting 'imgCompression'... node_1 | [08:41:22] gulp-imagemin: Minified 0 images This is my local.yml: version: '3' volumes: local_postgres_data: {} local_postgres_data_backups: {} services: django: build: context: . dockerfile: ./compose/local/django/Dockerfile image: innovacion_innsai_local_django depends_on: - postgres volumes: - .:/app env_file: - ./.envs/.local/.django - ./.envs/.local/.postgres ports: - "8000:8000" command: /start postgres: build: context: . dockerfile: ./compose/production/postgres/Dockerfile image: innovacion_innsai_production_postgres volumes: - local_postgres_data:/var/lib/postgresql/data - local_postgres_data_backups:/backups env_file: - ./.envs/.local/.postgres #Estas dos siguientes lineas las he añadido yo luego ports: - "5432:5432" node: build: context: . 
dockerfile: ./compose/local/node/Dockerfile image: innovacion_innsai_local_node depends_on: - django volumes: - .:/app # http://jdlm.info/articles/2016/03/06/lessons-building-node-app-docker.html - /app/node_modules command: npm run dev ports: - "3000:3000" # Expose browsersync UI: https://www.browsersync.io/docs/options/#option-ui - "3001:3001" And this is my .postgres file inside the .envs/local folder: # PostgreSQL # ------------------------------------------------------------------------------ POSTGRES_HOST=postgres POSTGRES_PORT=5432 POSTGRES_DB=innovacion_innsai POSTGRES_USER=debug POSTGRES_PASSWORD=debug A: File "/app/innovacion_innsai/innovation/analytics.py", line 17, in <module> django_1 | for case in Case.objects.filter(sector__sector=sectors[0]): It means that when django is initializing (the moment it loads the modules) it's doing DB queries on database that has no migrations applied. You need to transform whatever your code is doing on line 17 to run it lazy.
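"Run it lazily" means deferring the query from import time to first-call time. A plain-Python sketch of the difference (no Django here; the invented `FakeDB.query` stands in for `Case.objects.filter(...)`, and its `ready` flag stands in for "migrations have been applied"):

```python
class FakeDB:
    """Stand-in for the database: queries fail until setup has run."""
    def __init__(self):
        self.ready = False

    def query(self):
        if not self.ready:
            raise RuntimeError("relation does not exist (no migrations yet)")
        return ["case1", "case2"]

db = FakeDB()

# BAD (what analytics.py does): a module-level query runs at import time.
#     CASES = db.query()   # would raise right here, before migrations ran

# GOOD: wrap the query in a function so it runs only when first needed.
def get_cases(_cache={}):
    if "cases" not in _cache:
        _cache["cases"] = db.query()   # executed lazily, after setup
    return _cache["cases"]

db.ready = True          # in real life: migrations have been applied by now
print(get_cases())       # ['case1', 'case2']
```

In Django terms, moving the queryset inside a view or function (or using a lazy construct) achieves the same deferral.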
{ "pile_set_name": "StackExchange" }
Q: mailing list code for site that allows sign up and sign off

I've set up a website, but I don't know a lot of HTML or PHP. I have managed to get a mailing-list sign-up function into it, but I would like people to be able to remove themselves too. Is this possible? There's only one mailing list, so all it needs to do is accept an email address, for sign-up or sign-off.

A: I'm assuming you're using a file system to save. In that case you may need to loop through every line to find the matching string and delete it. Fortunately, since you're using PHP, it's probably easier to use a database like MySQL. Search for "PHP MySQL CRUD" or "PHP MySQL Tutorial" and you should find more help than you'll need. After that it's just something like this:

$db = (MySQL Connection from the tutorials, usually PDO or mysqli);

function saveEmail($db, $name, $email){
    // Simple email validation, you will probably want to validate or sanitize other fields too
    if(!filter_var($email, FILTER_VALIDATE_EMAIL)){
        return 'Email is not valid';
    }
    // Straight query, you may want to look into prepared statements too
    // You may also wish to check for duplicate emails or to set the field as UNIQUE
    $sql = "INSERT INTO table (name, email) VALUES ('$name', '$email')";
    if($db->query($sql)){
        return true;
    }else{
        return 'DB Insert Failed';
    }
}

function deleteEmail($db, $email){
    $sql = "DELETE FROM table WHERE email = '$email'";
    if($db->query($sql)){
        return true;
    }else{
        return 'DB Delete Failed';
    }
}
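The subscribe/unsubscribe pair translates directly to SQL. Here is a self-contained sketch using Python's built-in sqlite3 in place of MySQL, incorporating the answer's own suggestions (parameterized queries instead of string interpolation, and a UNIQUE constraint to block duplicate signups); table and column names are invented for illustration:

```python
import sqlite3

def open_list(path=":memory:"):
    db = sqlite3.connect(path)
    # UNIQUE on email prevents duplicate signups, as suggested in the answer.
    db.execute("CREATE TABLE IF NOT EXISTS subscribers ("
               "name TEXT, email TEXT UNIQUE)")
    return db

def save_email(db, name, email):
    if "@" not in email:               # crude stand-in for FILTER_VALIDATE_EMAIL
        return "Email is not valid"
    try:
        db.execute("INSERT INTO subscribers (name, email) VALUES (?, ?)",
                   (name, email))      # parameterized, like a prepared statement
    except sqlite3.IntegrityError:
        return "Already subscribed"
    return True

def delete_email(db, email):
    cur = db.execute("DELETE FROM subscribers WHERE email = ?", (email,))
    return cur.rowcount == 1           # False if the address wasn't on the list
```

Sign-up and sign-off then share one form: look the address up first, insert if absent, delete if present.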
{ "pile_set_name": "StackExchange" }
Q: How to join two tables on a condition that may not occur?

I have two tables:

Table User, which has an ID field
Table Options, which has userId, optionName, value

How do I select users that have optionName = 'email' and value = 1, or have no entries for email in the Options table?

A: Try this (an inner join can never return the users with no Options rows, so use a left join and test for the unmatched case):

select *
from User u
left join Options o on u.id = o.userId and o.optionName = 'email'
where (o.value = 1) or (o.userId is null)
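"Matching row OR no row at all" is the classic LEFT JOIN shape: keep the optionName filter in the ON clause and test for NULL in WHERE. A runnable SQLite check of that shape (schema from the question; the sample data is invented):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE User(id INTEGER PRIMARY KEY);
CREATE TABLE Options(userId INTEGER, optionName TEXT, value INTEGER);
INSERT INTO User(id) VALUES (1), (2), (3);
INSERT INTO Options VALUES (1, 'email', 1);   -- email on  -> should match
INSERT INTO Options VALUES (2, 'email', 0);   -- email off -> should not match
-- user 3 has no email row at all             -> should match
""")

rows = db.execute("""
    SELECT u.id
    FROM User u
    LEFT JOIN Options o
      ON u.id = o.userId AND o.optionName = 'email'
    WHERE o.value = 1 OR o.userId IS NULL
    ORDER BY u.id
""").fetchall()

print([r[0] for r in rows])  # [1, 3]
```

Putting `optionName = 'email'` in WHERE instead of ON would silently drop the no-row users, because their joined columns are NULL.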
{ "pile_set_name": "StackExchange" }
Q: fancybox 3 iframe doesn't work

I have a problem with FancyBox 3: images work perfectly, but iframes don't. I've followed all the instructions from their site http://fancyapps.com/fancybox/3/docs/#setup but video just won't work, whatever I do. I'm getting this error: http://imgur.com/a/3LUR6. I hope someone has already had the same problem and knows the answer. Thanks anyway.

A: You have to move your code to some web server, or install something like Apache on your computer, to make it work. By the way, AJAX will not work for you either, developing this way.
{ "pile_set_name": "StackExchange" }
Q: Is there any special graph to stand for the Interface in the argoUML tool?

I use the argoUML tool to analyse class architecture. It's easy to find the graph that stands for a Class or a Package, but I can't find a special graph to stand for an Interface. Is the graph that stands for the Class also used for an Interface? Thanks!

A: I found the graph for the Interface. There are a number of differences between class and interface. New class New interface
{ "pile_set_name": "StackExchange" }
Q: How should I store an object in an asp.net web service so that business objects can reference the object? I am building an ASP.NET web service that will be used internally in my company. Exception and trace/audit logging will be performed by the web service class as well as the business objects that the web service will call. The logging is handled by an instance of an internally developed log helper class. The log helper must be an instance as it tracks state and a reference guid that is used to relate the log messages into groups. In the past we handled this situation by passing references to the log helper instance from class to class using the method parameters. I am trying to find a reliable way to find a way to store and access the instance throughout the call without having to explicitly pass it around. I am attempting to store the instance in the HTTPContext during the early stages of the web service call. When my business objects need it later during the call, they will access it as a property of a base class that all my objects inherit from. Initially I tried storing the instance in the web service's Context.Cache. This seemed to work and my research led me to believe that the Cache would be thread safe. It wasn't until I started calling the web service from more than 3 concurrent sessions that the instance of the logger would be shared from call to call rather than be recreated new for each call. I tried Context.Application and found very similar results to the Cache storage. I was able to find what looks like a usable solution with Context.Session. This requires me to use EnableSession = true in the attributes of each method but it does seem to keep the instance unique per call. I do not have a need to track data between calls so I am not storing session cookies in the client space. Is session the optimal storage point for my needs? It seems a little heavy given that I don't need to track session between calls. I'm open to suggestions or criticism. 
I'm sure someone will suggest using the built-in Trace logging or systems like Elmah. Those might be an option for the future, but right now I don't have the time required to go down that path.

Update: I should clarify that this service will need to run on .NET Framework 2.0. We are in the process of moving to 3.5/4.0, but our current production server is Win2000 with a max of 2.0.

A: I take it that, in the past, you have used these business objects in a Windows Forms application? You should not have your business objects dependent on some ambient object. Instead, you should use either constructor injection or property injection to pass the logger object to the business objects. The logger should be represented by an interface, not by a concrete class. The business objects should be passed a reference to some class that implements this interface. They should never know where this object is stored. This will enable you to test the business objects outside of the web service. You can then store the logging object wherever you like. I'd suggest storing it in HttpContext.Current.Items, which is only valid for the current request.

public interface ILogger
{
    void Log(string message);
}

public class Logger : ILogger
{
    public void Log(string message) {}
}

public class BusinessObjectBase
{
    public BusinessObjectBase(ILogger logger)
    {
        Logger = logger;
    }

    protected ILogger Logger { get; set; }
}

public class BusinessObject : BusinessObjectBase
{
    public BusinessObject(ILogger logger) : base(logger) {}

    public void DoSomething()
    {
        Logger.Log("Doing something");
    }
}
Q: iOS Swift get JSON data into tableView I have JSON data that I want to display in a UITableView. The data is dynamic, so the table should update every time the view loads. Can anyone help? { data = ( { id = 102076330; name = "Vicky Arora"; } ) } A: Try this. When you receive the response, get the whole array of dictionaries: if let arr = response["data"] as? [[String:String]] { YourArray = arr // Define YourArray globally } Then in the tableview's cellForRowAtIndexPath method: if let name = YourArray[indexPath.row]["name"] { label.text = name } // You can do the same with id. And don't forget to set the number of rows: override func tableView(tableView: UITableView, numberOfRowsInSection section: Int) -> Int { // Return the number of rows in the section. return YourArray.count }
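The underlying pattern - pull the array out of the response, keep it as the table's data source, and report its count - is the same in any language. A minimal Python sketch of that pattern, with the data and function names invented for illustration:

```python
# Toy response mirroring the JSON shown in the question.
response = {"data": [{"id": "102076330", "name": "Vicky Arora"}]}

# Equivalent of: if let arr = response["data"] as? [[String: String]]
rows = response.get("data") or []

def number_of_rows():
    # Equivalent of numberOfRowsInSection returning YourArray.count
    return len(rows)

def cell_text(index):
    # Equivalent of YourArray[indexPath.row]["name"]
    return rows[index].get("name", "")

print(number_of_rows())   # 1
print(cell_text(0))       # Vicky Arora
```

Keeping the parsed array as the single source of truth means the row count and the cell contents can never disagree.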
Q: Why is there a separate message for WM_ERASEBKGND I've never quite understood why erasing the background has a separate Windows message. It looks a bit redundant to me. When I've created owner-drawn buttons, I've always ended up erasing the background from inside WM_PAINT. I have sometimes even done all the painting from inside WM_ERASEBKGND and left WM_PAINT empty. Both seem to work fine. Is there any advantage to separating the painting into 2 operations? A: This is entirely guesswork: Back in the olden days, filling a rectangle with colour was a relatively slow operation, but filling one big rectangle was still a lot quicker than filling lots of little rectangles. I guess that if you had a window with a child window, and both had the same registered background brush, then Windows was smart enough to realise it didn't need to send a WM_ERASEBKGND to the child when it had already cleared the parent. With a moderately complex dialog box on a very slow PC, this might be a significant improvement.
Q: How to compress the jQuery base file further? Is there any way to compress the jQuery base file more? What I have is about 56K, and I need a lighter file because of dial-up speed (56k). A: You can also check the JavaScript CompressorRater and see how different tools will compress jQuery. The rule of thumb, however, is to enable GZIP compression for browsers that support it. A: jQuery 1.4 compressed with JSMin is close to 56K. Packer by Dean Edwards generally gives a little better compression but takes longer to decompress on the client side. You can compare both at jscompress. I haven't seen jQuery compressed with Closure being used anywhere. Personally I'd go with JSMin and serve with gzip compression. That brings it down to ~23K.
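To get a feel for why serving with gzip matters regardless of which minifier you pick, you can measure the gzip ratio of any text blob. A quick standard-library Python sketch; the sample string is invented and only stands in for minified JavaScript, which is full of repeated identifiers and so compresses very well:

```python
import gzip

def gzip_ratio(text: str) -> float:
    """Return the compressed size as a fraction of the original size."""
    raw = text.encode("utf-8")
    packed = gzip.compress(raw, compresslevel=9)
    return len(packed) / len(raw)

# Repetitive text (like minified JS) compresses far better
# than random-looking text would.
sample = "function add(a,b){return a+b;}" * 500
ratio = gzip_ratio(sample)
print(f"original: {len(sample)} bytes, gzipped fraction: {ratio:.3f}")
```

On real jQuery the ratio is less dramatic than on this artificial sample, but the roughly 56K-to-23K drop the answer quotes is in line with typical gzip performance on minified JS.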
Q: Messagebox comes up twice and emptying textbox with backspace gives error Thanks for taking the time to read this. My problem is that if I use textBox3.Clear();, the message box shows twice. Here is my code: private void textBox3_TextChanged(object sender, EventArgs e) { int val = 0; bool res = Int32.TryParse(textBox3.Text, out val); if (res == true && val > -1 && val < 101) { // add record } else { textBox3.Clear(); MessageBox.Show("Please input 0 to 100 only."); return; } } And if I use this: if (textBox3.Text == "") { MessageBox.Show("Vul aub iets in", "Lege invoer", MessageBoxButtons.OK, MessageBoxIcon.Warning); } then I get a System.FormatException for this line: straal = int.Parse(textBox3.Text); Also, when I am in the textbox, enter a number, and press backspace, the alert comes up. A: Your problem is textBox3.Clear(). If there is text, it fires the TextChanged event again. I would suggest you use the Validating event instead of the TextChanged event. This way you don't check the value until the user exits the textbox, and changing the value in the event handler won't fire the event again. Assuming you know how to assign an event handler to a specific event, you can use the same code with some small changes. Something like this should work: private void textBox3_Validating(object sender, CancelEventArgs e) { int val = 0; bool res = Int32.TryParse(textBox3.Text, out val); if (res == true && val > -1 && val < 101) { // add record e.Cancel = false; } else { textBox3.Clear(); e.Cancel = true; MessageBox.Show("Please input 0 to 100 only."); } } If you need help assigning the event handler, you can easily assign the handler and create a code stub by finding the proper event in the events list in the property grid and double-clicking it. Simply copy the contents of this method into the new one. It looks like you're modifying the generated code directly. This is not a good idea. 
Use the method I described above to add the new event handler. To delete the existing one, erase the method name saved with that event in the events list, then delete the code itself if necessary. To get rid of the extra firings of the event handler, reset the textbox to a default acceptable value (textBox3.Text = "0") instead of clearing it. On a side note, have you considered using a NumericUpDown control instead?
Q: FileInfo.MoveTo generating error in C#.Net script in SSIS package I have a C#.Net script that moves a file to a directory and adds an increment to the file name if the file already exists. It works perfectly in one of my packages, but I copied it for another package and it fails with the following error message: DTS SCript Task has encounter an exception in user code: Project name: ST_<blablabla> Exception has been thrown by the target of an invocation. at System.RuntimeMethodHandle.InvokeMethod(Object target,Object[] arguments,Signature si, Boolean constructor) at System.Reflection.RuntimeMethodInfo.UnsafeInvokeInternal(Object obj, Object[] paramters, Object[] arguments) at System.Reflection.RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture) at System.RuntimeType.InvokeMember(String name, BindingsFlags bindingFlags, Binder binder, Object target, Object[] providedArgs, ParameterModifier[] modifiers, CultureInfo culture, String[] namedParams) at Microsoft.SqlServer.Dts.Tasks.ScriptTask.VSTATaskScriptingEngine ExecuteScript() Here's the actual code: public void Main() { // TODO: Add your code here string fileName = Dts.Variables["LoopFiles"].Value.ToString(); System.IO.FileInfo file2 = new System.IO.FileInfo(Dts.Variables["FolderPath"].Value + fileName); int count = 1; string fullPath =Dts.Variables["FolderPath"].Value.ToString() + Dts.Variables["LoopFiles"].Value.ToString(); string fileNameOnly = Path.GetFileNameWithoutExtension(fullPath); string extension = Path.GetExtension(fullPath); string path = Path.GetDirectoryName(fullPath); string newFullPath = fullPath; while (File.Exists(newFullPath)) { string tempFileName = string.Format("{0}({1})", fileNameOnly, count++); newFullPath = Path.Combine(path, tempFileName + extension); } DialogResult button3 = MessageBox.Show(file2.ToString()); file2.MoveTo(newFullPath); DialogResult button5 = MessageBox.Show("Last Step"); Dts.TaskResult = 
(int)ScriptResults.Success; } As a note, button3 pops up as it should in the routine at runtime, but the error comes up before button5 displays. Any information as to why this is failing would help considerably. Thanks! A: We need to see the exception, so rewrite the code to catch it: try{ file2.MoveTo(newFullPath); DialogResult button5 = MessageBox.Show("Last Step"); }catch(Exception ex){ DialogResult button6 = MessageBox.Show(ex.ToString()); } Also, I think it's a bad idea to use MessageBoxes at all - it's much better to use console applications and write messages to StdOut.
Q: Regular expression syntax in python I am trying to write a Python script to analyze a data txt file. I want the script to do the following: find all the time data in one line and compare them. But this is my first time writing RE syntax, so I wrote a small script first. My script is: import sys txt = open('1.txt','r') a = [] for eachLine in txt: a.append(eachLine) import re pattern = re.compile('\d{2}:\d{2}:\d{2}') for i in xrange(len(a)): print pattern.match(a[i]) #print a and the output is always None. My txt file looks like the picture: what's the problem? Please help me. Thanks a lot. My Python is 2.7.2 and my OS is Windows XP SP3. A: Didn't you miss one of the ":" in your regex? I think you meant re.compile('\d{2}:\d{2}:\d{2}') The other problems are: First, if you want to search in the whole text, use search instead of match. Second, to access your result you need to call group() on the match object returned by your search. Try it: import sys txt = open('1.txt','r') a = [] for eachLine in txt: a.append(eachLine) import re pattern = re.compile('\d{2}:\d{2}:\d{2}') for i in xrange(len(a)): match = pattern.search(a[i]) print match.group() #print a
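The answer's two points - search vs. match, and calling group() on the result - can be demonstrated without the data file. A minimal self-contained sketch (Python 3 syntax, with an invented sample line rather than the asker's file):

```python
import re

pattern = re.compile(r"\d{2}:\d{2}:\d{2}")

line = "event logged at 12:34:56 on host A"

# match() only succeeds if the pattern occurs at the very start of
# the string, which is why the asker kept getting None.
print(pattern.match(line))        # None - the time is not at the start

# search() scans the whole string for the first occurrence.
m = pattern.search(line)
print(m.group())                  # 12:34:56

# findall() grabs every time on the line, handy for comparing them.
times = pattern.findall("start 01:02:03 end 04:05:06")
print(times)                      # ['01:02:03', '04:05:06']
```

Since the asker wants all the times in a line in order to compare them, findall() is probably the most direct tool of the three.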
Q: What is this kanji ? Looks like 七 three times Tonight I went to a sushi restaurant. It's called かね喜{き} as is visible on their inkan and in the company name. However, on their website it seems calligraphied differently but maybe I am just bad at identifying handwriting. Even more strange is the sign there that does not look like 喜 at all. According to the website it seems to be the way the sushi bar of the chain are called, but that does not help with why it is written this way. Can anyone help me ? I have looked around in my phone (Aedict) on on Weblio, to no avail. Thanks a lot ! A: That is the [草書体]{そうしょたい} (= "cursive script") for the kanji 「喜」, meaning "happiness", "delight", etc. http://image.search.yahoo.co.jp/search?rkf=2&ei=UTF-8&p=%E5%96%9C+%E8%8D%89%E6%9B%B8%E4%BD%93 This is the reason that one's 77th birthday is called 「[喜寿]{きじゅ}」. More technically speaking, though, it is the "re-block-ized" and stylized form of the original cursive script for 「喜」. The original cursive is shown at top left of the web page above.
Q: Hadoop: Where does the job first execute before the map tasks? This is a typical main method of a Hadoop Job: public class MyHadoopJobDriver extends Configured implements Tool { public static void main(String[] args) throws Exception { int exitCode = ToolRunner.run(new MyHadoopJobDriver(), args); System.exit(exitCode); } ... } When I run this job with hadoop MyHadoopJobDriver, is the code above executing in its own JVM on the task tracker? Then, once the job is scheduled, are the map tasks distributed to the task trackers? A: Yes, usually. Note that if you "Debug -> as Java Application" that class in Eclipse then you can use the debugger for testing, setting breakpoints, etc. Also note that even if you run the driver class and the mapper/reducer in Eclipse, you still need Hadoop running on your machine in support of HDFS.
Q: Python mkdir giving me wrong permissions I'm trying to create a folder and create a file within it. Whenever I create that folder (via Python), it creates a folder that gives me no permissions at all and is read-only. When I try to create the file I get an IOError. Error: <type 'exceptions.IOError'> I tried creating (and searching for) a description of all the other modes (besides 0770). Can anyone shed some light? What are the other mode codes? A: After you create the folder you can set the permissions with os.chmod. The mode is written in base 8 (octal); converted to binary, 0770 would be 111 111 000, i.e. rwx rwx ---. The first triplet is for the owner, the second is for the group and the third is for world. r=read, w=write, x=execute. The permission digits you see most often are 7 (read/write/execute - you need execute on a directory to see its contents), 6 (read/write) and 4 (read-only). When you use os.chmod it makes most sense to use octal notation, so os.chmod('myfile',0o666) # read/write by everyone os.chmod('myfile',0o644) # read/write by me, readable for everyone else Remember I said you usually want directories to be "executable" so you can see the contents: os.chmod('mydir',0o777) # read/write by everyone os.chmod('mydir',0o755) # read/write by me, readable for everyone else Note: The 0o777 syntax is for Python 2.6 and 3+; for the rest of the 2 series it is 0777. 2.6 accepts either syntax, so the one you choose will depend on whether you want to be forward or backward compatible. A: You've probably got a funky umask. Try os.umask(0002) before making your directory. A: The Python manual says: os.mkdir(path[, mode]) Create a directory named path with numeric mode mode. The default mode is 0777 (octal). On some systems, mode is ignored. Where it is used, the current umask value is first masked out. Availability: Unix, Windows. Have you specified a mode, and which mode did you specify? Did you consider specifying a mode explicitly? And what is the program's umask value set to?
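To see how the octal digits map to permission bits, and how the umask mentioned in the other answers interacts with the mode passed to os.mkdir, here is a small sketch using the stat constants from the standard library. It is pure bit arithmetic and touches no files:

```python
import stat

# 0o755: owner rwx, group r-x, other r-x
mode = 0o755

assert mode & stat.S_IRWXU == stat.S_IRWXU   # owner has rwx
assert mode & stat.S_IWGRP == 0              # group cannot write
assert mode & stat.S_IXOTH                   # others can traverse/execute

# os.mkdir masks the requested mode with the process umask:
# a requested 0o777 under the common umask 0o022 yields 0o755,
# and under a restrictive 0o077 it yields 0o700.
requested, umask = 0o777, 0o022
print(oct(requested & ~umask))   # 0o755
```

This is why a surprising umask (as one answer suggests) can turn an intended 0o777 directory into something far more restrictive without any bug in the mkdir call itself.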
Q: mina deploy not working I'm trying to deploy an app with Mina, but I'm getting this error: -----> Skipping asset precompilation $ cp -R "/home/deploy/integracao/current/public/assets" "./public/assets" cp: cannot create directory ‘./public/assets’: No such file or directory ! ERROR: Deploy failed. The first time I deployed, everything worked fine. On the second pass (and so on) I'm seeing the error above. This is my deploy.rb require 'mina/bundler' require 'mina/rails' require 'mina/git' require 'mina/rbenv' # for rbenv support. (http://rbenv.org) set :domain, '192.168.0.87' set :deploy_to, '/home/deploy/integracao' set :repository, 'https://github.com...' set :branch, 'master' set :rails_env, 'production' set :shared_paths, ['config/database.yml', 'log', 'config/application.yml'] set :user, 'deploy' # Username in the server to SSH to. task :environment do # If you're using rbenv, use this to load the rbenv environment. # Be sure to commit your .rbenv-version to your repository. invoke :'rbenv:load' # For those using RVM, use this to load an RVM version@gemset. # invoke :'rvm:use[ruby-1.9.3-p125@default]' end # Put any custom mkdir's in here for when `mina setup` is ran. # For Rails apps, we'll make some of the shared paths that are shared between # all releases. task :setup => :environment do queue! %[mkdir -p "#{deploy_to}/shared/log"] queue! %[chmod g+rx,u+rwx "#{deploy_to}/shared/log"] queue! %[mkdir -p "#{deploy_to}/shared/config"] queue! %[chmod g+rx,u+rwx "#{deploy_to}/shared/config"] queue! %[touch "#{deploy_to}/shared/config/database.yml"] queue %[echo "-----> Be sure to edit 'shared/config/database.yml'."] end desc "Deploys the current version to the server." task :deploy => :environment do deploy do # Put things that will set up an empty directory into a fully set-up # instance of your project. 
invoke :'git:clone' invoke :'deploy:link_shared_paths' invoke :'bundle:install' invoke :'rails:db_migrate' invoke :'rails:assets_precompile' to :launch do queue "touch #{deploy_to}/tmp/restart.txt" end end end A: I've just changed to invoke :'rails:assets_precompile:force' inside the deploy task and got it working
Q: How to use quantile color scale in bar graph with drill-down? I'm using the following script to generate a bar chart with drill down capability. (source: http://mbostock.github.io/d3/talk/20111116/bar-hierarchy.html). What I am trying to do is - I want the bars to be different shades of a color depending on the data (pretty much what this question asks - D3.js: Changing the color of the bar depending on the value). Except, in my case... the graph is horizontal and not static so the answer may be different. So Ideally, at the parent node and all sub nodes except the child node, it will display lets say different shades of blue based on the data, and once it reaches the end after drilling down, the remaining bars will be grey. I've recently started using d3 and am kinda lost as to where to start. I tried adding different colors to the color range z but that did not work. Any help will be appreciated! Thanks. NOTE: in my case, I am assuming.. that after a transition, either all nodes will lead to subnodes OR no node will lead to subnodes. Basically, at no point in the graph will there be bars, where some will drill down further while some won't. This assumption is based on the type of data I want to show with my graph. 
<script> var m = [80, 160, 0, 160], // top right bottom left w = 1280 - m[1] - m[3], // width h = 800 - m[0] - m[2], // height x = d3.scale.linear().range([0, w]), y = 25, // bar height z = d3.scale.ordinal().range(["steelblue", "#aaa"]); // bar color var hierarchy = d3.layout.partition() .value(function(d) { return d.size; }); var xAxis = d3.svg.axis() .scale(x) .orient("top"); var svg = d3.select("body").append("svg:svg") .attr("width", w + m[1] + m[3]) .attr("height", h + m[0] + m[2]) .append("svg:g") .attr("transform", "translate(" + m[3] + "," + m[0] + ")"); svg.append("svg:rect") .attr("class", "background") .attr("width", w) .attr("height", h) .on("click", up); svg.append("svg:g") .attr("class", "x axis"); svg.append("svg:g") .attr("class", "y axis") .append("svg:line") .attr("y1", "100%"); d3.json("flare.json", function(root) { hierarchy.nodes(root); x.domain([0, root.value]).nice(); down(root, 0); }); function down(d, i) { if (!d.children || this.__transition__) return; var duration = d3.event && d3.event.altKey ? 7500 : 750, delay = duration / d.children.length; // Mark any currently-displayed bars as exiting. var exit = svg.selectAll(".enter").attr("class", "exit"); // Entering nodes immediately obscure the clicked-on bar, so hide it. exit.selectAll("rect").filter(function(p) { return p === d; }) .style("fill-opacity", 1e-6); // Enter the new bars for the clicked-on data. // Per above, entering bars are immediately visible. var enter = bar(d) .attr("transform", stack(i)) .style("opacity", 1); // Have the text fade-in, even though the bars are visible. // Color the bars as parents; they will fade to children if appropriate. enter.select("text").style("fill-opacity", 1e-6); enter.select("rect").style("fill", z(true)); // Update the x-scale domain. x.domain([0, d3.max(d.children, function(d) { return d.value; })]).nice(); // Update the x-axis. 
svg.selectAll(".x.axis").transition() .duration(duration) .call(xAxis); // Transition entering bars to their new position. var enterTransition = enter.transition() .duration(duration) .delay(function(d, i) { return i * delay; }) .attr("transform", function(d, i) { return "translate(0," + y * i * 1.2 + ")"; }); // Transition entering text. enterTransition.select("text").style("fill-opacity", 1); // Transition entering rects to the new x-scale. enterTransition.select("rect") .attr("width", function(d) { return x(d.value); }) .style("fill", function(d) { return z(!!d.children); }); // Transition exiting bars to fade out. var exitTransition = exit.transition() .duration(duration) .style("opacity", 1e-6) .remove(); // Transition exiting bars to the new x-scale. exitTransition.selectAll("rect").attr("width", function(d) { return x(d.value); }); // Rebind the current node to the background. svg.select(".background").data([d]).transition().duration(duration * 2); d.index = i; } function up(d) { if (!d.parent || this.__transition__) return; var duration = d3.event && d3.event.altKey ? 7500 : 750, delay = duration / d.children.length; // Mark any currently-displayed bars as exiting. var exit = svg.selectAll(".enter").attr("class", "exit"); // Enter the new bars for the clicked-on data's parent. var enter = bar(d.parent) .attr("transform", function(d, i) { return "translate(0," + y * i * 1.2 + ")"; }) .style("opacity", 1e-6); // Color the bars as appropriate. // Exiting nodes will obscure the parent bar, so hide it. enter.select("rect") .style("fill", function(d) { return z(!!d.children); }) .filter(function(p) { return p === d; }) .style("fill-opacity", 1e-6); // Update the x-scale domain. x.domain([0, d3.max(d.parent.children, function(d) { return d.value; })]).nice(); // Update the x-axis. svg.selectAll(".x.axis").transition() .duration(duration * 2) .call(xAxis); // Transition entering bars to fade in over the full duration. 
var enterTransition = enter.transition() .duration(duration * 2) .style("opacity", 1); // Transition entering rects to the new x-scale. // When the entering parent rect is done, make it visible! enterTransition.select("rect") .attr("width", function(d) { return x(d.value); }) .each("end", function(p) { if (p === d) d3.select(this).style("fill-opacity", null); }); // Transition exiting bars to the parent's position. var exitTransition = exit.selectAll("g").transition() .duration(duration) .delay(function(d, i) { return i * delay; }) .attr("transform", stack(d.index)); // Transition exiting text to fade out. exitTransition.select("text") .style("fill-opacity", 1e-6); // Transition exiting rects to the new scale and fade to parent color. exitTransition.select("rect") .attr("width", function(d) { return x(d.value); }) .style("fill", z(true)); // Remove exiting nodes when the last child has finished transitioning. exit.transition().duration(duration * 2).remove(); // Rebind the current parent to the background. svg.select(".background").data([d.parent]).transition().duration(duration * 2); } // Creates a set of bars for the given data node, at the specified index. function bar(d) { var bar = svg.insert("svg:g", ".y.axis") .attr("class", "enter") .attr("transform", "translate(0,5)") .selectAll("g") .data(d.children) .enter().append("svg:g") .style("cursor", function(d) { return !d.children ? null : "pointer"; }) .on("click", down); bar.append("svg:text") .attr("x", -6) .attr("y", y / 2) .attr("dy", ".35em") .attr("text-anchor", "end") .text(function(d) { return d.name; }); bar.append("svg:rect") .attr("width", function(d) { return x(d.value); }) .attr("height", y); return bar; } // A stateful closure for stacking bars horizontally. 
function stack(i) { var x0 = 0; return function(d) { var tx = "translate(" + x0 + "," + y * i * 1.2 + ")"; x0 += x(d.value); return tx; }; } </script> A: You can create a new scale to handle the "shades" of your colors: var shades = d3.scale.sqrt() .domain([your domain]) .clamp(true) .range([your range]); and keep a variable that tracks the "depth" of your drill-down. Then, when you color your bars, you simply set the lightness of the color "shade" with d3.lab (Doc), like this: function fill(d) { var c = d3.lab(colorScale(d.barAttr)); c.l = shades(d.depth); return c; }
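The lightness-ramp idea in that answer is independent of d3: keep the hue fixed and map drill-down depth to a lightness value, clamped to a sane range. A rough Python illustration using the standard library's colorsys (the base color and depth scaling are invented sample values):

```python
import colorsys

def shade(base_rgb, depth, max_depth=5):
    """Return the base color with lightness scaled by drill-down depth."""
    h, l, s = colorsys.rgb_to_hls(*base_rgb)
    # Deeper levels get progressively lighter, clamped to [0, 1].
    l = min(1.0, l + depth / (max_depth * 2))
    return colorsys.hls_to_rgb(h, l, s)

# steelblue, the color used in the bar chart, normalized to 0..1
steelblue = (70 / 255, 130 / 255, 180 / 255)
for depth in range(3):
    r, g, b = shade(steelblue, depth)
    print(f"depth {depth}: rgb({r:.2f}, {g:.2f}, {b:.2f})")
```

d3.lab does the same trick in a perceptually uniform color space, which is why the answer sets c.l rather than scaling the RGB channels directly.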
Q: How to append only once? jQuery What I am trying to do is append text to .showError only once each time you click on the button. Right now it appends on every click, so the text gets appended to .showError multiple times. How do I prevent this from happening? $("#send").on("click", function(ev) { $(".error-holder").empty(); var err = 0; $("form .form-control").each(function(i, v) { if ($(this).val() == "") { err = 1; } }); if (err == 1) { ev.preventDefault(); $(".showError").append("Bij dit product moeten er serienummers ingevuld worden! <br /><br />"); ($('#myModal').modal()); } }); A: Use .html(), which replaces the contents instead of appending to them: $("#send").on("click", function(ev) { $(".error-holder").empty(); var err = 0; $("form .form-control").each(function (i,v) { if ($(this).val() == "") { err = 1; } }); if(err == 1){ ev.preventDefault(); $(".showError").html("Bij dit product moeten er serienummers ingevuld worden! <br /><br />"); ($('#myModal').modal()); } });
Q: Cordova Android white space between status bar and header I've tried almost everything, but it seems like I can't get rid of the white space between the header and the status bar, as seen in the image. I am using Intel XDK version 3987 with the Cordova plugin. Any help eliminating this will be highly appreciated! This is my source code: pastebin.com/fPWJmiD8 A: After several days of head-banging, trial and error, and detective work, I was able to find the real culprit. The problem was caused by the jQuery Mobile framework automatically adding the following classes at runtime to the header div having data-role="header": class="ui-header ui-bar-inherit" Those classes created an unwanted margin between the status bar and the header. The problem was solved by removing those classes with jQuery at runtime.
Q: SVG text cross-browser compatibility issue First of all, I have to say that I am not good at Illustrator. I just make some basic graphics for the websites that I am working on. Here is one that I made: I exported as SVG format and put it into a website. But when I change size of browser, the text size increases or decreases. This is how it looks like on a web page in Chrome: If I increase the size of browser, letters are coming into rectangle: I never had such a problem. It's so weird and I wasn't able to find a solution for this. Also, in Safari everything works perfectly. In Firefox, this is how it looks like: The text is completely different. Can you you tell me what's happening? Aren't SVG graphics just like images? Why am I having so many problems? Here's the SVG code for reference: <?xml version="1.0" encoding="utf-8"?> <!-- Generator: Adobe Illustrator 18.0.0, SVG Export Plug-In . SVG Version: 6.00 Build 0) --> <!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd"> <svg version="1.1" id="Layer_1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" x="0px" y="0px" viewBox="0 0 821.4 353.7" enable-background="new 0 0 821.4 353.7" xml:space="preserve"> <rect x="0" y="0" opacity="0.8" width="682.3" height="214.5" /> <text transform="matrix(0.8101 0 0 1 25.8074 165.8717)"> <tspan x="0" y="0" fill="#307FB7" font-family="'MyriadPro-Regular'" font-size="196.5103" letter-spacing="28.4">C</tspan> <tspan x="143.4" y="0" fill="#FFFFFF" font-family="'MyriadPro-Regular'" font-size="196.5103" letter-spacing="28.4">agdas</tspan> </text> <rect x="139.2" y="138.7" opacity="0.8" width="682.3" height="214.5" /> <text transform="matrix(0.8101 0 0 1 170.9687 304.5814)" opacity="0.8"> <tspan x="0" y="0" fill="#307FB7" font-family="'MyriadPro-Regular'" font-size="196.5103" letter-spacing="28.4">Y</tspan> <tspan x="135.8" y="0" fill="#FFFFFF" font-family="'MyriadPro-Regular'" font-size="196.5103" 
letter-spacing="28.4">onder</tspan> </text> </svg> A: The problem is that the SVG is using text as opposed to paths, the browser is then rendering that text with its own text rendering engine, which can lead to inconsistencies with font naming and scaling (the same as regular HTML & CSS text) One solution is to outline the text before exporting the SVG. Either right click the text and Create Outlines or go to Object -> Expand
Q: Tool selection with mouse in emacs for Mac OS X After watching this video I decided to install the artist for emacs. I'm using http://emacsformacosx.com/ and I've been successful in using the tools provided in the artist install and they're awesome! However, I want to know if it's possible to change and select tools like the guy in the video does, i.e. right click -> select tool. When I right click in Emacs I see nothing. Is this possible? A: I don't know about on OSX but on my GNU/Linux machine, middle click is what brings up the tool selection menu. Is that insufficient? If so, you can manually bind artist-mouse-choose-operation to your key of choice.
Q: Find and extract content of division of certain class using DomXPath I am trying to extract and save into a PHP string (or array) the content of a certain section of a remote page. That particular section looks like: <section class="intro"> <div class="container"> <h1>Student Club</h1> <h2>Subtitle</h2> <p>Lore ipsum paragraph.</p> </div> </section> And since I can't narrow down using the container class - there are several other sections of class "container" on the same page, while there is only one section of class "intro" - I use the following code to find the right division: $doc = new DOMDocument; $doc->preserveWhiteSpace = FALSE; @$doc->loadHTMLFile("https://www.remotesite.tld/remotepage.html"); $finder = new DomXPath($doc); $intro = $finder->query("//*[contains(@class, 'intro')]"); And at this point, I'm hitting a problem - I can't extract the content of $intro as a PHP string. Trying the following code foreach ($intro as $item) { $string = $item->nodeValue; echo $string; } gives only the text value; all the tags are stripped, and I really need those div, h1, h2 and p tags preserved for further manipulation. Trying: foreach ($intro->attributes as $attr) { $name = $attr->nodeName; $value = $attr->nodeValue; echo $name; echo $value; } gives the error: Notice: Undefined property: DOMNodeList::$attributes in So how can I extract the full HTML code of the found DOM elements? A: I knew I was so close... I just needed to do: foreach ($intro as $item) { $h1= $item->getElementsByTagName('h1'); $h2= $item->getElementsByTagName('h2'); $p= $item->getElementsByTagName('p'); }
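For comparison, the same "grab the whole subtree, tags included" idea can be sketched in Python's standard library, where serializing the matched element keeps its child tags intact (ElementTree's XPath support is limited, so this matches on the exact class attribute; the sample markup is a cut-down invented version of the question's HTML):

```python
import xml.etree.ElementTree as ET

html = """<root>
  <section class="intro">
    <div class="container"><h1>Student Club</h1><p>Lore ipsum paragraph.</p></div>
  </section>
  <section class="other"><div class="container">not wanted</div></section>
</root>"""

tree = ET.fromstring(html)
intro = tree.find(".//section[@class='intro']")

# tostring() keeps the child tags, unlike .text (the nodeValue analogue)
markup = ET.tostring(intro, encoding="unicode")
print(markup)
```

The PHP analogue of tostring() here is $doc->saveHTML($node), which serializes a single matched node with its descendants.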
Q: Presence indicators I want to add presence indicators to my custom webpart. I have found a blog post about it. string.Format("<li><a class=\"ms-imnlink\" href=\"javascript:;\"><img width=\"12\" height=\"12\" id=\"IMID12\" onload=\"IMNRC(&#39;", Contact.Email, "&#39;)\" alt=\"My SID\" src=\"/_layouts/images/imnoff.png\" border=\"0\" showofflinepawn=\"1\"/ sip=\" \"></a>&#160;{0}</li>" I know even how to get sip address. But isn't there an easier way to show the presence indicator? Doesn't Sharepoint API any webcontrols for that? A: If you're rolling your own web part, then yes you have to add them in. If you start from a list view, and the web application has the presence information enabled, then you can convert the list view web part to a dataview web part and the presense information should be included in the markup.
Q: What is the best way to access elements of parent control from a child control? I have a parent control (main form) and a child control (user control). The child control has some code which determines what functions the application can perform (e.g. save files, write logs, etc.). I need to show/hide and enable/disable main menu items of the main form according to the functionality. As I can't just write MainMenu.MenuItem1.Visible = false; (the main menu is not visible from the child control), I fire an event in the child control and handle this event on the main form. The problem is I need to pass which elements of the menu need to be shown/hidden. To do this I created an enum showing what to do with each item: public enum ItemMode { TRUE, FALSE, NONE } Then I created my EventArgs class, which has 6 parameters of type ItemMode (there are 6 menu items I need to manage). So any time I need to show the 1st item, hide the 2nd and do nothing with the rest, I have to write something like this: e = new ItemModeEventArgs(ItemMode.TRUE, ItemMode.FALSE, ItemMode.NONE, ItemMode.NONE, ItemMode.NONE, ItemMode.NONE); FireMyEvent(e); This seems like too much code to me, and what's more, what if I need to manage 10 items in the future? Then I will have to rewrite all the constructors just to add 4 more NONEs. I believe there's a better way of doing this, but I just can't figure out what it is. A: You could create an EventArgs which takes an ItemMode[] or a List<ItemMode> or a Dictionary<string, ItemMode> for those items (instead of the current 6 arguments) - that way you don't need to change much when adding more items...
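The dictionary suggestion scales well precisely because the handler only receives the items that actually change; anything absent defaults to "do nothing", so no NONE padding is ever needed. A language-neutral sketch of the idea in Python, with all names invented for illustration:

```python
from enum import Enum

class ItemMode(Enum):
    SHOW = True
    HIDE = False

def apply_menu_changes(menu_state, changes):
    """Apply only the explicitly requested changes; untouched items keep their state."""
    updated = dict(menu_state)
    for item, mode in changes.items():
        updated[item] = (mode is ItemMode.SHOW)
    return updated

menu = {f"item{i}": True for i in range(1, 7)}   # all six items visible initially

# Show item1, hide item2, leave the other four alone - no padding required,
# and adding a 7th..10th menu item later needs no constructor changes.
menu = apply_menu_changes(menu, {"item1": ItemMode.SHOW, "item2": ItemMode.HIDE})
print(menu["item2"])   # False
```

The same shape in C# would be an EventArgs wrapping a Dictionary<string, ItemMode>, exactly as the answer proposes.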
Q: Http session synchronization between webview and java http client in Android I am developing a hybrid application using both a WebView and native code. I am using the ajax POST method in my WebView and also POSTing via HttpClient in my native Android code. But even though I go to the same server, the session IDs don't match each other. Is there any way to make HTTP requests within the same session in my application? Thanks for any advice. A: I have solved this issue: public void syncSession(final Context ctx){ new Thread(new Runnable(){ public void run(){ //Products will be stated in memory ProductManager pm = ProductManager.getInstance(); // HttpClient httpclient = new DefaultHttpClient(); HttpPost httpget = new HttpPost(UrlConstants.SERVICE_URL_SYNC); HttpResponse response; String result = null; try { response = httpclient.execute(httpget); //write db to } catch (ClientProtocolException e) { // TODO Auto-generated catch block e.printStackTrace(); } catch (IOException e) { // TODO Auto-generated catch block e.printStackTrace(); } List<Cookie> cookies = httpclient.getCookieStore().getCookies(); if (! cookies.isEmpty()){ CookieSyncManager.createInstance(ctx); CookieManager cookieManager = CookieManager.getInstance(); //sync all the cookies in the httpclient with the webview by generating cookie string for (Cookie cookie : cookies){ Cookie sessionInfo = cookie; String cookieString = sessionInfo.getName() + "=" + sessionInfo.getValue() + "; domain=" + sessionInfo.getDomain(); cookieManager.setCookie(UrlConstants.SERVICE_PRE_URL, cookieString); CookieSyncManager.getInstance().sync(); } } } }).start(); }
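The general principle behind that fix is that HTTP sessions are tracked by cookies, so two HTTP stacks stay in the same server session only if they present the same cookie jar. A toy Python sketch of that mechanic (the server-side logic and cookie name are invented for illustration; no real networking involved):

```python
import itertools

session_counter = itertools.count(1)

def server(request_cookies):
    """Toy server: assign a session id only if the client didn't present one."""
    if "JSESSIONID" not in request_cookies:
        request_cookies["JSESSIONID"] = f"sess-{next(session_counter)}"
    return request_cookies["JSESSIONID"]

# Two clients sharing one jar (like the WebView after the cookie sync)
shared_jar = {}
webview_sid = server(shared_jar)    # first request creates the session
native_sid = server(shared_jar)     # second client reuses the same jar
print(webview_sid == native_sid)    # True

# A client with its own jar (the original, unsynced situation)
separate_jar = {}
other_sid = server(separate_jar)    # fresh jar -> new session id
print(other_sid == webview_sid)     # False
```

The Android code above is doing exactly the "share the jar" step: it copies every cookie out of HttpClient's store into the WebView's CookieManager.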
Q: Filter Array using NSPredicate with Array of Ids

I am attempting to filter an array of dictionary objects using NSPredicate. I have an array of ids that I want to filter this array against, however I'm not sure how to do so. This is the closest I have got to a working version; however, NSPredicate thinks that SELF.id is a number when it is an NSString:

    NSPredicate *resultPredicate = [NSPredicate predicateWithFormat:@"SELF.id CONTAINS %@", UserIds];

However, this errors out with the following:

    *** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: 'Can't look for value (( "36eaec5a-00e7-4b10-b78e-46f7396a911b", "26fc5ea2-a14b-4535-94e9-6fb3d113838c", "0758a7d0-0470-4ff6-a96a-a685526bccbb", "10dae681-a444-469c-840d-0f16a7fb0871", "0280234c-36ae-4d5f-994f-d6ed9eff6b62", "89F1D9D4-09E2-41D3-A961-88C49313CFB4" )) in string (1f485409-9fca-4f2e-8ed2-f54914e6702a); value is not a string '

Any help you can provide would be great.

A: You are going to want to use IN to compare the value of SELF.id to the array values, instead of CONTAINS, which performs string matching.

Solution:

    NSPredicate *resultPredicate = [NSPredicate predicateWithFormat:@"SELF.id IN %@", UserIds];
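The difference between CONTAINS (substring matching) and IN (collection membership) that the answer points out can be illustrated outside Objective-C as well; a small Python sketch with made-up records standing in for the NSDictionary array:

```python
# Ids to filter against (values taken from the error message in the question).
user_ids = {
    "36eaec5a-00e7-4b10-b78e-46f7396a911b",
    "26fc5ea2-a14b-4535-94e9-6fb3d113838c",
}

# Hypothetical dictionary objects, for illustration only.
records = [
    {"id": "36eaec5a-00e7-4b10-b78e-46f7396a911b", "name": "match"},
    {"id": "1f485409-9fca-4f2e-8ed2-f54914e6702a", "name": "no match"},
]

# The IN predicate is a membership test, not a substring search:
matched = [r for r in records if r["id"] in user_ids]
```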
Q: configuring maven beanstalker plugin beanstalk:wait-for-environment mojo

I've successfully deployed my app to elastic beanstalk using

    $ mvn beanstalk:upload-source-bundle beanstalk:create-application-version beanstalk:update-environment

I'm able to watch in the AWS console that the environment is updated correctly. When I try to chain beanstalk:wait-for-environment, maven polls every 90 seconds and never detects that the environment is Ready. One thing I noticed is that it's waiting for 'environment null to get into Ready', and it's looking for the environment to have a specific domain **.elasticbeanstalk.com. I don't know how to change that, or disable that check.

    $ mvn beanstalk:upload-source-bundle beanstalk:create-application-version beanstalk:update-environment beanstalk:wait-for-environment
    ...
    [INFO] Will wait until Thu Aug 22 10:59:37 PDT 2013 for environment null to get into Ready
    [INFO] ... as well as having domain ********.elasticbeanstalk.com
    [INFO] Sleeping for 90 seconds

My plugin config in pom.xml looks like this (company confidential names hidden):

    <plugin>
      <groupId>br.com.ingenieux</groupId>
      <artifactId>beanstalk-maven-plugin</artifactId>
      <version>1.0.1</version>
      <configuration>
        <applicationName>********-web-testing</applicationName>
        <s3Key>********-2.0-${BUILDNUMBER}.war</s3Key>
        <s3Bucket>********-web-deployments</s3Bucket>
        <artifactFile>target/********-2.0-SNAPSHOT-${BUILDNUMBER}.war</artifactFile>
        <environmentName>********-web-testing-fe</environmentName>
      </configuration>
    </plugin>

Does anyone have insight into using beanstalk:wait-for-environment to wait until the environment has been updated?

A: wait-for-environment is only needed (in part) when you need a build pipeline involving zero downtime (with cname replication). It all boils down to cnamePrefix currently. You can basically ignore this warning if you're not concerned about downtime (which is not really needed for testing environments). Actually, the best way is to use fast-deploy.
Use the archetype as a start:

    $ mvn archetype:generate -Dfilter=elasticbeanstalk
Q: Apache beam: Timeout while initializing partition 'topic-1'. Kafka client may not be able to connect to servers

I got this error when my Apache Beam application connects to my Kafka cluster with ACL enabled. Please help me fix this issue.

    Caused by: java.io.IOException: Reader-4: Timeout while initializing partition 'test-1'. Kafka client may not be able to connect to servers.
    org.apache.beam.sdk.io.kafka.KafkaUnboundedReader.start(KafkaUnboundedReader.java:128)
    org.apache.beam.runners.dataflow.worker.WorkerCustomSources$UnboundedReaderIterator.start(WorkerCustomSources.java:779)
    org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.start(ReadOperation.java:361)
    org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:194)
    org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:159)
    org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:76)
    org.apache.beam.runners.dataflow.worker.StreamingDataflowWorker.process(StreamingDataflowWorker.java:1228)
    org.apache.beam.runners.dataflow.worker.StreamingDataflowWorker.access$1000(StreamingDataflowWorker.java:143)
    org.apache.beam.runners.dataflow.worker.StreamingDataflowWorker$6.run(StreamingDataflowWorker.java:967)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    java.lang.Thread.run(Thread.java:745)

I have a Kafka cluster with 3 nodes on GKE. I created a topic with replication factor 3 and 5 partitions:

    kafka-topics --create --zookeeper zookeeper:2181 \
      --replication-factor 3 --partitions 5 --topic topic

I set read permission on the topic test for the test_consumer_group consumer group:

    kafka-acls --authorizer-properties zookeeper.connect=zookeeper:2181 \
      --add --allow-principal User:CN=myuser.test.io --consumer \
      --topic test --group 'test_consumer_group'

In my Apache Beam application, I set the configuration group.id=test_consumer_group. Testing with the console consumer does not work either:

    $ docker run --rm -v `pwd`:/cert confluentinc/cp-kafka:5.1.0 \
      kafka-console-consumer --bootstrap-server kafka.xx.xx:19092 \
      --topic topic --consumer.config /cert/client-ssl.properties
    [2019-03-08 05:43:07,246] WARN [Consumer clientId=consumer-1, groupId=test_consumer_group] Received unknown topic or partition error in ListOffset request for partition test-3 (org.apache.kafka.clients.consumer.internals.Fetcher)

A: This seems like a communication issue between your Kafka readers and the brokers: the Kafka client may not be able to connect to the servers.
Q: mysql: update timestamp if timestamp is beyond 10 hours

Wondering if there is a way to do this without two hits to the sql database. If a person views content, the timestamp is recorded. If the person views the same content again 2 hours later, the timestamp is not updated. If the person views the same content 10 hours after first viewing it, update the timestamp db table field. Is there any method of doing this via SQL alone, without doing a "select", then a PHP comparison, then an "update"?

A:

    update mytable set lastvisited=now() where person='john' and lastvisited<(now()-interval 10 hour);
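The point of the answer is that the WHERE clause does the comparison, so no prior SELECT round trip is needed. That single-statement idea can be demonstrated with SQLite from the Python standard library (ISO-8601 timestamp strings compare correctly as text; the table name and data below are made up for the sketch):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (person TEXT, lastvisited TEXT)")
conn.execute("INSERT INTO mytable VALUES ('john', '2024-01-01 01:00:00')")  # 11h ago
conn.execute("INSERT INTO mytable VALUES ('jane', '2024-01-01 10:00:00')")  # 2h ago

now = "2024-01-01 12:00:00"
cutoff = "2024-01-01 02:00:00"  # now minus 10 hours

# One statement per viewer: rows newer than the cutoff are simply not touched.
cur = conn.execute(
    "UPDATE mytable SET lastvisited = ? WHERE person = ? AND lastvisited < ?",
    (now, "john", cutoff),
)
conn.execute(
    "UPDATE mytable SET lastvisited = ? WHERE person = ? AND lastvisited < ?",
    (now, "jane", cutoff),
)
```

John's 11-hour-old row is refreshed; Jane's 2-hour-old row passes through the same statement unchanged.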
Q: Validating that all Stored procedures are valid

Background: My application is backed by an SQL Server (2008 R2) and has quite a few stored procedures, triggers etc. My goal is to make sure upon program start that all of those objects are still valid. For example, if I have a stored procedure A which calls stored procedure B, and someone changes the name of B to C, I would like to get a notification when running my application in the debug environment.

What have I tried? I figured I'd use sp_refreshsqlmodule, which according to the documentation returns 0 (success) or a nonzero number (failure):

    DECLARE @RESULT int
    exec @RESULT = sp_refreshsqlmodule N'A' --In this case A is the SP name
    SELECT @@ERROR
    SELECT @RESULT

So I changed SP B's name to C and ran the script. The results were:

    @@ERROR was 0
    @RESULT was 0

I got a message of: The module 'A' depends on the missing object 'B'. The module will still be created; however, it cannot run successfully until the object exists.

My question: Am I missing something here? Shouldn't I get a non-zero number that indicates that something went wrong?

A: Assuming that all of your dependencies are at least schema qualified, it seems like you could use sys.sql_expression_dependencies.
For instance, running this script:

    create proc dbo.B
    as
    go
    create proc dbo.A
    as
        exec dbo.B
    go
    select OBJECT_SCHEMA_NAME(referencing_id),OBJECT_NAME(referencing_id),
           referenced_schema_name,referenced_entity_name,referenced_id
    from sys.sql_expression_dependencies
    go
    sp_rename 'dbo.B','C','OBJECT'
    go
    select OBJECT_SCHEMA_NAME(referencing_id),OBJECT_NAME(referencing_id),
           referenced_schema_name,referenced_entity_name,referenced_id
    from sys.sql_expression_dependencies

The first query of sql_expression_dependencies shows the dependency as:

    (No Column name)  (No Column name)  referenced_schema_name  referenced_entity_name  referenced_id
    dbo               A                 dbo                     B                       367340373

And after the rename, the second query reveals:

    (No Column name)  (No Column name)  referenced_schema_name  referenced_entity_name  referenced_id
    dbo               A                 dbo                     B                       NULL

That is, the referenced_id is NULL. So this query may find all of your broken stored procedures (or other objects that can contain references):

    select OBJECT_SCHEMA_NAME(referencing_id),OBJECT_NAME(referencing_id)
    from sys.sql_expression_dependencies
    group by referencing_id
    having SUM(CASE WHEN referenced_id IS NULL THEN 1 ELSE 0 END) > 0
Q: Flask gevent when downloading url do through proxy

I have a simple flask application I am running through the gevent server.

    app = Flask(__name__)

    def console(cmd):
        p = Popen(cmd, shell=True, stdout=PIPE)
        while True:
            data = p.stdout.read(512)
            yield data
            if not data:
                break
            if isinstance(p.returncode, int):
                if p.returncode > 0:
                    # return code was non zero, an error?
                    print 'error:', p.returncode
                    break

    @app.route('/mp3', methods=['POST'])
    def generate_large_mp3():
        video_url = "url.com"
        title = 'hello'
        mp3 = console('command')
        return Response(stream_with_context(mp3),
                        mimetype="audio/mpeg3",
                        headers={"Content-Disposition": 'attachment;filename="%s.mp3"' % filename})

    if __name__ == '__main__':
        http_server = WSGIServer(('', 5000), app)
        http_server.serve_forever()

How would I be able to make it so that when my console function runs, it downloads the url through a proxy instead of using the ip of the server?

A: Answered it. I needed to simply update the env variable in the Popen:

    env = dict(os.environ)
    env['http_proxy'] = proxies[random.randrange(0, len(proxies))]
    env['https_proxy'] = proxies[random.randrange(0, len(proxies))]
Q: How to load all view controllers of a storyboard, but skip the natural sequence and go directly to a specific view?

Suppose I have three view controllers inside a storyboard. I want to load all of them into the view controller stack, but I want to choose which one appears first to the user. As shown below, I would like to show the third view on load instead of the first — any clue?

A: Option 1. Using storyboards, you see the arrow pointing to your ViewController 1. Drag that to View Controller 2 or 3.

Option 2. On load of your first view controller, you can instantiate whichever view you'd like in your viewDidLoad(), provided you have given each storyboard view an ID.

    let storyboard = UIStoryboard(name: "Main", bundle: nil)
    let controller = storyboard.instantiateViewController(withIdentifier: "YourVCIdentifier")
    self.present(controller, animated: true, completion: nil)

Option 3. In your AppDelegate file, you can do this.

    func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplicationLaunchOptionsKey: Any]?) -> Bool {
        self.window = UIWindow(frame: UIScreen.main.bounds)
        let storyboard = UIStoryboard(name: "Main", bundle: nil)
        let initialViewController = storyboard.instantiateViewController(withIdentifier: "YourVCIdentifier")
        self.window?.rootViewController = initialViewController
        self.window?.makeKeyAndVisible()
        return true
    }
Q: Setting a Jenkins Pipeline timeout for only a group of stages

I know that one can set a timeout for the entire pipeline script or a specific stage using options, but is there a way to set a timeout for a group of stages? For example a total 10 minute timeout (not 10 minute each) for only 3 out of the 5 stages, and let the other 2 run freely.

A: Sure, you can create nested stages and define the timeout option for the parent stage:

    pipeline {
        agent any
        stages {
            stage('Stage A') {
                options {
                    timeout( time: 10, unit: 'SECONDS' )
                }
                stages {
                    stage('Stage A1') {
                        steps {
                            sleep( time: 4, unit: 'SECONDS' )
                        }
                    }
                    stage('Stage A2') {
                        steps {
                            sleep( time: 4, unit: 'SECONDS' )
                        }
                    }
                    stage('Stage A3') {
                        steps {
                            sleep( time: 4, unit: 'SECONDS' )
                        }
                    }
                }
            }
        }
    }

Stage A3 will never be executed because of the parent timeout. It will be marked as "aborted".
Q: How to debug `Error while processing function` in `vim` and `nvim`?

TL;DR: How to find where exactly a vim or nvim error started (which file?) when I'm interested in fixing the actual issue and not just removing the bad plugin? Anything better than strace and guesswork to find the error origin?

Issue: I often add a plugin to my vim or nvim config and end up getting errors on hooks (buffer open, close, write):

    "test.py" [New] 0L, 0C written
    Error detected while processing function 343[12]..272:
    line 8:
    E716: Key not present in Dictionary: _exec
    E116: Invalid arguments for function get(a:args, 'exec', a:1['_exec'])
    E15: Invalid expression: get(a:args, 'exec', a:1['_exec'])

The problem is, I have no idea where those come from. I only get some line number of an unknown file, and it's not my vim/nvim config file.

A: This particular plugin has been written in an object-oriented style. The 343[12]..272 refers to an anonymous (numbered) function in a Dictionary object. If you know the (recently installed) plugin, you can put :breakadd file */pluginname.vim in your ~/.vimrc to stop in that file and then step through it line-by-line (with :next). Alternatively, you can capture a full log of a Vim session with vim -V20vimlog. After quitting Vim, examine the vimlog log file for the error message and suspect commands before that.
Q: Show $ \int_0^\infty\left(1-x\sin\frac 1 x\right)dx = \frac\pi 4 $ How to show that $$ \int_0^\infty\left(1-x\sin\frac{1}{x}\right)dx=\frac{\pi}{4} $$ ? A: Use $$ \int \left(1-x \sin\left(\frac{1}{x}\right)\right) \mathrm{d} x = x - \int \sin\left(\frac{1}{x}\right) \mathrm{d} \frac{x^2}{2} = x - \frac{x^2}{2}\sin\left(\frac{1}{x}\right) - \frac{1}{2} \int \cos\left(\frac{1}{x}\right) \mathrm{d}x $$ Integrating by parts again $\int \cos\left(\frac{1}{x}\right) \mathrm{d}x = x \cos\left(\frac{1}{x}\right) - \int \sin\left(\frac{1}{x}\right) \frac{\mathrm{d}x}{x} $: $$ \int \left(1-x \sin\left(\frac{1}{x}\right)\right) \mathrm{d} x = x - \frac{x^2}{2}\sin\left(\frac{1}{x}\right) - \frac{x}{2} \cos\left(\frac{1}{x}\right) + \frac{1}{2} \int \sin\left(\frac{1}{x}\right) \frac{\mathrm{d}x}{x} $$ Thus: $$ \begin{eqnarray} \int_0^\infty \left(1-x \sin\left(\frac{1}{x}\right)\right) \mathrm{d} x &=& \left[x - \frac{x^2}{2}\sin\left(\frac{1}{x}\right) - \frac{x}{2} \cos\left(\frac{1}{x}\right)\right]_{0}^{\infty} + \frac{1}{2} \int_0^\infty\sin\left(\frac{1}{x}\right) \frac{\mathrm{d}x}{x} = \\ &=& 0 + \frac{1}{2} \int_0^\infty \frac{\sin{u}}{u} \mathrm{d} u = \frac{\pi}{4} \end{eqnarray} $$ where the last integral is the Dirichlet integral. A: Sasha's answer concisely gets the answer in terms of the Dirichlet integral, so I will evaluate this integral in the same way that the Dirichlet integral is evaluated with contour integration. 
First, change variables to $z=1/x$: $$ \int_0^\infty\left(1-x\sin\left(\frac1x\right)\right)\,\mathrm{d}x =\int_0^\infty\frac{z-\sin(z)}{z^3}\,\mathrm{d}z\tag{1} $$ Since the integrand on the right side of $(1)$ is even, entire, and vanishes as $z\to\infty$ within $1$ of the real axis, we can use symmetry to deduce that the integral is $\frac12$ the integral over the entire line and then shift the path of integration by $-i$: $$ \int_0^\infty\frac{z-\sin(z)}{z^3}\,\mathrm{d}z =\frac12\int_{-\infty-i}^{\infty-i}\frac{z-\sin(z)}{z^3}\,\mathrm{d}z\tag{2} $$ Consider the contours $\gamma^+$ and $\gamma^-$ below. Both pass a distance $1$ below the real axis and then circle back along circles of arbitrarily large radius.

[figure: the two contours $\gamma^+$ and $\gamma^-$]

Next, write $\sin(z)=\frac1{2i}\left(e^{iz}-e^{-iz}\right)$ and split the integral as follows $$ \frac12\int_{-\infty-i}^{\infty-i}\frac{z-\sin(z)}{z^3}\,\mathrm{d}z =\frac12\int_{\gamma^-}\left(\frac1{z^2}+\frac{e^{-iz}}{2iz^3}\right)\,\mathrm{d}z -\frac12\int_{\gamma^+}\frac{e^{iz}}{2iz^3}\,\mathrm{d}z\tag{3} $$ $\gamma^-$ contains no singularities so the integral around $\gamma^-$ is $0$. The integral around $\gamma^+$ is $\color{#00A000}{2\pi i}$ times $\color{#00A000}{-\dfrac{1}{4i}}$ times the residue of $\color{#C00000}{\dfrac{e^{iz}}{z^3}}$ at $\color{#C00000}{z=0}$; that is, $\color{#00A000}{-\dfrac\pi2}$ times the coefficient of $\color{#C00000}{\dfrac1z}$ in $$ \frac{1+iz\color{#C00000}{-z^2/2}-iz^3/6+\dots}{\color{#C00000}{z^3}}\tag{4} $$ Thus, the integral around $\gamma^+$ is $\color{#00A000}{\left(-\dfrac\pi2\right)}\color{#C00000}{\left(-\dfrac12\right)}=\dfrac\pi4$. Therefore, combining $(1)$, $(2)$, and $(3)$ yields $$ \int_0^\infty\left(1-x\sin\left(\frac1x\right)\right)\,\mathrm{d}x=\frac\pi4\tag{5} $$ As complicated as that may look at first glance, with a bit of practice, it is easy enough to do in your head.
A: Let's start out with the variable change $\displaystyle x=\frac{1}{u}$ and then turn the integral into a double integral: $$\int_{0}^{\infty} {\left( {1 - \frac{\sin u}{u}} \right)\frac{1}{u^2}} \ du=$$ $$ \int_{0}^{\infty}\left(\int_{0}^{1} 1 - \cos (u a) \ da \right)\frac{1}{u^2} \ du=$$ By changing the integration order we get $$ \int_{0}^{1}\left(\int_{0}^{\infty} \frac{1 - \cos (a u)}{u^2} \ du \right)\ \ da=\int_{0}^{1} a \frac{\pi}{2} \ da=\frac{\pi}{4}.$$ Note that by using a simple integration by parts at $\displaystyle \int_{0}^{\infty} \frac{1 - \cos (a u)}{u^2} \ du$ we immediately get $\displaystyle a\int_{0}^{\infty} \frac{\sin(au)}{u} \ du = a\int_{0}^{\infty} \frac{\sin(u)}{u}\ du$ that is $\displaystyle a\frac{\pi}{2}$. The last integral is the famous Dirichlet integral. Hence the result follows and the proof is complete. Q.E.D. (Chris)
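The answers above can be sanity-checked numerically. After the substitution $z=1/x$ the integral is $\int_0^\infty \frac{z-\sin z}{z^3}\,\mathrm{d}z$, whose integrand tends to $1/6$ at $0$ and behaves like $1/z^2$ for large $z$, so a composite Simpson rule plus an explicit $1/Z$ tail estimate converges quickly. A sketch using only the Python standard library (the truncation point $Z$ and the step count are arbitrary choices):

```python
import math

def integrand(z):
    # (z - sin z)/z^3; the removable singularity at 0 is patched with
    # the Taylor limit: (z^3/6 - z^5/120 + ...)/z^3 -> 1/6.
    if z < 1e-6:
        return 1.0 / 6.0 - z * z / 120.0
    return (z - math.sin(z)) / z ** 3

def simpson(f, a, b, n):
    # Composite Simpson rule; n must be even.
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += f(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3

Z = 200.0
# Tail: integral from Z to infinity of (z - sin z)/z^3 dz = 1/Z + O(1/Z^3).
approx = simpson(integrand, 0.0, Z, 200_000) + 1.0 / Z
```

With these settings the result agrees with $\pi/4 \approx 0.7853981\ldots$ to several decimal places.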
Q: How to expose WhenAny etc I'm sure I've missed something or backed myself into some strange frustrated corner, but here is what I'm trying to do. I have a WPF application, using Unity as IoC. I have a number of services that have an interface. I deal with my services via the interfaces so the services can be swapped out easily or so that I can offer a choice to the end-user. All standard interface programming stuff. In reality, all my services implement ReactiveObject. I am now wanting to do some command handling stuff and am trying to get the CanExecute behaviour working. My basic problem is I cannot use WhenAny unless I cast the interface to a physical implementation (thus get the full type hierarchy for compilation, which can see WhenAny). However, this cast violates interfaces and means I lose the ability to swap out implementations. Is there a ReactiveUI interface that exposes WhenAny etc that I could derive my service interfaces from and thus be able to use the great features of ReactiveUI whilst remaining non-type specific? A: Why can't you use WhenAny on an instance that is an interface? As of ReactiveUI 4.x, WhenAny should be on every object. If you're still using 3.x, you can write your interfaces like this: interface ICoolInterface : IReactiveNotifyPropertyChanged { /* ... */ }
Q: How to fix error C2664 which only occurs when namespaces used I am getting this error: main.cpp(10) : error C2664: 'lr::codec::codec(protocol_decoder *)' : cannot convert parameter 1 from 'proto::protocol_decoder *' to 'protocol_decoder *' If I remove the use of the proto namespace then this error goes away. How do I fix this and still retain the use of the proto namespace. Here is the code: main.cpp: #include "protocol_decoder_a.hpp" #include "codec.hpp" int main() { //factory function create protocol decoder proto::protocol_decoder* pro = new proto::protocol_decoder_a; lr::codec cdc(pro); return 0; } codec.hpp: #ifndef __CODEC_HPP__ #define __CODEC_HPP__ #include <map> #include <string> class protocol_decoder; //log replay namespace namespace lr { typedef bool (*c_f)(const char* id, unsigned char* rq, size_t rq_length, unsigned char*& response, size_t& resp_len); // generic codec interface will use specific class codec { public: codec(protocol_decoder* decoder); ~codec() {} bool get_response(const char* id, unsigned char* rq, size_t rq_length, unsigned char*& response, size_t& resp_len); const char* get_monitored_dn(const char* id, unsigned char* rq, size_t rq_length); void load_msgs_from_disk(); protocol_decoder* decoder_; }; } //namespace lr codec.cpp: #include "codec.hpp" using namespace lr; codec::codec(protocol_decoder* decoder) : decoder_(decoder) { load_msgs_from_disk(); } void codec::load_msgs_from_disk() { //use specific protocol decoder here } bool codec::get_response(const char* id, unsigned char* rq, size_t rq_length, unsigned char*& response, size_t& resp_len) { return true; } const char* codec::get_monitored_dn(const char* id, unsigned char* rq, size_t rq_length) { return 0; } protocol_decoder.hpp: #ifndef __PROTOCOL_DECODER_HPP__ #define __PROTOCOL_DECODER_HPP__ namespace proto { enum id_type { UNKNOWN_ID, INT_ID, STRING_ID }; struct msg_id { msg_id() : type(UNKNOWN_ID) {} id_type type; union { const char* s_id; size_t i_id; }; }; class 
protocol_decoder { public: virtual const char* get_monitored_dn(unsigned char* msg, size_t msg_len) = 0; virtual bool get_response(unsigned char* rq, size_t rq_len, unsigned char* response, size_t resp_len) = 0; virtual bool get_msg_id(unsigned char* rq, size_t rq_len, msg_id id) = 0; }; } //namespace proto #endif //__PROTOCOL_DECODER_HPP__ protocol_decoder_a.hpp: #ifndef __PROTOCOL_DECODER_A_HPP__ #define __PROTOCOL_DECODER_A_HPP__ #include "protocol_decoder.hpp" namespace proto { class protocol_decoder_a : public proto::protocol_decoder { public: virtual const char* get_monitored_dn(unsigned char* msg, size_t msg_len); virtual bool get_response(unsigned char* rq, size_t rq_len, unsigned char* response, size_t resp_len); virtual bool get_msg_id(unsigned char* rq, size_t rq_len, proto::msg_id id); }; } //namespace proto #endif //__PROTOCOL_DECODER_A_HPP__ protocol_decoder_a.cpp: #include "protocol_decoder_a.hpp" using namespace proto; const char* protocol_decoder_a::get_monitored_dn(unsigned char* msg, size_t msg_len) { //specific stuff here return 0; } bool protocol_decoder_a::get_response(unsigned char* rq, size_t rq_len, unsigned char* response, size_t resp_len) { return true; } bool protocol_decoder_a::get_msg_id(unsigned char* rq, size_t rq_len, proto::msg_id id) { return true; } A: You've accidentally declared two protocol_decoder classes. One in the global namespace, and one in the proto namespace. Change this declaration: class protocol_decoder; To this: namespace proto { class protocol_decoder; }
Q: Is there a better way to concatenate variables in a string with no spaces? I have this (simplified) code: $hostname = "127.0.0.1" $aaa= "http://$hostname:8001" Write-Host $aaa Output is http:// The problem is the colon following the $hostname variable, so I fixed it this way: $hostname = "127.0.0.1" $aaa= "http://$hostname" + ":8001" Write-Host $aaa I was wondering if is there any better way of doing it using any PowerShell technology I am not aware of. A: Two way: "http://$($hostname):8001" or "http://$hostname`:8001" The colon is reserved in variable names: it associate the variable with a specific scope or namespace: $global:var or $env:PATH The part before the ':' can be a scope or a PSDrive.
Q: Unable to locate the model you have specified: modelName I am trying to load this model: class Menu { function show_menu() { $obj =& get_instance(); $obj->load->helper('url'); $menu = anchor("start/hello/fred","Say hello to Fred |"); $menu .= anchor("start/hello/bert","Say hello to Bert |"); $menu .= anchor("start/another_function","Do something else |"); return $menu; } } This is where my controller is: function hello($name) { $this->load->model('Menu'); $mymenu = $this->Menu->show_menu(); } Why do I get this error? Unable to locate the model you have specified: menu A: CodeIgniter can't find the file of the model. If you named your model Menu, make sure the file name is menu.php and not something else like menu_model.php.
Q: Android In-App Billing error "You need to sign into your google account"

I am implementing in-app purchase using https://github.com/anjlab/android-inapp-billing-v3 . But when the in-app popup opens it shows "Error Authentication is required. You need to sign into your google account". I tested using different devices with the same result.

A: Important! I've spent a lot of time trying to find out why I'm getting the error "Error Authentication is required. You need to sign into your google account". And after a lot of hours I found out: I was trying to access the wrong item id from the console. In the developer console the subscription item had id "premium" and I tried to access the "premium_version" item. Stupid mistake, but the error from google is absolutely not informative. Hope this helps.

A: In my case what happened was that the right product was all set up at the console, however it wasn't activated.

A: If anybody is getting the above popup, you can re-check through the following steps, because unfortunately this google popup doesn't give enough of a clue:

- Make sure you are using the product ID correctly (it should be the same as what you've put on the developer console).
- Make sure you've activated the product on the developer console before testing. It may take a while, so wait till it's ready.
- Make sure the version of your app is in a published state on Beta, Alpha or Production.
- Remember to add testing emails under the testers list (Settings -> Testers List -> Create List). The testing email should be different from the publisher account.
- If nothing in the above works, try removing the google account on your phone, adding it back, and clearing data in Play Store.

Hope this helps :)
Q: How to get all elements which are inside certain characters using a js regular expression

I have a string in variable a that contains an array-like access path in string format, as shown in the example below. I want to get all the indices bounded by [ and ].

    var a = 'info.name[0][1][5].data[0]',
        collect = [];
    a.split(']').reverse().forEach(function(a) {
        if (a.indexOf('[') !== -1) {
            var splits = a.split('[');
            collect.push(splits[splits.length - 1])
        }
    })
    console.log(collect);

My code shown above works fine, but I know it sometimes fails, so I am looking for a better program. If possible, please help me solve this problem with a regular expression.

**Please Dont Use Jquery or any other libs to solve this problem**

A: You could use the match method:

    const a = 'info.name[0][1][5].data[0]';
    const collect = a.match(/[^[]+?(?=\])/g) || [];
    console.log(collect);

The regular expression consists of:

- [^[]+?: capture one or more characters that are not [. The ? makes that capturing stop as soon as the next condition is satisfied:
- (?=\]): require that the next character is a ], but don't capture it.

The g modifier will ensure that all matches are returned. || [] is added for the case that there are no matches at all. In that case match returns null. This addition will output an empty array instead of that null, which may be more practical.

NB: I am not sure why you used reverse, but if you really need the output array in reversed order, you can of course apply reverse to it.
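The same extraction carries over to other regex engines. A capture-group findall in Python is a straightforward analog of the lookahead pattern (a sketch, assuming the bracketed tokens never themselves contain a closing bracket):

```python
import re

def bracketed_indices(s):
    """Return every token enclosed in [...], in order of appearance.
    Python analog of the JavaScript /[^[]+?(?=\])/g approach."""
    return re.findall(r"\[([^\]]*)\]", s)

indices = bracketed_indices("info.name[0][1][5].data[0]")
```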
Q: In Sitecore 8 can I get a field value in the language version of the page I want to get a field value in the language version of the page. For example I have an item called Search Placeholder in en-us with the the field value "Select.." on the en-us page it shows that value. But using the code below when I create Search Placeholder in en-gb and I put in the value "Select2..." It shows up blank on the en-gb page. string fieldName = "Search Placeholder Text"; Sitecore.Data.Items.Item someItem = Sitecore.Context.Database.GetItem("/sitecore/content/site/shared-content/Search Placeholder"); Sitecore.Data.Fields.Field someField = someItem.Fields[fieldName]; string searchPlace = someField.Value; Is there a way to check if Search Placeholder has a language version for a page? A: First of all, you can pass chosen language to GetItem method: Sitecore.Context.Database.GetItem(path, language) Then you can check if item has any version in that language using: someItem.Versions.Count > 0 If item has more than 0 versions and the field is null it means that either this item has not been publish after field was added to the template or the field item itself has not been published.
Q: Elmah does not work with asp.net mvc

I spent countless hours trying to get Elmah working with asp.net mvc, but can't get it working 100%. Right now all the logging works fine, but the HttpHandlers are all screwy. Every time I try to log into an admin account I automatically get redirected to Elmah's listings page. It makes no sense, because the path for elmah is just elmah.axd (that's what I use for the httphandler in the web.config) and my admin path is something like /MyAdmin/login, so I don't see the connection. I have also set up the ignore routes thing in my routes table for elmah.

To sum it up: Elmah logging works and so do the error display pages. When I try to log in to my admin account it automatically redirects to Elmah's error display page. I have no idea why. If I comment out routes.IgnoreRoute("elmah.axd"); my login works. If I leave it in there, it always redirects to elmah.

A: I finally figured it out. No one would have got this one... I had a reference to RouteDebugger.dll which I got from the book "Asp.net MVC Framework Unleashed", and for some reason this dll messed up all my post requests if Elmah was enabled. It was pure dumb luck that I figured it out. I couldn't get the RouteDebugger working so I deleted the reference and added a different one and then everything worked.
Q: How to show the current play time of a video file in JTextField?

I calculate the current play time of the video:

    public show_time_of_vedio_file(MediaPanel mediaPanel, JFrame_of_subtitle frame)
    {
        // for(;;)
        {
            double second = mediaPanel.mediaPlayer.getMediaTime().getSeconds();
            int second1 = (int) second;
            int hour = second1 / 3600;
            second1 = second1 - hour * 3600;
            int minute = second1 / 60;
            second1 = second1 - minute * 60;
            double milisecond = (second - (int) second) * 1000;
            int milisecond_1 = (int) milisecond;
            String milisecond_string = String.valueOf(milisecond_1);
            String hour_string = String.valueOf(hour);
            String minute_string = String.valueOf(minute);
            String second_string = String.valueOf(second1);
            if (hour_string.length() == 1)
                hour_string = "0".concat(hour_string);
            if (minute_string.length() == 1)
                minute_string = "0".concat(minute_string);
            if (second_string.length() == 1)
                second_string = "0".concat(second_string);
            if (milisecond_string.length() == 2)
                milisecond_string = "0".concat(milisecond_string);
            else if (milisecond_string.length() == 1)
                milisecond_string = "0".concat("0".concat(milisecond_string));
            frame.show_time_jTextField.setText(String.format("%s:%s:%s,%s", hour_string, minute_string, second_string, milisecond_string));
        }
    }

Now I want to show this time in the JTextField the whole time the video is playing, and when the video is not playing I want to show 00:00:00,000. Can anyone tell me how I can do this?

A: ..want to show this time in JTextField.. Use a JProgressBar for this instead. E.g. see How to Use Progress Bars for more details. ..not playing I want to show 00:00:00,000. See JProgressBar.setString(String). The progress bar in the upper right of this GUI shows use of a more 'media friendly' string.
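The hour/minute/second/millisecond arithmetic in the question collapses into a few divmod steps plus zero-padded formatting. A language-neutral sketch in Python (the Java version would use String.format with %02d/%03d the same way, driven by something like a javax.swing.Timer while the video plays):

```python
def format_media_time(seconds):
    """Format a playback position as HH:MM:SS,mmm.
    A stopped player (0.0 seconds) yields '00:00:00,000'."""
    millis = int(round(seconds * 1000))
    hours, rem = divmod(millis, 3_600_000)
    minutes, rem = divmod(rem, 60_000)
    secs, ms = divmod(rem, 1_000)
    return "%02d:%02d:%02d,%03d" % (hours, minutes, secs, ms)
```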
Q: textwrangler: how can I run without opening files? TextWrangler has always worked great, but now it's hanging every time I open it. I suspect one of the files it's trying to open causes a problem. Is there a way to run it without opening any files?

A: You can press and hold the Shift key while launching TextWrangler to suppress all normal startup actions, including reopening files.

A: Your TextWrangler preferences are stored in ~/Library/Preferences/ in:
- the file com.barebones.textwrangler.plist
- the files in the directory com.barebones.textwrangler.PreferenceData
I don't know which of the files contains the open documents. Move the files/folder to your desktop and try starting it, then put them back one after another to find the source of your problem.
Q: Apache ExpiresDefault: can it reside in a directive? The 2.2 docs state that ExpiresDefault can be placed in server config, virtual host, directory, and .htaccess. It doesn't mention Location. I have a mod_perl server, and I'd like most, or all, of the non-dynamic content (jpg, css, js, etc.) to expire "infrequently". But I want all mod_perl generated pages to expire "now". My configuration appears to be working, but I want to make sure I'm not missing something, since it's undocumented. ExpiresActive on ExpiresDefault "access plus 1 month" <LocationMatch ^/app/.*> ExpiresDefault "now" </LocationMatch> A: <Location> falls under directory context. So, yes.
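The answer above confirms that Location falls under directory context. As an aside, mod_expires can also scope the long expiry to the static types themselves rather than making it the server-wide default — a sketch, with example MIME types that are not taken from the original config:

```apache
ExpiresActive on
# long-lived static assets, per type
ExpiresByType image/jpeg "access plus 1 month"
ExpiresByType text/css   "access plus 1 month"
ExpiresByType application/javascript "access plus 1 month"

# everything generated under /app expires immediately
<LocationMatch ^/app/.*>
    ExpiresDefault "now"
</LocationMatch>
```

This avoids relying on a catch-all ExpiresDefault for content types you did not anticipate.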
Q: Floating floor invisible transition Can caulk or similar be used to create an invisible transition between floating wood flooring and vinyl floor tiling (the floor under the tiling has been built up to match the level of the floating wood floor)? Thanks for any advice.

A: You are not supposed to secure a floating floor in any fashion, and caulking an edge may do just that. If it is a small section of floor, like a small 3x5 ft. powder room, you may get by (not that I would put a floating floor in a powder room; this is just for example), but for a larger floor you need to use the transition or cap strip made for the laminate floor.
Q: Javascript array of dates - not iterating properly (jquery ui datepicker) I have some code which builds an array of date ranges. I then call a function (from the jquery UI datepicker), passing it a date, and compare that date with dates in the array. I'm doing it this way because the dates are stored in a cms and this is the only way I can output them. Unfortunately my code only checks the first date range in the array - and I can't figure out why! I think it's probably something simple (/stupid!) - if anyone can shed some light on it I'd be extremely grateful! The code is below - the june-september range (ps1-pe1) works fine, the december to jan is totally ignored... <script type="text/javascript" language="javascript"> var ps1 = new Date(2010, 06-1, 18); // range1 start var pe1 = new Date(2010, 09-1, 03); // range1 end var ps2 = new Date(2010, 12-1, 20); // range2 start var pe2 = new Date(2011, 01-1, 02); // range2 end var peakStart = new Array(ps1,ps2); var peakEnd = new Array(pe1,pe2); function checkDay(date) { var day = date.getDay(); for (var i=0; i<peakStart.length; i++) { if ((date > peakStart[i]) && (date < peakEnd[i])) { return [(day == 5), '']; } else { return [(day == 1 || day == 5), '']; } } } </script> A: Yaggo is quite right, but apparently too terse. You want to move the second return statement outside of the loop. function checkDay(date) { var day = date.getDay(); for (var i=0; i<peakStart.length; i++) { if ((date > peakStart[i]) && (date < peakEnd[i])) { return [(day == 5), '']; } } // it's not during a peak period return [(day == 1 || day == 5), '']; }
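To convince yourself the fix works, here is a self-contained version of the corrected function that can be run in Node (peak ranges copied from the question; JS months are 0-based):

```javascript
// Peak ranges from the question.
var peakStart = [new Date(2010, 5, 18), new Date(2010, 11, 20)];
var peakEnd   = [new Date(2010, 8, 3),  new Date(2011, 0, 2)];

function checkDay(date) {
  var day = date.getDay();
  for (var i = 0; i < peakStart.length; i++) {
    if (date > peakStart[i] && date < peakEnd[i]) {
      return [day === 5, ''];
    }
  }
  // it's not during a peak period
  return [day === 1 || day === 5, ''];
}

// 25 Dec 2010 (a Saturday) falls in the second range, which the original
// version never reached because it returned on the first loop pass.
console.log(checkDay(new Date(2010, 11, 25))[0]); // false
// 1 Feb 2010 is a Monday outside both ranges.
console.log(checkDay(new Date(2010, 1, 1))[0]);   // true
```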
Q: Memory efficient sort of massive numpy array in Python I need to sort a VERY large genomic dataset using numpy. I have an array of 2.6 billion floats, dimensions = (868940742, 3), which takes up about 20GB of memory on my machine once loaded and just sitting there. I have an early 2015 13" MacBook Pro with 16GB of RAM, a 500GB solid state HD and a 3.1 GHz Intel i7 processor. Just loading the array overflows to virtual memory, but not to the point where my machine suffers or I have to stop everything else I am doing.

I build this VERY large array step by step from 22 smaller (N, 2) subarrays. Function FUN_1 generates 2 new (N, 1) arrays using each of the 22 subarrays, which I call sub_arr. The first output of FUN_1 is generated by interpolating values from sub_arr[:,0] on array b = array([X, F(X)]) and the second output is generated by placing sub_arr[:, 0] into bins using array r = array([X, BIN(X)]). I call these outputs b_arr and rate_arr, respectively. The function returns a 3-tuple of (N, 1) arrays:

import numpy as np

def FUN_1(sub_arr):
    """interpolate b values and rates based on position in sub_arr"""
    b = np.load(bfile)
    r = np.load(rfile)
    b_arr = np.interp(sub_arr[:,0], b[:,0], b[:,1])
    rate_arr = np.searchsorted(r[:,0], sub_arr[:,0])  # HUGE efficiency gain over np.digitize...
    return r[rate_arr, 1], b_arr, sub_arr[:,1]

I call the function 22 times in a for-loop and fill a pre-allocated array of zeros full_arr = numpy.zeros([868940742, 3]) with the values:

full_arr[:,0], full_arr[:,1], full_arr[:,2] = FUN_1(sub_arr)

In terms of saving memory at this step, I think this is the best I can do, but I'm open to suggestions. Either way, I don't run into problems up through this point and it only takes about 2 minutes.

Here is the sorting routine (there are two consecutive sorts):

for idx in range(2):
    sort_idx = numpy.argsort(full_arr[:,idx])
    full_arr = full_arr[sort_idx]
    # ...
# <additional processing, return small (1000, 3) array of stats> Now this sort had been working, albeit slowly (takes about 10 minutes). However, I recently started using a larger, more fine resolution table of [X, F(X)] values for the interpolation step above in FUN_1 that returns b_arr and now the SORT really slows down, although everything else remains the same. Interestingly, I am not even sorting on the interpolated values at the step where the sort is now lagging. Here are some snippets of the different interpolation files - the smaller one is about 30% smaller in each case and far more uniform in terms of values in the second column; the slower one has a higher resolution and many more unique values, so the results of interpolation are likely more unique, but I'm not sure if this should have any kind of effect...? bigger, slower file: 17399307 99.4 17493652 98.8 17570460 98.2 17575180 97.6 17577127 97 17578255 96.4 17580576 95.8 17583028 95.2 17583699 94.6 17584172 94 smaller, more uniform regular file: 1 24 1001 24 2001 24 3001 24 4001 24 5001 24 6001 24 7001 24 I'm not sure what could be causing this issue and I would be interested in any suggestions or just general input about sorting in this type of memory limiting case! A: At the moment each call to np.argsort is generating a (868940742, 1) array of int64 indices, which will take up ~7 GB just by itself. Additionally, when you use these indices to sort the columns of full_arr you are generating another (868940742, 1) array of floats, since fancy indexing always returns a copy rather than a view. One fairly obvious improvement would be to sort full_arr in place using its .sort() method. Unfortunately, .sort() does not allow you to directly specify a row or column to sort by. However, you can specify a field to sort by for a structured array. 
You can therefore force an inplace sort over one of the three columns by getting a view onto your array as a structured array with three float fields, then sorting by one of these fields: full_arr.view('f8, f8, f8').sort(order=['f0'], axis=0) In this case I'm sorting full_arr in place by the 0th field, which corresponds to the first column. Note that I've assumed that there are three float64 columns ('f8') - you should change this accordingly if your dtype is different. This also requires that your array is contiguous and in row-major format, i.e. full_arr.flags.C_CONTIGUOUS == True. Credit for this method should go to Joe Kington for his answer here. Although it requires less memory, sorting a structured array by field is unfortunately much slower compared with using np.argsort to generate an index array, as you mentioned in the comments below (see this previous question). If you use np.argsort to obtain a set of indices to sort by, you might see a modest performance gain by using np.take rather than direct indexing to get the sorted array: %%timeit -n 1 -r 100 x = np.random.randn(10000, 2); idx = x[:, 0].argsort() x[idx] # 1 loops, best of 100: 148 µs per loop %%timeit -n 1 -r 100 x = np.random.randn(10000, 2); idx = x[:, 0].argsort() np.take(x, idx, axis=0) # 1 loops, best of 100: 42.9 µs per loop However I wouldn't expect to see any difference in terms of memory usage, since both methods will generate a copy. Regarding your question about why sorting the second array is faster - yes, you should expect any reasonable sorting algorithm to be faster when there are fewer unique values in the array because on average there's less work for it to do. Suppose I have a random sequence of digits between 1 and 10: 5 1 4 8 10 2 6 9 7 3 There are 10! = 3628800 possible ways to arrange these digits, but only one in which they are in ascending order. 
Now suppose there are just 5 unique digits: 4 4 3 2 3 1 2 5 1 5 Now there are 2⁵ = 32 ways to arrange these digits in ascending order, since I could swap any pair of identical digits in the sorted vector without breaking the ordering. By default, np.ndarray.sort() uses Quicksort. The qsort variant of this algorithm works by recursively selecting a 'pivot' element in the array, then reordering the array such that all the elements less than the pivot value are placed before it, and all of the elements greater than the pivot value are placed after it. Values that are equal to the pivot are already sorted. Having fewer unique values means that, on average, more values will be equal to the pivot value on any given sweep, and therefore fewer sweeps are needed to fully sort the array. For example: %%timeit -n 1 -r 100 x = np.random.random_integers(0, 10, 100000) x.sort() # 1 loops, best of 100: 2.3 ms per loop %%timeit -n 1 -r 100 x = np.random.random_integers(0, 1000, 100000) x.sort() # 1 loops, best of 100: 4.62 ms per loop In this example the dtypes of the two arrays are the same. If your smaller array has a smaller item size compared with the larger array then the cost of copying it due to the fancy indexing will also be smaller. A: EDIT: In case anyone new to programming and numpy comes across this post, I want to point out the importance of considering the np.dtype that you are using. In my case, I was actually able to get away with using half-precision floating point, i.e. np.float16, which reduced a 20GB object in memory to 5GB and made sorting much more manageable. The default used by numpy is np.float64, which is a lot of precision that you may not need. Check out the doc here, which describes the capacity of the different data types. Thanks to @ali_m for pointing this out in the comments. 
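The quarter-size saving from the narrower dtype is easy to verify on a toy array — a quick sketch:

```python
import numpy as np

# float64 is numpy's default; float16 stores each value in 2 bytes instead of 8
small64 = np.zeros((1000, 3))                    # default float64
small16 = np.zeros((1000, 3), dtype=np.float16)  # half precision

print(small64.nbytes)  # 24000
print(small16.nbytes)  # 6000, i.e. 1/4 the size
```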
I did a bad job explaining this question, but I have discovered some helpful workarounds that I think would be useful to share for anyone who needs to sort a truly massive numpy array. I am building a very large numpy array from 22 "sub-arrays" of human genome data containing the elements [position, value]. Ultimately, the final array must be numerically sorted "in place" based on the values in a particular column and without shuffling the values within rows. The sub-array dimensions follow the form:

arr1.shape = (N1, 2)
...
arr22.shape = (N22, 2)
sum([N1..N22]) = 868940742

i.e. there are close to 1BN positions to sort. First I process the 22 sub-arrays with the function process_sub_arrs, which returns a 3-tuple of 1D arrays the same length as the input. I stack the 1D arrays into a new (N, 3) array and insert them into an np.zeros array initialized for the full dataset:

full_arr = np.zeros([868940742, 3])
i, j = 0, 0
for arr in list(arr1..arr22):
    # indices (i, j) incremented at each loop based on sub-array size
    j += len(arr)
    full_arr[i:j, :] = np.column_stack( process_sub_arrs(arr) )
    i = j
return full_arr

EDIT: Since I realized my dataset could be represented with half-precision floats, I now initialize full_arr as follows: full_arr = np.zeros([868940742, 3], dtype=np.float16), which is only 1/4 the size and much easier to sort. The result is a massive 20GB array: full_arr.nbytes = 20854577808

As @ali_m pointed out in his detailed post, my earlier routine was inefficient:

sort_idx = np.argsort(full_arr[:,idx])
full_arr = full_arr[sort_idx]

The array sort_idx, which is 33% the size of full_arr, hangs around and wastes memory after sorting full_arr. This sort supposedly generates a copy of full_arr due to "fancy" indexing, potentially pushing memory use to 233% of what is already used to hold the massive array! This is the slow step, lasting about ten minutes and relying heavily on virtual memory. I'm not sure the "fancy" sort makes a persistent copy, however.
Watching the memory usage on my machine, it seems that full_arr = full_arr[sort_idx] deletes the reference to the unsorted original, because after about 1 second all that is left is the memory used by the sorted array and the index, even if there is a transient copy. A more compact usage of argsort() to save memory is this one: full_arr = full_arr[full_arr[:,idx].argsort()] This still causes a spike at the time of the assignment, where both a transient index array and a transient copy are made, but the memory is almost instantly freed again. @ali_m pointed out a nice trick (credited to Joe Kington) for generating a de facto structured array with a view on full_arr. The benefit is that these may be sorted "in place", maintaining stable row order: full_arr.view('f8, f8, f8').sort(order=['f0'], axis=0) Views work great for performing mathematical array operations, but for sorting it is far too inefficient for even a single sub-array from my dataset. In general, structured arrays just don't seem to scale very well even though they have really useful properties. If anyone has any idea why this is I would be interested to know. One good option to minimize memory consumption and improve performance with very large arrays is to build a pipeline of small, simple functions. Functions clear local variables once they have completed so if intermediate data structures are building up and sapping memory this can be a good solution. 
This is a sketch of the pipeline I've used to speed up the massive array sort:

def process_sub_arrs(arr):
    """process a sub-array and return a 3-tuple of 1D values arrays"""
    return values1, values2, values3

def build_arr():
    """build the initial array by joining processed sub-arrays"""
    full_arr = np.zeros([868940742, 3])
    i, j = 0, 0
    for arr in list(arr1..arr22):
        # indices (i, j) incremented at each loop based on sub-array size
        j += len(arr)
        full_arr[i:j, :] = np.column_stack( process_sub_arrs(arr) )
        i = j
    return full_arr

def sort_arr():
    """return full_arr and sort_idx"""
    full_arr = build_arr()
    sort_idx = np.argsort(full_arr[:, index])
    return full_arr[sort_idx]

def get_sorted_arr():
    """call through nested functions to return the sorted array"""
    sorted_arr = sort_arr()
    <process sorted_arr>
    return statistics

call stack: get_sorted_arr --> sort_arr --> build_arr --> process_sub_arrs

Once each inner function is completed, get_sorted_arr() finally just holds the sorted array and then returns a small array of statistics.

EDIT: It is also worth pointing out here that even if you are able to use a more compact dtype to represent your huge array, you will want to use higher precision for summary calculations. For example, since full_arr.dtype = np.float16, the command np.mean(full_arr[:,idx]) tries to calculate the mean in half-precision floating point, but this quickly overflows when summing over a massive array. Using np.mean(full_arr[:,idx], dtype=np.float64) will prevent the overflow.

I posted this question initially because I was puzzled by the fact that a dataset of identical size suddenly began choking up my system memory, although there was a big difference in the proportion of unique values in the new "slow" set.
@ali_m pointed out that, indeed, more uniform data with fewer unique values is easier to sort:

The qsort variant of Quicksort works by recursively selecting a 'pivot' element in the array, then reordering the array such that all the elements less than the pivot value are placed before it, and all of the elements greater than the pivot value are placed after it. Values that are equal to the pivot are already sorted, so intuitively, the fewer unique values there are in the array, the smaller the number of swaps there are that need to be made.

On that note, the final change I ended up making to attempt to resolve this issue was to round the newer dataset in advance, since there was an unnecessarily high level of decimal precision leftover from an interpolation step. This ultimately had an even bigger effect than the other memory saving steps, showing that the sort algorithm itself was the limiting factor in this case. I look forward to other comments or suggestions anyone might have on this topic, and I almost certainly misspoke about some technical issues so I would be glad to hear back :-)
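The effect of that rounding step is easy to demonstrate: reducing decimal precision collapses the number of distinct values the sort has to distinguish. A small sketch with arbitrarily chosen sizes and decimals:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(1_000_000)   # ~1M distinct float64 values in [0, 1)
coarse = np.round(x, 2)     # at most 101 distinct values (0.00 .. 1.00)

print(np.unique(x).size)       # close to 1,000,000
print(np.unique(coarse).size)  # 101 or fewer

# both sorts are in place; the low-cardinality one has far less work to do
x.sort()
coarse.sort()
```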
Q: in Immutablejs Map push into List I am a beginner developer who is studying redux. I am using immutablejs to add object type data to the state. When you press the button on the react component, the test data (Map ()) is pushed to the List(). But there is a problem. When the button is pressed, the following type of data is input, and when the page is refreshed, it is updated with normal data. Why is this happening? I really appreciate your help. Before Refresh After Refresh import { handleActions } from 'redux-actions' import axios from 'axios' import { Map, List } from 'immutable' let token = localStorage.token if (!token) token = localStorage.token = Math.random().toString(36).substr(-8) let instance = axios.create({ baseURL: 'http://localhost:5001', timeout: 1000, headers: {'Authorization': token} }) const GET_POST_PENDING = 'GET_POST_PENDING' const GET_ALL_POST_SUCCESS = 'GET_ALL_POST_SUCCESS' const CREATE_POST_SUCCESS = 'CREATE_POST_SUCCESS' const GET_POST_FAILURE = 'GET_POST_FAILURE' //actions export const getPost = (postId) => dispatch => { dispatch({type: GET_POST_PENDING}); return instance.get('/posts').then( response => { dispatch({ type: GET_ALL_POST_SUCCESS, payload: response }) } ).catch((error) => { dispatch({ type: GET_POST_FAILURE, payload: error }) }) } export const createPost = () => dispatch => { dispatch({type: GET_POST_PENDING}) return instance.post('/posts',{ id: Math.random().toString(36).substr(-10), timestamp: Date.now(), title: 'test title', body: 'test body', category: 'redux', author: 'minwoo', deleted: false, voteScore: 1 }).then( response => { console.log(response) //check data dispatch({ type:CREATE_POST_SUCCESS, payload: response }) } ).catch((error) => { dispatch({ type: GET_POST_FAILURE, payload: error }) }) } const initialState = Map({ posts: List([]), comments: List([]) }) I know that the console should not be here. However, when I press the button, I want to check if the response data is transmitted correctly. 
//reducer export default handleActions({ [GET_POST_PENDING]: (state, action) => { return state; }, [GET_ALL_POST_SUCCESS]: (state, action) => { console.log(action.payload.data)//for check data return state.set('posts', List([...action.payload.data])) }, [CREATE_POST_SUCCESS]: (state, action) => { const posts = state.get('posts') return state.set('posts', posts.push( Map(action.payload.date) )) }, [GET_POST_FAILURE]: (state, action) => { return state } }, initialState) The code below is the React component mentioned above. import React from 'react'; import PropTypes from 'prop-types' import { List } from 'immutable'; const PostList = ({posts, PostActions: {getPost}}) => { const postList = posts.map((post,i) => ( <div key={i}> {post.title} <button>edit</button> <button>delete</button> </div> )) return ( <div className="PostList"> {postList} </div> ) } PostList.proptypes = { posts: PropTypes.instanceOf(List), getPost: PropTypes.func } PostList.defaultProps = { posts:[], getPost: () => console.log('getPost is not defined') } export default PostList A: List does not deeply convert your data, so when you pass an array of objects to List you will get a List object containing plain JS object. Use fromJS instead of List in GET_ALL_POST_SUCCESS
Q: check if context has some table then add to this table I have multiple DbContexts, and each context has some DbSets, like

public class fooContext : DbContext
{
    DbSet<fooA> fooA { get; set; }
    DbSet<fooB> fooB { get; set; }
}

public class barContext : DbContext
{
    DbSet<barA> barA { get; set; }
    DbSet<barB> barB { get; set; }
}

and an Excel file with multiple sheets structured properly (having sheet names such as fooA, fooB, ..., where the first row is the property names and the remaining rows are data). I can see that if I knew which context has fooA, I could use something like this function inside the context:

public DbSet Set(string typeName)
{
    return base.Set(Type.GetType(typeName));
}

but I don't know which context has fooA to add this to. To clarify: normally when you want to add fooARecord to the fooA table in fooContext you write

fooContext.fooA.Add(fooARecord);

but I only have fooA as a string, plus fooARecord.

P.S.: I can't use LINQ to SQL since we're on Oracle, and I can't simply import the Excel file into Oracle because there are too many tables and users need to be able to alter this data before this process.

A: To check if a fooContext has a DbSet of a specific type only by name, you can do this:

var fooContext = new FooContext(); // the context to check
var dbSets = fooContext.GetType().GetProperties()
    .Where(p => p.PropertyType.IsGenericType &&
                p.PropertyType.GetGenericTypeDefinition() == typeof(DbSet<>))
    .ToArray(); // list the DbSet<T> properties
var fooA = "fooA"; // the table to search
var dbSetProp = dbSets.SingleOrDefault(x => x.PropertyType.GetGenericArguments()[0].Name == fooA);
if (dbSetProp != null)
{
    // fooContext has a fooA table
    var dbSet = fooContext.Set(dbSetProp.PropertyType.GetGenericArguments()[0]);
    // or via dbSetProp.GetValue(fooContext) as DbSet
    dbSet.Add(fooARecord);
}
Q: Sample instant app requires newer SDK I keep getting the error at the bottom of the question even though I followed the official emulator setup guide and sample project setup guide to the letter.

Using:
- Android Studio 3.0-Alpha7
- Pixel emulator with SDK 23

Provisioning succeeds, and I was able to enable Instant Apps in Settings > Google > Instant Apps.

Side loading instant app failed: Failure when trying to read bundle. Instant App com.instantappsample requires an SDK version which is newer than the one installed on device. Please update the SDK on the device. Error while Uploading and registering Instant App

A: Creating an API 26 (aka O) emulator allowed me to successfully install the Instant App, while otherwise following the guide. Hat-tip to donly from the Github project android-instant-apps.

Workarounds I tried unsuccessfully first:
- Uninstalling "Google Play Services for Instant Apps" (from the other answer)
- Downgrading to Android Studio 3.0 Canary 5
- Using a physical device that can run instant apps (Galaxy S6 SM-G920V, Android 7.0)
Q: .htaccess not triggered So I have the following .htaccess in my /var/www/site:

RewriteEngine on
RewriteRule ^([^/]+)/?$ parser.php?id=$1 [QSA,L]

I have allowed override in my vhost:

<Directory />
    Options FollowSymLinks
    AllowOverride All
</Directory>
<Directory /var/www/site>
    Options Indexes FollowSymLinks MultiViews
    AllowOverride All
    Order allow,deny
    allow from all
</Directory>
ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
<Directory "/usr/lib/cgi-bin">
    AllowOverride None
    Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
    Order allow,deny
    Allow from all
</Directory>

UPDATE: Now I got it to work. However, when I visit site.com it also redirects me to parser.php, which I don't want, as this is my homepage. My homepage should go to index.php, and if I visit mysite/NASKDj, it should be redirected to parser.php?pid=NASKDj. How do I fix this?

A: You have 'AllowOverride None' in the '/var/www/site' directory - this will override the one specified in the '/' directory. If your site is in /var/www/site you need to change this one to All too.
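Regarding the UPDATE in the question: with DirectoryIndex, a request for / is internally mapped to index.php, which then matches ^([^/]+)/?$ and gets rewritten to parser.php as well. A common fix — sketched here, not tested against this exact setup — is to skip rewriting for files and directories that actually exist:

```apache
RewriteEngine on
# leave real files (index.php) and real directories alone
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^([^/]+)/?$ parser.php?id=$1 [QSA,L]
```

Note the query parameter name (id vs pid) should match whatever parser.php actually reads.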
Q: Upload multiple files using AJAX and servlet I've tried almost everything available on the internet but nothing seems to be working. I have some HTML5 FileReader code which gets me all the files from a client-side directory:

var f = $('#fileUpload')[0].files;

Next I want to upload all these files with an AJAX request to a Java servlet POST method. For that I tried the code below:

var data = new FormData();
$.each(f, function(key, value) {
    data.append(key, value);
});
postFilesData(data);
//some code..
function postFilesData(data) {
    $.ajax({
        url: 'serv2',
        type: 'POST',
        //enctype: 'multipart/form-data',
        data: data,
        cache: false,
        processData: false,
        mimetyep: 'multipart/form-data',
        contentType: 'multipart/form-data',
        success: function(data) {
            //success
        },
        error: function(textStatus) {
            console.log('ERRORS: ' + textStatus);
        }
    });
}

Servlet code, doPost method:

System.out.println("Hi what request:"+ServletFileUpload.isMultipartContent(request));
System.out.println("hi bro");
// awsUpload.uploadData(foldername);
System.out.println("outside aws");
DiskFileItemFactory factory = new DiskFileItemFactory();
ServletFileUpload upload = new ServletFileUpload(factory);
String uuidValue = "";
FileItem itemFile = null;
try {
    // parses the request's content to extract file data
    List formItems = upload.parseRequest(request);
    Iterator iter = formItems.iterator();
    // iterates over form's fields to get UUID Value
    while (iter.hasNext()) {
        FileItem item = (FileItem) iter.next();
        if (item.isFormField()) {
        }
        // processes only fields that are not form fields
        if (!item.isFormField()) {
            itemFile = item;
        }
    }
} catch (Exception e) {
    // TODO: handle exception
}
//System.out.println(path);
// response.sendRedirect(path+"/user"+"/home.html");
// System.out.println("done");
if (itemFile == null) {
    System.out.println("File Empty Found");
}
System.out.println("The File Name is"+itemFile.getName());
}

HTML code:

<form method="POST" enctype="multipart/form-data" >
<input type="file"
class="input-file" name="file[]" id="fileUpload" onchange="fileChanged();" multiple mozdirectory="" webkitdirectory="" directory=""/>
<br/>

It prints "File Empty Found" and crashes on the line below it with a NullPointerException. I understand it's not getting any data. Could you please point out which piece of code is wrong, or what code needs to be added?

A: Thank you for your response, guys! To answer @BalusC's questions: yes, it is a multipart upload. It enters the while loop, but no data was transferred from the AJAX call and the code just broke at the line

System.out.println("The File Name is"+itemFile.getName());

since there was no item to get the file name of. The only exceptions I got were "NullPointerException" on the console and "500 internal server error" on the UI console (while debugging in JS). I was able to fix the code and transfer data through the AJAX call to the servlet. Below is the code; I pretty much changed/restructured the code for the AJAX call and the servlet.

AJAX request:

var fd = new FormData();
//fd.append( 'file', $('#fileUpload')[0].files);//.files[0]);
$.each($('#fileUpload')[0].files, function(k, value) {
    fd.append(k, value);
});
$.ajax({
    url: 'serv2',
    data: fd,
    processData: false,
    contentType: false,
    type: 'POST',
    success: function(data){
        alert(data);
    }
});

Servlet code, doPost method:

if (!ServletFileUpload.isMultipartContent(request)) {
    PrintWriter writer = response.getWriter();
    writer.println("Request does not contain upload data");
    writer.flush();
    return;
}
// configures upload settings
DiskFileItemFactory factory = new DiskFileItemFactory();
factory.setSizeThreshold(THRESHOLD_SIZE);
ServletFileUpload upload = new ServletFileUpload(factory);
//upload.setFileSizeMax(MAX_FILE_SIZE);
//upload.setSizeMax(MAX_REQUEST_SIZE);
String uuidValue = "";
FileItem itemFile = null;
try {
    // parses the request's content to extract file data
    List formItems = upload.parseRequest(request);
    Iterator iter = formItems.iterator();
    // iterates over form's fields to get UUID Value
    while
(iter.hasNext()) {
        FileItem item = (FileItem) iter.next();
        if (item.isFormField()) {
            if (item.getFieldName().equalsIgnoreCase(UUID_STRING)) {
                uuidValue = item.getString();
            }
        }
        // processes only fields that are not form fields
        if (!item.isFormField()) {
            itemFile = item;
        }
    }
    System.out.println("no of items: " + formItems.size());
    System.out.println("FILE NAME IS : "+itemFile.getName());
}
}

I was able to print the number of file objects passed from the UI, which was correct. Thank you for your time, guys! :)
Q: Access SmartSheet API behind corporate firewall .Net C# I have just started development on updating a Smartsheet document using the API. Using the example (csharp-read-write-sheet) in the SDK reference I can update the document as long as I am on an open internet connection. However, I cannot when I am connected to the company LAN, as it reports a proxy authentication issue. This is the code from the SDK:

string accessToken = ConfigurationManager.AppSettings["AccessToken"];
if (string.IsNullOrEmpty(accessToken))
    accessToken = Environment.GetEnvironmentVariable("SMARTSHEET_ACCESS_TOKEN");
if (string.IsNullOrEmpty(accessToken))
    throw new Exception("Must set API access token in App.conf file");

// Get sheet Id from App.config file
string sheetIdString = ConfigurationManager.AppSettings["SheetId"];
long sheetId = long.Parse(sheetIdString);

// Initialize client
SmartsheetClient ss = new SmartsheetBuilder().SetAccessToken(accessToken).Build();

// Load the entire sheet
Sheet sheet = ss.SheetResources.GetSheet(sheetId, null, null, null, null, null, null, null);
Console.WriteLine("Loaded " + sheet.Rows.Count + " rows from sheet: " + sheet.Name);

Can you please advise how I can configure the client API with a System.Net.WebProxy object so that authentication can route through the company proxy?

A: @Steve Weil's answer does not allow you to provide user credentials. Further research based on it ended me up at "Is it possible to specify proxy credentials in your web.config?", which has now solved my issues.
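For completeness, the web.config route referenced above generally looks like the sketch below. Note that the proxy address here is made up, and that explicit usernames/passwords cannot be placed inside defaultProxy, which is why useDefaultCredentials (the current Windows login) is the usual approach:

```xml
<configuration>
  <system.net>
    <!-- send outbound HTTP through the corporate proxy using the
         current Windows user's credentials (proxy address is hypothetical) -->
    <defaultProxy useDefaultCredentials="true">
      <proxy proxyaddress="http://proxy.example.com:8080" bypassonlocal="true" />
    </defaultProxy>
  </system.net>
</configuration>
```

This applies to all System.Net requests in the process, so the Smartsheet client's HTTP calls pick it up without code changes.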
Q: Using two classes in XML gives an "unbound prefix" error I want to use two classes in my XML layout:

<?xml version="1.0" encoding="utf-8"?>
<FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    xmlns:wave="http://schemas.android.com/apk/res-auto"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent">
    <com.example.tesst.MaskableFrameLayout
        android:id="@+id/frm_mask_animated"
        android:layout_width="100dp"
        app:porterduffxfermode="DST_IN"
        app:mask="@drawable/animation_mask"
        android:layout_height="100dp">
        <com.john.waveview.WaveView
            android:id="@+id/wave_view"
            android:layout_width="300dp"
            android:layout_height="300dp"
            wave:above_wave_color="@android:color/white"
            wave:blow_wave_color="@android:color/white"
            wave:progress="80"
            android:layout_gravity="center"
            wave:wave_height="little"
            wave:wave_hz="normal"
            wave:wave_length="middle" />
    </com.example.tesst.MaskableFrameLayout>
</FrameLayout>

What is wrong with it? The error "Error parsing XML: unbound prefix" shows up and I do not know what the problem is. Help please.

A: You're using the "app:" prefix, and that's not defined. Change

<FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    xmlns:wave="http://schemas.android.com/apk/res-auto"

to

<FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    xmlns:wave="http://schemas.android.com/apk/res-auto"
    xmlns:app="http://schemas.android.com/apk/res-auto"
Q: Oracle: Overwrite values in a column with the longest string I’m running into a problem. Say, I have columns called “C1, C2, C3....” in a table. I’d like to use the longest string in C1 to replace every other cell in the C1 column without disturbing other columns. I tried several ways but I cannot get my Oracle code to run. Could someone please show me some sample code for this problem? I typed my question using a cellphone so I apologize for not showing you my code. But I think my description is fine... Thank you! A: For example: SQL> select * from test; DEPTNO DNAME LOC ---------- -------------- ------------- 10 ACCOUNTING NEW YORK 20 RESEARCH DALLAS 30 SALES CHICAGO 40 OPERATIONS BOSTON SQL> update test set 2 dname = (select max(dname) --> MAX fixes TOO-MANY-ROWS because ACCOUNTING 3 from test -- and OPERATIONS have same length 4 where length(dname) = (select max(length(dname)) 5 from test 6 ) 7 ); 4 rows updated. SQL> select * from test; DEPTNO DNAME LOC ---------- -------------- ------------- 10 OPERATIONS NEW YORK 20 OPERATIONS DALLAS 30 OPERATIONS CHICAGO 40 OPERATIONS BOSTON SQL> [EDIT, GROUP BY] Another example is based on a different table, which reflects what you described in a comment. The DEPTNO (department number) is used to "group" employees, and I'm going to update the JOB column value to the longest job name within that department. The query is similar to the previous one; it just joins the appropriate columns (DEPTNO) throughout the code. 
Sample data: SQL> select * from test order by deptno, ename; DEPTNO ENAME JOB ---------- ---------- --------- 10 CLARK MANAGER KING PRESIDENT --> the longest in DEPTNO = 10 MILLER CLERK 20 ADAMS CLERK FORD ANALYST JONES MANAGER --> as long as ANALYST, but MAX(JOB) will return this value SCOTT ANALYST SMITH CLERK 30 ALLEN SALESMAN --> the longest in DEPTNO = 30 BLAKE MANAGER JAMES CLERK MARTIN SALESMAN TURNER SALESMAN WARD SALESMAN Update and the result: SQL> update test t set 2 t.job = (select max(t1.job) 3 from test t1 4 where t1.deptno = t.deptno 5 and length(t1.job) = (select max(length(t2.job)) 6 from test t2 7 where t2.deptno = t1.deptno 8 ) 9 ); 14 rows updated. SQL> select * from test order by deptno, ename; DEPTNO ENAME JOB ---------- ---------- --------- 10 CLARK PRESIDENT KING PRESIDENT MILLER PRESIDENT 20 ADAMS MANAGER FORD MANAGER JONES MANAGER SCOTT MANAGER SMITH MANAGER 30 ALLEN SALESMAN BLAKE SALESMAN JAMES SALESMAN MARTIN SALESMAN TURNER SALESMAN WARD SALESMAN
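For readers who want to poke at this per-group "longest value" pattern without an Oracle instance, the same correlated-subquery UPDATE runs essentially unchanged in SQLite. The sketch below uses Python's built-in sqlite3 module with a cut-down version of the sample data above; the Python wrapper itself is my own illustration, not part of the original answer.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test (deptno INTEGER, ename TEXT, job TEXT)")
conn.executemany(
    "INSERT INTO test VALUES (?, ?, ?)",
    [
        (10, "CLARK", "MANAGER"),
        (10, "KING", "PRESIDENT"),   # longest job in dept 10
        (10, "MILLER", "CLERK"),
        (30, "ALLEN", "SALESMAN"),   # longest job in dept 30
        (30, "JAMES", "CLERK"),
    ],
)

# Same shape as the Oracle UPDATE above: give every row the MAX() of the
# longest job values found within that row's own department.
conn.execute(
    """
    UPDATE test SET job = (
        SELECT MAX(t1.job)
        FROM test t1
        WHERE t1.deptno = test.deptno
          AND LENGTH(t1.job) = (SELECT MAX(LENGTH(t2.job))
                                FROM test t2
                                WHERE t2.deptno = t1.deptno)
    )
    """
)

rows = conn.execute(
    "SELECT deptno, ename, job FROM test ORDER BY deptno, ename"
).fetchall()
for row in rows:
    print(row)
# every dept-10 row ends up PRESIDENT, every dept-30 row SALESMAN
```

As in the Oracle version, MAX(t1.job) breaks ties between equally long values by taking the alphabetically last one.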
Q: Query data that does not fulfill constraints in table in Oracle DB In Oracle I have a lot of tables in a given schema. Some of the table constraints in the schema have been disabled. For all tables in the schema that have any constraints deactivated, I have to know which (data) rows would not fulfil the deactivated constraints. Question Is there a way to easily collect all the rows that would produce an error if I were to decide to enable the constraints again? So far, I would go constraint by constraint, create a query for each table, and run the query. But I wonder whether there is anything that may do the work more quickly and less error-prone. If possible, the solution should use the information available in the view USER_CONSTRAINTS or any similar one. I have checked the constraint documentation in Oracle, in which the constraint clause may use ENABLE NOVALIDATE, which allows non-compliant rows to remain among the existing values until they are modified. However, I am looking for a solution that would allow me to fix the data before I re-enable the constraints. A: Reporting Constraint Exceptions You must create an appropriate exceptions report table to accept information from the EXCEPTIONS option of the ENABLE clause before enabling the constraint. You can create an exception table by executing the UTLEXCPT.SQL script or the UTLEXPT1.SQL script. 
The following statement attempts to validate the PRIMARY KEY of the dept table, and if exceptions exist, information is inserted into a table named EXCEPTIONS: ALTER TABLE dept ENABLE PRIMARY KEY EXCEPTIONS INTO EXCEPTIONS; If duplicate primary key values exist in the dept table and the name of the PRIMARY KEY constraint on dept is sys_c00610, then the following query will display those exceptions: SELECT * FROM EXCEPTIONS; The following exceptions are shown: ROWID OWNER TABLE_NAME CONSTRAINT ------------------ --------- -------------- ----------- AAAAZ9AABAAABvqAAB SCOTT DEPT SYS_C00610 AAAAZ9AABAAABvqAAG SCOTT DEPT SYS_C00610
Q: How to use `quick-error` with boxed error types? Problem Description I am trying to use quick_error like this: #[macro_use] extern crate quick_error; use std::error::Error; use std::io; fn main() { quick_error!{ #[derive(Debug)] pub enum MyError { Io(err: io::Error) { cause(err) } Any(err: Box<Error>) { cause(err) } } } } Even though there are many other error variants, the one I am most interested in is one that can handle any kind of error by boxing it. However, the code above does not work for the boxed type: error[E0277]: the trait bound `std::error::Error + 'static: std::marker::Sized` is not satisfied --> src/main.rs:11:23 | 11 | cause(err) | ^^^ When looking at the generated code (using cargo expand), it becomes a bit more evident why that is: #[allow(unused)] impl ::std::error::Error for MyError { [...] fn cause(&self) -> Option<&::std::error::Error> { match *self { MyError::Io(ref err) => Some(err), MyError::Any(ref err) => Some(err), } } } A &Box<Error> does not automatically become a &Error, unless you call err.as_ref() on it specifically. Thus the code below works, and I tried it by just compiling the expanded, adjusted version myself using rustc. #[allow(unused)] impl ::std::error::Error for MyError { [...] fn cause(&self) -> Option<&::std::error::Error> { match *self { MyError::Io(ref err) => Some(err), // ------> note the *as_ref()* <------ MyError::Any(ref err) => Some(err.as_ref()), } } } Question What can I do to make the above work ? Additional Notes To me it would be viable to modify quick-error, however, it's not allowed to call .as_ref() on &std::io::Error for example, which seemed like a simple fix for quick-error: #[allow(unused)] impl ::std::error::Error for MyError { [...] 
fn cause(&self) -> Option<&::std::error::Error> { match *self { MyError::Io(ref err) => Some(err.as_ref()), MyError::Any(ref err) => Some(err.as_ref()), } } } The above causes this error: error: no method named `as_ref` found for type `&std::io::Error` in the current scope --> expanded.rs:91:50 | 91 | MyError::Io(ref err) => Some(err.as_ref()), | ^^^^^^ | = note: the method `as_ref` exists but the following trait bounds were not satisfied: `std::io::Error : core::convert::AsRef<_> And it really makes me wonder why I can't use &std::io::Error.as_ref() on any reference, considering it becomes a reference to an implemented trait automatically otherwise. Maybe it's syntax I am missing to state the intent. A: as_ref is provided by the AsRef trait. Notice that as_ref receives self by reference (noted &self), so in order to call it on a &T, T must implement AsRef. std::io::Error does not implement that trait. However, there is another trait that looks a lot like AsRef: Borrow. Borrow provides a single method, borrow, with the same signature as as_ref. Borrow also has a different set of implementors; notably, it has impl<T> Borrow<T> for T where T: ?Sized. This means that for every type, we can invoke a borrow method (provided that the Borrow trait is brought into scope with use) to obtain a reference to the same type. Borrow<T> is also implemented for Box<T>, so you can borrow a T from a Box<T>, and likewise for other pointer/smart pointer types. #[allow(unused)] impl ::std::error::Error for MyError { fn cause(&self) -> Option<&::std::error::Error> { use std::borrow::Borrow; match *self { MyError::Io(ref err) => Some(err.borrow()), MyError::Any(ref err) => Some(err.borrow()), } } }
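To watch the Borrow mechanics in isolation, here is a small self-contained sketch (the error type and its message strings are invented for illustration): the blanket impl<T: ?Sized> Borrow<T> for T hands back &MyError from a plain value, while impl<T: ?Sized> Borrow<T> for Box<T> hands back &dyn Error from a boxed trait object.

```rust
use std::borrow::Borrow;
use std::error::Error;
use std::fmt;

// A minimal error type standing in for the enum variants above.
#[derive(Debug)]
struct MyError(String);

impl fmt::Display for MyError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "{}", self.0)
    }
}

impl Error for MyError {}

fn describe(err: &dyn Error) -> String {
    err.to_string()
}

fn main() {
    let plain = MyError("io failed".to_string());
    let boxed: Box<dyn Error> = Box::new(MyError("boom".to_string()));

    // Borrow<MyError> for MyError: &MyError from a plain value.
    let r1: &MyError = plain.borrow();
    // Borrow<dyn Error> for Box<dyn Error>: &dyn Error from the box.
    let r2: &dyn Error = boxed.borrow();

    assert_eq!(describe(r1), "io failed");
    assert_eq!(describe(r2), "boom");
    println!("ok");
}
```

In both cases the result coerces to &dyn Error, which is exactly the reference type cause() needs to return.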
Q: Find length of a side from given measurements Source: gradestack.com This is a problem I have been trying to solve for a long time, but I am still not able to proceed. After spending some time, I began to doubt whether this question is correct, because in a rhombus the diagonals bisect each other. Here PA=PC. That means P is the center point of diagonal AC, so PD must be equal to PB, which is not the case. Why am I wrong here? Please give pointers on how to solve this problem. Thanks. A: $PA=PC\implies P$ lies on the perpendicular bisector of the line $AC$. Proceed in the following way. Suppose $Q$ is the center of the rhombus. $PB+PD=BD=10\implies QD=5 \\ AQ=\sqrt{PA^2-PQ^2}=\sqrt{5^2-3^2}=4\\ AD=\sqrt{AQ^2+QD^2}=\sqrt{41}$
Q: How to rewrite this double arrow function I have a function that updates the state of schedule, an array of artist objects. Currently I am using a double arrow function that takes in the index and the artist id. However, I can't use a double arrow function due to my JavaScript linter. How can I rewrite this? handleArtistChange = index => evt => { if (evt) { const newSchedule = this.state.schedule.map((artist, stateIndex) => { if (index !== stateIndex) return artist; return { ...artist, artist_id: evt.value }; }); this.setState({ schedule: newSchedule }); } } I have tried the following: handleArtistChange = function(index) { return function(evt) { if (evt) { const newSchedule = this.state.schedule.map((artist, stateIndex) => { if (index !== stateIndex) return artist; return { ...artist, artist_id: evt.value }; }); this.setState({ schedule: newSchedule }); } } } However, this results in the error Cannot read property 'schedule' of undefined The call to my function: const lineup = this.state.schedule.map((artist, index) => { return ( <div key={index} className="form__input-group--lineup"> <div> <label className="form__label">{getTextNode('Artist *')}</label> <Select onChange={this.handleArtistChange(index)} onInputChange={this.handleInputChange} isClearable options={options} styles={customStyles} backspaceRemovesValue={false} placeholder={`Artist #${index + 1}`} classNamePrefix="react-select" /> </div> <div className="form__input-group--time"> <label className="form__label">{getTextNode('start time *')}</label> <input name="startTime" type="time" required autoComplete="true" className="form__input" value={this.state.startTime} onChange={this.handleTimeChange(index)} /> </div> <button type="button">-</button> </div> ); }); A: You could modify your linting rules if necessary. 
If you want to modify your function, here is a way to define it, with a regular function returning an anonymous function bound to the outer this: function handleArtistChange(index) { return (function(evt) { if (evt) { const newSchedule = this.state.schedule.map((artist, stateIndex) => { if (index !== stateIndex) return artist; return { ...artist, artist_id: evt.value }; }); this.setState({ schedule: newSchedule }); } }).bind(this); }
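The difference between the two attempts is purely about how this is captured, and it can be reproduced without React at all. In this standalone sketch (the object and its fields are invented stand-ins for the component), the arrow version closes over this automatically, while the regular-function version only works because of the explicit bind:

```javascript
// `handleCurried` is the double-arrow version; `handleClassic` is the
// equivalent written with a regular function plus an explicit bind.
const component = {
  state: { schedule: ['a', 'b', 'c'] },
  setState(next) { this.state = Object.assign({}, this.state, next); },

  handleCurried(index) {
    return (evt) => {                       // arrow: `this` captured here
      const schedule = this.state.schedule.slice();
      schedule[index] = evt;
      this.setState({ schedule });
    };
  },

  handleClassic(index) {
    return function (evt) {                 // regular function: own `this`
      const schedule = this.state.schedule.slice();
      schedule[index] = evt;
      this.setState({ schedule });
    }.bind(this);                           // so it must be bound explicitly
  },
};

component.handleCurried(1)('X');
console.log(component.state.schedule); // [ 'a', 'X', 'c' ]
component.handleClassic(2)('Y');
console.log(component.state.schedule); // [ 'a', 'X', 'Y' ]
```

If the .bind(this) is removed, this is no longer the component when the returned handler is invoked later, which is exactly the Cannot read property 'schedule' of undefined failure above.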
Q: Using AWS CodeCommit in an organisation We are planning to use AWS CodeCommit in our organisation. Our scenario is: we have an IAM user named "codeAdmin" who can create repositories. My question is how to handle the developers in AWS CodeCommit. We have the following scenarios under consideration: for every developer, create a new IAM account (added under the required group) and then provide access to the required CodeCommit repositories. This way, if we have 30 developers, we need to create 30 IAM users. Or give each developer the accessKey and accessId of a single IAM developer account. In this way, if we have 30 developers, we need to create only 1 IAM user and share the accessKeys/Ids with everyone. Which approach from the above is best suited? Or are there any other best practices to be followed? A: Creating a separate IAM user for each developer is better. First, different users may have different permissions based on their experience and position. For example, maybe you only want an admin user to have the ability to delete a repository. Second, using different users helps your team distinguish which developer created a pull request and which developers commented on it. If 30 developers share the same IAM user, you won't know who made the comments, created the pull request, or merged it, because they are always the same user.
Q: Dividing date/time elements displayed from an SQL call into separate divs? This SQL call, <div class="dateOnAd"><?php print $row['dttm_modified']; ?></div> outputs: 2012-05-22 15:07:28. I need to figure out a way to divide the output into different divs <div class="example-date"> <span class="day">31</span> <span class="month">July</span> <span class="year">2009</span> </div> A: <div class="example-date"> <span class="day"><?php echo date('d', strtotime($row['dttm_modified'])); ?></span> <span class="month"><?php echo date('F', strtotime($row['dttm_modified'])); ?></span> <span class="year"><?php echo date('Y', strtotime($row['dttm_modified'])); ?></span> </div> Note that date() expects a Unix timestamp as its second argument, so the stored date string is converted first with strtotime(). You can read more about the date() function here: date.
Q: table view cell of custom class returning nil I am creating a table, and using this code to generate the cells: let cell2:iPRoutineSpecificCell = self.routineSpecificsTable.dequeueReusableCell(withIdentifier: "Cell2") as! iPRoutineSpecificCell The cell contains a label. When I say cell2.label.text = "Test" return cell2 I get the error: fatal error: unexpectedly found nil while unwrapping an Optional value I have this exact arrangement elsewhere in the app and it is working fine. Why is it not working this time? The only difference between the two views is that this one contains two tables. However, if I use: cell2.textLabel?.text = "Test" then it works. Any help appreciated! A: Please remove this line from the viewDidLoad method if you have it: self.tableView.registerClass(UITableViewCell.self, forCellReuseIdentifier: "cell")
Q: Follower count number in Twitter How can I get my follower count number with PHP? I found this answer here: Twitter follower count number, but it is not working because API 1.0 is no longer active. I have also tried with API 1.1 using this URL: https://api.twitter.com/1.1/users/lookup.json?screen_name=google but it is showing an error (Bad Authentication data). Here is my code: $data = json_decode(file_get_contents('http://api.twitter.com/1.1/users/lookup.json?screen_name=google'), true); echo $data[0]['followers_count']; A: You can do it without auth (replace 'stackoverflow' with your user): $.ajax({ url: "https://cdn.syndication.twimg.com/widgets/followbutton/info.json?screen_names=stackoverflow", dataType : 'jsonp', crossDomain : true }).done(function(data) { console.log(data[0]['followers_count']); }); With PHP: $tw_username = 'stackoverflow'; $data = file_get_contents('https://cdn.syndication.twimg.com/widgets/followbutton/info.json?screen_names='.$tw_username); $parsed = json_decode($data,true); $tw_followers = $parsed[0]['followers_count']; A: Twitter API 1.0 is deprecated and is no longer active. With the REST 1.1 API, you need oAuth authentication to retrieve data from Twitter. 
Use this instead: <?php require_once('TwitterAPIExchange.php'); //get it from https://github.com/J7mbo/twitter-api-php /** Set access tokens here - see: https://dev.twitter.com/apps/ **/ $settings = array( 'oauth_access_token' => "YOUR_OAUTH_ACCESS_TOKEN", 'oauth_access_token_secret' => "YOUR_OAUTH_ACCESS_TOKEN_SECRET", 'consumer_key' => "YOUR_CONSUMER_KEY", 'consumer_secret' => "YOUR_CONSUMER_SECRET" ); $ta_url = 'https://api.twitter.com/1.1/statuses/user_timeline.json'; $getfield = '?screen_name=REPLACE_ME'; $requestMethod = 'GET'; $twitter = new TwitterAPIExchange($settings); $follow_count=$twitter->setGetfield($getfield) ->buildOauth($ta_url, $requestMethod) ->performRequest(); $data = json_decode($follow_count, true); $followers_count=$data[0]['user']['followers_count']; echo $followers_count; ?> Parsing the XML might be easier in some cases. Here's a solution (tested): <?php $xml = new SimpleXMLElement(urlencode(strip_tags('https://twitter.com/users/google.xml')), null, true); echo "Follower count: ".$xml->followers_count; ?> Hope this helps!
Q: Calling noConflict() after initialising bootstrap datepicker? I am trying to use noConflict() with bootstrap datepicker to avoid conflict with other jquery plugins. The bootstrap datepicker docs say: $.fn.datepicker.noConflict provides a way to avoid conflict with other jQuery datepicker plugins: var datepicker = $.fn.datepicker.noConflict(); // return $.fn.datepicker to previously assigned value $.fn.bootstrapDP = datepicker; // give $().bootstrapDP the bootstrap-datepicker functionality But that didn't work for me, because I was using it with a date range. The code's author had this to say: Oops! The date range code, itself, calls .datepicker, so you're right, it's broken when coupled with noConflict. For now, I suggest calling noConflict after initializing the range picker. But how do I do that? I tried this: $('#bookingform .input-daterange').datepicker({ ...options excluded for brevity... }); datepicker = $.fn.datepicker.noConflict(); And I've tried this: $('#bookingform .input-daterange').datepicker({ ...options excluded for brevity... }).noConflict(); But I can't seem to get it to work. Can anybody enlighten me as to what I'm doing wrong (or preferably how I can do it right)? Update: For those asking how I initialize a range picker: There doesn't appear to be code calling this specifically. Other than the demo code below, it relies on the bootstrap js. I think the range aspect might be based on the class .input-daterange but I'm not sure. This is the demo code: <div class="input-daterange input-group" id="datepicker"> <input type="text" class="input-sm form-control" name="start" /> <span class="input-group-addon">to</span> <input type="text" class="input-sm form-control" name="end" /> </div> $('#sandbox-container .input-daterange').datepicker({ format: "dd/mm/yyyy" }); A: Your first attempt is following the owner's directions exactly. 
Here's where it happens in the src: https://github.com/eternicode/bootstrap-datepicker/blob/master/js/bootstrap-datepicker.js#L1357-L1392 The code doesn't do anything asynchronous, so calling .noConflict on the next line should have done what the owner of the plugin suggested. Though, looking at the source, it seems that whatever bug caused your original issue is now fixed. .datepicker is not called from within the DateRangePicker class anymore.
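For what it's worth, the noConflict mechanism itself is just a save-and-restore of one property, which can be sketched in a few lines of plain JavaScript. The names below are invented for illustration; this is not bootstrap-datepicker's actual source.

```javascript
// A plugin remembers whatever was previously installed under a name,
// and noConflict() restores it while handing the new implementation
// back to the caller.
function installPlugin(host, name, impl) {
  const previous = host[name];        // remember what was there before
  impl.noConflict = function () {
    host[name] = previous;            // put the old value back
    return impl;                      // caller keeps the new one
  };
  host[name] = impl;
}

const $fn = {};                       // stands in for jQuery's $.fn
installPlugin($fn, 'datepicker', function otherPlugin() { return 'other'; });
installPlugin($fn, 'datepicker', function bootstrapDP() { return 'bootstrap'; });

const datepicker = $fn.datepicker.noConflict();
console.log($fn.datepicker());        // 'other'  (previous plugin restored)
console.log(datepicker());            // 'bootstrap'
```

This is why calling noConflict after initializing the range picker works: initialization happens while the name still points at the bootstrap implementation, and the swap only affects later lookups.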
Q: Fetch only form fields with a value other than 0 and put them in a PHP variable I have a quote form with 70 products in which the user chooses the quantity they want of each one. All of them are named with ids p1, p2...p70 and are number-type inputs. <form method="post"> <input type="number" value="0" name="p1" id="p1"> <input type="number" value="0" name="p2" id="p2"> ... <input type="number" value="0" name="p70" id="p70"> </form> I need a PHP function that inserts only the items whose value is different from 0 (which is the default value) into a variable, in order to build a table that will be exported to Excel. What I have is the following: // Build the table header $data = '<table><tbody><tr><td>CABEÇALHO</td></tr>'; // Add each row whose value is different from 0 if($_POST['form']['p1'] != '0'){ $data .= '<tr><td>'.$_POST["form"]["p1"].'</td></tr>'; } if($_POST['form']['p2'] != '0'){ $data .= '<tr><td>'.$_POST["form"]["p2"].'</td></tr>'; } // Add the footer and close the table $data .= '<tr><td>RODAPÉ</td></tr></tbody></table>'; The problem with what I did is that it would need 70 ifs; I believe there must be a simpler way, but I do not know what it is. A: <?php header... foreach($_POST['form'] as $value){ $table .= ($value) ? '<tr><td>'.$value.'</td></tr>' : ''; } footer... If the number of items changes, the code still has to work. A: Do a loop from 1 to 70, creating the rows where the $_POST value is different from 0: <?php $data = '<table><tbody><tr><td>CABEÇALHO</td></tr>'; for($x=1;$x<=70;$x++){ if($_POST['form']['p'.$x] != '0'){ $data .= '<tr><td>'.$_POST['form']['p'.$x].'</td></tr>'; } } $data .= '<tr><td>RODAPÉ</td></tr></tbody></table>'; ?>
Q: Sending an email via PHP error, with cPanel error I have the following code to send an email. <?php $NowDate = date('Y-m-d H:i:s'); $subject = "test subject"; $message ="test message"; $emailFrom = "[email protected]"; $EmailAddress = "[email protected]"; $headers = "MIME-Version: 1.0\r\n"; $headers .= "Content-type: text/html; charset=iso-8859-1\r\n"; $headers .= "From: My Site <".$emailFrom.">\r\n"; $headers .= "To: <".$EmailAddress.">\r\n"; mail($EmailAddress,$subject,$message,$headers); ?> It runs successfully but the email doesn't get sent, with the following error listed in cPanel. How do I go about resolving this? ECDHE-RSA-AES256-GCM-SHA384:256 CV=no: SMTP error from remote mail server after end of data: 550 Messages should have one or no To headers, not 2. A: You need to remove the To header. The first parameter of the mail function writes that header value. With it assigned in the headers as well, you send 2 To headers, which causes the error. So remove: $headers .= "To: <".$EmailAddress.">\r\n";
Q: LINQ & Enums as IQueryable I basically have an enum public enum WorkingDays { Monday, Tuesday, Wednesday, Thursday, Friday } and would like to do a comparison against an input, which happens to be a string //note lower case string input = "monday"; The best thing I could come up with was something like this WorkingDays day = (from d in Enum.GetValues(typeof(WorkingDays)).Cast<WorkingDays>() where d.ToString().ToLowerInvariant() == input.ToLowerInvariant() select d).FirstOrDefault(); Is there any better way to do it ? Edit: Thanks Aaron & Jason. But what if the parse fails ? if(Enum.IsDefined(typeof(WorkingDay),input))//cannot compare if case is different { WorkingDay day = (WorkingDay)Enum.Parse(typeof(WorkingDay), input, true); Console.WriteLine(day); } A: Are you trying to convert a string to an instance of WorkingDays? If so use Enum.Parse: WorkingDays day = (WorkingDays)Enum.Parse(typeof(WorkingDays), "monday", true); Here we are using the overload Enum.Parse(Type, string, bool) where the bool parameter indicates whether or not to ignore case. On a side note, you should rename WorkingDays to WorkingDay. Look at like this. When you have an instance of WorkingDay, say, WorkingDay day = WorkingDay.Monday; note that day is a working day (thus WorkingDay) and not working days (thus not WorkingDays). For additional guidelines on naming enumerations, see Enumeration Type Naming Guidelines.
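On the "what if the parse fails" follow-up: guarding with Enum.IsDefined as in the edit works, and on .NET 4+ Enum.TryParse offers a non-throwing alternative. The underlying look-up-by-name idea is language-agnostic; here it is sketched in Java as a runnable stand-in (class and method names are my own, not part of either answer):

```java
public class EnumParseDemo {
    enum WorkingDay { Monday, Tuesday, Wednesday, Thursday, Friday }

    // Case-insensitive lookup that reports failure with null instead of
    // throwing, roughly what a guarded Enum.Parse achieves in the C# above.
    static WorkingDay tryParse(String input) {
        for (WorkingDay d : WorkingDay.values()) {
            if (d.name().equalsIgnoreCase(input)) {
                return d;
            }
        }
        return null; // parse failed
    }

    public static void main(String[] args) {
        System.out.println(tryParse("monday"));  // Monday
        System.out.println(tryParse("someday")); // null
    }
}
```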
Q: cvc-complex-type.2.4.c: The matching wildcard is strict, but no declaration can be found for element I do see org.xml.sax.SAXParseException in my project, but I cannot tell why it occurs. Error Message is like following: SEVERE: Exception sending context initialized event to listener instance of class org.springframework.web.context.ContextLoaderListener org.springframework.beans.factory.xml.XmlBeanDefinitionStoreException: Line 17 in XML document from ServletContext resource [/conf/appcontext-datasource.xml] is invalid; nested exception is org.xml.sax.SAXParseException; lineNumber: 17; columnNumber: 100; cvc-complex-type.2.4.c: The matching wildcard is strict, but no declaration can be found for element 'jee:jndi-lookup'. at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.doLoadBeanDefinitions(XmlBeanDefinitionReader.java:396) at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:334) at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:302) at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:174) at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:209) at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:180) at org.springframework.web.context.support.XmlWebApplicationContext.loadBeanDefinitions(XmlWebApplicationContext.java:125) at org.springframework.web.context.support.XmlWebApplicationContext.loadBeanDefinitions(XmlWebApplicationContext.java:94) at org.springframework.context.support.AbstractRefreshableApplicationContext.refreshBeanFactory(AbstractRefreshableApplicationContext.java:130) at 
org.springframework.context.support.AbstractApplicationContext.obtainFreshBeanFactory(AbstractApplicationContext.java:537) at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:451) at org.springframework.web.context.ContextLoader.configureAndRefreshWebApplicationContext(ContextLoader.java:389) at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:294) at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:112) at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:5003) at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5517) at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150) at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1574) at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1564) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Caused by: org.xml.sax.SAXParseException; lineNumber: 17; columnNumber: 100; cvc-complex-type.2.4.c: The matching wildcard is strict, but no declaration can be found for element 'jee:jndi-lookup'. 
at com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.createSAXParseException(ErrorHandlerWrapper.java:203) at com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.error(ErrorHandlerWrapper.java:134) at com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:437) at com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:368) at com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:325) at com.sun.org.apache.xerces.internal.impl.xs.XMLSchemaValidator$XSIErrorReporter.reportError(XMLSchemaValidator.java:458) at com.sun.org.apache.xerces.internal.impl.xs.XMLSchemaValidator.reportSchemaError(XMLSchemaValidator.java:3237) at com.sun.org.apache.xerces.internal.impl.xs.XMLSchemaValidator.handleStartElement(XMLSchemaValidator.java:1917) at com.sun.org.apache.xerces.internal.impl.xs.XMLSchemaValidator.emptyElement(XMLSchemaValidator.java:766) at com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.scanStartElement(XMLNSDocumentScannerImpl.java:356) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(XMLDocumentFragmentScannerImpl.java:2786) at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:606) at com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.next(XMLNSDocumentScannerImpl.java:117) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:510) at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:848) at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:777) at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:141) at com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(DOMParser.java:243) at 
com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(DocumentBuilderImpl.java:348) at org.springframework.beans.factory.xml.DefaultDocumentLoader.loadDocument(DefaultDocumentLoader.java:75) at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.doLoadBeanDefinitions(XmlBeanDefinitionReader.java:388) ... 22 more Then this is the appcontext-datasource.xml that causes this problem <?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:jee="http://www.springframework.org/schema/jee" xmlns:util="http://www.springframework.org/schema/util" xsi:schemaLocation=" http://www.springframework.org/schema/jee http://www.springframework.org/schema/jee/spring-jee-3.2.xsd http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.2.xsd http://www.springframework.org/schema/util http://www.springframework.org/schema/util/spring-util-3.2.xsd"> <jee:jndi-lookup id="dataSource" jndi-name="jdbc/iasDB" expected-type="javax.sql.DataSource" /> <bean id="transactionManager" class="org.springframework.jdbc.datasource.DataSourceTransactionManager"> <property name="dataSource" ref="dataSource"/> </bean> <bean id="sqlSessionFactory" class="org.mybatis.spring.SqlSessionFactoryBean"> <property name="dataSource" ref="dataSource" /> <property name="configLocation" value="/conf/sql/alias-context.xml"></property> </bean> <bean class="org.mybatis.spring.mapper.MapperScannerConfigurer"> <property name="sqlSessionFactoryBeanName" value="sqlSessionFactory" /> <property name="basePackage" value="com.example.ias.dao" /> </bean> As you can see, I specified the schema location of jee, but all it could say is no declaration can be found. FYI, this project worked perfectly fine until yesterday when I had decided to clean tomcat server for no reason. 
Thank you for your help A: After I deleted the Tomcat server and added a new one, the problem was solved. It looks like the previous Tomcat server could not read JNDI for some reason after I cleaned it.
Q: Exists only to defeat instantiation in singleton In many Singleton examples, I have come across a constructor with the comment "Exists only to defeat instantiation". Can you please give me details and explain more about it? A: It's common to create a private constructor when implementing the Singleton pattern so that the default constructor cannot be used to instantiate multiple Singleton objects. See the example from Wikipedia's Singleton pattern article. public class SingletonDemo { private static SingletonDemo instance = null; private SingletonDemo() { } public static synchronized SingletonDemo getInstance() { if (instance == null) { instance = new SingletonDemo (); } return instance; } } By making the constructor private, you ensure that the compiler can't generate a default constructor with the same signature, which forces any client code to call the getInstance() method.
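A quick way to see the private constructor doing its job is to check that repeated getInstance() calls hand back the very same object, while new from outside the class simply fails to compile. A small runnable variant of the article's example (renamed so it stands alone):

```java
public class SingletonCheck {
    private static SingletonCheck instance = null;

    // Private constructor: "exists only to defeat instantiation".
    private SingletonCheck() { }

    public static synchronized SingletonCheck getInstance() {
        if (instance == null) {
            instance = new SingletonCheck();
        }
        return instance;
    }

    public static void main(String[] args) {
        SingletonCheck a = SingletonCheck.getInstance();
        SingletonCheck b = SingletonCheck.getInstance();
        System.out.println(a == b); // true: both point at the single instance
        // new SingletonCheck();    // compile error in any other class
    }
}
```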
Q: Enter several parameters in SQL stored procedures from python with sqlalchemy I can successfully connect to SQL Server from my jupyter notebook with this script: from sqlalchemy import create_engine import pyodbc import csv import time import urllib params = urllib.parse.quote_plus('''DRIVER={SQL Server Native Client 11.0}; SERVER=SV; DATABASE=DB; TRUSTED_CONNECTION=YES;''') engine = create_engine("mssql+pyodbc:///?odbc_connect=%s" % params) And I can successfully execute SQL stored procedures without parameters from jupyter notebook with the following function: def execute_stored_procedure(engine, procedure_name): res = {} connection = engine.raw_connection() try: cursor = connection.cursor() cursor.execute("EXEC "+procedure_name) cursor.close() connection.commit() res['status'] = 'OK' except Exception as e: res['status'] = 'ERROR' res['error'] = e finally: connection.close() return res How could I transform this function for stored procedures that take several parameters (two in my case)? A: Solution to my problem, which handles stored procedures with no parameters or with parameters passed in via params_dict: def execute_stored_procedure(engine, procedure_name,params_dict=None): res = {} connection = engine.raw_connection() try: cursor = connection.cursor() if params_dict is None: cursor.execute("EXEC "+procedure_name) else: req = "EXEC "+procedure_name req += ",".join([" @"+str(k)+"='"+str(v)+"'" for k,v in params_dict.items()]) cursor.execute(req) cursor.close() connection.commit() res['status'] = 'OK' except Exception as e: res['status'] = 'ERROR' res['error'] = e finally: connection.close() return res
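As a sanity check on how that EXEC string is assembled, the string-building part of the answer can be pulled out and exercised without any database connection. The helper name and the procedure names below are my own:

```python
def build_exec(procedure_name, params_dict=None):
    """Rebuild the EXEC string exactly as the function above does."""
    req = "EXEC " + procedure_name
    if params_dict is not None:
        req += ",".join(
            " @" + str(k) + "='" + str(v) + "'" for k, v in params_dict.items()
        )
    return req

print(build_exec("backup_proc"))
# EXEC backup_proc
print(build_exec("backup_proc", {"SourceDb": "ias", "Mode": "full"}))
# EXEC backup_proc @SourceDb='ias', @Mode='full'
```

Note that interpolating values into the command string this way is open to SQL injection if the parameters come from user input; with pyodbc a safer route is the ODBC call escape with bound parameters, e.g. cursor.execute("{CALL my_proc (?, ?)}", (a, b)).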
Q: Scale coloring of ContourPlot
The answer given here solved how to use the same color scale across multiple plots within the function ListContourPlot. I can't for the life of me map this solution onto the function ContourPlot that I am using. Say for example I have the code

r = Norm[{x, y}];
plot1 = Plot3D[{r^2, -r^2}, {x, -Pi, Pi}, {y, -Pi, Pi},
   ColorFunction -> "ThermometerColors", BoxRatios -> {2, 2, 3}];
plot2 = ContourPlot[r^2, {x, -Pi, Pi}, {y, -Pi, Pi},
   ColorFunction -> "ThermometerColors"];
GraphicsRow[{plot1, plot2}]

which gives me the plots. If you look at the code you will see that in the contour plot I am only plotting the positive solutions and so for consistency my plot should be contours that are shades of red. How can I achieve this?

*****EDIT 1*****

kguler submitted an answer that solved this example question, but for a reason I can't understand it doesn't work in the actual system that I'm using. Here is my full code:

Clear["Global`*"];
DynamicEvaluationTimeout -> Infinity;

(*Nearest neighbour vectors*)
{e1, e2, e3} = # & /@ {{0, -1}, {Sqrt[3]/2, 1/2}, {-Sqrt[3]/2, 1/2}};

(*dispersion*)
w[theta_, phi_] := Module[{c1, c2, c3, fq},
  {c1, c2, c3} = 1 - 3 Sin[theta]^2 Cos[phi - 2 Pi (# - 1)/3]^2 & /@ {1, 2, 3};
  fq = Total[#[[1]] Exp[I q.#[[2]]] & /@ {{c1, e1}, {c2, e2}, {c3, e3}}];
  Sqrt[1 + 2 # Omega Norm[fq]] & /@ {1, -1}
]

Omega = 0.01;
q = {qx, qy};

(***Figure3a***)
{theta, phi} = {Pi/2, Pi/2};
dirac3a = {(2/Sqrt[3]) ArcCos[2/5], 4 Pi/3};
zoom = 0.005 Pi;
With[{plotopts = {Mesh -> None, PlotStyle -> Opacity[0.7],
    Ticks -> {{1.33, 1.34, 1.35}, {4.18, 4.19, 4.20}, Automatic},
    BoxRatios -> {2, 2, 2}, PlotPoints -> 50, MaxRecursion -> 2,
    AxesEdge -> {{-1, -1}, {1, -1}, {-1, -1}},
    ColorFunction -> "ThermometerColors", LabelStyle -> Large,
    ClippingStyle -> None, BoxStyle -> Opacity[0.5],
    ViewPoint -> {1.43, -2.84, 1.13}, ViewVertical -> {0., 0., 1.}}},
 figure3a = Plot3D[w[theta, phi],
   {qx, dirac3a[[1]] - zoom, dirac3a[[1]] + zoom},
   {qy, dirac3a[[2]] - zoom, dirac3a[[2]] + zoom}, plotopts]
]

(***Figure3b***)
With[{plotopts = {Frame -> True,
    FrameTicks -> {{{4.18, 4.19, 4.20}, None}, {{1.33, 1.34, 1.35}, None}},
    ColorFunction -> "ThermometerColors", LabelStyle -> Large,
    PlotRangePadding -> None, ColorFunctionScaling -> False,
    ColorFunction -> ColorData[{"ThermometerColors", {0.9996, 1.0004}}]}},
 figure3b = ContourPlot[w[theta, phi][[1]],
   {qx, dirac3a[[1]] - zoom, dirac3a[[1]] + zoom},
   {qy, dirac3a[[2]] - zoom, dirac3a[[2]] + zoom}, plotopts]
]

(***Figure3b legend***)
legend = {0.9996 + 0.0001 #, 0.9996 + 0.0001 #} & /@ {0, 1, 2, 3, 4, 5, 6, 7, 8};
figure3bLegend = ArrayPlot[legend, ColorFunction -> "ThermometerColors",
  DataRange -> {{0, 1}, {0.9996, 1.0004}},
  FrameTicks -> {{0.9996, 0.9997, 0.9998, 0.9999, {1.0000, "1.0000"}, 1.0001,
     1.0002, 1.0003, 1.0004}, None},
  AspectRatio -> 7, LabelStyle -> Large]

where I have incorporated the suggestion, but it gives me a plot that is monochrome. The values 0.9996 and 1.0004 correspond to the maxima and minima. What is going on here?
A:
plot1 = Plot3D[{r^2, -r^2}, {x, -Pi, Pi}, {y, -Pi, Pi},
   ColorFunctionScaling -> False,
   ColorFunction -> ColorData[{"ThermometerColors", {-20, 20}}],
   BoxRatios -> {2, 2, 3}];
plot2 = ContourPlot[r^2, {x, -Pi, Pi}, {y, -Pi, Pi},
   ColorFunctionScaling -> False,
   ColorFunction -> ColorData[{"ThermometerColors", {-20, 20}}]];
Legended[GraphicsRow[{plot1, plot2}],
 BarLegend[{"ThermometerColors", {-20, 20}}, 20]]

Update: For the specific example in OP's updated question, the following changes produce the desired result.

Change plotopts appearing in the part generating figure3b to

plotopts = {Frame -> True,
   FrameTicks -> {{{4.18, 4.19, 4.20}, None}, {{1.33, 1.34, 1.35}, None}},
   LabelStyle -> Large, ImageSize -> 400, PlotRangePadding -> None,
   ColorFunctionScaling -> False,
   ColorFunction -> ColorData[{"ThermometerColors", {0.9996, 1.0004}}]}

and use the same scaled colors in the ArrayPlot that generates the legend:

figure3bLegend = ArrayPlot[legend, ColorFunctionScaling -> False,
  ImageSize -> {200, 350},
  ColorFunction -> ColorData[{"ThermometerColors", {0.9996, 1.0004}}],
  DataRange -> {{0, 1}, {0.9996, 1.0004}},
  FrameTicks -> {{0.9996, 0.9997, 0.9998, 0.9999, {1.0000, "1.0000"}, 1.0001,
     1.0002, 1.0003, 1.0004}, None},
  AspectRatio -> 7, LabelStyle -> Large]

and add the option ImageSize -> 400 in plotopts used in generation of figure3a. With these changes

Row[{figure3a, figure3b, figure3bLegend}, Spacer[5]]

gives the desired result.
Q: Xcode 4 Scheme Hell? I have a serious ongoing issue with Xcode 4. I have been using Xcode 3 for years, and had everything set up perfectly. All my Build Configurations worked A-OK. I updated to iOS 5 GM, and naturally I have to use Xcode 4 to submit my app to the Store or use TestFlight.

I can't change my Build Configuration. I've tried making a new "Scheme" (which are stupid IMHO, when the old system worked 100%), and every time I do, I set everything the same, I go "Product" > "Archive"... it works, and I share the IPA to my Desktop, to upload it to TestFlight, or I save it as a ZIP and send it to Application Loader. IT NEVER WORKS. On TestFlight, my testers will install it, and immediately the application will crash. It won't even launch, no matter how I build the app, regardless of Scheme. It worked 100% and I have made ZERO changes since updating to Xcode 4.

Xcode 4 only works when I wish to "Debug" my app on my own device. It builds, installs and runs perfectly. Why won't it work on AdHoc or App Store? PLEASE HELP! I'm ready to pull my hair out.

A: To answer this question, it was an absolutely STUPID issue! I was checking the size of my Info.plist for any changes, and for some ridiculous reason only Apple can comprehend, updating to iOS 5 made that code obsolete. So the use of the code in my Release/AppStore/AdHoc builds was making it crash on launch. Once I commented the code out, everything is back working 100%. Strange. Even Apple Technical Support couldn't figure it out and they had my whole project to peruse. Apple REALLY needs to improve its software when it changes it. Not create a massive learning curve for its users and encourage code-cleanups if the SDK decides it doesn't like 4.3 code.
Q: Highest or Lowest Occurrences?
Challenge:

Inputs:
- A string containing printable ASCII (excluding spaces, tabs and new-lines)
- A boolean †

Output:
The parts of the String are divided into four groups:
- Lowercase letters
- Uppercase letters
- Digits
- Other

Based on the boolean, we either output the highest occurrence of one (or multiple) of these four groups, or the lowest, replacing everything else with spaces.

For example:

Input: "Just_A_Test!"
It contains:
- 3 uppercase letters: JAT
- 6 lowercase letters: ustest
- 0 digits
- 3 other: __!

These would be the outputs for true or false:

true:  " ust    est " // digits have the lowest occurrence (none), so everything is replaced with a space
false: "            "

(Note: You are allowed to ignore trailing spaces, so the outputs can also be " ust    est" and "" respectively.)

Challenge rules:
- The input will never be empty or contain spaces, and will only consist of printable ASCII in the range 33-126 or '!' through '~'.
- You are allowed to take the input and/or outputs as character-array or list if you want to.
- † Any two consistent and distinct values for the boolean are allowed: true/false; 1/0; 'H'/'L'; "highest"/"lowest"; etc. Note that these distinct values should be used (somewhat) as a boolean! So it's not allowed to input two complete programs, one that gives the correct result for true and the other for false, and then having your actual code only be <run input with parameter>. Relevant new default loophole I've added, although it can still use a lot of finetuning regarding the definitions..
- If the occurrence of two or more groups is the same, we output all those occurrences.
- The necessary trailing spaces are optional, and a single trailing new-line is optional as well. Necessary leading spaces are mandatory. And any other leading spaces or new-lines aren't allowed.

General rules:
- This is code-golf, so shortest answer in bytes wins.
- Don't let code-golf languages discourage you from posting answers with non-codegolfing languages.
- Try to come up with an as short as possible answer for 'any' programming language.
- Standard rules apply for your answer, so you are allowed to use STDIN/STDOUT, functions/method with the proper parameters, full programs. Your call.
- Default Loopholes are forbidden.
- If possible, please add a link with a test for your code. Also, please add an explanation if necessary.

Test cases:

Inputs:                     Output:
"Just_A_Test!", true        " ust    est " (or " ust    est")
"Just_A_Test!", false       "            " (or "")
"Aa1!Bb2@Cc3#Dd4$", either  "Aa1!Bb2@Cc3#Dd4$"
"H@$h!n9_!$_fun?", true     " @$ !  _!$_   ?"
"H@$h!n9_!$_fun?", false    "H     9        " (or "H     9")
"A", true                   "A"
"A", false                  " " (or "")
"H.ngm.n", true             "  ngm n"
"H.ngm.n", false            "       " (or "")
"H.ngm4n", false            "H.   4 " (or "H.   4")

A: Jelly, 31 bytes

ØṖḟØBṭØBUs26¤f€³Lİ⁴¡$ÐṀFf
¹⁶Ç?€

Try it online!

The boolean values are 2 and 1 (or any other positive even/odd pair), which represent True and False respectively. I will try to add an explanation after further golfing. Thanks to caird coinheringaahing for saving 2 bytes, and to Lynn for saving 4 bytes! Thanks to one of Erik's tricks, which inspired me to save 4 bytes!

How it works

Note that this is the explanation for the 35-byte version. The new one does roughly the same (but tweaked a bit by Lynn), so I won't change it.

ØBUs26f€³µ³ḟØBW,µẎLİ⁴¡$ÐṀF - Niladic helper link.
ØB     - String of base digits: '0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz'.
U      - Reverse.
s26    - Chop into sublists of length 26, preserving shorter trailing substrings.
f€³    - For each, keep the common characters with the input.
ØB     - Base digits.
³ḟ     - Get the signs in the input. Filter the characters of the input that aren't alphanumeric.
W,µẎ   - Concatenate (wrap, two element list, tighten).
ÐṀ     - Keep the elements with maximal link value.
L      - Length.
⁴¡     - Do N times, where N is the second input.
İ      - Inverse. Computes 1 ÷ Length. 2 maps to the length itself, because 1 ÷ (1 ÷ Length) = Length; 1 yields (1 ÷ Length), swapping the maximal numbers with minimal ones.
F      - Flatten.

¹⁶e¢$?€ - Main link.
€       - For each character.
e¢?     - If it is contained by the last link (called niladically), then:
¹       - Identity, the character itself, else:
⁶       - A space.

A: Python 2, 166 158 bytes

t=lambda c:('@'<c<'[','`'<c<'{','/'<c<':',1-c.isalnum())
def f(s,b):x=map(sum,zip(*map(t,s)));print''.join([' ',c][x[t(c).index(1)]==sorted(x)[-b]]for c in s)

Try it online!

A: R, 193 186 179 158 bytes

-7 bytes thanks to NofP and his suggestion of cbind, -6 bytes using outer, -1 byte switching [^a-zA-Z0-9] with [[:punct:]], -21 bytes thanks to MickyT for pointing out a list of characters is allowed

function(S,B){y=outer(c("[a-z]","[A-Z]","\\d","[[:punct:]]"),S,Vectorize(grepl))
S[!colSums(y[(s=rowSums(y))=="if"(B,max,min)(s),,drop=F])]=" "
cat(S,sep='')}

Verify all test cases

Takes 1/T as truthy (max) and 0/F as falsey (min), and takes S as a list of single characters.

Try it online!

In my original version (with NofP's suggestions), the matrix y is constructed by evaluating grepl(regex, S) for each regex, then concatenating them together as columns of a matrix. This results in multiple calls to grepl, but as S is fixed, it seemed that something else needed to be done. As I noted:

There are potentially shorter approaches; mapply, for example:

y=mapply(grepl,c("[a-z]","[A-Z]","\\d","[^a-zA-Z0-9]"),list(S))

unfortunately, this will not simplify as a matrix in the 1-character example of "A".

I used outer rather than mapply, which always returns an array (a matrix in this case), and was forced to Vectorize grepl, which is really just an mapply wrapper around it. I also discovered the predefined character group [:punct:] which matches punctuation (non-space, non-alphanumeric) characters.
Q: Error with html checkbox value retrieval
In my html I have:

<div id="case_filter" style="float: bottom">
    <input type="checkbox" name="bC" value="B C" checked> No B C<br>
    <input type="checkbox" name="fN" value="f NC"> No F NC<br>
    <input type="checkbox" name="rR" value="RR"> No RR<br>
</div>

I kept the first checkbox checked by default. I want to access the value of the checkbox in my JS. I am doing this below, but it returns null (when I console.log it).

var checkedValue = document.querySelector('.bC:checked').value;

A: . is for the class selector. [] is for the attribute selector (which matches name). You could try either:

var checkedValue = document.querySelector('[name="bC"]:checked').value;

or give the input a class matching your selector:

<input type="checkbox" class="bC" value="B C" checked> No B C<br>
Q: How to read Json array in jquery in Laravel 5.2
I am using ajax-jquery to fetch multiple eloquent objects in laravel 5.2. This is what i am getting as response in jquery:

{
  "screens": [{"screen_id":1,"screen_name":"Screen 1 ","screen_msg":"Hello","screen_status":"Active","cinema_id":1,"created_at":"2016-09-08 04:34:28","updated_at":"2016-09-08 04:34:28"}],
  "showtime": [{"show_id":6,"movie_id":1,"dimensional":"2D","cinema_id":1,"screen_id":1,"show_date":"2016-10-04","show_time":"00:57:00","show_status":"Active","created_at":"2016-09-08 12:21:06","updated_at":"2016-09-08 12:21:06"},
               {"show_id":7,"movie_id":1,"dimensional":"2D","cinema_id":1,"screen_id":1,"show_date":"2016-10-04","show_time":"00:57:00","show_status":"Active","created_at":"2016-09-08 12:22:15","updated_at":"2016-09-08 12:22:15"}]
}

my controller function code:

public function getscreen($id)
{
    $screens = Movies_screen::where('cinema_id', $id)->get();
    $showtime = Movies_showtimes::where('cinema_id', $id)->get();
    return response()->json(['screens' => $screens, 'showtime' => $showtime]);
}

I am reading those json array in jquery as:

$("#cinemahall").on("change click", function(){
    var cinema_id = $("#cinemahall option:selected").val();
    //ajax
    $.get('/askspidy/admin/showtime/getscreen/' + cinema_id, function(data){
        $("#screenname").empty();
        $("#screenname").append('<option value=0>Select Screen</option>');
        $.each(data, function(index, screenobj){
            $("#screenname").append('<option value="' + screenobj.screens[0].screen_id + '">' + screenobj.screens[0].screen_name + '</option>');
        });
    });
});

In console i can see proper data without any error but i am unable to access each and every field of json response using screenobj.screens[0].screen_name. Need help to figure out this issue.

A: Do data.screens[0].screen_name, you don't need the loop. Or:

$.each(data.screens, function(index, screenobj){
    $("#screenname").append('<option value="' + screenobj.screen_id + '">' + screenobj.screen_name + '</option>');
});
Q: Pandas GroupBy and total sum within group
Let's say I have a dataframe that looks like this:

   interview  longitude  latitude
1         A1       34.2      90.2
2         A1       54.2      23.5
3         A3       32.1      21.5
4         A4       54.3      93.1
5         A2       45.1      29.5
6         A1        NaN       NaN
7         A7        NaN       NaN
8         A1        NaN       NaN
9         A3       23.1      38.2
10        A5      -23.7     -98.4

I would like to be able to perform some sort of groupby method that outputs me the total present values within each subgroup. So, desired output for something like this would be:

   interview  longitude  latitude  occurs
1         A1          2         2       4
2         A2          1         1       1
3         A3          2         2       2
4         A4          1         1       1
5         A5          1         1       1
6         A7          0         0       1

I tried to use this command to try with latitudes, but not getting the desired output:

df.groupby(by=['interview', 'latitude'])['interview'].count()

Thanks!

A: Using notna before groupby + sum

s1=(df[['longitude','latitude']].notna()).groupby(df['interview']).sum()
s2=df.groupby(df['interview']).size()  # note size will count the NaN value as well
pd.concat([s1,s2.to_frame('occurs')],axis=1)

Out[115]:
           longitude  latitude  occurs
interview
A1               2.0       2.0       4
A2               1.0       1.0       1
A3               2.0       2.0       2
A4               1.0       1.0       1
A5               1.0       1.0       1
A7               0.0       0.0       1
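An equivalent runnable sketch of the same idea, rebuilding the question's frame by hand. It relies on the distinction the answer points out: per-group count() skips NaN, while size() counts every row.

```python
import numpy as np
import pandas as pd

# Reconstruct the question's dataframe.
df = pd.DataFrame({
    "interview": ["A1", "A1", "A3", "A4", "A2", "A1", "A7", "A1", "A3", "A5"],
    "longitude": [34.2, 54.2, 32.1, 54.3, 45.1, np.nan, np.nan, np.nan, 23.1, -23.7],
    "latitude":  [90.2, 23.5, 21.5, 93.1, 29.5, np.nan, np.nan, np.nan, 38.2, -98.4],
})

# count() per group counts only non-NaN cells; size() counts all rows.
out = df.groupby("interview")[["longitude", "latitude"]].count()
out["occurs"] = df.groupby("interview").size()
print(out)
```

For A1 this gives longitude 2, latitude 2, occurs 4, matching the desired output above.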