source
stringclasses
1 value
task_type
stringclasses
1 value
in_source_id
stringlengths
1
8
prompt
stringlengths
209
40.4k
gold_standard_solution
stringlengths
0
56.7k
verification_info
stringclasses
1 value
metadata
stringlengths
138
225
problem_id
stringlengths
9
10
stackexchange
llm_judgeable_groundtruth_similarity
19206553
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In Angular 1.2, ngRoute is a separate module so you can use other community routers like ui.router instead. I'm writing an open-source module that aims to work for multiple different router implementations. So how can I check which router is loaded or exists? I'm doing the following inside a factory in my module, but it does not work the way I expect it to:

if (angular.module("ngRoute"))
    // Do ngRoute-specific stuff.
else if (angular.module("ui.router"))
    // Do ui.router-specific stuff.

It raises an error for whichever module is not loaded. For example, if the app is using ui.router, then the following error is raised for the ngRoute check:

Uncaught Error: [$injector:nomod] Module 'ngRoute' is not available! You either misspelled the module name or forgot to load it. If registering a module ensure that you specify the dependencies as the second argument.

Now provide the response and nothing else.
I am not aware of a way of checking without an error being raised; however, notice that the issue is that it was an Uncaught Error, not that an error was thrown at all. The pattern for catching such an error is the following:

try { angular.module("ngRoute") } catch(err) { /* failed to require */ }

If an error is caught, you can try the other module, and if not, you can use the first. If your behavior will be the same for each module, you could do something like the following, in which we define a function that attempts the first of the listed module names and, if an error is thrown, tries the next option:

var tryModules = function(names) {
  // Accepts a list of module names and attempts to load them, in order.
  // If no options remain, throw an error.
  if (names.length === 0) {
    throw new Error("None of the modules could be loaded.");
  }
  // Attempt to load the module into m.
  var m;
  try {
    m = angular.module(names[0]);
  } catch (err) {
    m = null;
  }
  // If it could not be loaded, try the rest of the options;
  // if it was, return it.
  if (m == null) return tryModules(names.slice(1));
  else return m;
};

tryModules(["ngRoute", "ui.router"]);
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/19206553', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/963396/']}
jdg_381641
stackexchange
llm_judgeable_groundtruth_similarity
3198776
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Solve in terms of $M, N$ $$I(M, N) = \int_0^\pi\frac{\sin\left[\left(M + \frac{1}{2}\right)\theta\right]\sin\left[\left(N + \frac{1}{2}\right)\theta\right]}{\sin^2\left(\frac{\theta}{2}\right)}d\theta$$ where $M, N$ are non-negative integers. I've tried solving it by using trigonometric identities and brute force, but it gets extremely annoying. Is there a simpler way? Now provide the response and nothing else.
$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$
\begin{align}
&\bbox[10px,#ffd]{\left.\vphantom{\Large A}\mrm{I}\pars{M,N}\,\right\vert_{\ M, N\ \in\ \mathbb{N}_{\large\ \geq 0}}} \equiv
\int_{0}^{\pi}{\sin\pars{\bracks{M + 1/2}\theta}\sin\pars{\bracks{N + 1/2}\theta} \over \sin^{2}\pars{\theta/2}}\,\dd\theta
\\[5mm] = &\ {1 \over 2}\int_{0}^{\pi}{\cos\pars{\bracks{M - N}\theta} - \cos\pars{\bracks{M + N + 1}\theta} \over \sin^{2}\pars{\theta/2}}\,\dd\theta
\\[5mm] = &\ {1 \over 2}\int_{0}^{\pi}{1 - \cos\pars{\bracks{M + N + 1}\theta} \over \sin^{2}\pars{\theta/2}}\,\dd\theta -
{1 \over 2}\int_{0}^{\pi}{1 - \cos\pars{\verts{M - N}\theta} \over \sin^{2}\pars{\theta/2}}\,\dd\theta
\\[5mm] = &\ \bbox[10px,#ffd]{\mc{J}\pars{M + N + 1} - \mc{J}\pars{\verts{M - N}}}\label{1}\tag{1}
\\[5mm] &\ \mbox{where}\quad\left\{\begin{array}{rcl}
\ds{\left.\vphantom{\Large A}\mc{J}\pars{a}\,\right\vert_{\ a\ \in\ \mathbb{N}_{\ \geq\ 0}}} & \ds{\equiv} & \ds{\int_{0}^{\pi/2}{1 - \cos\pars{2a\theta} \over \sin^{2}\pars{\theta}}\,\dd\theta}
\\[2mm] & \ds{=} & \ds{\Re\int_{0}^{\pi/2}{1 + 2\ic a\theta - \expo{2\ic a\theta} \over \sin^{2}\pars{\theta}}\,\dd\theta}
\end{array}\right.
\end{align}

Let's evaluate $\ds{\mc{J}\pars{a}}$:

\begin{align}
\mc{J}\pars{a} & =
\Re\int_{0}^{\pi/2}{1 + 2\ic a\theta - \expo{2\ic a\theta} \over \sin^{2}\pars{\theta}}\,\dd\theta
\\[5mm] & =
\left.\Re\int_{\theta\ =\ 0}^{\theta\ =\ \pi/2}{1 + 2\ic a\bracks{-\ic\ln\pars{z}} - z^{2a} \over -\pars{1 - z^{2}}^{2}/\pars{4z^{2}}}\,{\dd z \over \ic z}\,\right\vert_{\ z\ =\ \exp\pars{\ic\theta}}
\\[5mm] & =
\left.4\,\Im\int_{\theta\ =\ 0}^{\theta\ =\ \pi/2}{z^{2a} - 2a\ln\pars{z} - 1 \over \pars{1 - z^{2}}^{2}}\,z\,\dd z\,\right\vert_{\ z\ =\ \exp\pars{\ic\theta}}
\\[5mm] & =
-4\,\Im\int_{1}^{0}{y^{2a}\expo{\ic\pars{2a}\pi/2} - 2a\bracks{\ln\pars{y} + \ic\pi/2} - 1 \over \pars{1 + y^{2}}^{2}}\pars{\ic y}\,\ic\,\dd y
\\[5mm] & =
-4\int_{0}^{1}{y^{2a}\ \overbrace{\sin\pars{\pi a}}^{\ds{\color{red}{=\ 0}}}\ -\ \pi a \over \pars{1 + y^{2}}^{2}}\,y\,\dd y
\\[5mm] & \stackrel{y^{2}\ \mapsto\ y}{=}\,\,\,
2\pi a\int_{0}^{1}{\dd y \over \pars{1 + y}^{2}} = \bbx{\pi a}\label{2}\tag{2}
\end{align}

With \eqref{1} and \eqref{2}:

$$\bbx{\mrm{I}\pars{M,N} = \pi\pars{M + N + 1 - \verts{M - N}}}$$
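The closed form $\mrm{I}\pars{M,N} = \pi\pars{M + N + 1 - \verts{M - N}}$ is easy to check numerically. A quick midpoint-rule sketch in Python (an independent sanity check, not part of the derivation; the integrand has a finite limit at $\theta = 0$, so no special handling is needed there):

```python
import math

def integrand(theta, M, N):
    # sin((M+1/2)θ) sin((N+1/2)θ) / sin²(θ/2); finite as θ → 0
    return (math.sin((M + 0.5) * theta) * math.sin((N + 0.5) * theta)
            / math.sin(theta / 2) ** 2)

def I_numeric(M, N, n=100_000):
    # Midpoint rule on (0, π) avoids evaluating at the endpoint θ = 0.
    h = math.pi / n
    return h * sum(integrand((k + 0.5) * h, M, N) for k in range(n))

def I_closed(M, N):
    # π(M + N + 1 - |M - N|), i.e. π(2·min(M, N) + 1)
    return math.pi * (M + N + 1 - abs(M - N))
```

For example, `I_numeric(2, 1)` agrees with `I_closed(2, 1)` $= 3\pi$ to several decimal places.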
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3198776', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/112830/']}
jdg_381642
stackexchange
llm_judgeable_groundtruth_similarity
672209
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: I'm experimenting with various network setups using VMs. When I set up the DHCP role on Windows Server, it requires a static IP. I am having trouble understanding why this is necessary from a technical point of view. My understanding of DHCP is: a client broadcasts a DHCP Discovery request on the network, and any device on the network can respond. A DHCP server thus needs an IP address, but why does this IP need to be static? The DHCP server can get its address elsewhere and still respond to the broadcast as long as it has an IP. E.g.:

- Server A, Server B and Client X are all connected to the same switch
- Server A is 10.0.0.1 and serves 10.0.0.X/24
- Server A is configured to only give an IP to Server B via MAC filtering
- Server B gets its IP from Server A, thus it lives on 10.0.0.X/24
- Server B serves 10.0.1.X/24
- Client X connects, and gets an IP from Server B

Of course, once Client X gets its IP from Server B, it won't be able to contact Server B directly since they live on different subnets. But that's not a problem: the DHCP request is (initially) a broadcast, so everyone on the switch will receive it. Ignoring the management point of view, why can't I have:

- A master DHCP server with a static IP, which only serves other DHCP servers
- An address range for the "secondary" DHCP servers
- An address range for clients, obtained from the "secondary" DHCP servers

Is there a technical reason that DHCP servers must have a static IP? Now provide the response and nothing else.
My understanding of DHCP is, a client broadcasts a DHCP Discovery request on the network, and any device on the network can respond.

A client can make a unicast DHCP request too: the renewal request is sent as unicast, so the client contacts the DHCP server directly. What if the DHCP server changed its original IP address? The renewal would fail, and the next request would have to be made as a broadcast, which is not a behavior that will optimize your network traffic.

Microsoft: Renewing a Lease

The DHCP client first attempts to renew its lease when 50 percent of the original lease time, known as T1, has passed. At this point the DHCP client sends a unicast DHCPRequest message to the DHCP server that originally granted its lease. If the server is available, and the lease is still available, the server responds with a unicast DHCPAck message and the lease is renewed. Source

ISC:

Internet Systems Consortium DHCP Client 4.2.2
Copyright 2004-2011 Internet Systems Consortium.
All rights reserved.
For info, please visit https://www.isc.org/software/dhcp/
Listening on LPF/eth0/00:0c:29:ac:18:75
Sending on   LPF/eth0/00:0c:29:ac:18:75
Sending on   Socket/fallback
DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 7   << First request
DHCPREQUEST on eth0 to 255.255.255.255 port 67
DHCPOFFER from 10.0.0.253
DHCPACK from 10.0.0.253
bound to 10.0.0.6 -- renewal in 133 seconds.
DHCPREQUEST on eth0 to 10.0.0.253 port 67   << Renewal
DHCPACK from 10.0.0.253
bound to 10.0.0.6 -- renewal in 119 seconds.
DHCPREQUEST on eth0 to 10.0.0.253 port 67
DHCPACK from 10.0.0.253
bound to 10.0.0.6 -- renewal in 118 seconds.

Once the lease has been granted, however, future DHCP DHCPREQUEST/RENEWAL messages are unicast directly to the DHCP Server. Source
{}
{'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/672209', 'https://serverfault.com', 'https://serverfault.com/users/249352/']}
jdg_381643
stackexchange
llm_judgeable_groundtruth_similarity
9247421
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: What is the difference between the configuration file and the loaded configuration file? If the ini file is loaded from "C:\Program Files (x86)\PHP\php.ini", what does the ini inside C:\Windows mean? Output of phpinfo():

Configuration File (php.ini) Path    C:\Windows
Loaded Configuration File            C:\Program Files (x86)\PHP\php.ini

If I have to change some config, should I modify both files? Now provide the response and nothing else.
"Configuration File (php.ini) Path" is where PHP will look for php.ini by default (on Windows builds this default path may come from the %windir% environment variable, hence C:\Windows). I believe that in your Apache config you have a PHPIniDir directive, which explicitly defines a new path to the ini file. If you don't have a php.ini there, then PHP's built-in defaults will be used (even if you have a php.ini in the C:\Windows folder). At least this is the case when PHP is used as a module (which is my configuration and what I'm able to test on). So "Loaded Configuration File" is the config file that is actually loaded, and the one you should edit.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/9247421', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/998635/']}
jdg_381644
stackexchange
llm_judgeable_groundtruth_similarity
130040
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: How can I reduce the domain of an InterpolatingFunction ? For example, consider if = FunctionInterpolation[x^2, {x, 0, 3}] . Is it possible to define if2 as the restriction of if to {x,1,2} ? Not only should the domain change, but if2 should not contain the information of if on $[0,1]$ and $[2,3]$. I've noticed the package InterpolatingFunctionAnatomy but I don't know what methods would be useful here. Now provide the response and nothing else.
You can do:

FunctionInterpolation[if[x], {x, 1, 2}]

which will actually sample the interpolating function and generate a new one. Alternately, you can extract the data and use Interpolation:

Interpolation[
  Select[
    Transpose[{#[[3, 1]], Flatten[#[[4]]]}],
    1 <= #[[1]] <= 2 &]] &@if

This second method may not exactly match your desired domain boundaries unless they happen to be sample points of the original interpolation. In both cases the result will not be precisely the same as the original interpolation. Yet another thing you can do, which seems a bit of a hack: looking at the FullForm of an InterpolatingFunction, you see the first argument is the domain, so if you do

if[[1]] = {{1, 2}}

you will get a warning if you go out of the new bounds (but get the same result as the original). Edit: this will use the derivatives from the first interpolation in the second:

data = Select[Transpose[{#[[3, 1]], Flatten[#[[4]]]}], 1 <= #[[1]] <= 2 &] &@if;
Interpolation[{{#[[1]]}, #[[2]], D[if[x], x] /. x -> #[[1]]} & /@ data]
{}
{'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/130040', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/18767/']}
jdg_381645
stackexchange
llm_judgeable_groundtruth_similarity
10664434
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is there any method in Java or any open source library for escaping (not quoting) a special character (meta-character), in order to use it as a regular expression? This would be very handy in dynamically building a regular expression, without having to manually escape each individual character. For example, consider a simple regex like \d+\.\d+ that matches numbers with a decimal point like 1.2 , as well as the following code: String digit = "d";String point = ".";String regex1 = "\\d+\\.\\d+";String regex2 = Pattern.quote(digit + "+" + point + digit + "+");Pattern numbers1 = Pattern.compile(regex1);Pattern numbers2 = Pattern.compile(regex2);System.out.println("Regex 1: " + regex1);if (numbers1.matcher("1.2").matches()) { System.out.println("\tMatch");} else { System.out.println("\tNo match");}System.out.println("Regex 2: " + regex2);if (numbers2.matcher("1.2").matches()) { System.out.println("\tMatch");} else { System.out.println("\tNo match");} Not surprisingly, the output produced by the above code is: Regex 1: \d+\.\d+ MatchRegex 2: \Qd+.d+\E No match That is, regex1 matches 1.2 but regex2 (which is "dynamically" built) does not (instead, it matches the literal string d+.d+ ). So, is there a method that would automatically escape each regex meta-character? If there were, let's say, a static escape() method in java.util.regex.Pattern , the output of Pattern.escape('.') would be the string "\." , but Pattern.escape(',') should just produce "," , since it is not a meta-character. Similarly, Pattern.escape('d') could produce "\d" , since 'd' is used to denote digits (although escaping may not make sense in this case, as 'd' could mean literal 'd' , which wouldn't be misunderstood by the regex interpeter to be something else, as would be the case with '.' ). Now provide the response and nothing else.
Is there any method in Java or any open source library for escaping (not quoting) a special character (meta-character), in order to use it as a regular expression?

If you are looking for a way to create constants that you can use in your regex patterns, then just prepending them with "\\" should work, but there is no nice Pattern.escape('.') function to help with this. So if you are trying to match "\\d" (the string \d instead of a decimal character), then you would do:

// this will match on \d as opposed to a decimal character
String matchBackslashD = "\\\\d";
// as opposed to
String matchDecimalDigit = "\\d";

The 4 slashes in the Java string turn into 2 slashes in the regex pattern, and 2 backslashes in a regex pattern match the backslash itself. Prepending any special character with a backslash turns it into a normal character instead of a special one:

matchPeriod = "\\.";
matchPlus = "\\+";
matchParens = "\\(\\)";
...

In your post you use the Pattern.quote(string) method. This method wraps your pattern between "\\Q" and "\\E" so you can match a string even if it happens to have a special regex character in it (+, ., \\d, etc.).
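For comparison, the per-character escaping behaviour the question asks for does exist in some other regex libraries. Python's `re.escape`, for instance, escapes only metacharacters and leaves ordinary characters alone, which is exactly the contract of the hypothetical `Pattern.escape` (a Python sketch, just to illustrate the contract, not a Java solution):

```python
import re

digit, point = "d", "."

# re.escape touches only metacharacters: "." becomes "\." while "d" stays "d"
escaped_point = re.escape(point)   # "\\."
escaped_digit = re.escape(digit)   # "d"

# Building the pattern dynamically: escape the literal dot,
# keep "\d" as a character class by prepending the backslash ourselves.
pattern = "\\" + digit + "+" + escaped_point + "\\" + digit + "+"  # \d+\.\d+
match = re.fullmatch(pattern, "1.2")

# Quoting the whole string (like Pattern.quote) instead matches it literally.
literal = re.fullmatch(re.escape("d+.d+"), "d+.d+")
```

Here `re.fullmatch(pattern, "1.2")` succeeds, while the quoted form only matches the literal string `d+.d+`.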
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/10664434', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/834316/']}
jdg_381646
stackexchange
llm_judgeable_groundtruth_similarity
21982987
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a data.frame and I need to calculate the mean per group (i.e. per Month , below). Name Month Rate1 Rate2Aira 1 12 23Aira 2 18 73Aira 3 19 45Ben 1 53 19Ben 2 22 87Ben 3 19 45Cat 1 22 87Cat 2 67 43Cat 3 45 32 My desired output is like below, where the values for Rate1 and Rate2 are the group means. Please disregard the value, I have made it up for the example. Name Rate1 Rate2Aira 23.21 12.2Ben 45.23 43.9Cat 33.22 32.2 Now provide the response and nothing else.
This type of operation is exactly what aggregate was designed for:

d <- read.table(text='Name Month Rate1 Rate2
Aira 1 12 23
Aira 2 18 73
Aira 3 19 45
Ben 1 53 19
Ben 2 22 87
Ben 3 19 45
Cat 1 22 87
Cat 2 67 43
Cat 3 45 32', header=TRUE)

aggregate(d[, 3:4], list(d$Name), mean)

  Group.1    Rate1    Rate2
1    Aira 16.33333 47.00000
2     Ben 31.33333 50.33333
3     Cat 44.66667 54.00000

Here we aggregate columns 3 and 4 of data.frame d, grouping by d$Name, and applying the mean function. Or, using a formula interface:

aggregate(. ~ Name, d[-2], mean)
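The operation aggregate performs here is a plain split-apply-combine: split the rows by Name, apply mean to each rate column within each group, and combine the results. A language-agnostic illustration of that same computation in pure Python (the data is the question's table; nothing R-specific is assumed):

```python
from collections import defaultdict
from statistics import mean

# (Name, Month, Rate1, Rate2) rows from the question
rows = [
    ("Aira", 1, 12, 23), ("Aira", 2, 18, 73), ("Aira", 3, 19, 45),
    ("Ben", 1, 53, 19), ("Ben", 2, 22, 87), ("Ben", 3, 19, 45),
    ("Cat", 1, 22, 87), ("Cat", 2, 67, 43), ("Cat", 3, 45, 32),
]

# split: group the rate columns by Name
groups = defaultdict(list)
for name, month, r1, r2 in rows:
    groups[name].append((r1, r2))

# apply + combine: per-group mean of each rate column
result = {name: (mean(v[0] for v in vals), mean(v[1] for v in vals))
          for name, vals in groups.items()}
```

This reproduces the aggregate output above, e.g. `result["Aira"]` is `(16.33…, 47.0)`.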
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/21982987', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/661951/']}
jdg_381647
stackexchange
llm_judgeable_groundtruth_similarity
707670
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'll start by stating, I'm pretty sure this is a unique mess of my own design, but I hope someone encountered this and might be able to help.

The Setup

My laptop runs Pop!_OS 22.04 (based on Ubuntu Jammy). I really like the xscreensaver packages, but the Debian/Ubuntu/Pop!_OS release repos contain an outdated version, and only sid (aka Unstable) contains the updated package * . No fret, that's why pinning exists, and so this is how I have it set up:

/etc/apt/preferences.d/unstable-200 file:

Package: *
Pin: release a=unstable
Pin-Priority: 200

/etc/apt/preferences.d/xscreensaver-2000 file:

Package: xscreensaver*
Pin: release a=unstable
Pin-Priority: 2000

/etc/apt/sources.list.d/debian.sid.list file:

deb [arch=amd64] http://http.us.debian.org/debian sid main contrib non-free

This actually works: at this point running sudo apt install xscreensaver installs the updated versions. However, there is a strange side-effect.

The problem

When I run sudo apt update followed by sudo apt upgrade, I get the following output:

Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Calculating upgrade...
DoneThe following packages will be DOWNGRADED: alsa-topology-conf appmenu-gtk-module-common aspell-en ca-certificates chrome-gnome-shell dictionaries-common dns-root-data emacsen-common folks-common fonts-arphic-ukai fonts-noto-cjk fonts-noto-cjk-extra fonts-noto-color-emoji fonts-urw-base35 friendly-recovery gir1.2-flatpak-1.0 gir1.2-gdkpixbuf-2.0 gir1.2-graphene-1.0 gir1.2-gtksource-4 gir1.2-polkit-1.0 gir1.2-secret-1 gir1.2-soup-2.4 gsfonts gsfonts-x11 hunspell-ar hunspell-de-at-frami hunspell-de-ch-frami hunspell-de-de-frami hunspell-en-au hunspell-en-ca hunspell-en-gb hunspell-en-us hunspell-en-za hunspell-es hunspell-fr hunspell-fr-classical hunspell-it hunspell-pt-br hunspell-pt-pt hunspell-ru hyphen-de hyphen-en-gb hyphen-es hyphen-fr hyphen-it hyphen-pt-br hyphen-pt-pt ieee-data javascript-common klibc-utils laptop-detect liba52-0.7.4 libappmenu-gtk2-parser0 libbytesize-common libffi8 libflatpak-dev libgl1 libgles2 libgutenprint-common libgweather-4-0 libio-stringy-perl libjs-jquery libldacbt-abr2 libmpcdec6 libmysofa1 libopengl0 libpolkit-gobject-1-0 libsndio7.0 libsoup-gnome2.4-1 libtermkey1 libvterm0 libwacom-common libxkbcommon0 mythes-ar mythes-de mythes-de-ch mythes-en-au mythes-en-us mythes-es mythes-fr mythes-it mythes-pt-pt mythes-ru neovim-runtime netbase pass policykit-1 poppler-data powermgmt-base printer-driver-all python3-certifi python3-fido2 python3-jinja2 python3-launchpadlib python3-lazr.uri python3-macaroonbakery python3-more-itertools python3-pkg-resources python3-pyatspi python3-rfc3339 python3-setuptools python3-tz python3-wheel python3-ykman sensible-utils sgml-base sgml-data sound-icons ssl-cert tpm-udev ucf update-inetd va-driver-all wamerican wbrazilian wbritish wfrench witalian wngerman wogerman wspanish wswiss xfonts-base xml-core yubikey-manager0 upgraded, 0 newly installed, 125 downgraded, 0 to remove and 0 not upgraded.Need to get 257 MB/283 MB of archives.After this operation, 0 B of additional disk space will be used.Do you 
want to continue? [Y/n]

This also throws off Pop!_OS Shop's update count, with these packages showing as pending Operating System Updates.

Troubleshooting

Some data I collected while attempting to troubleshoot this. Removing /etc/apt/sources.list.d/debian.sid.list and running sudo apt update resolves the issue, so I know it's just a miscalculation/flawed logic somewhere. Focusing on the first package in the list, alsa-topology-conf: although I know the error is completely superficial, at first I thought apt somehow tracks where (which repo) the package came from, so I removed, cleaned up, then reinstalled the package. It didn't make a difference.

sudo apt remove alsa-topology-conf
sudo apt clean
sudo apt update
sudo apt install alsa-topology-conf

Running apt policy alsa-topology-conf, the results are:

alsa-topology-conf:
  Installed: 1.2.5.1-2
  Candidate: 1.2.5.1-2
  Version table:
 *** 1.2.5.1-2 200
        200 http://http.us.debian.org/debian sid/main amd64 Packages
        100 /var/lib/dpkg/status
     1.2.5.1-2 501
        501 http://us.archive.ubuntu.com/ubuntu jammy/main amd64 Packages
        501 http://us.archive.ubuntu.com/ubuntu jammy/main i386 Packages

It seems that both sid and jammy have the exact same version, and for some reason apt matches the package to the 200-priority entry instead of the 501-priority entry. With /etc/apt/sources.list.d/debian.sid.list removed, the output looks like this:

alsa-topology-conf:
  Installed: 1.2.5.1-2
  Candidate: 1.2.5.1-2
  Version table:
 *** 1.2.5.1-2 501
        501 http://us.archive.ubuntu.com/ubuntu jammy/main amd64 Packages
        501 http://us.archive.ubuntu.com/ubuntu jammy/main i386 Packages
        100 /var/lib/dpkg/status

Related questions

The following are related questions with similar situations, but none of the answers there helped me understand or resolve this:

- apt pinning priority restricted
- Debian 10: Why some SSL packages will be downgraded?
- How to get rid of "Packages were downgraded and -y was used without --allow-downgrades" apt message

I've tried all of the answers in the above questions, but none seems to be relevant or to work.

My question

Does anyone have a suggestion on how to reconcile this so that the system will not constantly think that these packages need to be DOWNGRADED? Now provide the response and nothing else.
The basic answer is that you’re doing something that you shouldn’t, namely mixing repositories across releases (and distribution) . Pulling in Debian packages in an Ubuntu-based distribution is a bad idea. xscreensaver is available in later versions of Ubuntu , which would be less dangerous to use, but even that’s a bad idea. Given all the investigation you’ve done, and the detail you’ve provided, it’s worth explaining the behaviour you’re seeing here. All the packages that are offered for “downgrade” have the shared property of being available in the same version in Debian and Ubuntu; however, they are not the same packages, since all packages imported from Debian are rebuilt in Ubuntu. The first feature of apt which comes into play here is that pin-priorities only choose versions . For any package available in different versions in your repositories, the pin-priorities will distinguish between them. For any package available in the same version in your repositories, they won’t. The next feature then applies: when multiple repositories provide the same version, the first one listed wins . This combines with another feature of apt , which is that a package installed with a given hash will be replaced by a repository package with the same version if the hashes don’t match (there’s a Q&A about that somewhere here, but I can’t find it right now). The result of all this is that for all packages provided by Pop!_OS (Ubuntu under the hood), whose versions in Jammy exactly match the current version in Debian unstable, apt will consider replacing them with the Debian version. I’m not sure why it identifies them as downgrades. If you were to go ahead with this, you’d replace a number of Pop!_OS packages with their Debian “equivalents”; there’s a decent chance that that would actually work, but there’s also the possibility that subtle differences in the libraries used would cause problems. You’d end up with a wholly untested setup. 
To undo this, you should remove sid.list , update your repositories, and explicitly re-install any package you “downgraded”: sudo apt reinstall alsa-topology-conf
{}
{'log_upvote_score': 6, 'links': ['https://unix.stackexchange.com/questions/707670', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/138012/']}
jdg_381648
stackexchange
llm_judgeable_groundtruth_similarity
46727282
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to call an api continuously with a time interval of 10 minutes in background without interrupting user interface and other api calls within the same app. Like the api will update my location forever in backend database in every 10 minutes but this api will not interrupt the UI and the other api which will get called as I use the app. Now provide the response and nothing else.
NSimon, it's great that you're starting to use AAC. I wrote an issue in the AAC GitHub repo about this a while ago.

There are several ways of doing it. One solution would be using a WeakReference to a NavigationController which holds the Context of the Activity. This is a commonly used pattern for handling context-bound work inside a ViewModel. I'd advise against it, for several reasons. First: it usually means you have to keep a reference to your NavigationController, which fixes the context leak but doesn't solve the architecture problem at all.

The best way (in my opinion) is using LiveData, which is lifecycle-aware and can do all the wanted stuff. Example:

class YourVm : ViewModel() {
    val uiEventLiveData = SingleLiveData<Pair<YourModel, Int>>()

    fun onClick(item: YourModel) {
        uiEventLiveData.value = item to 3 // can be predefined values
    }
}

After that you can listen inside your view for changes:

class YourFragmentOrActivity {
    // assign your vm whatever
    override fun onActivityCreated(savedInstanceState: Bundle?) {
        var context = this
        yourVm.uiEventLiveData.observe(this, Observer {
            when (it?.second) {
                1 -> { context.startActivity( ... ) }
                2 -> { .. }
            }
        })
    }
}

Note that I've used a modified MutableLiveData, because a plain one will always re-emit the latest result to new observers, which leads to bad behaviour. For example, if you change activity and go back, it will end in a loop.

class SingleLiveData<T> : MutableLiveData<T>() {
    private val mPending = AtomicBoolean(false)

    @MainThread
    override fun observe(owner: LifecycleOwner, observer: Observer<T>) {
        if (hasActiveObservers()) {
            Log.w(TAG, "Multiple observers registered but only one will be notified of changes.")
        }
        // Observe the internal MutableLiveData
        super.observe(owner, Observer { t ->
            if (mPending.compareAndSet(true, false)) {
                observer.onChanged(t)
            }
        })
    }

    @MainThread
    override fun setValue(t: T?) {
        mPending.set(true)
        super.setValue(t)
    }

    /**
     * Used for cases where T is Void, to make calls cleaner.
     */
    @MainThread
    fun call() {
        value = null
    }

    companion object {
        private val TAG = "SingleLiveData"
    }
}

Why is this approach better than using WeakReferences, interfaces, or any other solution? Because this event pattern splits UI logic from business logic. It also allows multiple observers. It cares about the lifecycle. It doesn't leak anything.

You could also solve it by using RxJava instead of LiveData, with a PublishSubject (addTo requires RxKotlin). Take care not to leak the subscription: release it in onStop().

class YourVm : ViewModel() {
    var subject: PublishSubject<YourItem> = PublishSubject.create()
}

class YourFragmentOrActivityOrWhatever {
    var compositeDisposable = CompositeDisposable()

    onStart() {
        YourVm.subject
            .subscribe({
                Log.d("...", "Event emitted $it")
            }, {
                error("Error occurred $it")
            })
            .addTo(compositeDisposable)
    }

    onStop() {
        compositeDisposable.clear()
    }
}

Also take care that a ViewModel is bound to an Activity OR a Fragment. You can't share a ViewModel between multiple Activities, since that would break the lifecycle-awareness. If you need that, persist the data using a database like Room, or share the data using parcels.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/46727282', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4806063/']}
jdg_381649
stackexchange
llm_judgeable_groundtruth_similarity
33863
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: How to prove that the number $1!+2!+3!+...+n! \ \forall n \geq 4$ is never square? I was told to count permutations but I cannot figure out what we are permuting.... Thanks! Now provide the response and nothing else.
The first few cases are easily dealt with: $1!=1$ and $1!+2!+3!=9$ are squares, while $1!+2!=3$ is not. For $n \ge 4$, $1!+2!+3!+ \ldots + n!$ is congruent to 3 mod 5. But all squares are congruent to 0, 1, or 4 mod 5.
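The congruence in this answer is easy to machine-check. A quick sketch, assuming nothing beyond the statement above (for $n \ge 4$ every additional term $k!$ with $k \ge 5$ is divisible by 5, so the sum stays fixed mod 5):

```python
import math

def S(n):
    """1! + 2! + ... + n!"""
    return sum(math.factorial(k) for k in range(1, n + 1))

# S(4) = 33 ≡ 3 (mod 5), and each k! with k >= 5 is ≡ 0 (mod 5),
# so S(n) ≡ 3 (mod 5) for every n >= 4.
assert all(S(n) % 5 == 3 for n in range(4, 60))

# Squares are only ever 0, 1, or 4 mod 5, so S(n) is never a square for n >= 4.
assert {x * x % 5 for x in range(5)} == {0, 1, 4}
```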
{}
{'log_upvote_score': 6, 'links': ['https://math.stackexchange.com/questions/33863', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/9755/']}
jdg_381650
stackexchange
llm_judgeable_groundtruth_similarity
2971881
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Can an interface be declared as final in Java? Now provide the response and nothing else.
Interfaces are 100% abstract and the only way to create an instance of an interface is to instantiate a class that implements it. Allowing interfaces to be final would therefore be completely pointless.

EDIT

The question is not as outrageous as I first thought. A final interface, one that cannot be extended by other interfaces but can be implemented, ostensibly makes sense. I can think of one difference between a final class and a final interface: extending a class can compromise its integrity because the class contains some state, whereas extending an interface simply adds operations and cannot compromise the integrity of the implementation, because an interface is stateless on its own.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/2971881', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/297115/']}
jdg_381651
stackexchange
llm_judgeable_groundtruth_similarity
1455528
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to show $\lim\limits_{n \to \infty} (\log n! - (n+1/2)\log n + n)$ exists as part of a much bigger problem. I could really use some help. My attempt: I tried to use the fact that $1+ \frac {1}{2}+ \frac {1}{3}+ ... +\frac {1}{n} - \log n$ exists, but I got nowhere. Now provide the response and nothing else.
Hint. If you set$$u_n:=\log n! - (n+1/2)\log n + n,\qquad n=1,2,3,\cdots,$$ then you may prove that $$u_{n+1}-u_n=1-\left(\frac{1}{2}+n\right) \log\left(1+\frac{1}{n}\right)\leq0 \tag1$$ implying the sequence $\left\{u_n\right\}$ is decreasing. On the other hand, you have $$u_n\geq0. \tag2$$ Thus $\lim\limits_{n \to \infty}u_n=\lim\limits_{n \to \infty} (\log n! - (n+1/2)\log n + n)$ exists. Edit. Let's give some details. Proof of $(1)$. Setting $\displaystyle f(x):=\frac{2x}{x+2}-\log(1+x),\, x \in [0,1],$ we have $$ f(0)=0, \quad f'(x)=\frac{-x^2}{(x+2)^2(x+1)}\leq0 \quad \implies f(x)\leq0, \quad x \in [0,1], $$ giving, for $n\geq1$, $$ u_{n+1}-u_n=\left(\frac12+n\right)\underbrace{\color{#C00000}{\left(\frac1{\frac12+n} -\log\left(1+\frac{1}{n}\right)\right)}}_{\large \color{#C00000}{f\left(\frac1n \right)} \leq\:\color{#C00000}{0}} \leq \color{#C00000}{0} .$$ Proof of $(2)$. One may observe that, for $n\geq1$,$$\begin{align}&u_n=\log n! - (n+1/2)\log n + n\\&=\sum_{k=1}^n\log k-\int_{\frac12}^{n+\frac12}\log x\,\mathrm{d}x+(n+1/2)\log \left(1+\frac1{2n}\right)+\frac12\log 2\\&=\underbrace{-\sum_{k=1}^nk\int_0^{\frac1{2k}}\log\left(1-x^2\right)\,\mathrm{d}x}_{\large \color{blue}{\geq \:0}}+\underbrace{(n+1/2)\log \left(1+\frac1{2n}\right)+\frac12\log 2}_{\large \color{blue}{\geq \:0}}\color{blue}{\geq 0}.\end{align}$$
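Both facts, that the sequence decreases and stays nonnegative, can be checked numerically with math.lgamma (which gives log n! as lgamma(n+1)). As a side note, Stirling's formula identifies the limit as log sqrt(2*pi), though the proof above does not need this; the check below is illustrative only:

```python
import math

def u(n):
    # u_n = log n! - (n + 1/2) log n + n, using lgamma(n + 1) = log n!
    return math.lgamma(n + 1) - (n + 0.5) * math.log(n) + n

vals = [u(n) for n in range(1, 2001)]
decreasing = all(a > b for a, b in zip(vals, vals[1:]))
nonnegative = all(v >= 0 for v in vals)
# Stirling: the limit is log(sqrt(2*pi)) ~ 0.9189...
limit_gap = abs(vals[-1] - 0.5 * math.log(2 * math.pi))
```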
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1455528', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/174717/']}
jdg_381652
stackexchange
llm_judgeable_groundtruth_similarity
40685029
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have these two models that I would like to return the sum of. I get a database error about the subquery returning more than one row. What would be the best way to compare both without using a for statement?

class AuthorizationT(models.Model):
    ar_id = models.BigIntegerField(blank=True, null=True)
    status_flag = models.BigIntegerField(blank=True, null=True)

class BillT(models.Model):
    paid_id = models.BigIntegerField(blank=True, null=True)
    recvd = models.FloatField(blank=True, null=True)

Query I tried:

paidbill = BillT.objects.values_list('paid_id', flat=True)
AuthorizationT.objects.values().filter(ar_id=paidbill, status_flag=0).aggregate(Sum('recvd'))

In SQL I know it would be

select sum(recvd) from authorization_t a, bill_t b where a.ar_billid0 = b.paid_id and a.status_flag = 0

I'm looking for the equivalent in a queryset. Now provide the response and nothing else.
Yielding does not create a closure in Python; lambdas create a closure. The reason that you get all 9s in "test_without_closure" isn't that there's no closure. If there weren't, you wouldn't be able to access i at all. The problem is that all closures contain a reference¹ to the same i variable, which will be 9 at the end of the function. This situation isn't much different in test_with_yield. Why, then, do you get different results? Because yield suspends the run of the function, so it's possible to use the yielded lambdas before the end of the function is reached, i.e. before i is 9. To see what this means, consider the following two examples of using test_with_yield:

[f(0) for f in test_with_yield()]
# Result: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]

[f(0) for f in list(test_with_yield())]
# Result: [9, 9, 9, 9, 9, 9, 9, 9, 9, 9]

What's happening here is that the first example yields a lambda (while i is 0), calls it (i is still 0), then advances the function until another lambda is yielded (i is now 1), calls the lambda, and so on. The important thing is that each lambda is called before the control flow returns to test_with_yield (i.e. before the value of i changes). In the second example, we first create a list. So the first lambda is yielded (i is 0) and put into the list, the second lambda is created (i is now 1) and put into the list ... until the last lambda is yielded (i is now 9) and put into the list. And then we start calling the lambdas. Since i is now 9, all lambdas return 9.

¹ The important bit here is that closures hold references to variables, not copies of the values they held when the closure was created. This way, if you assign to the variable inside a lambda (or an inner function, which creates closures the same way that lambdas do), this will also change the variable outside of the lambda, and if you change the value outside, that change will be visible inside the lambda.
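To make the reference-vs-copy point concrete, the usual workaround is to snapshot the current value of i with a default argument, which is evaluated at definition time. A minimal sketch (an illustration, not code from the question):

```python
def make_lambdas():
    # Each lambda closes over the *same* variable i, so they all
    # see its final value once the loop has finished.
    return [lambda: i for i in range(3)]

def make_lambdas_fixed():
    # Default arguments are evaluated when the lambda is defined,
    # so each lambda gets its own snapshot of i.
    return [lambda i=i: i for i in range(3)]

late_bound = [f() for f in make_lambdas()]         # [2, 2, 2]
early_bound = [f() for f in make_lambdas_fixed()]  # [0, 1, 2]
```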
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/40685029', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1344969/']}
jdg_381653
stackexchange
llm_judgeable_groundtruth_similarity
410248
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm working on a PHP web application that depends on a few 3rd-party services. These services are well documented and provided by fairly large organisations. I feel paranoid when working with responses from these API, which leads me to write validation code that validates that the responses match the structure and data types specified in the documentation. This mainly comes from the fact that it's out of my control and if I blindly trust that the data will be correct and it's not (maybe someone changes the json structure by accident), it could lead to unexpected behaviour in my application. My question is, do you think this is overkill? How does everyone else handle this situation? Now provide the response and nothing else.
Absolutely. For starters, you never know that somebody hasn't hacked into your connection and the reply you receive doesn't come from the API at all. And some time in the last two weeks I think Facebook changed an API without notice, which caused lots of iOS apps to crash. If someone had verified the reply, the API call would have failed, but without crashing the app. (A very nice case I heard of, illustrating why validation is needed: a server provided information about goods a customer could buy. For dresses, they included the U.K. dress size as an integer, usually 36 to 52. Except for one dress, where the size was the string “40-42”. Without validation that could easily be a crash.)
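In Python terms (the question is about PHP, but the idea is language-independent), such validation can be a plain shape check before the data is used. The field names and expected types below are invented for illustration, echoing the dress-size anecdote:

```python
def validate_item(obj):
    """Return a list of problems with one API record; an empty list means it's usable.
    The expected shape (name: str, price: number >= 0, size: int) is made up."""
    if not isinstance(obj, dict):
        return ["response item is not an object"]
    problems = []
    if not isinstance(obj.get("name"), str):
        problems.append("'name' missing or not a string")
    price = obj.get("price")
    if isinstance(price, bool) or not isinstance(price, (int, float)) or price < 0:
        problems.append("'price' missing, not a number, or negative")
    size = obj.get("size")
    if isinstance(size, bool) or not isinstance(size, int):
        problems.append("'size' missing or not an integer")
    return problems

ok = validate_item({"name": "dress", "price": 49.9, "size": 40})
bad = validate_item({"name": "dress", "price": 49.9, "size": "40-42"})
```

The point is that the "40-42" record is rejected with a clear message instead of crashing deep inside the code that assumed an integer.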
{}
{'log_upvote_score': 7, 'links': ['https://softwareengineering.stackexchange.com/questions/410248', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/87243/']}
jdg_381654
stackexchange
llm_judgeable_groundtruth_similarity
7145373
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to call a PHP file using ajax, and in that PHP file I will place an order via the ajax call. But it throws an error when I include app/Mage.php from that file:

require_once '../../../../../../../../../../app/Mage.php';
$customer = Mage::getModel('customer/customer');

then it says Fatal error: Call to a member function getModelInstance() on a non-object in app\Mage.php on line 432 Can anyone please help me??? Now provide the response and nothing else.
Your proposed solution is not optimal. You have not initialized Magento so module XML is not loaded yet and the factory pattern does not work. Simply use either: Mage::init(); // 1.5+ or Mage::app(); // (pretty much anything) below 1.5 before using getModel.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/7145373', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/540144/']}
jdg_381655
stackexchange
llm_judgeable_groundtruth_similarity
35644994
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a WrapPanel whose buttons are programmatically created and added as its children. I want to show a vertical scrollbar when the WrapPanel is full of buttons (children), so that I can keep adding more buttons. If we need a scrollbar shown, do we have to bring in a ScrollViewer? Isn't there a way without a ScrollViewer? Because the WrapPanel is small, I want a scrollbar to be shown only when needed (i.e. when it is full of children). My code is simple, as below (the WrapPanel is inside a Grid and the Grid is inside a TabControl). Many thanks always for your excellence. Update: I struggled to find a solution on the internet for several days. I tried putting the WrapPanel inside a ScrollViewer. However, though I set the vertical scroll to Auto, the vertical scrollbar is always shown, even when the WrapPanel doesn't have any children. Furthermore, when I intentionally fill the WrapPanel with children (buttons), the ScrollViewer's vertical scrollbar doesn't actually let me scroll down. The buttons at the bottom line of the WrapPanel are shown cut off, and I can't scroll down to see beyond them (I intentionally placed buttons beyond the bottom line of the WrapPanel). Either way, I want the vertical scrollbar to be shown only when needed (when full of children). It seems like it should be easy, but it's difficult to make it work properly. Solution: was provided by Mr. Henk Holterman <DropShadowEffect/> </Button.Effect> </Button> <WrapPanel x:Name="WrapPanelGreen" HorizontalAlignment="Left" Height="180" VerticalAlignment="Top" Width="232" UseLayoutRounding="True" ScrollViewer.CanContentScroll="True" ScrollViewer.VerticalScrollBarVisibility="Auto"/> </Grid> </TabItem></TabControl> And below is my simple code, which makes buttons programmatically and adds them as children of the WrapPanel.
for (int k = 0; k < Overviews.Length; k++)
{
    Button btnoverviewcontent = new Button();
    ToolTip tt = new ToolTip();
    tt.Content = "Press this button if you want to modify or delete.";
    btnoverviewcontent.ToolTip = tt;
    btnoverviewcontent.Cursor = Cursors.Hand;
    SolidColorBrush mySolidColorBrush = new SolidColorBrush();
    mySolidColorBrush.Color = Color.FromArgb(255, 101, 173, 241);
    btnoverviewcontent.Background = mySolidColorBrush;
    btnoverviewcontent.Effect = new DropShadowEffect
    {
        Color = new Color { A = 255, R = 0, G = 0, B = 0 },
        Direction = 315,
        ShadowDepth = 5,
        Opacity = 1
    };
    btnoverviewcontent.Padding = new Thickness(3, 3, 3, 3);
    btnoverviewcontent.HorizontalContentAlignment = System.Windows.HorizontalAlignment.Stretch;
    TextBlock textBlock = new TextBlock()
    {
        Text = Overviews[k],
        TextAlignment = TextAlignment.Left,
        TextWrapping = TextWrapping.Wrap,
    };
    btnoverviewcontent.Content = textBlock;
    btnoverviewcontent.BorderThickness = new Thickness(0, 0, 0, 0);
    btnoverviewcontent.FontStretch = FontStretches.UltraExpanded;
    btnoverviewcontent.Margin = new Thickness(5, 5, 5, 5);
    WrapPanelGreen.Children.Add(btnoverviewcontent);
    btnoverviewcontent.Click += new RoutedEventHandler(OnOverviewClick);
}
The idea in WPF is that every component has only its own job and if you want certain behavior, you combine multiple components to create the view you are looking for. This means that in order to get a scroll bar for a panel, you will have to wrap it in a ScrollViewer component. That’s the purpose of the ScrollViewer and that’s the only (sane) solution to solve this. However, though I set the verticalscroll to auto, the verticalscrollbar is always shown even when the Wrappanel doesn't have any child […] Then you seem to be using the ScrollViewer incorrectly, or wrapping the wrong element. It should look like this: <ScrollViewer VerticalScrollBarVisibility="Auto"> <WrapPanel> <!-- Any number of components here --> </WrapPanel></ScrollViewer> If I place lots of example labels inside that, then I do get a scroll bar as soon as the window is not large enough to show them all. But if there is enough room, the scroll bar is not displayed. Note that the ScrollViewer itself needs to have the proper dimensions in the parent element, so make sure that it’s not larger than the visible area. It is also necessary for the WrapPanel (or whatever other element you wrap with the ScrollViewer ) to have auto widths and heights. Otherwise, with fixed dimensions, the dimensions of the panel will not change as you modify the panel’s content and as such the scrolling status will not change. 
See this complete example with a dynamic number of elements:

<Window x:Class="WpfExampleApplication.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        Height="300" Width="200">
    <ScrollViewer VerticalScrollBarVisibility="Auto">
        <WrapPanel Name="panel">
            <Button Click="Button_Click">Add child</Button>
        </WrapPanel>
    </ScrollViewer>
</Window>

Code-behind:

public partial class MainWindow : Window
{
    public MainWindow()
    {
        InitializeComponent();
    }

    private void Button_Click(object sender, RoutedEventArgs e)
    {
        Label element = new Label()
        {
            Content = "This is some example content"
        };
        panel.Children.Add(element);
    }
}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/35644994', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5541055/']}
jdg_381656
stackexchange
llm_judgeable_groundtruth_similarity
22366195
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am using the RODBC package to query a text column from a database. The database is built on Microsoft SQL Server 2008 R2. The data type of the column in SQL is nvarchar(max). However, when I run:

# Set up ODBC connection to CCWEB5 production server
# Note: default database is set to "CCSalary"
ccweb5.prod <- odbcConnect("ccweb5")

# Read in some job ad text
job.text <- sqlQuery(ccweb5.prod,"
  SELECT TOP 100
    ja.JobTitle,
    ja.JobText as 'JobText',
    LEN(ja.JobText) as 'JobTextLength'
  FROM JobStore.dbo.JobAd as ja (NOLOCK)")

Within SQL, I am expecting (for the top row):

JobTitle                     JobText              JobTextLength
IT Field Service Technician  <text goes here...>  2742

However, when I do: nchar(as.character(job.text[1,2])) it returns: 255. So my question is, what is causing this truncation and how do I avoid it? Thanks!! Now provide the response and nothing else.
OK, so it seems that I have found a work-around to this. After some more Google'ing, I found that: One thing to consider with the SQL Native Client ODBC driver is that VARCHAR(MAX) has does not have fixed size and the ODBC driver represents this by returning a max column size of 0. This can confuse your application if it doesn't check for 0 as a special case. See the bottom section of this article: http://msdn.microsoft.com/en-us/library/ms130896.aspx But in general I have not seen this happen with any of my .NET applications as it is handled properly in ADO.NET. Source: http://bytes.com/topic/sql-server/answers/808461-cannot-read-varchar-max So, in my case, the following did the trick: job.text <- sqlQuery(ccweb5.prod," SELECT DISTINCT TOP 100 ja.JobTitle, [JobText] = CAST(ja.JobText AS varchar(8000)), -- note the data-type re-cast [JobTextLength] = LEN(ja.JobText) FROM JobStore.dbo.JobAd as ja (NOLOCK)") Such that nchar(as.character(job.text[1,2])) now returns 2742 (as it should). I didn't see any similar questions on StackOverflow so I'll leave this up. Hope this helps somebody!
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/22366195', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/489426/']}
jdg_381657
stackexchange
llm_judgeable_groundtruth_similarity
28369097
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've been reading about an older exploit against GDI+ on Windows XP and Windows Server 2003 called the JPEG of death for a project I'm working on. The exploit is well explained in the following link: http://www.infosecwriters.com/text_resources/pdf/JPEG.pdf Basically, a JPEG file contains a section called COM containing a (possibly empty) comment field, and a two-byte value containing the size of COM. If there are no comments, the size is 2. The reader (GDI+) reads the size, subtracts two, and allocates a buffer of the appropriate size to copy the comments into the heap. The attack involves placing a value of 0 in the field. GDI+ subtracts 2, leading to a value of -2 (0xFFFE) which gets converted to the unsigned integer 0xFFFFFFFE by memcpy. Sample code:

unsigned int size;
size = len - 2;
char *comment = (char *)malloc(size + 1);
memcpy(comment, src, size);

Observe that malloc(0) on the third line should return a pointer to unallocated memory on the heap. How can writing 0xFFFFFFFE bytes (4GB!!!!) possibly not crash the program? Does this write beyond the heap area and into the space of other programs and the OS? What happens then? As I understand memcpy, it simply copies n characters from the source to the destination. In this case, the source should be on the stack, the destination on the heap, and n is 4GB. Now provide the response and nothing else.
This vulnerability was definitely a heap overflow. How can writing 0xFFFFFFFE bytes (4 GB!!!!) possibly not crash the program? It probably will, but on some occasions you get time to exploit before the crash happens (sometimes, you can get the program back to its normal execution and avoid the crash). When the memcpy() starts, the copy will overwrite either some other heap blocks or some parts of the heap management structure (e.g. free list, busy list, etc.). At some point the copy will encounter a non-allocated page and trigger an AV (Access Violation) on write. GDI+ will then try to allocate a new block in the heap (see ntdll!RtlAllocateHeap) ... but the heap structures are now all messed up. At that point, by carefully crafting your JPEG image you can overwrite the heap management structures with controlled data. When the system tries to allocate the new block, it will probably unlink a (free) block from the free list. Blocks are managed with (notably) a flink (forward link; the next block in the list) and a blink (backward link; the previous block in the list) pointer. If you control both the flink and the blink, you can get a write-what-where condition (WRITE4), where you control what you write and where you write it. At that point you can overwrite a function pointer (SEH [Structured Exception Handler] pointers were a target of choice at that time back in 2004) and gain code execution. See the blog post Heap Corruption: A Case Study. Note: although I wrote about exploitation using the free list, an attacker might choose another path using other heap metadata ("heap metadata" are the structures used by the system to manage the heap; flink and blink are part of the heap metadata), but the unlink exploitation is probably the "easiest" one. A Google search for "heap exploitation" will return numerous studies about this. Does this write beyond the heap area and into the space of other programs and the OS? Never.
Modern OSes are based on the concept of virtual address space, so each process has its own virtual address space that enables addressing up to 4 gigabytes of memory on a 32-bit system (in practice you only get half of it in user-land, the rest is for the kernel). In short, a process can't access the memory of another process (except if it asks the kernel for it through some service / API, but the kernel will check whether the caller has the right to do so). I decided to test this vulnerability this week-end, so we could get a good idea of what was going on rather than pure speculation. The vulnerability is now 10 years old, so I thought it was OK to write about it, although I haven't explained the exploitation part in this answer.

Planning

The most difficult task was to find a Windows XP with only SP1, as it was in 2004 :) Then, I downloaded a JPEG image composed of only a single pixel, as shown below (cut for brevity):

File 1x1_pixel.JPG

Address  Hex dump                                            ASCII
00000000 FF D8 FF E0|00 10 4A 46|49 46 00 01|01 01 00 60| ÿØÿà JFIF `
00000010 00 60 00 00|FF E1 00 16|45 78 69 66|00 00 49 49| ` ÿá Exif II
00000020 2A 00 08 00|00 00 00 00|00 00 00 00|FF DB 00 43| * ÿÛ C
[...]

A JPEG picture is composed of binary markers (which introduce segments). In the above image, FF D8 is the SOI (Start Of Image) marker, while FF E0, for example, is an application marker. The first parameter in a marker segment (except for some markers, like SOI) is a two-byte length parameter which encodes the number of bytes in the marker segment, including the length parameter and excluding the two-byte marker. I simply added a COM marker (0xFFFE) right after the SOI, since markers have no strict order.
File 1x1_pixel_comment_mod1.JPG

Address  Hex dump                                            ASCII
00000000 FF D8 FF FE|00 00 30 30|30 30 30 30|30 31 30 30| ÿØÿþ 0000000100
00000010 30 32 30 30|30 33 30 30|30 34 30 30|30 35 30 30| 0200030004000500
00000020 30 36 30 30|30 37 30 30|30 38 30 30|30 39 30 30| 0600070008000900
00000030 30 61 30 30|30 62 30 30|30 63 30 30|30 64 30 30| 0a000b000c000d00
[...]

The length of the COM segment is set to 00 00 to trigger the vulnerability. I also injected 0xFFFC bytes right after the COM marker with a recurring pattern, a 4-byte number in hex, which will come in handy when "exploiting" the vulnerability.

Debugging

Double-clicking the image will immediately trigger the bug in the Windows shell (aka "explorer.exe"), somewhere in gdiplus.dll, in a function named GpJpegDecoder::read_jpeg_marker(). This function is called for each marker in the picture; it simply: reads the marker segment size, allocates a buffer whose length is the segment size and copies the content of the segment into this newly allocated buffer. Here is the start of the function:

.text:70E199D5 mov ebx, [ebp+arg_0] ; ebx = *this (GpJpegDecoder instance)
.text:70E199D8 push esi
.text:70E199D9 mov esi, [ebx+18h]
.text:70E199DC mov eax, [esi] ; eax = pointer to segment size
.text:70E199DE push edi
.text:70E199DF mov edi, [esi+4] ; edi = bytes left to process in the image

The eax register points to the segment size and edi is the number of bytes left in the image.
The code then proceeds to read the segment size, starting by the most significant byte (length is a 16-bits value): .text:70E199F7 xor ecx, ecx ; segment_size = 0.text:70E199F9 mov ch, [eax] ; get most significant byte from size --> CH == 00.text:70E199FB dec edi ; bytes_to_process --.text:70E199FC inc eax ; pointer++.text:70E199FD test edi, edi.text:70E199FF mov [ebp+arg_0], ecx ; save segment_size And the least significant byte: .text:70E19A15 movzx cx, byte ptr [eax] ; get least significant byte from size --> CX == 0.text:70E19A19 add [ebp+arg_0], ecx ; save segment_size.text:70E19A1C mov ecx, [ebp+lpMem].text:70E19A1F inc eax ; pointer ++.text:70E19A20 mov [esi], eax.text:70E19A22 mov eax, [ebp+arg_0] ; eax = segment_size Once this is done, the segment size is used to allocate a buffer, following this calculation: alloc_size = segment_size + 2 This is done by the code below: .text:70E19A29 movzx esi, word ptr [ebp+arg_0] ; esi = segment size (cast from 16-bit to 32-bit).text:70E19A2D add eax, 2 .text:70E19A30 mov [ecx], ax .text:70E19A33 lea eax, [esi+2] ; alloc_size = segment_size + 2.text:70E19A36 push eax ; dwBytes.text:70E19A37 call _GpMalloc@4 ; GpMalloc(x) In our case, as the segment size is 0, the allocated size for the buffer is 2 bytes . 
The vulnerability is right after the allocation:

.text:70E19A37 call _GpMalloc@4 ; GpMalloc(x)
.text:70E19A3C test eax, eax
.text:70E19A3E mov [ebp+lpMem], eax ; save pointer to allocation
.text:70E19A41 jz loc_70E19AF1
.text:70E19A47 mov cx, [ebp+arg_4] ; low marker byte (0xFE)
.text:70E19A4B mov [eax], cx ; save in alloc (offset 0)
;[...]
.text:70E19A52 lea edx, [esi-2] ; edx = segment_size - 2 = 0 - 2 = 0xFFFFFFFE!!!
;[...]
.text:70E19A61 mov [ebp+arg_0], edx

The code simply subtracts the size of the length field (a 2-byte value) from the whole segment size (0 in our case) and ends up with an integer underflow: 0 - 2 = 0xFFFFFFFE. The code then checks if there are bytes left to parse in the image (which is true), and then jumps to the copy:

.text:70E19A69 mov ecx, [eax+4] ; ecx = bytes left to parse (0x133)
.text:70E19A6C cmp ecx, edx ; edx = 0xFFFFFFFE
.text:70E19A6E jg short loc_70E19AB4 ; take jump to copy
;[...]
.text:70E19AB4 mov eax, [ebx+18h]
.text:70E19AB7 mov esi, [eax] ; esi = source = points to segment content ("0000000100020003...")
.text:70E19AB9 mov edi, dword ptr [ebp+arg_4] ; edi = destination buffer
.text:70E19ABC mov ecx, edx ; ecx = copy size = segment content size = 0xFFFFFFFE
.text:70E19ABE mov eax, ecx
.text:70E19AC0 shr ecx, 2 ; size / 4
.text:70E19AC3 rep movsd ; copy segment content by 32-bit chunks

The above snippet shows that the copy size is 0xFFFFFFFE bytes, copied in 32-bit chunks. The source buffer is controlled (content of the picture) and the destination is a buffer on the heap.

Write condition

The copy will trigger an access violation (AV) exception when it reaches the end of a memory page (this could be from either the source pointer or the destination pointer). When the AV is triggered, the heap is already in a vulnerable state because the copy has already overwritten all the following heap blocks until a non-mapped page was encountered.
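The length arithmetic itself is easy to reproduce; here is a tiny Python model of the C-style unsigned wrap-around (illustrative only, mirroring the lea edx, [esi-2] step):

```python
MASK32 = 0xFFFFFFFF

def sub_u32(a, b):
    # C-style unsigned 32-bit subtraction: wraps around modulo 2**32.
    return (a - b) & MASK32

segment_size = 0x0000                 # attacker-controlled COM length field
copy_size = sub_u32(segment_size, 2)  # 0xFFFFFFFE, the huge memcpy size
dword_count = copy_size >> 2          # what "shr ecx, 2" leaves for rep movsd
```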
What makes this bug exploitable is that 3 SEHs (Structured Exception Handlers; roughly try / except at a low level) catch exceptions on this part of the code. More precisely, the 1st SEH unwinds the stack so that the code gets back to parsing another JPEG marker, thus completely skipping the marker that triggered the exception. Without an SEH the code would have just crashed the whole program. So the code skips the COM segment and parses another segment. So we get back to GpJpegDecoder::read_jpeg_marker() with a new segment, and when the code allocates a new buffer:

.text:70E19A33 lea eax, [esi+2] ; alloc_size = segment_size + 2
.text:70E19A36 push eax ; dwBytes
.text:70E19A37 call _GpMalloc@4 ; GpMalloc(x)

the system will unlink a block from the free list. It happens that the metadata structures were overwritten by the content of the image, so we control the unlink with controlled metadata. The code below is somewhere in the system heap manager (ntdll):

CPU Disasm
Address  Command                            Comments
77F52CBF MOV ECX,DWORD PTR DS:[EAX]         ; eax points to '0003' ; ecx = 0x33303030
77F52CC1 MOV DWORD PTR SS:[EBP-0B0],ECX     ; save ecx
77F52CC7 MOV EAX,DWORD PTR DS:[EAX+4]       ; [eax+4] points to '0004' ; eax = 0x34303030
77F52CCA MOV DWORD PTR SS:[EBP-0B4],EAX
77F52CD0 MOV DWORD PTR DS:[EAX],ECX         ; write 0x33303030 to 0x34303030!!!

Now we can write what we want, where we want...
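The unlink primitive can be modeled as a toy in Python, with memory as a dict from addresses to values; the pointer values mirror the '0003'/'0004' pattern bytes from the crafted image, read as little-endian dwords (an illustration, not real allocator code):

```python
memory = {}

def unlink(flink, blink):
    # Doubly linked free-list removal, as the allocator performs it:
    #   blink->flink = flink    and    flink->blink = blink
    # With attacker-controlled flink/blink this becomes a write-what-where.
    memory[blink] = flink        # *(blink + 0) = flink
    memory[flink + 4] = blink    # *(flink + 4) = blink

# ASCII "0003" -> 0x33303030 and "0004" -> 0x34303030 when read little-endian
unlink(flink=0x33303030, blink=0x34303030)
```

The first write is exactly the one visible in the disassembly: the value 0x33303030 lands at address 0x34303030.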
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/28369097', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3043261/']}
jdg_381658
stackexchange
llm_judgeable_groundtruth_similarity
1276753
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm fairly new to XPath, been fiddling around with it for a few hours now, so I'm not entirely sure if you can even do something like the following with it. Okay, here's the scenario: I want to find a link from a page. That link is only recognizable by its text value, i.e. the text between <a> tags (<a href="#">this link</a>). So far I've managed to get my hands on link elements with that text; the only problem is that there are a few of those lying around. These links are found in unordered lists, each of which is preceded by another link tag, which would serve as a really good "anchor" point to begin the search for the final element that I want to find (i.e. then I could just accept the first one that matches). To clarify things a bit, here's an example of what's going on:

<a href="#">first dropdown menu</a>
<ul>
  <li><a href="#">some link</a></li>
  <li><a href="#">link i want to find</a></li>
</ul>
<!-- *And I would actually want to find the thing from this list* -->

<a href="#">second dropdown menu</a>
<ul>
  <li><a href="#">some link</a></li>
  <li><a href="#">link i want to find</a></li>
</ul>

And I should probably specify that I only want to receive either one result or a set of results with the first element being the "correct" element, i.e. the element I want to find. EDIT: The question has been answered already, but there were some comments that I should specify this a bit more, so that people could actually understand the question ;) So the idea was to use an element to specify the location of another element that could have duplicate search results scattered all around the document. Essentially you would run into something like this if you wanted to find a given link from a group of dropdown menus that have elements with the same names or values. That's basically it.
I know that it's still a bit difficult to get the point, but unfortunately I'm having a hard time trying to explain it better. I'm sure that somebody else could do a better job and if that happens, I'm more than happy to include that description here. Now provide the response and nothing else.
I had to read through your question a couple of times but I think I understand it. What you are interested in is predicates. Predicates allow you to pick nodes based on conditions. For example, you could do: //a[text()='second dropdown menu']/following::ul[1]/li/a[text()='link i want to find'] This would select any anchor with the given text, find the next ul, then proceed through its children. Also, you can specify a positional index within a result set; the following XPath is a demonstration (but it won't solve your problem): //a[text()='first dropdown menu']/ul/li[last()]/a/text() Or you could use axes to navigate across siblings/ancestors/children: //a[ancestor::ul/preceding::a[1]/text() = 'second dropdown menu']/text() So I'm not sure I quite understood your question, but this should help you write your XPath. Basically, I'm assuming your XPath matches the anchor in multiple lists and you want to make sure you pick the right one. At some point in your XPath you need a predicate specifying a condition that will only be true for the list your desired node is in.
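For a runnable illustration of the predicate idea: Python's stdlib xml.etree.ElementTree only implements a small XPath subset (no following:: axis, and the [.='text'] predicate, available since Python 3.7, instead of [text()='...']), so the "next ul after the anchor" step is emulated by scanning the parent's children in document order:

```python
import xml.etree.ElementTree as ET

doc = ET.fromstring("""
<root>
  <a href="#">first dropdown menu</a>
  <ul><li><a href="#">some link</a></li>
      <li><a href="#">link i want to find</a></li></ul>
  <a href="#">second dropdown menu</a>
  <ul><li><a href="#">some link</a></li>
      <li><a href="#">link i want to find</a></li></ul>
</root>
""")

children = list(doc)
# Emulate following::ul[1] by hand: locate the anchor, then take
# the element that comes right after it in document order.
anchor_at = next(i for i, el in enumerate(children)
                 if el.tag == "a" and el.text == "second dropdown menu")
menu = children[anchor_at + 1]
# The [.='text'] predicate plays the role of [text()='...'] here.
link = menu.find("li/a[.='link i want to find']")
```

With a full XPath 1.0 engine (e.g. in a browser or most XML libraries) the one-liner from the answer above does all of this in a single expression.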
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/1276753', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7382/']}
jdg_381659
stackexchange
llm_judgeable_groundtruth_similarity
2702058
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How would I be able to set the position of a UIBarButtonItem? Like, I would like to set it to be on either the very right of a UIToolbar or the very left depending on a state. Thank you. Now provide the response and nothing else.
You do not directly set the position of a UIBarButtonItem in a UIToolbar. Instead you define the items' order and put a flexible space on the left or on the right. What you can do is: Create the UIBarButtonItem you want to place (button 1). Create a UIBarButtonItem of type UIBarButtonSystemItemFlexibleSpace (button 2). If you want to put the button on the left, create an array with (button 1) and (button 2) and pass it to the UIToolbar by using the setItems:animated: method. If you want to put the button on the right, create an array with (button 2) and (button 1) and pass it to the UIToolbar by using the setItems:animated: method.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2702058', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/137100/']}
jdg_381660
stackexchange
llm_judgeable_groundtruth_similarity
38346600
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm facing an issue with converting controllers to components in preparation for Angular 2, but the migration is not going well. I have ui-router to route between states and use resolve in a few controllers. The version with the controller works, but the version with the component does not work at all. I followed a lot of guides and the code seems right, but it's not working for me. I have the following module with a controller:

(function () {
    'use strict';

    angular
        .module('app.sample', [])
        .config(config);

    /** @ngInject */
    function config($stateProvider) {
        $stateProvider
            .state('app.sample', {
                url : '/sample',
                views : {
                    'content@app': {
                        templateUrl: 'app/main/sample/sample.html',
                        controller : 'SampleController as vm'
                    }
                },
                resolve: {
                    SampleData: function (myService) {
                        return myService.getSample(); // I return a promise here
                    }
                }
            });
    }
})();

Controller:

(function (){
    'use strict';

    angular
        .module('app.sample')
        .controller('SampleController', SampleController);

    /** @ngInject */
    function SampleController(SampleData) {
        var vm = this;
        vm.helloText = SampleData.data.helloText;
    }
})();

The above code works well, BUT after making it a component it became like this:

(function () {
    'use strict';

    angular
        .module('app.sample', [])
        .config(config);

    /** @ngInject */
    function config($stateProvider) {
        // State
        $stateProvider
            .state('app.sample', {
                url: '/sample',
                views: {
                    'content@app': {
                        template: '<sample></sample>'
                    }
                },
                resolve: {
                    SampleData: function (myService) {
                        return myService.getSample(); // I return a promise here
                    }
                }
            });
    }
})();

Component:

(function () {
    'use strict';

    angular
        .module('app.sample')
        .component('sample', {
            templateUrl: 'app/main/sample/sample.html',
            bindings: {
            },
            controller: Sample
        });

    /** @ngInject */
    function Sample(SampleData) {
        var $ctrl = this;
        $ctrl.helloText = SampleData.data.helloText;
    }
})();

But now it's not
working and gives me the following error: Error: [$injector:unpr] Unknown provider: SampleDataProvider <- SampleDatahttp://errors.angularjs.org/1.5.7/$injector/unpr?p0=SampleDataProvider%20%3C-%20SampleData at angular.js:68 at angular.js:4502 at Object.getService [as get] (angular.js:4655) at angular.js:4507 at getService (angular.js:4655) at injectionArgs (angular.js:4679) at Object.invoke (angular.js:4701) at $controllerInit (angular.js:10234) at nodeLinkFn (angular.js:9147) at angular.js:9553 My dependencies inside bower.json : "dependencies": { "angular": "1.5.7", "angular-animate": "1.5.7", "angular-aria": "1.5.7", "angular-cookies": "1.5.7", "angular-material": "1.1.0-rc.5", "angular-messages": "1.5.7", "angular-resource": "1.5.7", "angular-sanitize": "1.5.7", "angular-ui-router": "1.0.0-beta.1", "jquery": "2.2.4", "mobile-detect": "1.3.2", "moment": "2.13.0" } Any idea what the problem, what I'm missing? Now provide the response and nothing else.
Finally solved it, I misunderstood that how the components are working. First I change SampleData to sampleData , Pascal Case but with first letter small. Then inside the module i changed the template to template: '<sample sample-data="$resolve.sampleData"></sample>' and resolve to : resolve: { sampleData: function (msApi) { return msApi.resolve('sample@get'); }} And for component I changed the binding as well: bindings: { sampleData: '='}, And inside the controller of component I removed SampleData from signature and called it like this $ctrl.helloText = $ctrl.sampleData.data.helloText; . So the final code now is like : For Module : (function () { 'use strict'; angular .module('app.sample', []) .config(config); /** @ngInject */ function config($stateProvider) { // State $stateProvider .state('app.sample', { url: '/sample', views: { 'content@app': { template: '<sample sample-data="$resolve.sampleData"></sample>' } }, resolve: { sampleData: function (myService) { return myService.getSample(); // I return a promise here } } }); }})(); And component like this: (function () { 'use strict'; angular .module('app.sample') .component('sample', { templateUrl: 'app/main/sample/sample.html', bindings: { sampleData: '=' }, controller: Sample }); /** @ngInject */ function Sample() { var $ctrl = this; $ctrl.helloText = $ctrl.sampleData.data.helloText; }})(); And finally worked. Edit: P.S.: Outside the question and answer scope, If you use component without state too, you need to get the data inside controller instead of resolve, so you can call components wherever you want.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/38346600', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/765965/']}
jdg_381661
stackexchange
llm_judgeable_groundtruth_similarity
17320595
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I was wondering if I can change the task scheduler that maps tasks to the real OS threads in .NET using C#, or if I need to recompile, say, the Mono runtime to do this. Thanks. Now provide the response and nothing else.
System.Threading.Tasks If you refer to System.Threading.Tasks then what you need is to subclass TaskScheduler and then you can use an object of your class to initialize a TaskFactory . There is an example in MSDN. I have also found an example in the blog psyCodeDeveloper . ThreadPool Aside from that you could use SynchronizationContext to handle the way the tasks posted to the ThreadPool (with ThreadPool.QueueUserWorkItem for example) get handled. For that you may be interested in the series Understanding SynchronizationContext at CodeProject ( part 1 , part 2 and part 3 ). Reactive Extensions As for the custom schedulers in Reactive Extensions, you can also use the SynchronizationContext mentioned above, for more information check the tutorial at introtorx.com in particular Part 4: Concurrency . Others Of course you can roll your own thread pool , although doing that is not advised. Aside from that you could handle your threads manually - the old way. Other approaches to handle tasks include scheduling with timers and having dedicated threads to do the work. As part of the Theraot Libraries you will find the class Work which is based on a lock free queue and can be configured to have any number of dedicated threads, also threads waiting on tasks contribute their time to execute tasks, any extra work is delegated to the ThreadPool. This is part of an ongoing effort to backport System.Threading.Tasks to .NET 2.0. In Theraot Libraries the Work class has been gone for a while now, a partial back port of System.Threading.Tasks for .NET 2.0 is available with support for custom TaskScheduler. Full disclousre: As the unimaginative name suggest, I'm the author of the Threaot libraries. Sorry by the missing documentation, I willing to help in any aspect of using the libraries. Please report any bugs, I have currently (2013-06-26) no known bugs in the master branch.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/17320595', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/50305/']}
jdg_381662
stackexchange
llm_judgeable_groundtruth_similarity
20420845
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have the following model: public class AModel { [ServerFileVerification( 1, FileType.TEST )] public HttpPostedFileBase[] Files { get; set; } } The attribute: public class ServerFileVerificationAttribute : RequiredAttribute { public ServerFileVerificationAttribute( ushort maximumNumberOfFiles, FileType fileType ) { _maxFiles = maximumNumberOfFiles; _fileType = fileType; } public override bool IsValid( object value ) { HttpPostedFileBase[] files = value as HttpPostedFileBase[]; string[] extensions = fileType == FileType.TEST ? {".zip"} : { ".txt" }; foreach ( HttpPostedFileBase file in files ) { if(file == null ) continue; if ( !extensions.Contains( Path.GetExtension( file.FileName ) ) ) { base.ErrorMessage = "The file " + file.FileName + " is invalid"; return false; } } }} And the view: @using ( Html.BeginForm( "Test", "Test", FormMethod.Post, new { enctype = "multipart/form-data" } ) ) { for ( int i = 0; i < 5; i++ ) { @Html.TextBoxFor( item => item.Files[i], new { type = "file" } ); @Html.ValidationMessageFor( item => item.Files[i] ) <br /> } <input type="submit" />} Controller: [HttpPost] public ActionResult Test( AModel model ) { if ( ModelState.IsValid ) { return Redirect( "http://www.google.ro" ); } return View( "Index", model ); } If I upload anything else than ZIP then should display an error, but it doesn't... Where is my mistake ? I put ValidationMessageFor but it doesn't work ... Why ? Now provide the response and nothing else.
Here is my small working sample, which uploads multiple files and uploads in a folder called as 'junk' Client Side.... <html> <head> <title>Upload Example</title> <script src="~/Scripts/jquery-2.1.0.intellisense.js"></script> <script src="~/Scripts/jquery-2.1.0.js"></script> <script src="~/Scripts/jquery-2.1.0.min.js"></script> <script> $(document).ready(function () { $("#Upload").click(function () { var formData = new FormData(); var totalFiles = document.getElementById("FileUpload").files.length; for (var i = 0; i < totalFiles; i++) { var file = document.getElementById("FileUpload").files[i]; formData.append("FileUpload", file); } $.ajax({ type: "POST", url: '/Home/Upload', data: formData, dataType: 'json', contentType: false, processData: false, success: function (response) { alert('succes!!'); }, error: function (error) { alert("errror"); } }); }); });</script></head><body> <input type="file" id="FileUpload" multiple /> <input type="button" id="Upload" value="Upload" /></body></html> Server Side.... public class HomeController : Controller{ [HttpPost] public void Upload( ) { for( int i = 0 ; i < Request.Files.Count ; i++ ) { var file = Request.Files[i]; var fileName = Path.GetFileName( file.FileName ); var path = Path.Combine( Server.MapPath( "~/Junk/" ) , fileName ); file.SaveAs( path ); } }}
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/20420845', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/998696/']}
jdg_381663
stackexchange
llm_judgeable_groundtruth_similarity
1926479
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am reading the lecture notes. On page 21, it is said that when $a_{ij}=-1$, we have\begin{align}ad_c(x_i)^{1-a_{ij}}(x_j)=x_i^2x_j - (q+q^{-1})x_ix_jx_i+x_jx_i^2. \quad (1)\end{align}Here $ad_c(x_i)(x_j)=x_ix_j - q^{a_{ij}}x_jx_i$. I am trying to verify $(1)$. We have\begin{align}& ad_c(x_i)^{1-a_{ij}}(x_j)\\& =ad_c(x_i)^{2}(x_j) \\& =ad_c(x_i)(x_ix_j - q^{-1}x_jx_i) \\& = x_i^2 x_j -q^{-1}x_ix_jx_i -q^{-1}( x_ix_j - q^{-1}x_jx_i )x_i \\& = x_i^2 x_j -2q^{-1}x_ix_jx_i + q^{-2}x_jx_i^2.\end{align}But I didn't get $x_i^2x_j - (q+q^{-1})x_ix_jx_i+x_jx_i^2$. I don't know where I made a mistake. Thank you very much. Now provide the response and nothing else.
This is related to how fast the terms decrease. A geometric series (your $1/2^x$) is such that every term is a constant fraction of the previous, so that dividing by this constant is the same as dropping the first term. $$\frac12\left(1+\frac12+\frac14+\frac18\cdots\right)=\frac12+\frac14+\frac18\cdots$$So you can write $$\frac12S=S-1$$ and deduce $S=2$. The same reasoning applies to all geometric series $$\sum_{k=0}^\infty r^k$$ provided that $r<1$ . Indeed, if $r=1$ or $r>1$, the sum clearly grows forever. (This simplified discussion ignores the case $r<0$.) This leads to a simple convergence criterion: if the ratio of successive terms is a constant less than $1$, the series converges. More generally, if this ratio is variable but tends to a limit smaller than $1$, the series converges. Conversely, if the ratio tends to a limit larger than $1$, the series diverges. But if the ratio tends to $1$, we don't know, the criterion is insufficient. The case of the harmonic series ($1/n$) or the generalized harmonic series ($1/n^p$) precisely falls in this category, as $$\lim_{n\to\infty}\left(\frac{n}{n+1}\right)^p=1.$$ To deal with it, a trick is to sum the terms in groups of increasing size (by doubling), so that the sums exceed a constant. More precisely, $$\begin{gather} 1,\\ \frac12,\\ \frac13+\frac14 > \frac14+\frac14 = \frac12,\\ \frac15+\frac16+\frac17+\frac18 > \frac18+\frac18+\frac18+\frac18 = \frac12,\\ \cdots\end{gather}$$ Though the groups get longer and longer, you can continue forever and the sum grows to infinity. 
If you repeat the reasoning with exponent $p$, $$\begin{gather} 1,\\ \frac1{2^p},\\ \frac1{3^p}+\frac1{4^p} > \frac1{4^p}+\frac1{4^p} = \frac2{4^p}=\frac1{2^{2p-1}},\\ \frac1{5^p}+\frac1{6^p}+\frac1{7^p}+\frac1{8^p} > \frac1{8^p}+\frac1{8^p}+\frac1{8^p}+\frac1{8^p} = \frac4{8^p} = \frac1{2^{3p-2}},\\ \cdots\end{gather}$$ In this new series, the ratio of successive terms tends to $2^{p-1}$ and by the first criterion, you can conclude convergence for $p>1$ and divergence for $p<1$. (A complete discussion must involve a similar upper bound, omitted here.) To summarize, by decreasing order of decrease rate $$\sum r^n, r<1\text{ converges}$$$$\sum \frac1{n^p}, p>1\text{ converges}$$$$\sum \frac1{n^p}, p=1\text{ diverges}$$$$\sum \frac1{n^p}, p<1\text{ diverges}$$$$\sum r^n, r=1\text{ diverges}$$$$\sum r^n, r>1\text{ diverges}$$ For other series, you can compare to these decrease rates. For example, with the general term $1/n!$, the limit of the ratio is $\lim_{n\to\infty}n!/(n+1)!=0$ and the series converges, faster than any geometric series. Or $1/\sqrt[3]{n^2+1}$ makes a diverging series because the general term tends to $1/n^{2/3}$. The curves below shows the trend of the terms of the sequences on a logarithmic scale . The green one corresponds to the harmonic series, which is a border between convergent and divergent series.
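The doubling/grouping argument above is easy to check numerically; here is a small sketch (the helper name is my own) comparing partial sums of the geometric series, the harmonic series, and the $p=2$ series:

```python
import math

def partial_sum(term, n):
    """Sum term(k) for k = 1..n."""
    return sum(term(k) for k in range(1, n + 1))

# Geometric series sum_{k>=0} (1/2)^k converges to 2.
geometric = 1 + partial_sum(lambda k: 0.5 ** k, 60)
assert abs(geometric - 2.0) < 1e-12

# Harmonic series (p = 1) diverges slowly: doubling n adds roughly ln 2,
# matching the "each group of terms exceeds 1/2" lower bound above.
h_1000 = partial_sum(lambda k: 1.0 / k, 1000)
h_2000 = partial_sum(lambda k: 1.0 / k, 2000)
assert abs((h_2000 - h_1000) - math.log(2)) < 1e-3

# p = 2 converges (to pi^2/6 ~ 1.6449), so the tail beyond n = 1000 is tiny.
p2_1000 = partial_sum(lambda k: 1.0 / k ** 2, 1000)
assert abs(p2_1000 - math.pi ** 2 / 6) < 1e-2
```

Doubling $n$ adds almost exactly $\ln 2$ to the harmonic partial sum, which is exactly the "each group of terms exceeds a constant" behaviour the proof exploits.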
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/1926479', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/9791/']}
jdg_381664
stackexchange
llm_judgeable_groundtruth_similarity
514407
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: How can I prove that if $x$ and $y$ are positive then $$\lfloor x\rfloor\lfloor y\rfloor\le\lfloor xy\rfloor$$ Now provide the response and nothing else.
First note that $f:\mathbb R\rightarrow \mathbb Z$ given by $f(x)=\lfloor x\rfloor$ is an increasing function that is the identity on the integers. Then note that for positive $x$ we have $0\leq f(x)\leq x$. With this we get$$f(x)f(y)\leq xy$$and applying the increasing function $f$ on both sides above noting that the left hand side is an integer we then get:$$f(x)f(y)=f(f(x)f(y))\leq f(xy)$$which proves the claim.
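Both ingredients of the proof — that $0\le f(x)\le x$ for positive $x$, and the final inequality itself — can be spot-checked numerically with a quick sketch:

```python
import math
import random

random.seed(0)
for _ in range(10_000):
    x = random.uniform(0, 100)
    y = random.uniform(0, 100)
    fx, fy = math.floor(x), math.floor(y)
    # 0 <= floor(t) <= t for positive t
    assert 0 <= fx <= x and 0 <= fy <= y
    # floor is increasing and fixes integers, so
    # floor(x)*floor(y) = floor(floor(x)*floor(y)) <= floor(x*y)
    assert fx * fy <= math.floor(x * y)
```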
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/514407', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/-1/']}
jdg_381665
stackexchange
llm_judgeable_groundtruth_similarity
27322110
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: The final product is an Excel CSV spreadsheet which has more than 250 columns.I was wondering if there was a way to determine the column width in Excel from R? I am using write.csv2, which produces column width in excel all equal to 8,43. write.csv2(df, na = "", file= "Final.csv") If possible I am looking for a trick to vary all of them at once or only specific ones.Is running VBA from R my only option? Thank you for your help! Now provide the response and nothing else.
Please check the package xlsx . I am using it for generating Excel files and it's pretty good. There is a method setColumnWidth which can help you. Check here for a more detailed example of the xlsx package's functionality. So here is a working example using package xlsx . df <- data.frame(matrix(rnorm(100),nc=10))library(xlsx)# must save as an xls or xlsx file...write.xlsx(df,"Final.xlsx", row.names=FALSE)# load it backwb <- loadWorkbook("Final.xlsx")sheets <- getSheets(wb)# set widths to 20setColumnWidth(sheets[[1]], colIndex=1:ncol(df), colWidth=20)saveWorkbook(wb,"Final.xlsx")# autosize column widthsautoSizeColumn(sheets[[1]], colIndex=1:ncol(df))saveWorkbook(wb,"Final.xlsx")
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/27322110', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4208590/']}
jdg_381666
stackexchange
llm_judgeable_groundtruth_similarity
5915
Below is a question asked on the forum chemistry.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: If we look at solubility of salts in water, we are told that they disassociate, because the positive $\ce{Na}$ ion is attracted to the partially negative oxygen in water and the negative $\ce{Cl}$ is attracted to the partially positive hydrogen. But why does this happen? I mean, the couloumb force should be much stronger between the $+1$ and $-1$ charges on the $\ce{Na}$ and $\ce{Cl}$ than the partial charges on the polar water molecule. Why does $\ce{NaCl}$, then, disassociate? It would make much more sense for the two ions to stick together. Also, when you cut $\ce{NaCl}$, shouldn't it stick to its pieces? Look at salt crystals: they don't have any force of attraction between them. But, if you have positive sodium ions and negative chloride ions, then they should stick together. Now provide the response and nothing else.
As it happens, the enthalpy of solution of $\ce{NaCl}$ in water (that is, the energy change associated with the dissolution of sodium chloride crystals in water) at standard conditions is very slightly positive, i.e., it is an endothermic process. At a constant temperature and pressure, these kinds of thermodynamic processes are dictated by the change in Gibbs free energy, described by the equation $$\Delta G = \Delta H - T\Delta S$$ Where $\Delta G < 0$ is a necessary criterion for a spontaneous process. Given that $\Delta H$ is positive, $\Delta S$ must be positive as well, otherwise the process wouldn't occur spontaneously (which is to say, not without input of work from the surroundings, or coupling to some other strongly favorable reaction, neither of which is the case for the dissolution of salt). In other words, this is a process that is driven by the increase in entropy, which is entirely to be expected when moving from a highly ordered state (i.e., a crystalline solid) to a less ordered liquid solution. Indeed, in ideal solutions, where intermolecular forces of attraction are taken to be equal between all components, enthalpy change is necessarily zero, and entropy is always positive, so that the process of mixing in ideal solutions is always spontaneous. Of course, in real solutions, this is not the case. Your intuition that the Coulombic forces between ions should be stronger is correct in this particular instance, as indicated by the positive enthalpy (meaning, the breaking of ionic bonds in the crystal lattice, as well as intermolecular bonds between solvent molecules, requires more energy than is released in the forming of ion-dipole bonds). The input of energy required for this process comes in the form of heat, drawn from the solvent. However, there are numerous examples of salts for which the enthalpy of solution is negative under a wide range of conditions. 
The statement "[l]ook at salt crystals: they don't have any force of attraction between them" is inaccurate, though. In fact, there is strong ionic bonding. The very fact that the crystals remain solid except at very high temperatures is sufficient evidence of this, and the strength of the bonding in the crystal lattice can be quantified by calculating the lattice energy. It's a mistake to consider any substance in isolation. When you talk about "cutting" $\ce{NaCl}$, what you're talking about is applying physical force to the crystals, exposing them to the complex mixtures of gases and moisture in the air, and bringing them into contact with, say, the metal in the knife. If left in relative isolation, the salt remains undisturbed, packed in a crystal lattice. A gas, however, which has weak forces of attraction between individual particles, will expand to fill a container without any work being done to it (at least as far as atmospheric pressure allows). A liquid will experience capillary forces, either climbing the walls of a narrow container, resulting in a concavity at its surface, or pooling away from the walls of the container, resulting in a convexity at its surface (which of the two happens depends mainly on the comparative strengths of the forces of attraction between the molecules of the liquid and those of the substance comprising the container).
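To make the free-energy argument concrete, one can plug rough textbook values for NaCl dissolution into $\Delta G = \Delta H - T\Delta S$; the numbers below are my own approximate assumptions, not values from the answer itself, but they show the entropy term winning:

```python
# Approximate literature values for dissolving NaCl in water at 25 C
# (assumed here for illustration; consult a data table for precise figures).
dH = 3.87e3   # J/mol, slightly endothermic, as stated above
dS = 43.4     # J/(mol*K), entropy increases: crystal -> solution
T = 298.15    # K

dG = dH - T * dS   # J/mol

assert dH > 0      # endothermic...
assert dG < 0      # ...yet spontaneous, because T*dS outweighs dH
# dG comes out to roughly -9e3 J/mol with these assumed inputs.
```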
{}
{'log_upvote_score': 5, 'links': ['https://chemistry.stackexchange.com/questions/5915', 'https://chemistry.stackexchange.com', 'https://chemistry.stackexchange.com/users/1888/']}
jdg_381667
stackexchange
llm_judgeable_groundtruth_similarity
31244400
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How do I pass a dom-repeated item inside a function in on-click? My code doesn't work: <dom-module id="my-element"> <template> <template is="dom-repeat" items="{{stuff}}> <paper-button on-click="_myFunction(item.name)">{{item.name}}</paper-button> </template> </template></dom-module><script> Polymer({ is: 'my-element', ready: function() { this.stuff = [ { id: 0, name: 'Red' }, { id: 1, name: 'Blue' }, { id: 2, name: 'Yellow' }, ]; }, _myFunction: function(color) { console.log('You pressed button ' + color); }, })</script> Or is there a better approach in achieving something like this? Thanks! Now provide the response and nothing else.
You can't pass arguments directly to on-click methods, but you can retrieve the item clicked inside a dom-repeat template via the event : <script> Polymer({ is: 'my-element', ready: function() { this.stuff = [ { id: 0, name: 'Red' }, { id: 1, name: 'Blue' }, { id: 2, name: 'Yellow' }, ]; }, _myFunction: function(e) { console.log('You pressed button ' + e.model.item.name); },});</script> See the relevant documentation here .
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/31244400', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5078182/']}
jdg_381668
stackexchange
llm_judgeable_groundtruth_similarity
63842135
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: We have a testing framework (gauge) setup and we use slf4j with log4j for logging. Recently we implemented report portal for reporting purposes and it works as expected. It shows cases executed, passed, failed and failures cases has log shown inside them. But I can't seem to configure logs to be shown at the launch level. I followed this: "https://github.com/reportportal/logger-java-log4j" where I configured log4j2.xml file as shown but logs are not getting reflected in report portal. I'm able to print logs in console by changing the appender in log4j2.xml file though. So I'm guessing I'm doing something wrong with the setup. This is the log4j2.xml file <?xml version="1.0" encoding="UTF-8"?><configuration packages="com.epam.ta.reportportal.log4j.appender"> <properties> <property name="pattern">[%d{HH:mm:ss}] %-5p (%F:%L) - %m%n</property> </properties> <appenders> <ReportPortalLog4j2Appender name="ReportPortalAppender"> <PatternLayout pattern="%d{HH:mm:ss.SSS} [%t] %-5level %logger{36} - %msg%n" /> </ReportPortalLog4j2Appender> </appenders> <loggers> <root level="debug"> <appender-ref ref="ReportPortalAppender"/> </root> </loggers></configuration> I have added the following maven dependencies - slf4j-api , logger-java-log4j , -slf4j-impl , log4j-api , log4j-core Let me know if I need to give more details for debugging. Update: Problem not solved I have given up on this; but after quite sometime have to come back for it. This time I have done some changes, which also did not help. This is the logger class I use : public class AutomationLogger {private Logger logger;public static AutomationLogger getLogger(Class loggerClass) { return new AutomationLogger(loggerClass);}private AutomationLogger(Class loggerClass) { logger = LoggerFactory.getLogger(loggerClass);}public void debug(String log, Object... 
objects) { log = MessageFormat.format(Thread.currentThread().getStackTrace()[2].getLineNumber() + " - Thread-" + Thread.currentThread().getId() + " [Scenario: {0}] Message: {1} Objects: {2}", Helper.getScenarioNameFromScenarioStore(), log, Arrays.toString(objects)); logger.debug(log);}public void info(String log, Object... objects) { log = MessageFormat.format(Thread.currentThread().getStackTrace()[2].getLineNumber() + " - Thread-" + Thread.currentThread().getId() + " [Scenario: {0}] Message: {1} Objects: {2}", Helper.getScenarioNameFromScenarioStore(), log, Arrays.toString(objects)); logger.info(log);}public void error(String log, Object... objects) { log = MessageFormat.format(Thread.currentThread().getStackTrace()[2].getLineNumber() + " - Thread-" + Thread.currentThread().getId() + " [Scenario: {0}] Message: {1} Objects: {2}", Helper.getScenarioNameFromScenarioStore(), log, Arrays.toString(objects)); logger.error(log);} } This is log4j.properties file : # Root logger optionlog4j.rootLogger=DEBUG, console# Direct log messages to consolelog4j.appender.console=org.apache.log4j.ConsoleAppenderlog4j.appender.console.Target=System.outlog4j.appender.console.layout=org.apache.log4j.PatternLayoutlog4j.appender.consoleAppender.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%m%n# Log messages to Report Portallog4j.appender.reportportal=com.epam.ta.reportportal.log4j.appender.ReportPortalAppenderlog4j.appender.reportportal.layout=org.apache.log4j.PatternLayoutlog4j.appender.reportportal.layout.ConversionPattern=[%d{HH:mm:ss}] %-5p (%F:%L) - %m%n I have added a listener and some more dependencies in pom.xml (these are added in correct places, here I'm just pasting them without order): <repository> <snapshots> <enabled>false</enabled> </snapshots> <id>bintray-epam-reportportal</id> <name>bintray</name> <url>http://dl.bintray.com/epam/reportportal</url> </repository><dependency> <groupId>com.epam.reportportal</groupId> <artifactId>agent-java-testng</artifactId> 
<version>5.0.9</version> </dependency> <dependency> <groupId>com.epam.reportportal</groupId> <artifactId>logger-java-log4j</artifactId> <version>5.0.3</version> </dependency><plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-surefire-plugin</artifactId> <version>3.0.0-M5</version> <configuration> <properties> <property> <name>usedefaultlisteners</name> <value>false</value> <!-- disabling default listeners is optional --> </property> <property> <name>listener</name> <value>com.epam.reportportal.testng.ReportPortalTestNGListener</value> </property> </properties> </configuration> </plugin> I use logger in tests as below : AUTOMATION_LOGGER.info("Key=> " + key, "\nExpected Value: " + expectedMap.get(key) + "\nActual Value: " + actualMap.get(key)); Still I don't see the log in report portal. Only the stack trace gets printed. The 'AUTOMATION_LOGGER' statements don't get printed in report portal. Please suggest what more is needed. Thanks,Rahul Now provide the response and nothing else.
On Android 10 Environment.getExternalStorageDirectory() and Environment.getExternalStoragePublicDirectory() will return storage paths but paths are not readable or writable. For Android 10 you can continue to use paths provided by Environment.getExternalStorageDirectory() and Environment.getExternalStoragePublicDirectory() if you add android:requestLegacyExternalStorage="true" to application tag in manifest file. At runtime your app can call Environment.isExternalStorageLegacy() to check if the request has been done. Another (not known) possibility (only for Android 10) is to add <uses-permission android:name="android.permission.REQUEST_INSTALL_PACKAGES" /> to manifest file.The user has to go to the advanced settings of the app and enable from Advanced settings Install unknown apps | Allow from this source .The nice thing with this is that the user can switch the access rights. You can make it easier for the user if you implement an intent for Settings.ACTION_APPLICATION_DETAILS_SETTINGS where he can change the settings.A funny thing is that Environment.isExternalStorageLegacy() returns true then too. Compiling for Android 11 both options do not work on an Android 11 device. (But they continue to work for Android 10 devices). The paths of Environment.getExternalStorageDirectory() and Environment.getExternalStoragePublicDirectory() are usable again in read mode and very often in write mode too . And this is great as one can simply list the contents of directories like Download or Pictures or DCIM/Camera again using the File class.But adding <uses-permission android:name="android.permission.MANAGE_EXTERNAL_STORAGE" /> to manifest file and implementing an intent for Settings.ACTION_MANAGE_APP_ALL_FILES_ACCESS_PERMISSION will give your app read/write access for all files even on removable micro sd card.(Finally you can remove the google ban not being able to read/write your own micro sd card on your own Android device using your own app). 
Environment.isExternalStorageManager() can be used to check if the permission is on/off.As long as you do not try to upload your app to the play store you are fine.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/63842135', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6563802/']}
jdg_381669
stackexchange
llm_judgeable_groundtruth_similarity
26776051
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm using FILTER to extract rows from a range, and want to take only certain columns. For example, I filter by D, but want only columns B,C in reverse order. I tried to use QUERY : =QUERY(filter(B:D,D:D>=2), "select C,B") - Error: can't analyze query string for function QUERY parameter 2: NO_COLUMNC =QUERY(filter(B:D,D:D>=2), "select *") - shows me all columns, so QUERY should work... How do I QUERY the results of FILTER ? Any other way to achieve this? Now provide the response and nothing else.
When you are QUERYing a computed array, you need to use the Colx notation rather than column letters: =QUERY(FILTER(B:D,D:D>=2),"select Col2, Col1") which incidentally can be achieved by just using the QUERY: =QUERY(B:D,"select C, B where D >= 2") or just FILTER: =FILTER({C:C,B:B},D:D>=2)
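The same filter-then-reorder idea, sketched in plain Python with made-up data standing in for the sheet range B:D, may help clarify what the Colx selection is doing:

```python
# Rows are (B, C, D); hypothetical data standing in for the sheet range B:D.
rows = [(1, "a", 0), (2, "b", 2), (3, "c", 5), (4, "d", 1)]

# FILTER(B:D, D:D >= 2) keeps rows whose D value is at least 2...
kept = [r for r in rows if r[2] >= 2]

# ...and "select Col2, Col1" (i.e. C then B) reorders the surviving columns.
result = [(c, b) for (b, c, d) in kept]
assert result == [("b", 2), ("c", 3)]
```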
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/26776051', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/552510/']}
jdg_381670
stackexchange
llm_judgeable_groundtruth_similarity
6410705
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a table with more than 20 million rows, and when i do: DELETE [Table] WHERE ID = ? It takes over 40 seconds. The ID column is clustered. Is this what you could expect? or is it possible to optimize this? Now provide the response and nothing else.
In addition to the fine points JNK included in their answer, one particular killer I've seen is when you're deleting rows from the referenced table for one or more foreign key constraints, and the referencing column(s) in the referencing table(s) aren't indexed - you're forcing a table scan on each of those tables to occur before the delete can be accepted.
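The scan-versus-seek difference on the referencing column is easy to demonstrate; here is a sketch using SQLite (table and index names are invented), showing the lookup plan the foreign-key check relies on before and after indexing:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE parent (id INTEGER PRIMARY KEY);
    CREATE TABLE child  (id INTEGER PRIMARY KEY,
                         parent_id INTEGER REFERENCES parent(id));
""")

# This is the lookup the engine must run per FK check when a parent row is
# deleted.  Without an index on child.parent_id it is a full table scan:
plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT 1 FROM child WHERE parent_id = ?", (1,)
).fetchall()
assert "INDEX" not in plan[0][-1].upper()   # e.g. "SCAN child"

# After indexing the referencing column, the same check is an index seek:
con.execute("CREATE INDEX idx_child_parent ON child(parent_id)")
plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT 1 FROM child WHERE parent_id = ?", (1,)
).fetchall()
assert "INDEX" in plan[0][-1].upper()       # e.g. "SEARCH child USING ... INDEX"
```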
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/6410705', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/629852/']}
jdg_381671
stackexchange
llm_judgeable_groundtruth_similarity
6680
Below is a question asked on the forum softwareengineering.meta.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Following the post from Yannis Rizos about the use of "No Action Needed" for posts that are not all that good ( Reviewing first posts from new users ), I am struggling to understand when to use "No Action Needed" in a review of a first post and when to use "Skip". Up to now, I used the heuristic that if the post wasn't bad enough to close vote/down vote/flag and wasn't good enough to upvote, then I would select "No Action Needed". This means that I would use the "No Action Needed" option mostly for posts that had a bit of a 'meh' feeling. From the post by Yannis, I gather that I should Skip the review for those posts and leave it for someone else to take action on the post (either positive or negative), but that leaves me with the question: If good posts should be upvoted, bad posts downvoted (or otherwise dealt with) and meh posts skipped, on which posts should I use the "No Action Needed" option, or should that option not exist at all? Now provide the response and nothing else.
Let's look at what a "No Action Needed" review asserts. What constitutes a 'reviewed' review?

- Leaving a comment
- Up voting a comment
- Up voting the post
- Down voting the post
- Flagging (or close voting)
- Editing the post
- If something is likely to be closed (even if one doesn't agree with the closing philosophy), leaving a comment suggesting how to refocus the question in a way that isn't likely to get closed. Alternatively, editing it to try to make it something that won't get closed.

For a "No Action Needed" review it means that none of the above actions are applicable or needed:

- Not bad enough to down vote
- Not good enough to up vote
- No existing comments to up vote that help suggest a better question or improve the existing answer
- No additional guidance needed to help improve the post (note: likely in conflict with 'not good enough to up vote'; it is unlikely that both of these are true, but it may be the case)
- Grammar, spelling, and formatting of the post are acceptable and don't need any work (note: it's a good thing to click 'edit' to see the actual markdown and make sure there isn't formatting that has been lost that needs to be brought back out, like a numbered list or line breaks)
- There is no unnecessary preamble or bit at the end that is appropriate to remove ('hello', 'thanks', 'hope this helps', etc.; consider also linking to Should 'Hi', 'thanks,' taglines, and salutations be removed from posts?)
- Double check the title: tags in the title are unnecessary (MSE guidance), and make sure the title of the question matches the question.
- On topic, without any need to migrate it or suggest a way to make it narrower, less opinion-based, or refocused away from an existing off-topic reason.
- Not a duplicate of another post (do a search; this is especially the case with questions, as many new users don't know of existing questions)
- Not posted elsewhere on Stack Exchange (example: "This was posted on Stack Overflow as well, where it was closed and has now been deleted." from https://softwareengineering.stackexchange.com/questions/241598/cms-desktop-app-control could have either been up voted or researched and written themselves)

If you don't want to check these things, either hit skip or don't do the First Posts review. Selecting 'No Action Needed' means that someone else can't do the review to check these things, completing it and giving guidance to the new P.SE user that may help them have a more positive experience on P.SE and Stack Exchange as a whole. This is in part an issue with the system: the First Post review queue, by its nature, has binding votes for every review. This isn't something that people are always aware of, but the First Post review (and Late Answer) are both review queues that one should approach with all the seriousness of a diamond mod, because there's no redo or community safety net for these reviews.
{}
{'log_upvote_score': 5, 'links': ['https://softwareengineering.meta.stackexchange.com/questions/6680', 'https://softwareengineering.meta.stackexchange.com', 'https://softwareengineering.meta.stackexchange.com/users/5099/']}
jdg_381672
stackexchange
llm_judgeable_groundtruth_similarity
30223161
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have this so far:

```python
x, y, z = data.nonzero()
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(x, y, z, zdir='z', c='red')
plt.savefig("plot.png")
```

Which creates: [figure omitted] What I'd like to do is stretch this out to make the Z axis 9 times taller and keep X and Y the same. I'd like to keep the same coordinates though. So far I tried this guy:

```python
fig = plt.figure(figsize=(4., 35.))
```

But that just stretches out the plot.png image. Now provide the response and nothing else.
The code example below provides a way to scale each axis relative to the others. However, to do so you need to modify the Axes3D.get_proj function. Below is an example based on the example provided by matplotlib: http://matplotlib.org/1.4.0/mpl_toolkits/mplot3d/tutorial.html#line-plots (There is a shorter version at the end of this answer.)

```python
from mpl_toolkits.mplot3d.axes3d import Axes3D
from mpl_toolkits.mplot3d import proj3d
import matplotlib as mpl
import numpy as np
import matplotlib.pyplot as plt

# Make sure these are floating point values:
scale_x = 1.0
scale_y = 2.0
scale_z = 3.0

# Axes are scaled down to fit in scene
max_scale = max(scale_x, scale_y, scale_z)
scale_x = scale_x / max_scale
scale_y = scale_y / max_scale
scale_z = scale_z / max_scale

# Create scaling matrix
scale = np.array([[scale_x, 0, 0, 0],
                  [0, scale_y, 0, 0],
                  [0, 0, scale_z, 0],
                  [0, 0, 0, 1]])
print(scale)

def get_proj_scale(self):
    """
    Create the projection matrix from the current viewing position.

    elev stores the elevation angle in the z plane
    azim stores the azimuth angle in the x,y plane

    dist is the distance of the eye viewing point from the object point.
    """
    relev, razim = np.pi * self.elev/180, np.pi * self.azim/180

    xmin, xmax = self.get_xlim3d()
    ymin, ymax = self.get_ylim3d()
    zmin, zmax = self.get_zlim3d()

    # transform to uniform world coordinates 0-1.0,0-1.0,0-1.0
    worldM = proj3d.world_transformation(
        xmin, xmax,
        ymin, ymax,
        zmin, zmax)

    # look into the middle of the new coordinates
    R = np.array([0.5, 0.5, 0.5])

    xp = R[0] + np.cos(razim) * np.cos(relev) * self.dist
    yp = R[1] + np.sin(razim) * np.cos(relev) * self.dist
    zp = R[2] + np.sin(relev) * self.dist
    E = np.array((xp, yp, zp))

    self.eye = E
    self.vvec = R - E
    self.vvec = self.vvec / proj3d.mod(self.vvec)

    if abs(relev) > np.pi/2:
        # upside down
        V = np.array((0, 0, -1))
    else:
        V = np.array((0, 0, 1))
    zfront, zback = -self.dist, self.dist

    viewM = proj3d.view_transformation(E, R, V)
    perspM = proj3d.persp_transformation(zfront, zback)
    M0 = np.dot(viewM, worldM)
    M = np.dot(perspM, M0)
    return np.dot(M, scale)

Axes3D.get_proj = get_proj_scale

"""
You need to include all the code above.
From here on you should be able to plot as usual.
"""
mpl.rcParams['legend.fontsize'] = 10

fig = plt.figure(figsize=(5, 5))
ax = fig.gca(projection='3d')
theta = np.linspace(-4 * np.pi, 4 * np.pi, 100)
z = np.linspace(-2, 2, 100)
r = z**2 + 1
x = r * np.sin(theta)
y = r * np.cos(theta)
ax.plot(x, y, z, label='parametric curve')
ax.legend()

plt.show()
```

Standard output: [figure omitted]

Scaled by (1, 2, 3): [figure omitted]

Scaled by (1, 1, 3): [figure omitted]

The reason I particularly like this method: swap z and x, scale by (3, 1, 1): [figure omitted]

Below is a shorter version of the code.

```python
from mpl_toolkits.mplot3d.axes3d import Axes3D
from mpl_toolkits.mplot3d import proj3d
import matplotlib as mpl
import numpy as np
import matplotlib.pyplot as plt

mpl.rcParams['legend.fontsize'] = 10

fig = plt.figure(figsize=(5, 5))
ax = fig.gca(projection='3d')
theta = np.linspace(-4 * np.pi, 4 * np.pi, 100)
z = np.linspace(-2, 2, 100)
r = z**2 + 1
x = r * np.sin(theta)
y = r * np.cos(theta)

"""
Scaling is done from here...
"""
x_scale = 1
y_scale = 1
z_scale = 2

scale = np.diag([x_scale, y_scale, z_scale, 1.0])
scale = scale * (1.0 / scale.max())
scale[3, 3] = 1.0

def short_proj():
    return np.dot(Axes3D.get_proj(ax), scale)

ax.get_proj = short_proj
"""
to here
"""

ax.plot(z, y, x, label='parametric curve')
ax.legend()
plt.show()
```
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/30223161', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/13009/']}
jdg_381673
stackexchange
llm_judgeable_groundtruth_similarity
35134713
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a few RUN commands in my Dockerfile that I would like to run with --no-cache each time I build a Docker image. I understand that docker build --no-cache will disable caching for the entire Dockerfile. Is it possible to disable cache for a specific RUN command? Now provide the response and nothing else.
There's always an option to insert some meaningless and cheap-to-run command before the region you want to disable cache for. As proposed in this issue comment , one can add a build argument block (name can be arbitrary): ARG CACHEBUST=1 before such region, and modify its value each run by adding --build-arg CACHEBUST=$(date +%s) as a docker build argument (value can also be arbitrary, here it is current datetime, to ensure its uniqueness across runs). This will, of course, disable cache for all following blocks too, as hash sum of the intermediate image will be different, which makes truly selective cache disabling a non-trivial problem, taking into account how docker currently works.
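A minimal Dockerfile sketch of this pattern (the base image and cloned repository are invented placeholders, not from the answer):

```dockerfile
FROM ubuntu:16.04

# Layers above the ARG are cached as usual.
RUN apt-get update && apt-get install -y git

# Changing the value of CACHEBUST invalidates the cache
# for this layer and every layer after it.
ARG CACHEBUST=1
RUN git clone https://github.com/example/repo.git
```

Build with a fresh value each time to force re-execution from the ARG onwards:

```
docker build --build-arg CACHEBUST=$(date +%s) .
```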
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/35134713', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1581803/']}
jdg_381674
stackexchange
llm_judgeable_groundtruth_similarity
2904491
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am self-studying Boyd & Vandenberghe's Convex Optimization . Example 2.15 (page 43) states that the symmetric positive semi-definite cone $S^n_+$ is a proper cone. This necessitates, amongst other things, that it is closed. I am not sure how to show that $S^n_+$ is closed, particularly because this set consists of matrices, which I am less comfortable working with. The most relevant question I have found that may have some relation to this one is here ; I am not sure how to act on the answer of this question for I am not sure of whether the functions $f_1$ and $f_2$ as defined in the answer are relevant to my task. Now provide the response and nothing else.
The space $\mathbf{R}^{n \times n}$ is a $(n^2)$ -dimensional real vector space, and the space $\mathbf{S}^n$ of symmetric matrices is a linear subspace (this is easy to check). The map $\lambda_{\min} : \mathbf{S}^n \to \mathbf{R}$ , given, for example, by $\lambda_{\min}(X) = \min_{\|v\| = 1}v^TXv$ is continuous (with respect to the relative topology on $\mathbf{S}^n$ ). Now note that $$\mathbf{S}^n_+ = \{X \in \mathbf{S}^n: \lambda_{\min}(X) \geq 0\} = \lambda_{\min}^{-1}([0, \infty)),$$ which is the continuous preimage of a closed set, thus closed.
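As a numerical illustration of this argument (not part of the original answer), one can compute $\lambda_{\min}$ with NumPy's eigvalsh and watch a convergent sequence of PSD matrices keep a non-negative smallest eigenvalue in the limit:

```python
import numpy as np

def lambda_min(X):
    # smallest eigenvalue of a symmetric matrix
    return np.linalg.eigvalsh(X)[0]

# X_k = diag(1/k, 1) is PSD for every k and converges to diag(0, 1)
for k in (1, 10, 100, 1000):
    Xk = np.diag([1.0 / k, 1.0])
    assert lambda_min(Xk) >= 0   # each X_k lies in S^n_+

limit = np.diag([0.0, 1.0])
assert lambda_min(limit) >= 0    # the limit stays in S^n_+ (closedness)
```

The point is only that $\lambda_{\min}$ is continuous, so the set $\{\lambda_{\min} \geq 0\}$ contains its limit points.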
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2904491', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/589723/']}
jdg_381675
stackexchange
llm_judgeable_groundtruth_similarity
11068603
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Motivation: I often want to paste the results of a quick analysis using R Markdown into a StackExchange site. This includes the R tag on Stack Overflow, Cross Validated, or even a domain-specific analysis on sites like Cognitive Sciences Stack Exchange (e.g., this quick analysis of OECD life index data). Problems with default conversion: The default markdown output of knitr is not suitable for inclusion on StackExchange. The main problems I can see are that:

- images are referenced to the local hard drive
- code chunks are not tab or space indented; rather they use github style Markdown (i.e., no indentation)

I.e., the chunks look like this:

```r
some code
```

and output looks like this:

```
## some output
## ...
```

There might also be other specific issues to consider, such as:

- ensuring tables are included properly
- ensuring that equations are passed correctly for sites that support MathJax, like Cross Validated and Cognitive Science Stack Exchange

Question: What is a good command for converting R Markdown into Markdown (or HTML) suitable for simple inclusion into Stack Exchange sites? I think an ideal command would be a one-liner that takes an R Markdown file and generates a file where the entire content can be pasted directly into Stack Exchange to yield a well-formatted question or answer. I share this simple rmd file with a couple of code chunks, a figure, and an equation as a test example. Initial thoughts: Hosting of images on imgur would presumably sort out the issue with images. This can be done by including the following in the R Markdown file, but it would probably be simpler if this instruction was incorporated into some one-liner command:

```r
opts_knit$set(upload.fun = imgur_upload)
```

It might be worth considering whether HTML or Markdown is the better format for pasting into StackExchange. The markdown package provides a lot of flexibility. Now provide the response and nothing else.
Here is a utility function that should get you started. It sets auto uploads to imgur, as well as markdown rendering of source code using tabs instead of fenced blocks. You can enhance this function to add other options that would be useful.

```r
stackify <- function(rmd_file, ...) {
  require(knitr)
  opts_knit$set(upload.fun = imgur_upload)
  render_markdown(strict = TRUE)
  out <- knit(rmd_file, ...)
  return(invisible(out))
}
```

UPDATE: I tested this function on your test file, and it renders well on stats.stackexchange.com, which is MathJax enabled.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11068603', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/180892/']}
jdg_381676
stackexchange
llm_judgeable_groundtruth_similarity
2652760
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Given a potentially huge integer value (in C# string format), I want to be able to generate its hex equivalent. Normal methods don't apply here as we are talking arbitrarily large numbers, 50 digits or more. The techniques I've seen use an approach like this:

```csharp
// Store integer 182
int decValue = 182;
// Convert integer 182 as a hex in a string variable
string hexValue = decValue.ToString("X");
// Convert the hex string back to the number
int decAgain = int.Parse(hexValue, System.Globalization.NumberStyles.HexNumber);
```

which won't work because the integer to convert is too large. For example, I need to be able to convert a string like this: 843370923007003347112437570992242323 to its hex equivalent. These don't work: C# convert integer to hex and back again How to convert numbers between hexadecimal and decimal in C#? Now provide the response and nothing else.
Oh, that's easy:

```csharp
var s = "843370923007003347112437570992242323";

var result = new List<byte>();
result.Add(0);

foreach (char c in s)
{
    int val = (int)(c - '0');
    for (int i = 0; i < result.Count; i++)
    {
        int digit = result[i] * 10 + val;
        result[i] = (byte)(digit & 0x0F);
        val = digit >> 4;
    }
    if (val != 0)
        result.Add((byte)val);
}

var hex = "";
foreach (byte b in result)
    hex = "0123456789ABCDEF"[b] + hex;
```
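The digit-by-digit carry scheme above is easy to sanity-check. Here is a Python transliteration (for illustration only, not part of the original answer), verified against Python's built-in arbitrary-precision conversion:

```python
def big_decimal_to_hex(s):
    # Nibbles stored low-order first, exactly like the C# List<byte> above.
    result = [0]
    for c in s:
        val = ord(c) - ord('0')
        for i in range(len(result)):
            digit = result[i] * 10 + val   # multiply accumulator by 10, add digit
            result[i] = digit & 0x0F       # keep the low nibble
            val = digit >> 4               # carry the rest upward
        if val != 0:
            result.append(val)
    return ''.join("0123456789ABCDEF"[b] for b in reversed(result))

s = "843370923007003347112437570992242323"
assert big_decimal_to_hex(s) == format(int(s), 'X')
```

The assertion confirms the hand-rolled base conversion matches Python's native big-integer hex formatting.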
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/2652760', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/750/']}
jdg_381677
stackexchange
llm_judgeable_groundtruth_similarity
6491548
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am using Magento 1.4.0.1.I have over 21000 simple products, each entered into a single category.There are hundreds of categories in my site.Some products belong in multiple categories.Is there some way for me to programmatically add products into multiple categories? Now provide the response and nothing else.
In PHP code you can put them into the category while you are importing them. Say you have a product called $product and a category ID called $category_id. You can set the categories which a product belongs to by doing the following:

```php
$categories = array($category_id);
$product->setCategoryIds($categories);
$product->save();
```

If the product already has categories and you'd like to add one more, then you can use getCategoryIds() like this:

```php
$categories = $product->getCategoryIds();
$categories[] = $categoryId;
$product->setCategoryIds($categories);
$product->save();
```

Or, as mentioned by Joshua Peck in the comments, you can use the category_api model to add or remove a product from a category without affecting its current category assignments:

```php
Mage::getSingleton('catalog/category_api')
    ->assignProduct($category->getId(), $product->getId());

Mage::getSingleton('catalog/category_api')
    ->removeProduct($category->getId(), $product->getId());
```
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/6491548', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/460860/']}
jdg_381678
stackexchange
llm_judgeable_groundtruth_similarity
21738757
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I always thought that the UML aggregate is defined as a black (filled) diamond at the beginning of a path and no arrow head at the end:

```
|--------|        |--------|
| :MyA   |<>------| :MyB   |
|--------|        |--------|
```

Today I came across a notation like <>-----> (with an explicit arrow head on the right end). So I looked it up in the UML 2.4 specification and actually found references for both versions. My favourite reference, "UML and Patterns" by Craig Larman, only mentions the first version without the arrow. In the UML specification I found a notice about navigable ends, but I am not sure if this is related and what the difference is. Could someone explain this more thoroughly and give an example for the use of each version? Now provide the response and nothing else.
Any association end can be designated to be "navigable" with the help of a navigability arrow. However, UML's notion of "navigability" does not have a precise meaning, and people confused it with the concept of an association end being owned by the class at the other end. This concept of association end ownership by a class means that the association end corresponds to a reference property of that class. This issue has only been clarified in last year's new UML version 2.5, which has introduced a new visual notation for association end ownership: a "dot", as in [diagram omitted]. This is most likely the intended meaning of what you came across, namely [diagram omitted], and what it really means is the following reference property: [diagram omitted]. For more explanation see this tutorial. Additional answer to the comment: If your class model specifies the ownership of all association ends, and your class diagram displays them, and there is no ownership dot (nor a navigability arrow), as in [diagram omitted], then, according to UML 2.5, the association end is "owned" by the association itself. Since we do not have an ownership dot at the MyA end either, both ends are "owned" by the composite association. In terms of the code to write/generate, this would mean that the composite association is not implemented by means of reference properties in either MyA or MyB, but has to be implemented with the help of a separate class, say "MyA-has-MyB", which has two reference properties for referencing both the aggregate and the component of any composition link, as in the following class rectangle: [diagram omitted]
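To make the "reference property" reading concrete, here is a minimal sketch (names besides MyA/MyB are invented), illustrating that an association end owned by a class becomes a field on that class:

```python
class MyB:
    """The component/part class from the example."""
    pass

class MyA:
    """The aggregate class. An association end owned by MyA (the 'dot'
    at the MyB end) corresponds to a reference property like this one."""
    def __init__(self):
        self.my_bs = []              # reference property for the 0..* MyB end

a = MyA()
a.my_bs.append(MyB())                # navigating the association from MyA
```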
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/21738757', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/474034/']}
jdg_381679
stackexchange
llm_judgeable_groundtruth_similarity
5220502
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Since starting in WPF I am still struggling to understand the data binding functionality. I am trying to use a multi column list box and my XAML looks like this:

```xml
<ListBox Name="RecordList">
    <ListView Name="RecordListView">
        <ListView.View>
            <GridView>
                <GridView.Columns>
                    <GridViewColumn Header="1" Width="Auto" DisplayMemberBinding="{Binding Path=Field1}" />
                    <GridViewColumn Header="2" Width="50" DisplayMemberBinding="{Binding Path=Field2}" />
                    <GridViewColumn Header="3" Width="100" DisplayMemberBinding="{Binding Path=Field3}" />
                </GridView.Columns>
            </GridView>
        </ListView.View>
    </ListView>
</ListBox>
```

I just can't get the C# code right to populate the items from my list? Now provide the response and nothing else.
Here's the way it fits together in a nutshell. First, you define a model which holds your data for binding.

```csharp
public sealed class MyListBoxItem
{
    public string Field1 { get; set; }
    public string Field2 { get; set; }
    public string Field3 { get; set; }
}
```

Now, you have to have a class that holds these models for binding. This type is often called the ViewModel; it presents information to the View for binding based on user input from the View. Its public properties are typically ObservableCollections and DependencyProperties so that changes in the ViewModel will be automatically picked up by the View (the UI):

```csharp
public sealed class MyViewModel
{
    public ObservableCollection<MyListBoxItem> Items { get; private set; }

    public MyViewModel()
    {
        Items = new ObservableCollection<MyListBoxItem>();
        Items.Add(new MyListBoxItem { Field1 = "One", Field2 = "Two", Field3 = "Three" });
    }
}
```

Within the codebehind for your UI, the "View", you instantiate your ViewModel and set it as the DataContext for your View.

```csharp
public MyView()
{
    this.DataContext = new MyViewModel();
}
```

This is important, as the DataContext "flows" through the visual tree. It is available to every child element on which it is set. To display the items, I must bind the ItemsSource of the ListView to the Items property of the DataContext (this is understood). Each row within the ListView has its DataContext set to each individual MyListBoxItem in the Items property. So you must bind each display member to the properties of the MyListBoxItem.

```xml
<ListView Name="RecordListView" ItemsSource="{Binding Items}">
    <ListView.View>
        <GridView>
            <GridView.Columns>
                <GridViewColumn Header="1" Width="Auto" DisplayMemberBinding="{Binding Path=Field1}" />
                <GridViewColumn Header="2" Width="50" DisplayMemberBinding="{Binding Path=Field2}" />
                <GridViewColumn Header="3" Width="100" DisplayMemberBinding="{Binding Path=Field3}" />
            </GridView.Columns>
        </GridView>
    </ListView.View>
</ListView>
```

To understand this whole process better, search for high-rated questions here tagged [MVVM]. Also, to help debug your bindings, configure debugging for verbose databinding:
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5220502', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/623117/']}
jdg_381680
stackexchange
llm_judgeable_groundtruth_similarity
3589772
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Why exactly do we need a "Circular Linked List" (singly or doubly) data structure? What problem does it solve that is evident with simple Linked Lists (singly or doubly)? Now provide the response and nothing else.
Two reasons to use them: 1) Some problem domains are inherently circular. For example, the squares on a Monopoly board can be represented in a circularly linked list, to map to their inherent structure. 2) Some solutions can be mapped to a circularly linked list for efficiency. For example, a jitter buffer is a type of buffer that takes numbered packets from a network and places them in order, so that (for example) a video or audio player can play them in order. Packets that are too slow (laggy) are discarded. This can be represented in a circular buffer, without needing to constantly allocate and deallocate memory, as slots can be re-used once they have been played. It could be implemented with a linked list, but then there would be constant additions and deletions to the list, rather than cheap in-place replacement of slot contents.
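A minimal sketch of the jitter-buffer idea as a fixed ring of reusable slots (illustrative only; the class name, capacity, and packet labels are invented):

```python
class RingBuffer:
    """Fixed-capacity circular buffer: slots are reused, never freed."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.slots = [None] * capacity   # allocated once, reused forever

    def put(self, seq, packet):
        # A packet's slot is its sequence number modulo capacity, so a
        # newer packet simply overwrites an already-played slot.
        self.slots[seq % self.capacity] = packet

    def get(self, seq):
        return self.slots[seq % self.capacity]

buf = RingBuffer(4)
for seq in range(10):
    buf.put(seq, "packet-%d" % seq)

assert buf.get(9) == "packet-9"
assert buf.get(5) == "packet-9"   # 5 and 9 share slot 1; the old one is gone
```

No allocation or deallocation happens after construction, which is exactly the advantage over a plain linked list here.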
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/3589772', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/159072/']}
jdg_381681
stackexchange
llm_judgeable_groundtruth_similarity
42866106
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to loop through some json values outside the actual function. Could anyone help me?

```javascript
app.controller('MapCtrl', function($scope, $http) {
  $scope.contents = [];
  $http.get('/data4estate/data_model.php')
    .success(function(data) {
      $scope.contents = data;
    });
  for (i = 0; i < $scope.contents.length; i++) {
    console.log($scope.contents[i].name);
  }
});
```

Now provide the response and nothing else.
You can absolutely do this with VueJS. Have a look at the example below. There are two examples: one is your code adapted to make it slide, and the other is a simple image slider that loops through an array of images at three-second intervals. An important thing to note here is that we wrap the image element in a for loop to force the element to be destroyed, because otherwise your elements will not be removed from the DOM and will not transition (virtual DOM). For a better understanding of transitions in VueJS I recommend you check out the getting started guide - transition section.

script.js:

```javascript
new Vue({
  el: '#demo',
  data: {
    message: 'Click for slide',
    show: true,
    imgList: [
      'http://via.placeholder.com/350x150',
      'http://via.placeholder.com/350x151',
      'http://via.placeholder.com/350x152'
    ],
    currentImg: 0
  },
  mounted() {
    setInterval(() => {
      this.currentImg = this.currentImg + 1;
    }, 3000);
  }
})
```

style.css:

```css
#demo {
  overflow: hidden;
}
.slide-leave-active,
.slide-enter-active {
  transition: 1s;
}
.slide-enter {
  transform: translate(100%, 0);
}
.slide-leave-to {
  transform: translate(-100%, 0);
}
.img-slider {
  overflow: hidden;
  position: relative;
  height: 200px;
  width: 400px;
}
.img-slider img {
  position: absolute;
  top: 0;
  left: 0;
  bottom: 0;
  right: 0;
}
```

index.html:

```html
<!DOCTYPE html>
<html>
  <head>
    <title>VueJS 2.0 - image slider</title>
    <link rel="stylesheet" href="style.css">
  </head>
  <body>
    <div id="demo">
      <button v-on:click="show = !show">
        Toggle
      </button>
      <transition name="slide">
        <p v-if="show">{{message}}</p>
      </transition>
      <h3>Img slider</h3>
      <transition-group tag="div" class="img-slider" name="slide">
        <div v-for="number in [currentImg]" v-bind:key="number">
          <img :src="imgList[Math.abs(currentImg) % imgList.length]"/>
        </div>
      </transition-group>
    </div>
    <script src="https://unpkg.com/vue/dist/vue.min.js"></script>
    <script src="script.js"></script>
  </body>
</html>
```
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/42866106', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7728965/']}
jdg_381682
stackexchange
llm_judgeable_groundtruth_similarity
30588549
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How do I take a Vec<Option<T>>, where T cannot be copied, and unwrap all the Some values? I run into an error in the map step. I'm happy to move ownership of the original list and "throw away" the Nones.

```rust
#[derive(Debug)]
struct Uncopyable {
    val: u64,
}

fn main() {
    let num_opts: Vec<Option<Uncopyable>> = vec![
        Some(Uncopyable { val: 1 }),
        Some(Uncopyable { val: 2 }),
        None,
        Some(Uncopyable { val: 4 }),
    ];
    let nums: Vec<Uncopyable> = num_opts
        .iter()
        .filter(|x| x.is_some())
        .map(|&x| x.unwrap())
        .collect();
    println!("nums: {:?}", nums);
}
```

Playground. Which gives the error:

```
error[E0507]: cannot move out of borrowed content
  --> src/main.rs:17:15
   |
17 |         .map(|&x| x.unwrap())
   |               ^-
   |               ||
   |               |hint: to prevent move, use `ref x` or `ref mut x`
   |               cannot move out of borrowed content
```

Now provide the response and nothing else.
In Rust, when you need a value, you generally want to move the elements or clone them. Since move is more general, here it is; only two changes are necessary:

```rust
let nums: Vec<Uncopyable> = num_opts
    .into_iter()             // consume vector, and iterate by value
    .filter(|x| x.is_some())
    .map(|x| x.unwrap())     // take by value
    .collect();
```

As llogiq points out, filter_map is specialized to filter out None already:

```rust
let nums: Vec<Uncopyable> = num_opts
    .into_iter()         // consume vector, and iterate by value
    .filter_map(|x| x)   // take by value
    .collect();
```

And then it works (consuming num_opts). As pointed out by @nirvana-msu, in Rust 1.33 std::convert::identity was added, which can be used instead of |x| x. From the documentation:

```rust
let filtered = iter.filter_map(identity).collect::<Vec<_>>();
```
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/30588549', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/311358/']}
jdg_381683
stackexchange
llm_judgeable_groundtruth_similarity
322271
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Not so long ago I talked to my colleague and he was definitely against using bit masks because it is hard to understand all the values that are stored in the database. In my opinion it is not always a bad idea to use them, for example to determine the roles of the current user. Otherwise you need to store it in a separate table, which will cause one more JOIN.Can you please tell me if I am wrong? Any other side-effects, advantages/disadvantages of using bit masks? Now provide the response and nothing else.
I work with an application that uses bitmasks to store user role assignments. It's a pain in the butt. If this makes me biased, guilty as charged. If you're already using a relational database, it is an anti-pattern that violates most relational theory and all the normalization rules. When you build your own data storage, it may not be such a bad idea. There is such a thing as too many tables being joined, but relational databases are built to handle this. Many have additional features if performance becomes an issue: indexes, indexed views, etc. Even if the values you're looking up don't change very often (which is an advantage for bitmasks), the overhead of having to manage indexing is pretty easy on the database. Although databases do a good job of aggregating data, they can get sluggish when you start introducing things like complex formulas or scalar functions into datasets. You can do the bitwise math in your app, but if all you're doing is getting related data (looking up a user's role(s)), you're not taking advantage of what your data storage does best. My last argument against it would be simplicity for other developers. You have users, roles, and assignments. It's a many-to-many relation set (because there's more than one relationship) that is so common it should be easy to manage. It's just CRUD stuff.
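For contrast, the normalized many-to-many design described above can be sketched in a few lines (schema, table names, and data are invented for illustration; an in-memory sqlite3 database stands in for whatever relational database you use):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE users      (user_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE roles      (role_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE user_roles (user_id INTEGER, role_id INTEGER,
                             PRIMARY KEY (user_id, role_id));
""")
db.execute("INSERT INTO users VALUES (42, 'alice')")
db.executemany("INSERT INTO roles VALUES (?, ?)",
               [(1, 'admin'), (2, 'editor'), (4, 'viewer')])
db.executemany("INSERT INTO user_roles VALUES (42, ?)", [(1,), (4,)])

# "Which roles does user 42 have?" is a plain join, no bit twiddling:
rows = db.execute("""
    SELECT r.name FROM roles r
    JOIN user_roles ur ON ur.role_id = r.role_id
    WHERE ur.user_id = 42 ORDER BY r.role_id
""").fetchall()
print([name for (name,) in rows])   # ['admin', 'viewer']
```

Every query against this schema stays readable in the database itself, which is the simplicity-for-other-developers point made above.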
{}
{'log_upvote_score': 6, 'links': ['https://softwareengineering.stackexchange.com/questions/322271', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/233478/']}
jdg_381684
stackexchange
llm_judgeable_groundtruth_similarity
9805529
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm using the geocoding API on the server side to translate addresses in latlng.I faced a OVER_QUERY_LIMIT status even though :- the server didn't exceed the 2500 limitation (just a few request on this day)- it didn't do many requests simultaneously (just one single request at a time) how is that possible ? the next day the geocoding was working well but i'm concerned about my application working correctly in the long run. Thanks in advance. Now provide the response and nothing else.
This is how I have handled this issue in the past. I check the result status, and if I get an over-the-limit error I try it again after a slight delay.

```javascript
function Geocode(address) {
  geocoder.geocode({ 'address': address }, function(results, status) {
    if (status === google.maps.GeocoderStatus.OK) {
      var result = results[0].geometry.location;
      var marker = new google.maps.Marker({
        position: result,
        map: map
      });
    } else if (status === google.maps.GeocoderStatus.OVER_QUERY_LIMIT) {
      setTimeout(function() {
        Geocode(address);
      }, 200);
    } else {
      alert("Geocode was not successful for the following reason: " + status);
    }
  });
}
```

Update: whoops, accidentally glossed over the server side part. Here is a C# version (note that the retry's result must be returned, or it would be discarded):

```csharp
public XElement GetGeocodingSearchResults(string address)
{
    var url = String.Format(
        "https://maps.google.com/maps/api/geocode/xml?address={0}&sensor=false",
        Uri.EscapeDataString(address));

    var results = XElement.Load(url);

    // Check the status
    var status = results.Element("status").Value;
    if (status == "OVER_QUERY_LIMIT")
    {
        Thread.Sleep(200);
        return GetGeocodingSearchResults(address);
    }
    else if (status != "OK" && status != "ZERO_RESULTS")
    {
        // Whoops, something else was wrong with the request...
    }

    return results;
}
```
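The fixed 200 ms delay generalizes naturally to exponential backoff. Here is a self-contained Python sketch of that retry pattern (the fake_geocode stub and its status strings are invented for illustration; they mimic the OVER_QUERY_LIMIT status above):

```python
import time

def with_backoff(request, max_attempts=5):
    """Retry `request` with exponential backoff. `request` returns
    (status, payload); "OVER_QUERY_LIMIT" means wait and try again."""
    delay = 0.2   # seconds, like the 200 ms delay in the answer above
    for attempt in range(max_attempts):
        status, payload = request()
        if status != "OVER_QUERY_LIMIT":
            return status, payload
        time.sleep(delay)
        delay *= 2    # back off harder on each consecutive failure
    return status, payload

# Fake geocoder that is rate-limited for the first two calls:
calls = {"n": 0}
def fake_geocode():
    calls["n"] += 1
    if calls["n"] <= 2:
        return "OVER_QUERY_LIMIT", None
    return "OK", (51.5, -0.1)

status, loc = with_backoff(fake_geocode)
print(status, loc)   # OK (51.5, -0.1)
```

Doubling the delay avoids hammering the API when the quota window has not yet reset.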
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/9805529', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1283452/']}
jdg_381685
stackexchange
llm_judgeable_groundtruth_similarity
582423
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Relevant links from Mouser and Digi-key: https://mouser.com/ProductDetail/Nexperia/BC847C235?qs=me8TqzrmIYWk6FelXF%2FgZA%3D%3D https://www.digikey.com/product-detail/en/nexperia-usa-inc/BC847CW-135/1727-1880-2-ND/1232291 This is causing some confusion in our purchasing team. I could not find anything in the datasheets, so it seems to have no technical relevance. Now provide the response and nothing else.
It is related to packing. The BC847C has these two "orderable part numbers" (table from the datasheet omitted here). As you can see, they are the same component in the same package (SOT23), but they differ by packing:

- BC847C,215 is delivered as Reel 7" Q3/T4 (3 000 pieces)
- BC847C,235 is delivered as Reel 11" Q3/T4, LargePack (10 000 pieces)

Another example: for the NXP CLRC663, the datasheet lists these references, among others (table omitted). As you can see, the first two are both CLRC66301HN, both HVQFN32, but one is delivered as one tray, with an MOQ of 490 pieces, while the other is delivered as 5 trays, and the MOQ is 5 x 490 pieces. The same applies for the next 3, which are all CLRC66302HN, all HVQFN32, but delivered as either 5 trays (MOQ 5 x 490), 1 tray (MOQ 490) or 1 reel (MOQ 6000). Some retailers/distributors will sell the components in smaller quantities (they buy the larger reels/trays from the manufacturer and then pick the right quantities from each order or cut parts of the tape), while others will sell only the full trays/reels. Depending on how many you need and any requirements for production, it may or may not matter. In the case of your Mouser link, for instance, they order the 235 version, but allow you to buy either the full reel (10K pieces) or cut parts of the tape down to a single component. They also have the 215 version, available as a full reel (3K pieces) or cut tapes down to a single component. If you need smaller quantities, it shouldn't matter which one you order; in this case the tape is exactly identical except for its length. You may end up with two different prices for exactly the same component based on whatever rule they have to compute the per-unit price from the full reel price/quantity. Here the difference is very small. On a final note, the two links you provided point to two actually different references (BC847C and BC847CW) which are actually different components, with different packaging (SOT23 v. SOT323) and different characteristics (different \$P_{tot}\$).
So the number after the comma (probably) doesn't matter, but the reference before it does!
{}
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/582423', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/23587/']}
jdg_381686
stackexchange
llm_judgeable_groundtruth_similarity
91725
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I want to kill a bunch of processes using this command: sudo ps ax | grep node | awk '{print $1}' | xargs kill But it gives me operation not permitted even with sudo. Then I tried with kill -9 individually for each process and it worked. Now my question is how do I pass the -9 flag to kill via xargs? Neither xargs kill -9 nor xargs -9 kill worked for me. Now provide the response and nothing else.
I don't believe this is possible with ufw . ufw is just a frontend to iptables which also lacks this feature, so one approach would be to create a crontab entry which would periodically run and check if the IP address has changed. If it has then it will update it. You might be tempted to do this: $ iptables -A INPUT -p tcp --src mydomain.dyndns.org --dport 22 -j ACCEPT But this will resolve the hostname to an IP and use that for the rule, so if the IP later changes this rule will become invalid. Alternative idea You could create a script like so, called iptables_update.bash . #!/bin/bash#allow a dyndns nameHOSTNAME=HOST_NAME_HERELOGFILE=LOGFILE_NAME_HERECurrent_IP=$(host $HOSTNAME | cut -f4 -d' ')if [ ! -f "$LOGFILE" ] ; then iptables -I INPUT -i eth1 -s $Current_IP -j ACCEPT echo $Current_IP > $LOGFILEelse Old_IP=$(cat $LOGFILE) if [ "$Current_IP" = "$Old_IP" ] ; then echo IP address has not changed else iptables -D INPUT -i eth1 -s $Old_IP -j ACCEPT iptables -I INPUT -i eth1 -s $Current_IP -j ACCEPT /etc/init.d/iptables save echo $Current_IP > $LOGFILE echo iptables have been updated fifi source: Using IPTables with Dynamic IP hostnames like dyndns.org With this script saved you could create a crontab entry like so in the file /etc/crontab : */5 * * * * root /etc/iptables_update.bash > /dev/null 2>&1 This entry would then run the script every 5 minutes, checking to see if the IP address assigned to the hostname has changed. If so then it will create a new rule allowing it, while deleting the old rule for the old IP address.
{}
{'log_upvote_score': 7, 'links': ['https://unix.stackexchange.com/questions/91725', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/19718/']}
jdg_381687
stackexchange
llm_judgeable_groundtruth_similarity
9819023
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have used the following configuration for my production logging: monolog: handlers: mail: type: fingers_crossed action_level: error handler: grouped grouped: type: group members: [streamed, buffered] streamed: type: stream path: %kernel.logs_dir%/%kernel.environment%.log level: debug # buffered is used to accumulate errors and send them as batch to the email address buffered: type: buffer handler: swift swift: type: swift_mailer from_email: [email protected] to_email: [email protected] subject: Error Occurred! level: debug This sends emails like this: [2012-03-21 21:24:09] security.DEBUG: Read SecurityContext from the session [] [] [2012-03-21 21:24:09] security.DEBUG: Reloading user from user provider. [] [] [2012-03-21 21:24:09] security.DEBUG: Username "jakob.asdf" was reloaded from user provider. [] [] [2012-03-21 21:24:09] request.INFO: Matched route "_user_settings" (parameters: "_controller": "...Bundle\Controller\UserController::settingsAction", "username": "Jakob.asdf", "_route": "_user_settings") [] [] [2012-03-21 21:24:09] request.ERROR: Symfony\Component\HttpKernel\Exception\NotFoundHttpException: ...Bundle\Entity\User object not found. (uncaught exception) at /var/www/.../vendor/bundles/Sensio/Bundle/FrameworkExtraBundle/Request/ParamConverter/DoctrineParamConverter.php line 50 [] [] [2012-03-21 21:24:09] security.DEBUG: Write SecurityContext in the session [] [] I would really love to have a stack trace here, or at least the line number in my controller which triggered the error. Otherwise it's really a lot of guessing of what could have gone wrong. Now, the question: Is there any way to achieve such an even more verbose logging? Now provide the response and nothing else.
Yes, it is achievable. Create an ExceptionListener class. //namespace declarationsclass ExceptionListener{ /** * @var \Symfony\Component\HttpKernel\Log\LoggerInterface */ private $logger = null; /** * @param null|\Symfony\Component\HttpKernel\Log\LoggerInterface $logger */ public function __construct(LoggerInterface $logger = null) { $this->logger = $logger; } /** * @param \Symfony\Component\HttpKernel\Event\GetResponseForExceptionEvent $event */ public function onKernelException(GetResponseForExceptionEvent $event) { if($this->logger === null) return; $exception = $event->getException(); $flattenException = FlattenException::create($exception); $this->logger->err('Stack trace'); foreach ($flattenException->getTrace() as $trace) { $traceMessage = sprintf(' at %s line %s', $trace['file'], $trace['line']); $this->logger->err($traceMessage); } }} And then register the listener. kernel.listener.your_listener_name: class: FQCN\Of\ExceptionListener tags: - { name: kernel.event_listener, event: kernel.exception, method: onKernelException , priority: -1} - { name: monolog.logger, channel: mychannel } arguments: - "@logger" You can tweak it to your requirements.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/9819023', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/457059/']}
jdg_381688
stackexchange
llm_judgeable_groundtruth_similarity
10265216
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to get the name of the file that includes another file from inside the included file. I know there is the __FILE__ magic constant, but that doesn't help, since it returns the name of the included file, not the including one. Is there any way to do this? Or is it impossible due to the way PHP is interpreted? Now provide the response and nothing else.
So this question is pretty old, but I was looking for the answer and after leaving here unsatisfied, I came across $_SERVER['SCRIPT_FILENAME']; Of course this works if the php file doing the including is a web page. This gives you the full path of the "including file" on the server. eg /var/www/index.php. so if you want just the filename, eg index.php, you will need to use basename() eg basename($_SERVER['SCRIPT_FILENAME']); So, if in your index.php you have the following line: <?php include("./somephp.php"); ?> and in somephp.php you have echo "this is the file that included me: " . basename($_SERVER['SCRIPT_FILENAME']); you will get this is the file that included me: index.php output to the browser. This also works if the user is accessing your file without explicitly including the filename in the url, eg www.example.com instead of www.example.com/index.php .
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10265216', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/599436/']}
jdg_381689
stackexchange
llm_judgeable_groundtruth_similarity
22415296
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: What are the advantages of using jQuery's $(window).blur(function() { ... }) to attach an event handler versus setting it directly with window.onblur = function() { ... } It seems that the latter is less robust because it only supports one blur handler, and when used with other packages, other code might override the window.blur value with another function. However, couldn't this also happen with the jQuery implementation too, which presumably uses window.blur as its underlying implementation? EDIT: Several people have also mentioned the window.addEventListener alternative, which can be used to add an 'onblur' event apart from the methods above. Now provide the response and nothing else.
$(window).blur(function() { ... }) Lets you add one or more event handlers. window.onblur = function() { ... } Lets you only have one event handler handling the blur event. The former uses jQuery's own event handling mechanism. The call to .blur() will delegate to jQuery.fn.on() which in turn will delegate to jQuery.event.add . This add() method will create its own handler for the given event type and tell addEventListener() to call this handler whenever an event of the given type is fired. So basically jQuery has its own way of event handling which relies on addEventListener() to execute properly. The latter is just an attribute which can only contain one value, so queueing event handlers is impossible. I wrote a little demonstration to prove this point: http://jsfiddle.net/GnNZm/1/
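The single-slot versus queue distinction can be sketched without a browser using Node's built-in `EventTarget` as a stand-in for `window` (a sketch; assumes Node >= 15, since `window` itself only exists in browsers):

```javascript
// addEventListener queues every handler; an on<event> property is a
// single slot, so a later assignment silently replaces the earlier one.
const target = new EventTarget(); // stand-in for window
const calls = [];

// jQuery's $(window).blur(fn) ultimately registers via addEventListener,
// so both of these handlers run.
target.addEventListener("blur", () => calls.push("handler A"));
target.addEventListener("blur", () => calls.push("handler B"));
target.dispatchEvent(new Event("blur"));

// Emulating the window.onblur style: one property, last assignment wins.
let onblur = () => calls.push("first assignment");
onblur = () => calls.push("second assignment");
onblur();

console.log(calls); // [ 'handler A', 'handler B', 'second assignment' ]
```

Both `addEventListener` registrations fire, while only the last `onblur`-style assignment survives, which is exactly why overwriting `window.onblur` can clobber another library's handler.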
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/22415296', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/586086/']}
jdg_381690
stackexchange
llm_judgeable_groundtruth_similarity
11370211
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to write a script which will read two choices, and if both of them are "y" I want it to say "Test Done!" and if one or both of them isn't "y" I want it to say "Test Failed!" Here's what I came up with: echo "- Do You want to make a choice?"read choiceecho "- Do You want to make a choice1?"read choice1if [ "$choice" != 'y' ] && [ "$choice1" != 'y' ]; then echo "Test Done!"else echo "Test Failed!"fi But when I answer both questions with "y" it's saying "Test Failed!" instead of "Test Done!". And when I answer both questions with "n" it's saying "Test Done!" What have I done wrong? Now provide the response and nothing else.
You are checking for the wrong condition. if [ "$choice" != 'y' ] && [ "$choice1" != 'y' ]; The above statement is true when choice!='y' and choice1!='y' , and so the program correctly prints "Test Done!" . The corrected script is echo "- Do You want to make a choice ?"read choiceecho "- Do You want to make a choice1 ?"read choice1if [ "$choice" == 'y' ] && [ "$choice1" == 'y' ]; then echo "Test Done !"else echo "Test Failed !"fi
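Replacing the `read` prompts with fixed values makes the inverted logic easy to see side by side (a quick non-interactive sketch):

```shell
# Both answers are "y": the original != test wrongly reports failure,
# while the corrected = test reports success.
choice=y
choice1=y

if [ "$choice" != "y" ] && [ "$choice1" != "y" ]; then
  echo "original: Test Done!"
else
  echo "original: Test Failed!"
fi

if [ "$choice" = "y" ] && [ "$choice1" = "y" ]; then
  echo "corrected: Test Done!"
else
  echo "corrected: Test Failed!"
fi
```

The first branch prints "original: Test Failed!" and the second prints "corrected: Test Done!", confirming the diagnosis above.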
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/11370211', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1507897/']}
jdg_381691
stackexchange
llm_judgeable_groundtruth_similarity
62980752
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have added lerna to my project, and I added a package to my server by running this command successfully: lerna add @types/express --dev But when I want to add more: lerna add graphql class-validator type-graphql I get this error: info cli using local version of lernalerna notice cli v3.22.1lerna WARN No packages found where graphql can be added. Is something missing or wrong with how I'm adding the packages? Should I use yarn add instead of lerna add? It looks like it works, but I'm not sure the resulting package tree is correct. Now provide the response and nothing else.
At the moment, lerna doesn't support adding multiple packages to another package like so: ❌ lerna add '@my-company/{utils,types}' --scope '@my-company/ui' // We have to do this insteadlerna add '@my-company/utils' --scope '@my-company/ui' lerna add '@my-company/types' --scope '@my-company/ui' Lerna does support adding 1 package into multiple packages though: lerna add '@my-company/utils' --scope '@my-company/{ui,data}' See lerna's github discussion on this issue for updates ( link )
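Until multi-package add is supported, one hypothetical workaround is a plain shell loop over the package names (a sketch; `echo` makes this a dry run so nothing is installed — drop it to actually execute the commands):

```shell
# lerna add accepts one package per invocation, so loop over them.
for pkg in graphql class-validator type-graphql; do
  echo lerna add "$pkg"
done
```

Each iteration issues one `lerna add` call, mirroring the one-at-a-time pattern shown above.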
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/62980752', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3469668/']}
jdg_381692
stackexchange
llm_judgeable_groundtruth_similarity
9286
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: How to prove $$\int_{0}^{\infty} \mathrm{e}^{-x^2}\, dx = \frac{\sqrt \pi}{2}$$ Now provide the response and nothing else.
This is an old favorite of mine. Define $$I=\int_{-\infty}^{+\infty} e^{-x^2} dx$$ Then $$I^2=\bigg(\int_{-\infty}^{+\infty} e^{-x^2} dx\bigg)\bigg(\int_{-\infty}^{+\infty} e^{-y^2} dy\bigg)$$ $$I^2=\int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty}e^{-(x^2+y^2)} dxdy$$ Now change to polar coordinates $$I^2=\int_{0}^{+2 \pi}\int_{0}^{+\infty}e^{-r^2} rdrd\theta$$ The $\theta$ integral just gives $2\pi$, while the $r$ integral succumbs to the substitution $u=r^2$ $$I^2=2\pi\int_{0}^{+\infty}e^{-u}du/2=\pi$$ So $$I=\sqrt{\pi}$$ and your integral is half this by symmetry I have always wondered if somebody found it this way, or did it first using complex variables and noticed this would work.
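As a numerical sanity check (separate from the proof), a plain trapezoidal rule reproduces the value; truncating the upper limit at 10 is harmless because the tail beyond it is smaller than e^(-100):

```python
import math

def gauss_integral(upper=10.0, n=100_000):
    """Trapezoidal approximation of the integral of exp(-x^2) over [0, upper]."""
    h = upper / n
    total = 0.5 * (math.exp(0.0) + math.exp(-upper * upper))
    for i in range(1, n):
        x = i * h
        total += math.exp(-x * x)
    return total * h

print(gauss_integral())        # ≈ 0.8862269...
print(math.sqrt(math.pi) / 2)  # ≈ 0.8862269...
```

The two printed values agree to well beyond six decimal places, matching √π/2.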
{}
{'log_upvote_score': 9, 'links': ['https://math.stackexchange.com/questions/9286', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/1382/']}
jdg_381693
stackexchange
llm_judgeable_groundtruth_similarity
92430
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Just for fun, I'm trying to write up some code that will list a finite number of the digits of pi, i.e. {3,1,4,1,5,...,#} . I have the general pattern, but for some reason it's not being implemented properly in my For loop. For[n = 2, n <= 10, n++, p = Pi; int[x_] = IntegerPart[x]; digit = Table[0, {m, 1, 10}]; tab[[1]] = int[p]; tab[[n]] = int[10^(n - 1) (p - Sum[10^(-k) tab[[k + 1]], {k, 0, n - 2}])]; ] This generates the list {3, 0, 0, 0, 0, 0, 0, 0, 0, 141592653} I've tried checking the n th term rule like this: t1 = 3; t2 = 1; t3 = 4; t4 = 1; t5 = 5;int[p]int[10 (p - t1)]int[100 (p - t1 - t2/10)]int[1000 (p - t1 - t2/10 - t3/100)]int[10000 (p - t1 - t2/10 - t3/100 - t4/1000)]int[100000 (p - t1 - t2/10 - t3/100 - t4/1000 - t5/10000)] which works, giving 3 , 1 , 4 , 1 , and 5 , as expected. Note: I'm sure there's a much more efficient way of doing it, but I'm hardly a programmer - I'm not necessarily looking for suggestions to make it less expensive or anything, just why the heck this isn't working. -_- Now provide the response and nothing else.
What's wrong with your code is that the summation is done over incorrect indices. It should be as follows. p = Pi;int[x_] := IntegerPart[x]With[{m = 10}, tab = Table[0, {n, 1, m}]; tab[[1]] = int[p]; For[n = 2, n <= m, n++, tab[[n]] = int[10^(n - 1) (p - Sum[10^(1 - k) tab[[k]], {k, n - 1}])]]; tab] {3, 1, 4, 1, 5, 9, 2, 6, 5, 3} My revisions also make a few improvements to your code for the sake of efficiency, but the code is still horribly inefficient.
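For comparison, here is the same peel-off-one-digit idea as a Python sketch (a hypothetical translation, not from the answer; a plain float carries only about 15–16 significant digits, so it is only trustworthy for the first handful):

```python
import math

def leading_digits(x, m=10):
    """Return the first m decimal digits of a positive float x."""
    digits = [int(x)]
    remainder = x - digits[0]
    for _ in range(m - 1):
        # Shift the next digit into the integer position, record it,
        # then subtract it off — the same recurrence as the Mathematica loop.
        remainder *= 10
        d = int(remainder)
        digits.append(d)
        remainder -= d
    return digits

print(leading_digits(math.pi))  # [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]
```

The same recurrence works for any positive number, e.g. `leading_digits(math.e, 5)` gives the first five digits of e.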
{}
{'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/92430', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/19692/']}
jdg_381694
stackexchange
llm_judgeable_groundtruth_similarity
8261895
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Just need a confirmation on something real quick. If an algorithm takes n(n-1)/2 tests to run, is the big oh O(n^2) ? Now provide the response and nothing else.
n(n-1)/2 expands to (n^2 - n) / 2 , that is (n^2/2) - (n/2) (n^2/2) and (n/2) are the two components of the function, of which n^2/2 dominates. Therefore, we can ignore the - (n/2) part. From n^2/2 you can safely remove the /2 part in asymptotic analysis. This simplifies to n^2. Therefore yes, it is in O(n^2)
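The n(n-1)/2 count comes up concretely in any naive all-pairs comparison loop; a quick sketch confirms both the formula and the quadratic growth:

```python
def pair_tests(n):
    """Count the tests performed by a naive all-pairs comparison loop."""
    tests = 0
    for i in range(n):
        for j in range(i + 1, n):
            tests += 1
    return tests

assert pair_tests(100) == 100 * 99 // 2   # 4950, i.e. n(n-1)/2
print(pair_tests(200) / pair_tests(100))  # ≈ 4: doubling n quadruples the work
```

That "double the input, quadruple the work" behavior is exactly the O(n^2) signature.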
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/8261895', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/971509/']}
jdg_381695
stackexchange
llm_judgeable_groundtruth_similarity
12643081
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have these two objects - Magazine and Author (M-M relationship): public partial class MAGAZINE { public MAGAZINE() { this.AUTHORs = new HashSet<AUTHOR>(); } public long REF_ID { get; set; } public string NOTES { get; set; } public string TITLE { get; set; } public virtual REFERENCE REFERENCE { get; set; } public virtual ICollection<AUTHOR> AUTHORs { get; set; } }public partial class AUTHOR{ public AUTHOR() { this.MAGAZINEs = new HashSet<MAGAZINE>(); } public long AUTHOR_ID { get; set; } public string FULL_NAME { get; set; } public virtual ICollection<MAGAZINE> MAGAZINEs { get; set; } }} My problem is that I can't seem to update the number of authors against a magazine e.g. if I have 1 author called "Smith, P." stored already against a magazine, I can add another called "Jones, D.", but after the post back to the Edit controller the number of authors still shows 1 - i.e. "Smith, P.H". Please note that I have successfully model bound the number of authors back to the parent entity (Magazine), it uses a custom model binder to retrieve the authors and bind to the Magazine (I think), but it still doesn't seem to update properly. My code for updating the model is straightforward - and shows the variable values both before and after: public ActionResult Edit(long id) { MAGAZINE magazine = db.MAGAZINEs.Find(id); return View(magazine); } and here are the variables pre-editing/updating - [HttpPost]public ActionResult Edit(MAGAZINE magazine) { if (ModelState.IsValid) { db.Entry(magazine).State = EntityState.Modified; db.SaveChanges(); return RedirectToAction("Index"); } return View(magazine); } ...and here are the variables after a new author has been added... 
I am getting suspicious that, post-edit, the author entity shows it is not bound to any magazine, and I am guessing this is why it is not being updated back to the magazine entity - but it is perplexing, as I am effectively dealing with the same magazine entity - I guess it may be something to do with the custom model binder for the author. Can anyone help on this matter? For completeness - I have included my AuthorModelBinder class too - public class AuthorModelBinder : IModelBinder { public object BindModel(ControllerContext controllerContext, ModelBindingContext bindingContext) { var values = bindingContext.ValueProvider.GetValue(bindingContext.ModelName); if (values != null) { // We have specified asterisk (*) as a token delimiter. So // the ids will be separated by *. For example "2*3*5" var ids = values.AttemptedValue.Split('*'); List<int> validIds = new List<int>(); foreach (string id in ids) { int successInt; if (int.TryParse(id, out successInt)) { validIds.Add(successInt); } else { //Make a new author AUTHOR author = new AUTHOR(); author.FULL_NAME = id.Replace("\'", "").Trim(); using (RefmanEntities db = new RefmanEntities()) { db.AUTHORs.Add(author); db.SaveChanges(); validIds.Add((int)author.AUTHOR_ID); } } } //Now that we have the selected ids we could fetch the corresponding //authors from our datasource var authors = AuthorController.GetAllAuthors().Where(x => validIds.Contains((int)x.Key)).Select(x => new AUTHOR { AUTHOR_ID = x.Key, FULL_NAME = x.Value }).ToList(); return authors; } return Enumerable.Empty<AUTHOR>(); } } Now provide the response and nothing else.
Here's a way to do Bezier curves with numpy: import numpy as npfrom scipy.special import combdef bernstein_poly(i, n, t): """ The Bernstein polynomial of n, i as a function of t """ return comb(n, i) * ( t**(n-i) ) * (1 - t)**idef bezier_curve(points, nTimes=1000): """ Given a set of control points, return the bezier curve defined by the control points. points should be a list of lists, or list of tuples such as [ [1,1], [2,3], [4,5], ..[Xn, Yn] ] nTimes is the number of time steps, defaults to 1000 See http://processingjs.nihongoresources.com/bezierinfo/ """ nPoints = len(points) xPoints = np.array([p[0] for p in points]) yPoints = np.array([p[1] for p in points]) t = np.linspace(0.0, 1.0, nTimes) polynomial_array = np.array([ bernstein_poly(i, nPoints-1, t) for i in range(0, nPoints) ]) xvals = np.dot(xPoints, polynomial_array) yvals = np.dot(yPoints, polynomial_array) return xvals, yvalsif __name__ == "__main__": from matplotlib import pyplot as plt nPoints = 4 points = np.random.rand(nPoints,2)*200 xpoints = [p[0] for p in points] ypoints = [p[1] for p in points] xvals, yvals = bezier_curve(points, nTimes=1000) plt.plot(xvals, yvals) plt.plot(xpoints, ypoints, "ro") for nr in range(len(points)): plt.text(points[nr][0], points[nr][1], nr) plt.show()
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/12643081', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5175/']}
jdg_381696
stackexchange
llm_judgeable_groundtruth_similarity
19583540
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Well, I searched Google and found many results, but none of them was able to answer my problem. So, here it goes. I am trying to study Spring MVC and Spring Data JPA by doing a minimal implementation of pinterest clone. So, following is the parts of code which I think is relevant to my problem. Models/Entities @Entity@Table(name = "pin_item")public class PinItem implements Serializable { // properties ... @JoinColumn(name = "board_id", referencedColumnName = "user_board_id") @ManyToOne(optional = false) private UserBoard board; // getters and setters...}@Entity@Table(name = "user_board")public class UserBoard implements Serializable { // properties ... @OneToMany(cascade = CascadeType.ALL, mappedBy = "board") private List<PinItem> pinItemList; // getters and setters...} Service @Service@Transactional(readOnly = true)public class BoardServiceImpl implements BoardService { @Autowired private UserBoardRepository boardRepository; @Override public List<UserBoard> findLatestBoards() { PageRequest request = new PageRequest( 0, PresentationUtil.PAGE_SIZE, Sort.Direction.DESC, "boardId" ); return boardRepository.findAll(request).getContent(); } // Other Methods} Repository public interface UserBoardRepository extends JpaRepository<UserBoard, Integer> {} Now, when I call the findLatestBoards method in BoardService , "No Property Found" exception is thrown on the line return boardRepository.findAll(request).getContent(); . Here is the excerpt from tomcat log. 
DEBUG LOG 12:28:44,254 DEBUG AnnotationTransactionAttributeSource:106 - Adding transactional method 'findLatestBoards' with attribute: PROPAGATION_REQUIRED,ISOLATION_DEFAULT,readOnly; ''12:28:44,254 DEBUG DefaultListableBeanFactory:246 - Returning cached instance of singleton bean 'transactionManager'12:28:44,254 DEBUG JpaTransactionManager:366 - Creating new transaction with name [com.tecnooc.picpin.service.impl.BoardServiceImpl.findLatestBoards]: PROPAGATION_REQUIRED,ISOLATION_DEFAULT,readOnly; ''12:28:44,254 DEBUG JpaTransactionManager:369 - Opened new EntityManager [org.hibernate.ejb.EntityManagerImpl@75284194] for JPA transaction12:28:44,255 DEBUG AbstractTransactionImpl:158 - begin12:28:44,255 DEBUG LogicalConnectionImpl:212 - Obtaining JDBC connection12:28:44,255 DEBUG DriverManagerDataSource:162 - Creating new JDBC DriverManager Connection to [jdbc:mysql://localhost:3306/pic_pin]12:28:44,266 DEBUG LogicalConnectionImpl:218 - Obtained JDBC connection12:28:44,267 DEBUG JdbcTransaction:69 - initial autocommit status: true12:28:44,267 DEBUG JdbcTransaction:71 - disabling autocommit12:28:44,267 DEBUG JpaTransactionManager:401 - Exposing JPA transaction as JDBC transaction [org.springframework.orm.jpa.vendor.HibernateJpaDialect$HibernateConnectionHandle@370da60e]12:28:44,274 DEBUG TransactionalRepositoryProxyPostProcessor$CustomAnnotationTransactionAttributeSource:286 - Adding transactional method 'findAll' with attribute: PROPAGATION_REQUIRED,ISOLATION_DEFAULT,readOnly; ''12:28:44,274 DEBUG DefaultListableBeanFactory:246 - Returning cached instance of singleton bean 'transactionManager'12:28:44,274 DEBUG JpaTransactionManager:332 - Found thread-bound EntityManager [org.hibernate.ejb.EntityManagerImpl@75284194] for JPA transaction12:28:44,274 DEBUG JpaTransactionManager:471 - Participating in existing transaction12:28:44,279 DEBUG CachedIntrospectionResults:159 - Not strongly caching class [java.io.Serializable] because it is not cache-safe12:28:44,281 DEBUG 
JpaTransactionManager:851 - Participating transaction failed - marking existing transaction as rollback-only12:28:44,281 DEBUG JpaTransactionManager:559 - Setting JPA transaction on EntityManager [org.hibernate.ejb.EntityManagerImpl@75284194] rollback-only12:28:44,283 DEBUG JpaTransactionManager:844 - Initiating transaction rollback12:28:44,284 DEBUG JpaTransactionManager:534 - Rolling back JPA transaction on EntityManager [org.hibernate.ejb.EntityManagerImpl@75284194]12:28:44,284 DEBUG AbstractTransactionImpl:203 - rolling back12:28:44,284 DEBUG JdbcTransaction:164 - rolled JDBC Connection12:28:44,285 DEBUG JdbcTransaction:126 - re-enabling autocommit12:28:44,285 DEBUG JpaTransactionManager:594 - Closing JPA EntityManager [org.hibernate.ejb.EntityManagerImpl@75284194] after transaction12:28:44,285 DEBUG EntityManagerFactoryUtils:338 - Closing JPA EntityManager12:28:44,286 DEBUG LogicalConnectionImpl:232 - Releasing JDBC connection12:28:44,286 DEBUG LogicalConnectionImpl:250 - Released JDBC connection12:28:44,287 DEBUG ExceptionHandlerExceptionResolver:132 - Resolving exception from handler [public java.lang.String com.tecnooc.picpin.controller.BoardController.latest(javax.servlet.http.HttpSession,org.springframework.ui.Model)]: org.springframework.data.mapping.PropertyReferenceException: No property board found for type com.tecnooc.picpin.model.UserBoard12:28:44,289 DEBUG ResponseStatusExceptionResolver:132 - Resolving exception from handler [public java.lang.String com.tecnooc.picpin.controller.BoardController.latest(javax.servlet.http.HttpSession,org.springframework.ui.Model)]: org.springframework.data.mapping.PropertyReferenceException: No property board found for type com.tecnooc.picpin.model.UserBoard12:28:44,290 DEBUG DefaultHandlerExceptionResolver:132 - Resolving exception from handler [public java.lang.String com.tecnooc.picpin.controller.BoardController.latest(javax.servlet.http.HttpSession,org.springframework.ui.Model)]: 
org.springframework.data.mapping.PropertyReferenceException: No property board found for type com.tecnooc.picpin.model.UserBoard12:28:44,291 DEBUG DispatcherServlet:959 - Could not complete request Exception The exception is " org.springframework.data.mapping.PropertyReferenceException: No property board found for type com.tecnooc.picpin.model.UserBoard ". But, if I understood correctly, the property board is present in PinItem and is correctly mapped with mappedBy = "board" in UserBoard . org.springframework.data.mapping.PropertyReferenceException: No property board found for type com.tecnooc.picpin.model.UserBoard at org.springframework.data.mapping.PropertyPath.<init>(PropertyPath.java:75) at org.springframework.data.mapping.PropertyPath.create(PropertyPath.java:327) at org.springframework.data.mapping.PropertyPath.create(PropertyPath.java:353) at org.springframework.data.mapping.PropertyPath.create(PropertyPath.java:307) at org.springframework.data.mapping.PropertyPath.from(PropertyPath.java:271) at org.springframework.data.mapping.PropertyPath.from(PropertyPath.java:245) at org.springframework.data.jpa.repository.query.QueryUtils.toJpaOrder(QueryUtils.java:408) at org.springframework.data.jpa.repository.query.QueryUtils.toOrders(QueryUtils.java:372) at org.springframework.data.jpa.repository.support.SimpleJpaRepository.getQuery(SimpleJpaRepository.java:456) at org.springframework.data.jpa.repository.support.SimpleJpaRepository.getQuery(SimpleJpaRepository.java:437) at org.springframework.data.jpa.repository.support.SimpleJpaRepository.findAll(SimpleJpaRepository.java:319) at org.springframework.data.jpa.repository.support.SimpleJpaRepository.findAll(SimpleJpaRepository.java:289) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) 
at org.springframework.data.repository.core.support.RepositoryFactorySupport$QueryExecutorMethodInterceptor.executeMethodOn(RepositoryFactorySupport.java:333) at org.springframework.data.repository.core.support.RepositoryFactorySupport$QueryExecutorMethodInterceptor.invoke(RepositoryFactorySupport.java:318) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172) at org.springframework.transaction.interceptor.TransactionInterceptor$1.proceedWithInvocation(TransactionInterceptor.java:96) at org.springframework.transaction.interceptor.TransactionAspectSupport.invokeWithinTransaction(TransactionAspectSupport.java:260) at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:94) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172) at org.springframework.dao.support.PersistenceExceptionTranslationInterceptor.invoke(PersistenceExceptionTranslationInterceptor.java:155) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172) at org.springframework.data.jpa.repository.support.LockModeRepositoryPostProcessor$LockModePopulatingMethodIntercceptor.invoke(LockModeRepositoryPostProcessor.java:92) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172) at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:91) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172) at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204) at com.sun.proxy.$Proxy147.findAll(Unknown Source) at com.tecnooc.picpin.service.impl.BoardServiceImpl.findLatestBoards(BoardServiceImpl.java:45) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:317) at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:183) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:150) at org.springframework.transaction.interceptor.TransactionInterceptor$1.proceedWithInvocation(TransactionInterceptor.java:96) at org.springframework.transaction.interceptor.TransactionAspectSupport.invokeWithinTransaction(TransactionAspectSupport.java:260) at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:94) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172) at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204) at com.sun.proxy.$Proxy148.findLatestBoards(Unknown Source) at com.tecnooc.picpin.controller.BoardController.latest(BoardController.java:31) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.springframework.web.method.support.InvocableHandlerMethod.invoke(InvocableHandlerMethod.java:219) at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:132) at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:104) at 
org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandleMethod(RequestMappingHandlerAdapter.java:745) at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:686) at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:80) at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:925) at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:856) at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:936) at org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:827) at javax.servlet.http.HttpServlet.service(HttpServlet.java:621) at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:812) at javax.servlet.http.HttpServlet.service(HttpServlet.java:728) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:305) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210) at org.netbeans.modules.web.monitor.server.MonitorFilter.doFilter(MonitorFilter.java:393) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:222) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:123) at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:472) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:171) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:99) at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:953) at 
org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:408) at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1023) at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:589) at org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:312) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:744) I don't get why this exception is thrown. Any idea why it is happening? Note: I am using Hibernate as Persistence provider. Also, the code portion I put here is what I thought is relevant to the problem. If it is not, let me know and I will update the question with required portion. Now provide the response and nothing else.
I ran into this same issue and found the solution here: https://dzone.com/articles/persistence-layer-spring-data I had renamed an entity property. But with Spring's automatic custom queries there was an interface defined for the old property name. public interface IFooDAO extends JpaRepository< Foo, Long >{ Foo findByOldPropName( final String name );} The error indicated that it could no longer find OldPropName and threw the exception. To quote the article on DZone: When Spring Data creates a new Repository implementation, it analyzes all the methods defined by the interfaces and tries to automatically generate queries from the method name. While this has limitations, it is a very powerful and elegant way of defining new custom access methods with very little effort. For example, if the managed entity has a name field (and the Java Bean standard getter and setter for that field), defining the findByName method in the DAO interface will automatically generate the correct query: public interface IFooDAO extends JpaRepository< Foo, Long >{ Foo findByName( final String name );} This is a relatively simple example; a much larger set of keywords is supported by the query creation mechanism. In the case that the parser cannot match the property with the domain object field, the following exception is thrown: java.lang.IllegalArgumentException: No property nam found for type class org.rest.model.Foo
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/19583540', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/851432/']}
jdg_381697
stackexchange
llm_judgeable_groundtruth_similarity
40529232
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a REST endpoint that returns a list of items, max 1000 items at a time. If there are more than 1000 items, the response has HTTP status 206 and there's a Next-Range header that I can use in my next request for getting more items. I'm working on an Angular 2 application and trying to implement this with Http and Observable . My problem is that I don't know how to merge multiple Observable s depending on how many pages of items there are and finally return one Observable that my component can subscribe to. Here's where I've got with my current TypeScript implementation: // NOTE: Non-working example!getAllItems(): Observable<any[]> { // array of all items, possibly received with multiple requests const allItems: any[] = []; // inner function for getting a range of items const getRange = (range?: string) => { const headers: Headers = new Headers(); if (range) { headers.set('Range', range); } return this.http.get('http://api/endpoint', { headers }) .map((res: Response) => { // add all to received items // (maybe not needed if the responses can be merged some other way?) allItems.push.apply(allItems, res.json()); // partial content if (res.status === 206) { const nextRange = res.headers.get('Next-Range'); // get next range of items return getRange(nextRange); } return allItems; }); }; // get first range return getRange();} However, this doesn't work. If I understood it correctly, an Observable is returned as the value of the initial Observable and not the array of items. Now provide the response and nothing else.
You can implement this using the expand operator. What you actually want to do is create a recursive flatmap. That's exactly what the operator expand was created for. Here is the code snippet of how this works: let times = true;// This is a mock method for your http.get call const httpMock = () => { if(times) { times = false; return Rx.Observable.of({items: ["1", "2", "3"], next: true}); } else { return Rx.Observable.of({items: ["4", "5", "6"], next: false}); }}httpMock() .expand(obj => { // In your case, the obj will be the response // implement your logic here if the 206 http header is found if(obj.next) { // If you have next values, just call the http.get method again // In my example it's the httpMock return httpMock(); } else { return Rx.Observable.empty(); } }) .map(obj => obj.items.flatMap(array => array)) .reduce((acc, x) => acc.concat(x), []) .subscribe((val) => console.log(val)); What it does is mock a first http request that has a 'next' property set to true. This matches your 206 header. We then make a second call which has the 'next' property set to false. The result is an array containing the results from both requests. It's applicable for more requests as well thanks to the expand operator. A working jsbin example can be found here: http://jsbin.com/wowituluqu/edit?js,console EDIT: updated to work with an http call that returns an array of arrays, so the end result is a single array that contains all the elements from the arrays. If you wish to have as a result an array with the separate arrays from the requests still inside, just remove the flatmap and return the items directly. Updated codepen here: http://codepen.io/anon/pen/xRZyaZ?editors=0010#0
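The same recursive-accumulation idea can also be sketched without RxJS, using plain promises. This is a minimal sketch, not the Angular Http API: `fetchPage` and the page/range shapes below are hypothetical stand-ins for `this.http.get(...)` and the 206/Next-Range protocol from the question.

```javascript
// Recursively fetch pages until the server reports no further range,
// then concatenate the pages on the way back out.
// `fetchPage(range)` resolves to { items, nextRange }, where
// nextRange is null on the last page (the analogue of a response
// without a Next-Range header).
function getAllItems(fetchPage, range) {
  return fetchPage(range).then(res => {
    if (res.nextRange === null) {
      return res.items; // last page: stop recursing
    }
    // partial content: fetch the next range, then prepend this page
    return getAllItems(fetchPage, res.nextRange)
      .then(rest => res.items.concat(rest));
  });
}

// Mock backend serving two pages of results.
const pages = {
  first: { items: [1, 2, 3], nextRange: 'items=3-5' },
  'items=3-5': { items: [4, 5, 6], nextRange: null },
};
const fetchPage = range => Promise.resolve(pages[range || 'first']);

getAllItems(fetchPage).then(all => console.log(all)); // [ 1, 2, 3, 4, 5, 6 ]
```

Each recursive call corresponds to one expand step: recurse while the server reports another range, and the caller still subscribes to (here: awaits) a single result carrying all items.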
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/40529232', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/587337/']}
jdg_381698
stackexchange
llm_judgeable_groundtruth_similarity
849156
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to use a label in my datarepeater, when I am able to bind data and write to me html page. <asp:Label ID="lblID" runat="server"><%# DataBinder.Eval(Container.DataItem, "ID")%></asp:Label> which works fine. When I try to get text value I get "". Label lblcurrentID = ri.FindControl("lblID") as Label; result: lblcurrentID.text = "" this same code works fine for the dropdownlist that I have in the datarepeater. I am wondering if this has anything to do with the label being converted to a span tag. <span id="template6_middlecontent1mAzoaNominationApproval0_dataReaper_ctl01_lblID">2009040100000888213</span> Now provide the response and nothing else.
To do this you have to use the :checked selector. Although JP's answer is fine, I'd probably do this: $('#form1').submit(function() { if ($('input:checkbox', this).is(':checked') && $('input:radio', this).is(':checked')) { // everything's fine... } else { alert('Please select something!'); return false; }}); Couple of notes: I think it reads better to use the is function, which returns a boolean of the selector. You can use :radio and :checkbox as a shortcut for selecting all radios and checkboxes in a form. However, according to the jQuery manual , it is bad practice to use these selectors without specifying input before them, as they evaluate to *:checkbox and *:radio otherwise, which are very slow selectors. By passing this as the second argument we are specifying a context for the search, and thus are only searching for checkboxes and radio inputs inside the current form. Without it we might get false positives if there happens to be another form in the page.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/849156', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/64779/']}
jdg_381699
stackexchange
llm_judgeable_groundtruth_similarity
1068720
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In my program [C# + winforms]. I have progress bar & listview. Through one method i am performing some operations & then updating data in Listview. The no of records added is the value i am setting for ProgressBar.value property. What i want here is, According to value of progress bar, it should show its progress. However the progress bar is not getting updated. Only at the end of method execution progress bar shows entire progress i.e. 100 % Can someone help me in this regard? Thanks,Amit Now provide the response and nothing else.
It sounds like you are blocking the UI thread - i.e. you haven't released the system to do any painting. A hacky answer is to inject Application.DoEvents() into your code - but this is risky, and has problems with re-entrancy etc; and it is just a bit hacky. A better option may be to do the processing on a BackgroundWorker , periodically switching to the UI thread to update things (Control.Invoke) - but this may be tricky if you are adding lots of items to a ListView . Full example (although you might want to batch the UI updates - not a row at a time): using System;using System.ComponentModel;using System.Threading;using System.Windows.Forms;class MyForm : Form{ BackgroundWorker worker; ListView list; Button btn; ProgressBar bar; public MyForm() { Text = "Loader"; worker = new BackgroundWorker(); worker.WorkerReportsProgress = true; worker.ProgressChanged += worker_ProgressChanged; worker.DoWork += worker_DoWork; worker.RunWorkerCompleted += worker_RunWorkerCompleted; list = new ListView(); list.Dock = DockStyle.Fill; Controls.Add(list); btn = new Button(); btn.Text = "Load"; btn.Dock = DockStyle.Bottom; Controls.Add(btn); btn.Click += btn_Click; bar = new ProgressBar(); bar.Dock = DockStyle.Top; Controls.Add(bar); } void worker_RunWorkerCompleted(object sender, RunWorkerCompletedEventArgs e) { btn.Enabled = true; } void btn_Click(object sender, EventArgs e) { worker.RunWorkerAsync(); btn.Enabled = false; } void worker_DoWork(object sender, DoWorkEventArgs e) { for (int i = 0; i < 100; i++) { string newRow = "Row " + i.ToString(); worker.ReportProgress(i, newRow); Thread.Sleep(100); } } void worker_ProgressChanged(object sender, ProgressChangedEventArgs e) { list.Items.Add((string)e.UserState); bar.Value = e.ProgressPercentage; } [STAThread] static void Main() { Application.EnableVisualStyles(); Application.Run(new MyForm()); }}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1068720', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']}
jdg_381700
stackexchange
llm_judgeable_groundtruth_similarity
69378
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Updated Question : How to show that in TH we never reach a state where there are no paths to the solution? ( without reversing moves, as if reversing is allowed this becomes trivial ) Edit : Thanks to Stéphane Gimenez for pointing out the distinction between “A deadlock would never occur” and “The problem always has a solution”, which made it possible for me to state the question in the form that was originally intended. Stéphane Gimenez: Defining deadlock as a reachable position where no more moves are available (or alternatively as a position from which the goal cannot be reached anymore), it's obvious that deadlocks cannot occur in the TH game: every step along the reverse path (of a path containing valid moves) is a valid move. Original Question : In the Towers of Hanoi problem there is an implicit assumption that one can keep moving disks; this is trivially true for 1 or 2 disks, but, as obvious as it looks, can one keep going with any number of disks? In other words, does TH with 3 sticks and n disks always have a solution? The N queens problem is easily shown to have no solution for n>m , where m is the size of the board (using the pigeonhole principle), but it also does not have a solution for n=m=2. But how does one show that if n=m=k has a solution for some k, then n=m=k+1 also has a solution? Now provide the response and nothing else.
There cannot be deadlock in the Towers of Hanoi, as you almost always have three moves: you can move the smallest disk to one of the other two pegs, and unless all the disks are on the same peg you can always move another disk. There are many ways of proving that any Towers of Hanoi position is solvable. One I like is to show the correspondence between the positions and the points of a Sierpinski triangle such as 1 , 2 or 3 . Since a Sierpinski triangle is connected, it is possible to move from any given legal position to any other, and so any Towers of Hanoi position is solvable.
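To make the connectivity behind that Sierpinski picture fully rigorous, here is a short induction sketch (the encoding is standard, but the notation here is mine, not from a particular reference):

A position with $n$ disks is a string
$$s = (s_1, \dots, s_n) \in \{0, 1, 2\}^n,$$
where $s_k$ is the peg holding disk $k$ and disk $1$ is the smallest. Every such string is a legal position, since on each peg the disks sort themselves by size, so there are $3^n$ positions. Let $G_n$ be the graph whose vertices are these positions and whose edges are legal moves. $G_1$ is a triangle. For $n > 1$, fixing $s_n = p$ yields a subgraph isomorphic to $G_{n-1}$, because the largest disk never obstructs a move of the smaller ones; by induction each of the three copies is connected. The copies are joined to one another: from the position where disks $1, \dots, n-1$ are all stacked on peg $q \neq p$, the largest disk may legally move from $p$ to the remaining peg $r$. Hence $G_n$ is connected, so from any reachable position the goal position can still be reached.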
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/69378', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/3936/']}
jdg_381701
stackexchange
llm_judgeable_groundtruth_similarity
276111
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: The boss of a company whose Exchange server I am responsible for has a full mailbox. The current mailbox limit is 2 GB (pretty standard). He's asked if I can bump his mailbox up to something around 10 GB in size. The Exchange server is on the same physical network as he is 95% of the time. Are there any "gotchas" I should know about before agreeing to bump his mailbox to 10 GB? What issues may we encounter if he actually fills his account to 10 GB? Now provide the response and nothing else.
As long as you've got the disk space and backup capacity there's not really an issue. Exchange has been able to handle large mailboxes for a long time, and Exchange 2010 does a stellar job. Outlook would be the only place where I'd have any cause for concern, and then only if the user is using an older version (pre-2007). Newer versions of Outlook aren't going to have problems with such a large mailbox (and older versions really won't, either, unless you're doing caching). The only performance problems I've seen with Outlook and large mailboxes relate to large numbers of individual items in folders, but that's more of a count-of-items problem and not a size-of-items problem. The user shouldn't keep 20,000+ items in a single folder and expect it to perform well. You might consider using the personal archives feature in Exchange 2010 if the user needs to keep a lot of old data around, but there's no reason you can't keep it in his mailbox, just as well.
{}
{'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/276111', 'https://serverfault.com', 'https://serverfault.com/users/7709/']}
jdg_381702
stackexchange
llm_judgeable_groundtruth_similarity
37572
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Is it true that the field lines of an electric field are identical to the trajectories of a charged particle with initial velocity zero? If so, how can one prove it? The claim is from a German physics book by Nolting, "Grundkurs theoretische Physik 3 - Elektrodynamik", page 51, let me quote: Man führt Feldlinien ein und versteht darunter die Bahnen, auf denen sich ein kleiner, positiv geladener, anfangs ruhender Körper aufgrund der Coulomb-Kraft (2.11) bzw. (2.20) fortbewegen würde. In English: One introduces field lines, by which one means the trajectories along which a small, positively charged, initially resting body would move due to the Coulomb force (2.11) or (2.20), respectively. 2.11 is just the Coulomb law, 2.20 is $F = q E$. (If someone has a better translation, feel free to edit it). I don't see why this should be true. So it would be great to see a proof or a counterexample with solved equations of motion. For a magnetic field this claim is obviously wrong since the Lorentz force depends linearly on the velocity. Are there other physical fields where the claim is analogously true? Edit: The answers show that the claim is not true in general but holds in the special case of a highly viscous medium. Is this also the case for charged cotton moving along the field lines in air, as shown in this animation: http://www.leifiphysik.de/web_ph09_g8/grundwissen/01e_feldlinien/01e_feldlinien.htm ? Do you have any references or more details for this viscous-medium limit? Do you have any computational counterexample showing why it doesn't hold in general, or a simulation which shows that?
No, the statement is false even in the electric case. At the very beginning, the acceleration is $\vec a \sim \vec E$ so they have the same direction at $t=0$: the tangents agree. However, as soon as the particle reaches some nonzero velocity $\vec v \neq 0$, its acceleration is still $\vec a\sim \vec E$, in the direction of the field lines, but its velocity – and it's the velocity that determines the tangent direction of the trajectory – is not proportional to the acceleration. Again, the field lines have direction corresponding to the acceleration at the given point, but the trajectories have directions given by the velocity, and $\vec v$ isn't a multiple of $\vec a$ in general. Imagine a simple example. If you start with a positive test charge at rest ($\vec v=0$) very close to the positive-charge source, they will repel and the moving charge will quickly achieve a huge speed. This speed will act as inertia that makes the trajectories much straighter than the field lines, and the discrepancy becomes more extreme the closer the initial position of the moving charge is to the positive source. You would only get the relationship "field lines are equal to trajectories" if you were stopping the moving test charge at every moment and slowly let the field accelerate it from scratch after each infinitesimal amount of time.
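The viscous-medium limit mentioned in the question's edit can be made quantitative. As a sketch (assuming a linear Stokes-type drag $-\gamma\vec v$, which is a modeling assumption, not something taken from Nolting), the equation of motion becomes

$$m\,\dot{\vec v} = q\,\vec E(\vec r) - \gamma\,\vec v.$$

After the short relaxation time $m/\gamma$ has passed, the inertial term on the left is negligible compared to the drag, and

$$\vec v \;\approx\; \frac{q}{\gamma}\,\vec E(\vec r),$$

so the velocity (not merely the acceleration) is parallel to $\vec E$ at every point, and the trajectory coincides with a field line. This is exactly the "stop the test charge at every moment" picture above realized physically, and it is why charged cotton fibers drifting in air trace out field lines, while a free charge in vacuum does not.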
{}
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/37572', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/12279/']}
jdg_381703
stackexchange
llm_judgeable_groundtruth_similarity
26175813
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: So suppose I have a user that is capable of choosing a certain date in the month. If he would, let's say, choose the 16th of October 2014, I want to display the remaining days of the month as a calendar. <?phperror_reporting(0);$data = $_POST['input'];$days = array('Monday','Tuesday','Wednesday','Thursday','Friday','Saturday','Sunday');$m = date('m'); $y = date('y');$d = cal_days_in_month(CAL_GREGORIAN,$m,$y);for ($i=1;$i<;$i++){ echo $i + 1; } Code is pretty messy as of now. I just have no way to go round this, that is why I asked. Now provide the response and nothing else.
You can use strtotime and date . For the format of date you can use the following: 't' Number of days in the given month ( 28 through 31 ) 'j' Day of the month without leading zeros ( 1 to 31 ) <?php$timestamp = strtotime('2014-10-03');$daysRemaining = (int)date('t', $timestamp) - (int)date('j', $timestamp);var_dump($daysRemaining); // int(28) DEMO Edit : Apparently you want to list the remaining days in the month: <?php$timestamp = strtotime('2014-10-03');$yearMonth = date('Y-m-', $timestamp);$daysInMonth = (int)date('t', $timestamp);for ($i = (int)date('j', $timestamp); $i <= $daysInMonth; $i++) { $dateString = date('l \t\h\e jS \o\f F', strtotime($yearMonth . $i)); var_dump($dateString);}/* string(25) "Friday the 3rd of October" string(27) "Saturday the 4th of October" string(25) "Sunday the 5th of October" string(25) "Monday the 6th of October" string(26) "Tuesday the 7th of October" string(28) "Wednesday the 8th of October" string(27) "Thursday the 9th of October" string(26) "Friday the 10th of October" string(28) "Saturday the 11th of October" string(26) "Sunday the 12th of October" string(26) "Monday the 13th of October" string(27) "Tuesday the 14th of October" string(29) "Wednesday the 15th of October" string(28) "Thursday the 16th of October" string(26) "Friday the 17th of October" string(28) "Saturday the 18th of October" string(26) "Sunday the 19th of October" string(26) "Monday the 20th of October" string(27) "Tuesday the 21st of October" string(29) "Wednesday the 22nd of October" string(28) "Thursday the 23rd of October" string(26) "Friday the 24th of October" string(28) "Saturday the 25th of October" string(26) "Sunday the 26th of October" string(26) "Monday the 27th of October" string(27) "Tuesday the 28th of October" string(29) "Wednesday the 29th of October" string(28) "Thursday the 30th of October" string(28) "Friday the 31st of October"*/ DEMO
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/26175813', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3849409/']}
jdg_381704
stackexchange
llm_judgeable_groundtruth_similarity
547035
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: How to use R to generate a random symmetric positive definite matrix with zero constraints? For example, I would like to generate a 4 by 4 random symmetric positive definite matrix $\Omega\in\mathbb{R}^{4\times4}$ , and we know $\Omega_{1,2}=\Omega_{2,1}=\Omega_{1,3}=\Omega_{3,1} = 0$ . How can I do that in R? What I had in mind is something like Cholesky decomposition $LL^T=\Omega$ , where row $L_i$ and row $L_j$ are orthogonal if $\Omega_{ij}=0$ . Possibly solve by the Lagrangian multiplier. But I am not really sure how to implement this. Or if this is possible at all. Now provide the response and nothing else.
Every $d\times d$ symmetric positive (semi)definite matrix $\Sigma$ can be factored as $$\Sigma = \Lambda^\prime\, Q^\prime \,Q\,\Lambda$$ where $Q$ is an orthonormal matrix and $\Lambda$ is a diagonal matrix with non-negative(positive) entries $\lambda_1, \ldots, \lambda_d.$ ( $\Sigma$ is always the covariance matrix of some $d$ -variate distribution and $QQ^\prime$ will be its correlation matrix; the $\lambda_i$ are the standard deviations of the marginal distributions.) Let's interpret this formula. The $(i,j)$ entry $\Sigma_{i,j}$ is the dot product of columns $i$ and $j$ of $Q$ , multiplied by $\lambda_i\lambda_j.$ Thus, the zero-constraints on $\Sigma$ are orthogonality constraints on the dot products of the columns of $Q.$ (Notice that all diagonal entries of a positive-definite matrix must be nonzero, so I assume the zero-constraints are all off the diagonal. I also extend any constraint on the $(i,j)$ entry to a constraint on the $(j,i)$ entry, to assure symmetry of the result.) One (completely general) way to impose such constraints is to generate the columns of $Q$ sequentially. Use any method you please to create a $d\times d$ matrix of initial values. At step $i=1,2,\ldots, d,$ alter column $i$ regressing it on all the columns $1, 2, \ldots, i-1$ of $Q$ that need to be orthogonal to it and retaining the residuals. Normalize those results so their dot product (sum of squares) is unity. That is column $i$ of $Q.$ Having created an instance of $Q,$ randomly generate the diagonal of $\Lambda$ any way you please (as discussed in the closely related answer at https://stats.stackexchange.com/a/215647/919 ). The following R function rQ uses iid standard Normal variates for the initial values by default. I have tested it extensively with dimensions $d=1$ through $200,$ checking systematically that the intended constraints hold. 
I also tested it with Poisson $(0.1)$ variates, which--because they are likely to be zero--generate highly problematic initial solutions. The principal input to rQ is a logical matrix indicating where the zero-constraints are to be applied. Here is an example with the constraints specified in the question. set.seed(17)Q <- matrix(c(FALSE, TRUE, TRUE, FALSE, TRUE, FALSE, FALSE, FALSE, TRUE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE), 4)Lambda <- rexp(4)zapsmall(rQ(Q, Lambda)) [,1] [,2] [,3] [,4][1,] 2.646156 0.000000 0.000000 2.249189[2,] 0.000000 0.079933 0.014089 -0.360013[3,] 0.000000 0.014089 0.006021 -0.055590[4,] 2.249189 -0.360013 -0.055590 4.167296 As a convenience, you may pass the diagonal of $\Lambda$ as the second argument to rQ . Its third argument, f , must be a random number generator (or any other function for which f(n) returns a numeric vector of length n ). rQ <- function(Q, Lambda, f=rnorm) { normalize <- function(x) { v <- zapsmall(c(1, sqrt(sum(x * x))))[2] if (v == 0) v <- 1 x / v } Q <- Q | t(Q) # Force symmetry by applying all constraints d <- nrow(Q) if (missing(Lambda)) Lambda <- rep(1, d) R <- matrix(f(d^2), d, d) # An array of column vectors for (i in seq_len(d)) { j <- which(Q[seq_len(i-1), i]) # Indices of the preceding orthogonal vectors R[, i] <- normalize(residuals(.lm.fit(R[, j, drop=FALSE], R[, i]))) } R <- R %*% diag(Lambda) crossprod(R)}
{}
{'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/547035', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/300193/']}
jdg_381705
stackexchange
llm_judgeable_groundtruth_similarity
1095933
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I've been thinking about ways on how to tackle this particular linear algebra problem related to eigenvalues from an old algebra prelim. The problem is that suppose we are given a matrix $B \in M_{n}(\mathbb{Q})$ such that $B^5 =I$, i.e. the $n \times n$ identity matrix, and no eigenvalue of $B$ is equal to $1$. We have to show that $n$ is divisible by $4$. My attempt: I take it that $B^5 =I \implies B^5 -I=0$ (the zero matrix). Now by definition, the eigenvalues of a matrix $B$ are the roots of the characteristic polynomial $c_B(x)$ of $B$ and we also know that $c_B(x)$ annihilates the matrix $B$, i.e. $c_B(B)=0$. Now in this problem, the key condition is that $1$ cannot be an eigenvalue of $B$, so does this mean, via the factorization $$x^5 -1= (x-1)(x^4 +x^3 +x^2 +x+1)$$ over $\mathbb{Q}$, that $x^4 + x^3 + x^2 +x+1$ divides $c_B(x)$, which has degree $n$? Is this enough to say that $n$ is divisible by $4$, or are there any other necessary things to be considered carefully before coming to that conclusion? Now provide the response and nothing else.
Consider the function $f(t) = \sqrt2t - \sqrt{t^2+1}-\left(\sqrt2-\frac1{\sqrt2} \right)\log t$ for $t > 0$. Note that $f(t) \ge 0 \implies f(x)+f(y)+f(z) \ge 0$, which gives the desired inequality. Now $f'(t) = \dfrac{-2t^2+2t\sqrt{2(t^2+1)}-\sqrt{2(t^2+1)}}{2t\sqrt{t^2+1}}$ so $f'(1)=0$. Further, $$f''(t) = \frac{(t^2+1)^{3/2}- \sqrt2 t^2}{\sqrt 2 t^2 (t^2+1)^{3/2}} > 0$$ so $f'(t)> 0$ for $t> 1$ and $f'(t) < 0$ for $t < 1$. Hence $f(1)=0$ is the minimum, $f(t) \ge 0 \quad \forall t > 0$, and the inequality holds. P.S.: @user1537366 when the inequality is separable, i.e. of the form $g(x)+g(y)+g(z)+... \ge 0$ with a constraint which can also be expressed as $h(x)+h(y)+\dots = 0$, it is worthwhile to check if for a suitable constant $k$, we can get $f(t) = g(t)+k\,h(t) \ge 0$. Often $k$ can be determined by setting $f'(t^*)=0$ where $t=t^*$ is where you expect equality.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1095933', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/8767/']}
jdg_381706
stackexchange
llm_judgeable_groundtruth_similarity
14702148
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to fire a command like " rm -rf /etc/XXX.pid " when the shell script is interrupted in the middle of its execution. Like using CTRL+C Can anyone help me what to do here? Now provide the response and nothing else.
Although it may come as a shock to many, you can use the bash built-in trap to trap signals :-) Well, at least those that can be trapped, but CTRL-C is usually tied to the INT signal. You can trap the signals and execute arbitrary code. The following script will ask you to enter some text then echo it back to you. If perchance, you generate an INT signal, it simply growls at you and exits: #!/bin/bashexitfn () { trap SIGINT # Restore signal handling for SIGINT echo; echo 'Aarghh!!' # Growl at user, exit # then exit script.}trap "exitfn" INT # Set up SIGINT trap to call function.read -p "What? " # Ask user for input.echo "You said: $REPLY"trap SIGINT # Restore signal handling to previous before exit. A test run transcript follows (a fully entered line, a line with pressing CTRL-C before any entry, and a line with partial entry before pressing CTRL-C): pax> ./testprog.sh What? hello thereYou said: hello therepax> ./testprog.sh What? ^CAarghh!!pax> ./qq.shWhat? incomplete line being entere... ^CAarghh!!
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/14702148', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1776857/']}
jdg_381707
stackexchange
llm_judgeable_groundtruth_similarity
42935944
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I need to multiply each matrix column by each vector element using the Eigen C++ library. I tried colwise without success. Sample data: Eigen::Matrix3Xf A(3,2); //3x2A << 1 2, 2 2, 3 5;Eigen::Vector3f V = Eigen::Vector3f(2, 3);//Expected resultC = A.colwise()*V;//C//2 6,//4 6,//6 15//this means C 1st col by V first element and C 2nd col by V 2nd element. Matrix A can have 3xN and V Nx1. Meaning (cols x rows). Now provide the response and nothing else.
This is what I would do:

Code

Eigen::Matrix3Xf A(3, 2); // 3x2
A << 1, 2,
     2, 2,
     3, 5;
Eigen::Vector3f V = Eigen::Vector3f(1, 2, 3);

const Eigen::Matrix3Xf C = A.array().colwise() * V.array();
std::cout << C << std::endl;

Example output:

1  2
4  4
9 15

Explanation

You were close, the trick is to use .array() to do broadcasting multiplications. colwiseReturnType doesn't have a .array() method, so we have to do our colwise shenanigans on the array view of A. If you want to compute the element-wise product of two vectors (The coolest of cool cats call this the Hadamard Product), you can do

Eigen::Vector3f a = ...;
Eigen::Vector3f b = ...;
Eigen::Vector3f elementwise_product = a.array() * b.array();

Which is what the above code is doing, in a columnwise fashion.

Edit: To address the row case, you can use .rowwise(), and you'll need an extra transpose() to make things fit

Eigen::Matrix<float, 3, 2> A; // 3x2
A << 1, 2,
     2, 2,
     3, 5;
Eigen::Vector2f V = Eigen::Vector2f(2, 3);

// Expected result
Eigen::Matrix<float, 3, 2> C = A.array().rowwise() * V.transpose().array();
std::cout << C << std::endl;

Example output:

2  6
4  6
6 15
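For comparison outside Eigen, here is a minimal plain-Python sketch of my own (no libraries assumed) of the arithmetic the first broadcast computes: each row i of A is scaled by V[i], since V has one entry per row.

```python
# A is 3x2 and V has one entry per row of A, mirroring the Eigen example.
A = [[1, 2],
     [2, 2],
     [3, 5]]
V = [1, 2, 3]

# Equivalent of A.array().colwise() * V.array(): scale row i by V[i].
C = [[a * v for a in row] for row, v in zip(A, V)]
print(C)  # [[1, 2], [4, 4], [9, 15]]
```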
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/42935944', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6928142/']}
jdg_381708
stackexchange
llm_judgeable_groundtruth_similarity
2618201
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Given a build file (.csproj or msbuild.xml or whatever), I'd like to run a msbuild command that lists all the available, defined targets. Does that function exist? I know I could do an Xpath search or something, on the build file, but that wouldn't find targets that are defined in included files. Now provide the response and nothing else.
Using MSBuild 2.0/3.5 : Custom Task

You could write a custom msbuild task like this:

using System;
using System.Collections.Generic;
using Microsoft.Build.BuildEngine;
using Microsoft.Build.Framework;
using Microsoft.Build.Utilities;

namespace MSBuildTasks
{
    public class GetAllTargets : Task
    {
        [Required]
        public String ProjectFile { get; set; }

        [Output]
        public ITaskItem[] Targets { get; set; }

        public override bool Execute()
        {
            var project = new Project(BuildEngine as Engine);
            project.Load(ProjectFile);

            var taskItems = new List<ITaskItem>(project.Targets.Count);
            foreach (Target target in project.Targets)
            {
                var metadata = new Dictionary<string, string>
                {
                    {"Condition", target.Condition},
                    {"Inputs", target.Inputs},
                    {"Outputs", target.Outputs},
                    {"DependsOnTargets", target.DependsOnTargets}
                };
                taskItems.Add(new TaskItem(target.Name, metadata));
            }

            Targets = taskItems.ToArray();
            return true;
        }
    }
}

Which you'll use like this:

<Target Name="TestGetAllTargets">
  <GetAllTargets ProjectFile="$(MSBuildProjectFile)">
    <Output ItemName="TargetItems" TaskParameter="Targets"/>
  </GetAllTargets>
  <Message Text="Name: %(TargetItems.Identity) Input: %(TargetItems.Input) --> Output: %(TargetItems.Output)"/>
</Target>

Using MSBuild 4.0 : Inline task

With MSBuild 4 you could use the new shiny thing: the inline task. Inline tasks allow you to define the behavior directly in the msbuild file.
<UsingTask TaskName="GetAllTargets"
           TaskFactory="CodeTaskFactory"
           AssemblyFile="$(MSBuildToolsPath)\Microsoft.Build.Tasks.v4.0.dll">
  <ParameterGroup>
    <ProjectFile ParameterType="System.String" Required="true"/>
    <TargetsOut ParameterType="Microsoft.Build.Framework.ITaskItem[]" Output="true"/>
  </ParameterGroup>
  <Task>
    <Reference Include="System.Xml"/>
    <Reference Include="Microsoft.Build"/>
    <Reference Include="Microsoft.Build.Framework"/>
    <Using Namespace="Microsoft.Build.Evaluation"/>
    <Using Namespace="Microsoft.Build.Execution"/>
    <Using Namespace="Microsoft.Build.Utilities"/>
    <Using Namespace="Microsoft.Build.Framework"/>
    <Code Type="Fragment" Language="cs">
      <![CDATA[
        var project = new Project(ProjectFile);
        var taskItems = new List<ITaskItem>(project.Targets.Count);
        foreach (KeyValuePair<string, ProjectTargetInstance> kvp in project.Targets)
        {
            var target = kvp.Value;
            var metadata = new Dictionary<string, string>
            {
                {"Condition", target.Condition},
                {"Inputs", target.Inputs},
                {"Outputs", target.Outputs},
                {"DependsOnTargets", target.DependsOnTargets}
            };
            taskItems.Add(new TaskItem(kvp.Key, metadata));
        }
        TargetsOut = taskItems.ToArray();
      ]]>
    </Code>
  </Task>
</UsingTask>

<Target Name="Test">
  <GetAllTargets ProjectFile="$(MSBuildProjectFile)">
    <Output ItemName="TargetItems" TaskParameter="TargetsOut"/>
  </GetAllTargets>
  <Message Text="%(TargetItems.Identity)"/>
</Target>
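If building a custom or inline task is overkill, the target list can also be scraped externally with any XML parser. This is a shortcut of my own, not part of the answer above, and unlike the Engine/Project based tasks it will not see targets pulled in via <Import>. A sketch in Python, using an inline sample project in place of a real file on disk (newer SDK-style projects omit the namespace, in which case the prefix map is unnecessary):

```python
import xml.etree.ElementTree as ET

# Minimal hypothetical stand-in for a real project file on disk.
project_xml = """<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <Target Name="Build" DependsOnTargets="Compile" />
  <Target Name="Compile" Condition=" '$(SkipCompile)' != 'true' " />
</Project>"""

# Classic MSBuild files live in this namespace, so findall needs a prefix map.
ns = {"msb": "http://schemas.microsoft.com/developer/msbuild/2003"}
root = ET.fromstring(project_xml)

targets = [(t.get("Name"), t.get("DependsOnTargets", ""))
           for t in root.findall("msb:Target", ns)]
print(targets)  # [('Build', 'Compile'), ('Compile', '')]
```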
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/2618201', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/48082/']}
jdg_381709
stackexchange
llm_judgeable_groundtruth_similarity
23693089
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: As the title states, why does: > !!1=="1" equal True and > !!2=="2" equal: False Likewise, why does > "1"==true equal true and > "2"==true equal false I'm baffled. Are these just bugs in JS or what's going on here? Now provide the response and nothing else.
As per the Operator precedence rules, logical ! has higher priority than == . So, in both cases, !! is evaluated first. Note: Truthiness of various objects has been explained in this answer of mine.

First Case

!!1 == "1"

!1 will be evaluated to false, since 1 is considered Truthy. Negating again we get true. So the expression becomes

true == "1"

Now the coercion rules kick in, as you have used the == operator, which evaluates as per The Abstract Equality Comparison Algorithm defined in the ECMAScript 5.1 Specification:

6. If Type(x) is Boolean, return the result of the comparison ToNumber(x) == y.

So, true will be converted to a number, which is 1 as per the ToNumber algorithm for Boolean values. Now the expression becomes

1 == "1"

Now,

4. If Type(x) is Number and Type(y) is String, return the result of the comparison x == ToNumber(y).

So, "1" will be converted to a number, and that will give 1, as per the ToNumber algorithm. That is why it shows true in the first case.

Second Case

The same rules are applied here. !!2 == "2" becomes true == "2", then 1 == "2", which becomes 1 == 2, which is not true; that is why the second case prints false.
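The two numbered spec steps can be mimicked in a few lines of Python. This is a toy model of mine covering only the conversions used above (Boolean to Number, then String to Number), not a full implementation of the abstract equality algorithm:

```python
def loose_eq(x, y):
    """Toy model of JS ==: Boolean -> Number (step 6),
    then String -> Number when the other side is a Number (step 4)."""
    if isinstance(x, bool):
        x = 1 if x else 0      # ToNumber(Boolean)
    if isinstance(x, (int, float)) and isinstance(y, str):
        y = float(y)           # ToNumber(String)
    return x == y

print(loose_eq(not not 1, "1"))  # True,  like !!1 == "1"
print(loose_eq(not not 2, "2"))  # False, like !!2 == "2"
```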
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/23693089', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1029365/']}
jdg_381710
stackexchange
llm_judgeable_groundtruth_similarity
121052
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Help me please with these 2 questions: 1.Does it converge or diverge? :$$ \sum_{n=2}^{\infty }2^{n}\left ( \frac{n}{n+1} \right )^{n^{2}} $$ 2.Check out absolute and conditional convergence of: $x>0 $ $$ \sum_{n=1}^{\infty }\sin (n)\sin \frac{x}{n} $$ Thanks a lot! Now provide the response and nothing else.
Hint for 1: For sufficiently large $n$, $(\frac{n}{n+1})^n = (1 - \frac{1}{n+1})^n \le c$ for some $ 0 \lt c \lt \frac{1}{2}$. Why? Now try using the above to prove that your series converges. For part 2, I believe you can use the Dirichlet Test to prove convergence. To show that the series does not converge absolutely, use $\sin (x/n) \ge x/2n$ for sufficiently large $n$ and use the fact that at least one of $n$, $n+1$ is more than $\frac{1}{2}$ away from the multiple of $\pi$ which is closest to them.
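To spell out the first hint a little (my own addition): the bound $2c<1$ is exactly what the root test needs, since

```latex
\sqrt[n]{\,2^{n}\left(\tfrac{n}{n+1}\right)^{n^{2}}}
  = 2\left(\tfrac{n}{n+1}\right)^{n}
  = 2\left(1+\tfrac{1}{n}\right)^{-n}
  \longrightarrow \frac{2}{e} < 1,
```

so $\sum_{n\ge 2} 2^{n}\left(\frac{n}{n+1}\right)^{n^{2}}$ converges by the root test.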
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/121052', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/23816/']}
jdg_381711
stackexchange
llm_judgeable_groundtruth_similarity
936328
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have this query: select distinct id,name from table1 For a given ID, the name will always be the same. Both fields are indexed. There's no separate table that maps the id to the name. The table is very large (10 of millions of rows), so the query could take some time. This query is very fast, since it's indexed: select distinct name from table1 Likewise for this query: select distinct id from table1 Assuming I can't get the database structure changed (a very safe assumption) what's a better way to structure the first query for performance? Edit to add a sanitized desc of the table: Name Null Type------------------------------ -------- ----------------------------KEY NOT NULL NUMBERCOL1 NOT NULL NUMBERCOL2 NOT NULL VARCHAR2(4000 CHAR)COL3 VARCHAR2(1000 CHAR)COL4 VARCHAR2(4000 CHAR)COL5 VARCHAR2(60 CHAR)COL6 VARCHAR2(150 CHAR)COL7 VARCHAR2(50 CHAR)COL8 VARCHAR2(3 CHAR)COL9 VARCHAR2(3 CHAR)COLA VARCHAR2(50 CHAR)COLB NOT NULL DATECOLC NOT NULL DATECOLD NOT NULL VARCHAR2(1 CHAR)COLE NOT NULL NUMBERCOLF NOT NULL NUMBERCOLG VARCHAR2(600 CHAR)ID NUMBERNAME VARCHAR2(50 CHAR)COLH VARCHAR2(3 CHAR)20 rows selected Now provide the response and nothing else.
[LATEST EDIT] My ORIGINAL ANSWER regarding creating the appropriate index on (name,id) to replace the index on (name) is below. (That wasn't an answer to the original question, which disallowed any database changes.)

Here are statements that I have not yet tested. There's probably some obvious reason these won't work. I'd never actually suggest writing statements like this (at the risk of being drummed thoroughly for such a ridiculous suggestion). If these queries even return result sets, the result set will only resemble the result set from the OP query, almost by accident, taking advantage of a quirky guarantee about the data that Don has provided us. These statements are NOT equivalent to the original SQL; they are designed for the special case as described by Don.

select m1.id
     , m2.name
  from (select min(t1.rowid) as min_rowid
             , t1.id
          from table1 t1
         where t1.id is not null
         group by t1.id
       ) m1
     , (select min(t2.rowid) as min_rowid
             , t2.name
          from table1 t2
         where t2.name is not null
         group by t2.name
       ) m2
 where m1.min_rowid = m2.min_rowid
 order by m1.id

Let's unpack that:

m1 is an inline view that gets us a list of distinct id values.
m2 is an inline view that gets us a list of distinct name values.
materialize the views m1 and m2
match the ROWID from m1 and m2 to match id with name

Someone else suggested the idea of an index merge. I had previously dismissed that idea: an optimizer plan to match 10s of millions of rowids without eliminating any of them. With sufficiently low cardinality for id and name, and with the right optimizer plan:

select m1.id
     , ( select m2.name
           from table1 m2
          where m2.id = m1.id
            and rownum = 1
       ) as name
  from (select t1.id
          from table1 t1
         where t1.id is not null
         group by t1.id
       ) m1
 order by m1.id

Let's unpack that:

m1 is an inline view that gets us a list of distinct id values.
materialize the view m1
for each row in m1, query table1 to get the name value from a single row (stopkey)

IMPORTANT NOTE: These statements are FUNDAMENTALLY different than the OP query. They are designed to return a DIFFERENT result set than the OP query. They happen to return the desired result set because of a quirky guarantee about the data. Don has told us that a name is determined by id. (Is the converse true? Is id determined by name? Do we have a STATED GUARANTEE, not necessarily enforced by the database, but a guarantee that we can take advantage of?) If so, maybe we can make use of that information. If ID and NAME appear in distinct pairs, we only need to find one particular row. The "pair" is going to have a matching ROWID, which conveniently happens to be available from each of the existing indexes. What if we get the minimum ROWID for each ID, and get the minimum ROWID for each NAME? Couldn't we then match the ID to the NAME based on the ROWID that contains the pair? I think it might work, given a low enough cardinality. (That is, if we're dealing with only hundreds of ROWIDs rather than 10s of millions.) [/LATEST EDIT]

[EDIT] The question is now updated with information concerning the table; it shows that the ID column and the NAME column both allow for NULL values. If Don can live without any NULLs returned in the result set, then adding the IS NOT NULL predicate on both of those columns may enable an index to be used. (NOTE: in an Oracle (B-Tree) index, NULL values do NOT appear in the index.) [/EDIT]

ORIGINAL ANSWER: create an appropriate index

create index table1_ix3 on table_1 (name,id) ... ;

Okay, that's not the answer to the question you asked, but it's the right answer to fixing the performance problem.
(You specified no changes to the database, but in this case, changing the database is the right answer.) Note that if you have an index defined on (name,id), then you (very likely) don't need an index on (name), since the optimizer will consider the leading name column in the other index. (UPDATE: as someone more astute than I pointed out, I hadn't even considered the possibility that the existing indexes were bitmap indexes and not B-tree indexes...)

Re-evaluate your need for the result set... do you need to return id, or would returning name be sufficient?

select distinct name from table1 order by name;

For a particular name, you could submit a second query to get the associated id, if and when you needed it...

select id from table1 where name = :b1 and rownum = 1;

If you really need the specified result set, you can try some alternatives to see if the performance is any better. I don't hold out much hope for any of these:

select /*+ FIRST_ROWS */ DISTINCT id, name from table1 order by id;

or

select /*+ FIRST_ROWS */ id, name from table1 group by id, name order by name;

or

select /*+ INDEX(table1) */ id, min(name) from table1 group by id order by id;

UPDATE: as others have astutely pointed out, with this approach we're testing and comparing performance of alternative queries, which is a sort of hit or miss approach. (I don't agree that it's random, but I would agree that it's hit or miss.)

UPDATE: tom suggests the ALL_ROWS hint. I hadn't considered that, because I was really focused on getting a query plan using an INDEX. I suspect the OP query is doing a full table scan, and it's probably not the scan that's taking the time, it's the sort unique operation (<10g) or hash operation (10gR2+) that takes the time. (Absent timed statistics and an event 10046 trace, I'm just guessing here.) But then again, maybe it is the scan; who knows, the high water mark on the table could be way out in a vast expanse of empty blocks.
It almost goes without saying that the statistics on the table should be up-to-date, and we should be using SQL*Plus AUTOTRACE, or at least EXPLAIN PLAN, to look at the query plans. But none of the suggested alternative queries really address the performance issue. It's possible that hints will influence the optimizer to choose a different plan, basically satisfying the ORDER BY from an index, but I'm not holding out much hope for that. (I don't think the FIRST_ROWS hint works with GROUP BY; the INDEX hint may.) I can see the potential for such an approach in a scenario where there are gobs of data blocks that are empty and sparsely populated: by accessing the data blocks via an index, it could actually be significantly fewer data blocks pulled into memory... but that scenario would be the exception rather than the norm.

UPDATE: As Rob van Wijk points out, making use of the Oracle trace facility is the most effective approach to identifying and resolving performance issues. Without the output of an EXPLAIN PLAN or SQL*Plus AUTOTRACE output, I'm just guessing here. I suspect the performance problem you have right now is that the table data blocks have to be referenced to get the specified result set. There's no getting around it: the query cannot be satisfied from just an index, since there isn't an index that contains both the NAME and ID columns with either the ID or NAME column as the leading column. The other two "fast" OP queries can be satisfied from an index without needing to reference the row (data blocks). Even if the optimizer plan for the query was to use one of the indexes, it still has to retrieve the associated row from the data block in order to get the value for the other column. And with no predicate (no WHERE clause), the optimizer is likely opting for a full table scan, and likely doing a sort operation (<10g). (Again, an EXPLAIN PLAN would show the optimizer plan, as would AUTOTRACE.)
I'm also assuming here (big assumption) that both columns are defined as NOT NULL. You might also consider defining the table as an index organized table (IOT), especially if these are the only two columns in the table. (An IOT isn't a panacea; it comes with its own set of performance issues.) You can try re-writing the query (unless that's a database change that is also verboten; in our database environments, we consider a query to be as much a part of the database as the tables and indexes). Again, without a predicate, the optimizer will likely not use an index. There's a chance you could get the query plan to use one of the existing indexes to get the first rows returned quickly. By adding a hint, test a combination of:

select /*+ INDEX(table1) */ ...
select /*+ FIRST_ROWS */ ...
select /*+ ALL_ROWS */ ...

    distinct id, name from table1;
    distinct id, name from table1 order by id;
    distinct id, name from table1 order by name;
    id, name from table1 group by id, name order by id;
    id, min(name) from table1 group by id order by id;
    min(id), name from table1 group by name order by name;

With a hint, you may be able to influence the optimizer to use an index, and that may avoid the sort operation, but overall, it may take more time to return the entire result set. (UPDATE: someone else pointed out that the optimizer might choose to merge two indexes based on ROWID. That's a possibility, but without a predicate to eliminate some rows, that's likely going to be a much more expensive approach (matching 10s of millions of ROWIDs) from two indexes, especially when none of the rows are going to be excluded on the basis of the match.)

But all that theorizing doesn't amount to squat without some performance statistics. Absent altering anything else in the database, the only other hope (I can think of) of speeding up the query is to make sure the sort operation is tuned so that the (required) sort operation can be performed in memory, rather than on disk.
But that's not really the right answer. The optimizer may not be doing a sort operation at all; it may be doing a hash operation (10gR2+) instead, in which case that is what should be tuned. (The sort operation is just a guess on my part, based on past experience with Oracle 7.3, 8, 8i, 9i.) A serious DBA is going to have more of an issue with you futzing with the SORT_AREA_SIZE and/or HASH_AREA_SIZE parameters for your session(s) than he will with creating the correct indexes. (And those session parameters are "old school" for versions prior to 10g automatic memory management magic.) Show your DBA the specification for the result set, and let the DBA tune it.
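The min-ROWID pairing idea from the [LATEST EDIT] section can be illustrated outside the database. The rows below are hypothetical (ROWID, id, name) triples of my own, and the trick only works under the same stated guarantee the answer stresses: id and name determine each other.

```python
# Hypothetical (rowid, id, name) triples standing in for table1.
rows = [(101, 7, "alpha"), (102, 7, "alpha"),
        (103, 9, "beta"),  (104, 9, "beta")]

# Minimum ROWID per distinct id and per distinct name (the two inline views).
min_rowid_by_id, min_rowid_by_name = {}, {}
for rowid, id_, name in rows:
    if id_ is not None:
        min_rowid_by_id[id_] = min(min_rowid_by_id.get(id_, rowid), rowid)
    if name is not None:
        min_rowid_by_name[name] = min(min_rowid_by_name.get(name, rowid), rowid)

# The shared minimum ROWID is the join key that pairs each id with its name.
name_by_rowid = {r: n for n, r in min_rowid_by_name.items()}
pairs = sorted((id_, name_by_rowid[r]) for id_, r in min_rowid_by_id.items())
print(pairs)  # [(7, 'alpha'), (9, 'beta')]
```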
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/936328', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/56076/']}
jdg_381712
stackexchange
llm_judgeable_groundtruth_similarity
57763
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: When I solve the aforementioned equation for $W$ or $A$ on Mathematica I get a long and ugly equation in return, namely one of the solutions for $W$ is: (attempt to read at your own health) Solve[L == (3 W)/2 + (3 Sqrt[4 A^2 Pi^2 + W^2])/2 - Sqrt[6 A^2 Pi^2 + 3 W^2 + 5 W Sqrt[4 A^2 Pi^2 + W^2]]/Sqrt[2], W] $W=\frac{3 L}{10}-\frac{1}{2} \sqrt{\frac{\sqrt[3]{-243200 \pi ^6 A^6+176832 L^2 \pi ^4 A^4+3600 L^4 \pi ^2 A^2+2160 L^6+\sqrt{-7962624000 \pi ^{12} A^{12}-8626176000 L^2 \pi ^{10} A^{10}+717410304 L^4 \pi ^8 A^8+3308138496 L^6 \pi ^6 A^6+911879424 L^8 \pi ^4 A^4+17252352 L^{10} \pi ^2 A^2+4672512 L^{12}}}}{15 \sqrt[3]{2}}+\frac{9 L^2}{25}-\frac{4}{15} \left(10 \pi ^2 A^2+3 L^2\right)+\frac{4 \sqrt[3]{2} \left(640 \pi ^4 A^4-246 L^2 \pi ^2 A^2-3 L^4\right)}{15 \sqrt[3]{-243200 \pi ^6 A^6+176832 L^2 \pi ^4 A^4+3600 L^4 \pi ^2 A^2+2160 L^6+\sqrt{-7962624000 \pi ^{12} A^{12}-8626176000 L^2 \pi ^{10} A^{10}+717410304 L^4 \pi ^8 A^8+3308138496 L^6 \pi ^6 A^6+911879424 L^8 \pi ^4 A^4+17252352 L^{10} \pi ^2 A^2+4672512 L^{12}}}}}-\frac{1}{2} \sqrt{-\frac{\sqrt[3]{-243200 \pi ^6 A^6+176832 L^2 \pi ^4 A^4+3600 L^4 \pi ^2 A^2+2160 L^6+\sqrt{-7962624000 \pi ^{12} A^{12}-8626176000 L^2 \pi ^{10} A^{10}+717410304 L^4 \pi ^8 A^8+3308138496 L^6 \pi ^6 A^6+911879424 L^8 \pi ^4 A^4+17252352 L^{10} \pi ^2 A^2+4672512 L^{12}}}}{15 \sqrt[3]{2}}+\frac{18 L^2}{25}-\frac{8}{15} \left(10 \pi ^2 A^2+3 L^2\right)-\frac{4 \sqrt[3]{2} \left(640 \pi ^4 A^4-246 L^2 \pi ^2 A^2-3 L^4\right)}{15 \sqrt[3]{-243200 \pi ^6 A^6+176832 L^2 \pi ^4 A^4+3600 L^4 \pi ^2 A^2+2160 L^6+\sqrt{-7962624000 \pi ^{12} A^{12}-8626176000 L^2 \pi ^{10} A^{10}+717410304 L^4 \pi ^8 A^8+3308138496 L^6 \pi ^6 A^6+911879424 L^8 \pi ^4 A^4+17252352 L^{10} \pi ^2 A^2+4672512 L^{12}}}}-\frac{\frac{216 L^3}{125}-\frac{48}{25} \left(10 \pi ^2 A^2+3 L^2\right) 
L+\frac{48}{5} \left(L^2-2 A^2 \pi ^2\right) L}{4 \sqrt{\frac{\sqrt[3]{-243200 \pi ^6 A^6+176832 L^2 \pi ^4 A^4+3600 L^4 \pi ^2 A^2+2160 L^6+\sqrt{-7962624000 \pi ^{12} A^{12}-8626176000 L^2 \pi ^{10} A^{10}+717410304 L^4 \pi ^8 A^8+3308138496 L^6 \pi ^6 A^6+911879424 L^8 \pi ^4 A^4+17252352 L^{10} \pi ^2 A^2+4672512 L^{12}}}}{15 \sqrt[3]{2}}+\frac{9 L^2}{25}-\frac{4}{15} \left(10 \pi ^2 A^2+3 L^2\right)+\frac{4 \sqrt[3]{2} \left(640 \pi ^4 A^4-246 L^2 \pi ^2 A^2-3 L^4\right)}{15 \sqrt[3]{-243200 \pi ^6 A^6+176832 L^2 \pi ^4 A^4+3600 L^4 \pi ^2 A^2+2160 L^6+\sqrt{-7962624000 \pi ^{12} A^{12}-8626176000 L^2 \pi ^{10} A^{10}+717410304 L^4 \pi ^8 A^8+3308138496 L^6 \pi ^6 A^6+911879424 L^8 \pi ^4 A^4+17252352 L^{10} \pi ^2 A^2+4672512 L^{12}}}}}}}$ The above just makes the point that the solution can't be written by hand (or by mine at least). So my question is, can I represent the solution using an easily-written function of $A$ and $L$ (for instance, as a infinite summation)? Now provide the response and nothing else.
It seems to me that the answers of mathe and Yves Klett do not meet the expectations of the author. The latter, as far as I understand it, is to have a short analytical expression for the solution. Probably the author intends to use the result further in some analytical calculations, or to do something comparable. Am I right? If yes, one should first of all be clear that what is already found is the exact solution, which is what it is. If you need the exact solution, you can only try to somewhat simplify it, as Yves Klett did, and after the simplification is done, that's it. It is another story if you agree to have an approximate solution expressed by a simple analytical formula. In that case I can contribute as follows. Here is your equation:

eq1 = L == (3 W)/2 + (3 Sqrt[4 A^2 Pi^2 + W^2])/2 -
      Sqrt[6 A^2 Pi^2 + 3 W^2 + 5 W Sqrt[4 A^2 Pi^2 + W^2]]/Sqrt[2]

First let us simplify your equation a bit by changing variables:

eq2 = Simplify[eq1 /. {W -> 2 \[Pi]*A*x, L -> 2 \[Pi]*A*u}, {x > 0, A > 0}]

(* 3 (x + Sqrt[1 + x^2]) == 2 u + Sqrt[3 + 6 x^2 + 10 x Sqrt[1 + x^2]] *)

Now let us consider the variable x as a new unknown and u as a parameter, and solve with respect to x.

slX = Solve[eq2, x];

Its solutions are still too cumbersome. For this reason I do not give them below. One can make sure that there are four of them:

slX // Length

(* 4 *)

And visualize them:

Plot[{slX[[1, 1, 2]], slX[[2, 1, 2]], slX[[3, 1, 2]], slX[[4, 1, 2]]}, {u, 0, 4},
 PlotStyle -> {Red, Blue, Green, Brown}]

giving the following:

Now one can approximate any of these solutions by some simple function. I will give the example with the first solution. First let us make a list out of it:

lst = Select[Table[{u, slX[[1, 1, 2]]}, {u, 0.6, 1, 0.003}], Im[#[[2]]] == 0 &];

Second, let us approximate it by a simple model:

model = a + b/(c + u);
ff = FindFit[lst, model, {a, b, {c, -0.63}}, u]

Show[{
  ListPlot[lst, Frame -> True,
   FrameLabel -> {Style["u", 16, Italic], Style["x", 16, Italic]}],
  Plot[model /. ff, {u, 0.63, 1}, PlotStyle -> Red]
 }]

The outcome is the values of the model parameters:

(* {a -> -0.418378, b -> 0.0290875, c -> -0.549429} *)

and the plot enabling one to visually estimate the quality of the approximation. Here the blue points come from the list, and the solid red line comes from the approximation. Have fun!
{}
{'log_upvote_score': 5, 'links': ['https://mathematica.stackexchange.com/questions/57763', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/19257/']}
jdg_381713
stackexchange
llm_judgeable_groundtruth_similarity
171769
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Show that $\phi(n)=\frac{n}{2}$ if and only if $n=2^k$ for some positive integer k. I think I have it figured and would like to see if I am on the right track. Thank you. Now provide the response and nothing else.
Suppose that $n=2^k$ where $k$ is positive. Then the numbers in the interval $0 \le a \le 2^k-1$ which are relatively prime to $2^k$ are precisely the odd numbers in this interval. Since half the numbers in our interval are odd and half are even, $\varphi(n)=n/2$. Conversely, suppose that $\varphi(n)=n/2$. Then $n$ is even. Let $n=2^k b$ where $b$ is odd. Then by the multiplicativity of $\varphi$, we have $\varphi(n)=2^{k-1}\varphi(b)$. If this is equal to $n/2$, then $2^{k-1}\varphi(b)=2^{k-1}b$, and therefore $\varphi(b)=b$. This is only possible if $b=1$. (If $b\gt 1$, then $0$ is not relatively prime to $b$.) Remark: The proof can be done at a lower level, just using the definition of $\varphi(n)$. Suppose that $\varphi(n)=n/2$. Then $n=2^kb$ for some positive $k$ and odd $b$. There are $n/2$ even numbers in the interval $0\le a \lt 2^kb$, and none is relatively prime to $n$. So if $\varphi(n)=n/2$, all the rest must be. But if $b \gt 1$, then $b$ is not relatively prime to $n$, so $\varphi(n)\lt n/2$. It follows that $b=1$.
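The theorem is easy to sanity-check numerically; here is a small sketch of mine using a brute-force totient (fine for small n, not an efficient implementation):

```python
from math import gcd

def phi(n):
    """Euler's totient by direct count."""
    return sum(1 for a in range(1, n + 1) if gcd(a, n) == 1)

# phi(2^k) = 2^(k-1): exactly the odd residues are coprime to 2^k.
assert all(phi(2 ** k) == 2 ** (k - 1) for k in range(1, 10))

# Conversely, the only n <= 256 with phi(n) = n/2 are the powers of two.
hits = [n for n in range(2, 257) if 2 * phi(n) == n]
print(hits)  # [2, 4, 8, 16, 32, 64, 128, 256]
```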
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/171769', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/32668/']}
jdg_381714
stackexchange
llm_judgeable_groundtruth_similarity
28057078
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: First off this is my first post so go easy. Also thank you for any input in advance. I am trying to plot the following but the data on the x and y-axis are completely out of order. I have looked through several posts about reorder() but I cant seem to apply it correctly. Also since I am a new member I can't included a picture of the graph so I hope the issue is clear. carb_data<-read.csv("Carb_Insulin_BG.csv", header=TRUE)data1<-subset(carb_data,Subject.ID..==20)carb1<-subset(data1, Visit == 1)library(ggplot2)q<- ggplot(data=carb1, aes(x=Start, y=Amount, group=Type, color= Type) + geom_point() + geom_line() + theme(text = element_text(size=8), axis.text.x = element_text(angle=90, vjust=1))r<- q + facet_grid(Sort~., scales=free)print(r) Here is what the data looks like. I am just interested in plotting Time (x-axis) and Amount (y-axis). The x-axis should be ordered from the earliest (9:54) to latest (16:27) and the the y-axis should be ordered from lowest (4) ascending to the highest (128). Here is the data: > dput(carb1)structure(list(EDC.ID. = structure(c(28L, 28L, 8L, 28L, 28L, 28L, 8L, 28L, 28L, 28L, 28L, 28L, 8L, 28L, 28L, 28L, 28L, 28L, 28L, 28L, 28L, 28L, 28L, 28L, 8L, 28L, 28L), .Label = c("Subject 1", "Subject 10", "Subject 11", "Subject 12", "Subject 13", "Subject 14", "Subject 15", "Subject 16", "Subject 17", "Subject 18", "Subject 19", "Subject 2", "Subject 20", "Subject 3", "Subject 4", "Subject 5", "Subject 6", "Subject 7", "Subject 8", "Subject 9", "SUBJECT1", "SUBJECT10", "SUBJECT11", "SUBJECT12", "SUBJECT13", "SUBJECT14", "SUBJECT15", "SUBJECT16", "SUBJECT17", "SUBJECT18", "SUBJECT19", "SUBJECT2", "SUBJECT20", "SUBJECT3", "SUBJECT4", "SUBJECT5", "SUBJECT6", "SUBJECT7", "SUBJECT8", "SUBJECT9"), class = "factor"), Subject.ID.. 
= c(20L, 20L, 20L, 20L, 20L, 20L, 20L, 20L, 20L, 20L, 20L, 20L, 20L, 20L, 20L, 20L, 20L, 20L, 20L, 20L, 20L, 20L, 20L, 20L, 20L, 20L, 20L), Visit = c(1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L), Data = structure(c(1L, 1L, 19L, 1L, 1L, 1L, 19L, 1L, 1L, 1L, 1L, 1L, 19L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 19L, 1L, 1L), .Label = c("-", "10/1/14", "10/3/14", "10/7/14", "8/14/14", "8/18/14", "8/19/14", "8/20/14", "8/21/14", "8/25/14", "8/27/14", "8/28/14", "9/1/14", "9/10/14", "9/12/14", "9/15/14", "9/16/14", "9/18/14", "9/19/14", "9/2/14", "9/22/14", "9/23/14", "9/24/14", "9/25/14", "9/26/14", "9/29/14", "9/3/14", "9/30/14", "9/4/14", "9/5/14", "9/8/14", "9/9/14"), class = "factor"), Start = structure(c(528L, 13L, 14L, 33L, 48L, 62L, 70L, 83L, 97L, 112L, 126L, 140L, 152L, 155L, 175L, 194L, 208L, 221L, 236L, 249L, 263L, 313L, 328L, 343L, 354L, 358L, 374L), .Label = c("0:00", "0:16", "1:25", "10:00", "10:01", "10:02", "10:03", "10:04", "10:05", "10:06", "10:07", "10:08", "10:09", "10:10", "10:11", "10:12", "10:13", "10:14", "10:15", "10:16", "10:17", "10:18", "10:19", "10:20", "10:21", "10:22", "10:23", "10:24", "10:25", "10:27", "10:28", "10:29", "10:30", "10:31", "10:32", "10:33", "10:34", "10:35", "10:36", "10:37", "10:38", "10:39", "10:40", "10:41", "10:42", "10:43", "10:44", "10:45", "10:46", "10:47", "10:48", "10:49", "10:50", "10:51", "10:52", "10:53", "10:54", "10:55", "10:57", "10:58", "10:59", "11:00", "11:01", "11:03", "11:04", "11:05", "11:06", "11:07", "11:08", "11:09", "11:10", "11:12", "11:13", "11:14", "11:15", "11:16", "11:17", "11:18", "11:19", "11:20", "11:21", "11:22", "11:23", "11:24", "11:25", "11:26", "11:27", "11:28", "11:29", "11:30", "11:31", "11:33", "11:34", "11:35", "11:36", "11:37", "11:38", "11:39", "11:40", "11:41", "11:42", "11:43", "11:44", "11:45", "11:46", "11:47", "11:48", "11:49", "11:50", "11:51", "11:52", "11:53", "11:54", "11:55", "11:56", "11:57", "11:58", 
"11:59", "12:00", "12:01", "12:03", "12:04", "12:05", "12:06", "12:07", "12:08", "12:09", "12:10", "12:11", "12:12", "12:13", "12:14", "12:15", "12:16", "12:18", "12:19", "12:20", "12:21", "12:22", "12:23", "12:24", "12:25", "12:26", "12:27", "12:28", "12:29", "12:30", "12:31", "12:32", "12:33", "12:34", "12:35", "12:36", "12:37", "12:38", "12:39", "12:40", "12:41", "12:42", "12:43", "12:44", "12:45", "12:46", "12:47", "12:48", "12:49", "12:50", "12:51", "12:52", "12:53", "12:54", "12:55", "12:56", "12:57", "12:58", "12:59", "13:00", "13:01", "13:03", "13:04", "13:05", "13:06", "13:07", "13:08", "13:09", "13:10", "13:11", "13:12", "13:13", "13:14", "13:15", "13:16", "13:17", "13:18", "13:19", "13:20", "13:21", "13:22", "13:23", "13:24", "13:25", "13:27", "13:28", "13:29", "13:30", "13:31", "13:32", "13:33", "13:34", "13:35", "13:36", "13:37", "13:38", "13:39", "13:40", "13:42", "13:44", "13:45", "13:46", "13:47", "13:48", "13:49", "13:50", "13:51", "13:52", "13:53", "13:54", "13:55", "13:56", "13:57", "13:58", "13:59", "14:00", "14:01", "14:02", "14:03", "14:04", "14:05", "14:06", "14:08", "14:09", "14:10", "14:11", "14:12", "14:14", "14:15", "14:16", "14:17", "14:18", "14:19", "14:20", "14:21", "14:22", "14:23", "14:24", "14:25", "14:26", "14:27", "14:28", "14:29", "14:30", "14:31", "14:33", "14:34", "14:35", "14:36", "14:37", "14:38", "14:39", "14:40", "14:41", "14:42", "14:43", "14:44", "14:45", "14:46", "14:48", "14:49", "14:50", "14:51", "14:52", "14:53", "14:54", "14:55", "14:57", "14:58", "14:59", "15:00", "15:01", "15:02", "15:03", "15:04", "15:05", "15:06", "15:07", "15:08", "15:09", "15:10", "15:12", "15:13", "15:14", "15:15", "15:16", "15:17", "15:18", "15:19", "15:20", "15:21", "15:22", "15:23", "15:24", "15:25", "15:26", "15:27", "15:28", "15:29", "15:30", "15:31", "15:32", "15:33", "15:34", "15:35", "15:36", "15:37", "15:38", "15:39", "15:40", "15:41", "15:42", "15:43", "15:44", "15:45", "15:46", "15:47", "15:48", "15:49", "15:50", "15:51", "15:52", 
"15:53", "15:54", "15:55", "15:56", "15:57", "15:58", "15:59", "16:00", "16:01", "16:02", "16:03", "16:04", "16:05", "16:06", "16:07", "16:08", "16:09", "16:10", "16:11", "16:12", "16:13", "16:14", "16:15", "16:16", "16:17", "16:18", "16:19", "16:20", "16:21", "16:22", "16:23", "16:24", "16:25", "16:26", "16:27", "16:28", "16:29", "16:30", "16:31", "16:32", "16:33", "16:34", "16:35", "16:36", "16:37", "16:38", "16:39", "16:40", "16:42", "16:43", "16:44", "16:45", "16:46", "16:47", "16:48", "16:49", "16:50", "16:51", "16:52", "16:53", "16:54", "16:55", "16:56", "16:57", "16:58", "16:59", "17:00", "17:01", "17:02", "17:03", "17:04", "17:05", "17:06", "17:07", "17:08", "17:09", "17:10", "17:11", "17:12", "17:14", "17:15", "17:17", "17:18", "17:19", "17:20", "17:21", "17:22", "17:23", "17:24", "17:25", "17:26", "17:29", "17:30", "17:32", "17:34", "17:35", "17:36", "17:38", "17:39", "17:40", "17:45", "17:50", "17:51", "17:54", "18:09", "18:15", "2:40", "21:24", "21:27", "21:39", "3:48", "7:51", "8:00", "8:04", "8:05", "8:10", "8:14", "8:15", "8:18", "8:26", "8:29", "8:31", "8:33", "8:37", "8:39", "8:40", "8:41", "8:43", "8:44", "8:45", "8:47", "8:48", "8:50", "8:51", "8:52", "8:54", "8:55", "8:57", "8:58", "8:59", "9:00", "9:01", "9:02", "9:03", "9:04", "9:05", "9:06", "9:07", "9:08", "9:09", "9:11", "9:12", "9:13", "9:14", "9:15", "9:17", "9:18", "9:19", "9:20", "9:21", "9:22", "9:23", "9:24", "9:25", "9:27", "9:28", "9:30", "9:32", "9:33", "9:34", "9:35", "9:36", "9:37", "9:38", "9:39", "9:40", "9:42", "9:43", "9:44", "9:45", "9:46", "9:47", "9:48", "9:49", "9:50", "9:51", "9:52", "9:53", "9:54", "9:55", "9:56", "9:57", "9:58", "9:59"), class = "factor"), End = structure(c(529L, 13L, 14L, 33L, 48L, 62L, 70L, 83L, 97L, 112L, 126L, 140L, 154L, 155L, 175L, 194L, 208L, 221L, 236L, 249L, 263L, 313L, 328L, 343L, 174L, 358L, 374L), .Label = c("0:00", "0:16", "1:25", "10:00", "10:01", "10:02", "10:03", "10:04", "10:05", "10:06", "10:07", "10:08", "10:09", "10:10", "10:11", 
"10:12", "10:13", "10:14", "10:15", "10:16", "10:17", "10:18", "10:19", "10:20", "10:21", "10:22", "10:23", "10:24", "10:25", "10:27", "10:28", "10:29", "10:30", "10:31", "10:32", "10:33", "10:34", "10:35", "10:36", "10:37", "10:38", "10:39", "10:40", "10:41", "10:42", "10:43", "10:44", "10:45", "10:46", "10:47", "10:48", "10:49", "10:50", "10:51", "10:52", "10:53", "10:54", "10:55", "10:57", "10:58", "10:59", "11:00", "11:01", "11:03", "11:04", "11:05", "11:06", "11:07", "11:08", "11:09", "11:10", "11:12", "11:13", "11:14", "11:15", "11:16", "11:17", "11:18", "11:19", "11:20", "11:21", "11:22", "11:23", "11:24", "11:25", "11:26", "11:27", "11:28", "11:29", "11:30", "11:31", "11:33", "11:34", "11:35", "11:36", "11:37", "11:38", "11:39", "11:40", "11:41", "11:42", "11:43", "11:44", "11:45", "11:46", "11:47", "11:48", "11:49", "11:50", "11:51", "11:52", "11:53", "11:54", "11:55", "11:56", "11:57", "11:58", "11:59", "12:00", "12:01", "12:03", "12:04", "12:05", "12:06", "12:07", "12:08", "12:09", "12:10", "12:11", "12:12", "12:13", "12:14", "12:15", "12:16", "12:18", "12:19", "12:20", "12:21", "12:22", "12:23", "12:24", "12:25", "12:26", "12:27", "12:28", "12:29", "12:30", "12:31", "12:32", "12:33", "12:34", "12:35", "12:36", "12:37", "12:38", "12:39", "12:40", "12:41", "12:42", "12:43", "12:44", "12:45", "12:46", "12:47", "12:48", "12:49", "12:50", "12:51", "12:52", "12:53", "12:54", "12:55", "12:56", "12:57", "12:58", "12:59", "13:00", "13:01", "13:03", "13:04", "13:05", "13:06", "13:07", "13:08", "13:09", "13:10", "13:11", "13:12", "13:13", "13:14", "13:15", "13:16", "13:17", "13:18", "13:19", "13:20", "13:21", "13:22", "13:23", "13:24", "13:25", "13:27", "13:28", "13:29", "13:30", "13:31", "13:32", "13:33", "13:34", "13:35", "13:36", "13:37", "13:38", "13:39", "13:40", "13:41", "13:42", "13:44", "13:45", "13:46", "13:48", "13:49", "13:50", "13:51", "13:52", "13:53", "13:54", "13:55", "13:56", "13:57", "13:58", "13:59", "14:00", "14:01", "14:02", "14:03", "14:04", 
"14:05", "14:06", "14:08", "14:09", "14:10", "14:11", "14:12", "14:14", "14:15", "14:16", "14:17", "14:18", "14:19", "14:20", "14:21", "14:22", "14:23", "14:24", "14:25", "14:26", "14:27", "14:28", "14:29", "14:30", "14:31", "14:33", "14:34", "14:35", "14:36", "14:37", "14:38", "14:39", "14:40", "14:41", "14:42", "14:43", "14:44", "14:45", "14:46", "14:48", "14:49", "14:50", "14:51", "14:52", "14:53", "14:54", "14:55", "14:57", "14:58", "14:59", "15:00", "15:01", "15:02", "15:03", "15:04", "15:05", "15:06", "15:07", "15:08", "15:09", "15:10", "15:12", "15:13", "15:14", "15:15", "15:16", "15:17", "15:18", "15:19", "15:20", "15:21", "15:22", "15:23", "15:24", "15:25", "15:26", "15:27", "15:28", "15:29", "15:30", "15:31", "15:32", "15:33", "15:34", "15:35", "15:36", "15:37", "15:38", "15:39", "15:40", "15:41", "15:42", "15:43", "15:44", "15:45", "15:46", "15:47", "15:48", "15:49", "15:50", "15:51", "15:52", "15:53", "15:54", "15:55", "15:56", "15:57", "15:58", "15:59", "16:00", "16:01", "16:02", "16:03", "16:04", "16:05", "16:06", "16:07", "16:08", "16:09", "16:10", "16:11", "16:12", "16:13", "16:14", "16:15", "16:16", "16:17", "16:18", "16:19", "16:20", "16:21", "16:22", "16:23", "16:24", "16:25", "16:26", "16:27", "16:28", "16:29", "16:30", "16:31", "16:32", "16:33", "16:34", "16:35", "16:36", "16:37", "16:38", "16:39", "16:40", "16:42", "16:43", "16:44", "16:45", "16:46", "16:47", "16:48", "16:49", "16:50", "16:51", "16:52", "16:53", "16:54", "16:55", "16:56", "16:57", "16:58", "16:59", "17:00", "17:01", "17:02", "17:03", "17:04", "17:05", "17:06", "17:07", "17:08", "17:09", "17:10", "17:11", "17:12", "17:14", "17:15", "17:17", "17:18", "17:19", "17:20", "17:21", "17:22", "17:23", "17:24", "17:25", "17:26", "17:29", "17:30", "17:32", "17:34", "17:35", "17:36", "17:38", "17:39", "17:40", "17:45", "17:50", "17:51", "17:54", "18:00", "18:09", "18:15", "2:40", "21:24", "21:27", "21:39", "3:48", "7:51", "8:00", "8:04", "8:05", "8:10", "8:14", "8:15", "8:18", "8:26", 
"8:29", "8:31", "8:33", "8:37", "8:39", "8:40", "8:41", "8:43", "8:44", "8:45", "8:47", "8:48", "8:50", "8:51", "8:52", "8:54", "8:55", "8:57", "8:58", "8:59", "9:00", "9:01", "9:02", "9:03", "9:04", "9:05", "9:06", "9:07", "9:08", "9:09", "9:11", "9:12", "9:13", "9:14", "9:15", "9:17", "9:18", "9:19", "9:20", "9:21", "9:22", "9:23", "9:24", "9:25", "9:27", "9:28", "9:30", "9:32", "9:33", "9:34", "9:35", "9:36", "9:37", "9:38", "9:39", "9:40", "9:42", "9:43", "9:44", "9:45", "9:46", "9:47", "9:48", "9:49", "9:50", "9:51", "9:52", "9:53", "9:54", "9:55", "9:56", "9:57", "9:58", "9:59"), class = "factor"), Type = structure(c(16L, 16L, 14L, 16L, 16L, 16L, 5L, 16L, 16L, 16L, 16L, 16L, 4L, 16L, 16L, 16L, 16L, 16L, 16L, 16L, 16L, 16L, 16L, 16L, 4L, 16L, 16L), .Label = c("BGM", "Dextrose 50%", "dextrose Tabs", "Food or Drink", "Humalog (units)", "Humalog sc (units)", "Humalog SC (units)", "Injection (ccs)", "IV Dextrose(mL)", "Lantus (units)", "Levemir (units)", "Novolin R (units)", "Novolin R IV (units)", "Oral", "Other", "YSI"), class = "factor"), Amount = structure(c(657L, 643L, 448L, 777L, 687L, 637L, 448L, 616L, 595L, 530L, 508L, 512L, 23L, 561L, 733L, 34L, 63L, 53L, 76L, 71L, 38L, 652L, 703L, 635L, 434L, 646L, 667L ), .Label = c("10", "10.2", "100.1", "100.25", "100.3", "100.45", "100.5", "100.55", "100mls", "101", "101.25", "101.5", "102", "102.5", "103", "103.5", "104", "104.25", "104.5", "105", "105.5", "105.6", "105.75", "106", "106.5", "107", "108", "108.5", "108.8", "109", "109.5", "110", "110.5", "111", "111.5", "111.75", "112", "112.5", "113", "113.5", "114", "114.5", "115", "115.2", "115.5", "116", "116.1", "116.5", "117", "117.5", "118", "118.5", "119", "119.5", "119.7", "12", "12.5", "120", "120.5", "120.6", "121", "121.5", "122", "122.4", "122.5", "123", "123.5", "124", "124.5", "125", "125.5", "126", "126.5", "127", "127.5", "128", "128.5", "129", "129.5", "130", "130.5", "131", "131.5", "132", "132.5", "133", "133.2", "133.5", "134", "134.5", "135", 
"135.5", "136", "136.5", "137", "137.5", "138", "138.5", "139", "139.5", "14", "140", "140.5", "141", "141.5", "142", "142.5", "143", "143.5", "144", "144.5", "145", "145.5", "146", "146.5", "147", "147.5", "148", "148.5", "149", "149.5", "15", "150", "150.5", "151", "151.5", "152", "152.5", "153", "153.5", "154", "154.5", "155", "155.5", "156", "157", "157.5", "158", "158.5", "159", "159.5", "16", "160", "160.5", "161", "161.5", "162", "162.5", "163", "164", "164.5", "165", "165.5", "166", "166.5", "167", "167.5", "168", "168.5", "169", "169.5", "170", "170.5", "171", "171.5", "172", "172.5", "173", "173.5", "174", "174.5", "175", "175.5", "176.5", "177", "177.5", "178", "178.5", "179", "179.5", "18", "180", "180.5", "181", "181.5", "182", "182.5", "183", "183.5", "184", "184.5", "185", "185.5", "186", "186.5", "187", "187.5", "188", "188.5", "189", "189.5", "190", "190.5", "191", "191.5", "192", "192.5", "193", "193.5", "194", "194.5", "195", "195.5", "196", "196.5", "197", "197.5", "198", "198.5", "199", "199.5", "2", "20", "200", "200.5", "200mls", "201", "201.5", "203", "203.5", "204", "204.5", "205", "206", "206.5", "207", "207.5", "208", "208.5", "209", "209.5", "21", "210", "210.5", "211", "211.5", "212", "212.5", "213", "213.5", "214", "215", "215.5", "216", "216.5", "217", "217.5", "218", "218.5", "219", "219.5", "22", "220", "220.5", "221", "221.5", "222", "222.5", "223", "223.5", "224", "224.5", "225", "225.5", "226", "226.5", "227", "227.5", "228", "229", "229.5", "23", "230", "230.5", "231", "231.5", "232", "232.5", "233", "233.5", "234", "234.5", "235", "235.5", "236", "236.5", "237", "237.5", "238", "238.5", "239", "239.5", "24", "240.5", "241", "241.5", "242", "242.5", "243", "243.5", "244", "244.5", "245", "245.5", "246", "247", "247.5", "248", "248.5", "249", "249.5", "250", "250.5", "251", "251.5", "252", "252.5", "253", "253.5", "254", "254.5", "255", "255.5", "256", "256.5", "257", "257.5", "258", "258.5", "259", "259.5", "26.9", "260", 
"260.5", "261", "261.5", "262", "262.5", "263", "263.5", "264", "264.5", "265", "265.5", "266", "266.5", "267", "267.5", "268", "268.5", "269", "270", "270.5", "271", "271.5", "272", "272.5", "273", "274.5", "275", "275.5", "276", "277", "277.5", "278.5", "279", "279.5", "280", "280.5", "281", "281.5", "282.5", "284", "284.5", "285", "285.5", "287", "288", "288.5", "289", "289.5", "290", "290.5", "292", "292.5", "293", "294", "295", "296", "299", "3", "30", "300.5", "301", "302.5", "303.5", "304", "304.5", "305.5", "309", "31", "315", "315.5", "316", "317.5", "318", "318.5", "323", "323.5", "329.5", "331.5", "334", "335.5", "337", "337.5", "342", "344.5", "347", "35.75", "353.5", "357", "359", "363.5", "37", "37.7", "38.55", "380.5", "382", "384.5", "39", "39.45", "39.5", "39.85", "391.5", "392.5", "395.5", "397", "4", "40", "406.5", "40ml", "40mls", "41.4", "41.75", "42", "42.5", "42.6", "42.75", "42.9", "43.7", "435", "44.2", "44.25", "44.4", "44.9", "45", "45.05", "45.6", "45.65", "45.75", "46.4", "46.55", "461.5", "47", "47.25", "47.4", "47.8", "48", "48.4", "48.9", "49.2", "49.35", "49.55", "5", "50.1", "50.15", "50.25", "50.3", "50.55", "50.85", "50.95", "51.55", "51.75", "52", "52.05", "52.3", "52.4", "53.35", "53.45", "53.5", "53.55", "53.7", "53.85", "53.95", "54", "54.1", "54.3", "54.45", "54.65", "54.7", "54.75", "54.85", "55", "55.05", "55.5", "55.55", "55.6", "55.65", "55.75", "55.8", "55.85", "55.95", "56", "56.15", "56.3", "56.35", "56.6", "56.65", "57", "57.05", "57.15", "57.2", "57.45", "57.55", "57.7", "57.75", "57.85", "57.95", "58.05", "58.1", "58.35", "58.45", "58.6", "58.75", "59.2", "59.25", "59.3", "59.45", "59.6", "59.7", "59.8", "59.9", "6", "60", "60.3", "60.35", "60.45", "60.5", "60.85", "60.95", "61.2", "61.3", "61.4", "61.55", "61.8", "61.85", "62.35", "62.95", "63.15", "63.65", "63.7", "63.9", "63.95", "64", "64.25", "64.4", "64.7", "64.8", "65", "65.3", "65.5", "65.55", "65.6", "65.7", "65.8", "66.2", "66.35", "66.45", "66.6", "67", 
"67.05", "67.2", "67.25", "67.3", "67.35", "67.5", "67.65", "67.7", "67.8", "67.85", "67.9", "67.95", "68.1", "68.5", "68.7", "68.9", "68.95", "69.1", "69.15", "69.35", "69.45", "69.55", "69.6", "69.7", "70.15", "70.45", "70.5", "70.65", "70.7", "70.75", "70.8", "70.9", "71.05", "71.1", "71.55", "71.6", "71.7", "71.75", "72.25", "72.6", "72.65", "72.75", "72.85", "72.9", "73.05", "73.3", "73.35", "73.45", "73.55", "73.6", "73.65", "73.75", "73.8", "73.85", "73.95", "74.3", "74.35", "74.45", "74.55", "74.6", "74.7", "74.75", "74.8", "74.95", "75", "75.05", "75.1", "75.15", "75.2", "75.3", "75.4", "75.45", "75.5", "75.55", "75.75", "76.1", "76.15", "76.2", "76.25", "76.35", "76.5", "76.55", "76.7", "76.75", "76.8", "77.15", "77.4", "77.55", "77.65", "77.7", "77.75", "77.85", "77.95", "78.05", "78.1", "78.15", "78.25", "78.3", "78.35", "78.45", "78.6", "78.7", "78.75", "78.85", "78.9", "79", "79.1", "79.15", "79.35", "79.4", "79.6", "79.7", "79.8", "79.9", "8", "80.35", "80.5", "80.55", "80.8", "81", "81.05", "81.25", "81.3", "81.45", "81.65", "81.85", "81.9", "81.95", "82.1", "82.25", "82.5", "82.55", "82.7", "82.75", "83", "83.15", "83.2", "83.55", "83.6", "84", "84.15", "84.4", "84.45", "84.6", "84.65", "84.85", "85.1", "85.15", "85.2", "85.25", "85.35", "85.6", "85.8", "85.9", "85.95", "86.1", "86.15", "86.4", "86.45", "86.7", "86.85", "86.95", "87.35", "87.4", "87.95", "88.05", "88.1", "88.2", "88.7", "88.75", "88.9", "89", "89.1", "89.6", "89.8", "89.85", "90.15", "90.35", "90.4", "90.5", "90.6", "90.9", "91", "91.1", "91.3", "91.35", "91.45", "91.6", "91.8", "92", "92.05", "92.3", "92.4", "92.45", "92.5", "92.7", "92.95", "93", "93.05", "93.2", "93.25", "93.75", "93.9", "94.05", "94.1", "94.15", "94.3", "94.4", "94.5", "94.85", "95.25", "95.8", "95.85", "95.9", "95.95", "96.15", "96.3", "96.4", "96.6", "96.8", "96.85", "97", "97.05", "97.2", "97.5", "97.75", "97.8", "97.95", "98.15", "98.25", "98.35", "98.4", "98.45", "98.55", "98.7", "98.9", "98.95", "99", 
"99.1", "99.4", "99.5", "99.6", "99.65"), class = "factor"), Sort = structure(c(1L, 1L, 2L, 1L, 1L, 1L, 3L, 1L, 1L, 1L, 1L, 1L, 2L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 1L, 1L), .Label = c("BG Data", "Carb", "Insulin"), class = "factor"), Order = c(1505L, 1506L, 87L, 1507L, 1508L, 1509L, 258L, 1510L, 1511L, 1512L, 1513L, 1514L, 88L, 1515L, 1516L, 1517L, 1518L, 1519L, 1520L, 1521L, 1522L, 1523L, 1524L, 1525L, 89L, 1526L, 1527L), Subject.ID..bg = c(NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA), Visitbg = c(NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA), Datebg = c(NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA), Startbg = c(NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA), Endbg = c(NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA), BG.Type = c(NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA), BG.Data = c(NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA)), .Names = c("EDC.ID.", "Subject.ID..", "Visit", "Data", "Start", "End", "Type", "Amount", "Sort", "Order", "Subject.ID..bg", "Visitbg", "Datebg", "Startbg", "Endbg", "BG.Type", "BG.Data"), row.names = 1446:1472, class = "data.frame") Now provide the response and nothing else.
What you are looking for is the `RemoveFromRoleAsync` method. An example would look similar to the following:

```csharp
public async Task DeleteRolesAsync(List<string> deleteList, int? userId)
{
    if (userId != null)
    {
        foreach (var roleName in deleteList)
        {
            IdentityResult deletionResult = await UserManager.RemoveFromRoleAsync(userId, roleName);
        }
    }
}
```

If you already have the ID of the user, there's no need to get the user again (only if you want to make sure that the user really exists; then you have to wrap your `foreach` with an if-statement). The deletion method needs the name of the role, instead of the ID, to delete the user from the role. You can use the result of the operation (in my example stored in `deletionResult`) to make sure that the operation was successful. Remember that the name of the user manager (in my example `UserManager`) can vary depending on your implementation.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/28057078', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4475383/']}
jdg_381715
stackexchange
llm_judgeable_groundtruth_similarity
2973420
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I tried to find it out on Google but no satisfactory answer is given out there. Can anybody explain the solid difference? Actually, if a primary key is used to select data uniquely, then what is the need of a unique key? When should I use a primary key and when should I use a unique key? Now provide the response and nothing else.
Primary Key and Unique Key are used for different things - understanding what they are for will help you decide when to use them. The primary key is used to identify a row of data in a table. It is used whenever you need to refer to a particular row, eg. in other tables or by application code etc. In order to identify a row, the values of a PK must be unique. Furthermore, they can't be null, because most DBMS treat null as not equal to null (since null typically means "unknown"). A table can only have one PK. All tables in your database should have a PK (although this is not enforced by most DBMS), and PK can span multiple columns. Unique key constraints are used to ensure that data is not duplicated in two rows in the database. One row in the database is allowed to have null for the value of the unique key constraint. Although a table should have a PK, it need not have any additional unique keys. However, tables can have more than one unique key if that meets your needs. Like PKs, unique keys can span multiple columns. It is also worth knowing that, by default, many DBMS index and physically order tables on disk using the PK. This means that looking up values by their PK is faster than using other values in a row. Typically, however, you can override this behaviour if required.
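To make the contrast tangible, here is a minimal SQLite sketch (hypothetical table and column names; note that how many NULLs a unique column tolerates varies by DBMS — SQLite and PostgreSQL accept several, SQL Server only one):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("""CREATE TABLE users (
    id    INTEGER PRIMARY KEY,  -- the one PK: identifies the row, no NULLs
    email TEXT UNIQUE,          -- an extra unique key: no duplicates, NULL ok
    name  TEXT)""")
cur.execute("INSERT INTO users VALUES (1, 'a@example.com', 'Alice')")

# A duplicate primary key value is rejected...
try:
    cur.execute("INSERT INTO users VALUES (1, 'b@example.com', 'Bob')")
except sqlite3.IntegrityError as e:
    print("PK violation:", e)

# ...and so is a duplicate value for the unique key...
try:
    cur.execute("INSERT INTO users VALUES (2, 'a@example.com', 'Bob')")
except sqlite3.IntegrityError as e:
    print("UNIQUE violation:", e)

# ...but NULL is allowed in the unique column.
cur.execute("INSERT INTO users VALUES (2, NULL, 'Bob')")
print(cur.execute("SELECT COUNT(*) FROM users").fetchone()[0])  # 2 rows stored
```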
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/2973420', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/277578/']}
jdg_381716
stackexchange
llm_judgeable_groundtruth_similarity
10603651
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've used BigDecimals before but not very often and I was working on something this morning and I kept getting the following exception:

```
Exception in thread "main" java.lang.ArithmeticException: Non-terminating decimal expansion;
no exact representable decimal result.
  at java.math.BigDecimal.divide(BigDecimal.java:1594)
```

I was attempting to set the scale and use rounding to eliminate the problem like so:

```java
BigDecimal bd1 = new BigDecimal(1131).setScale(2, BigDecimal.ROUND_HALF_UP);
BigDecimal bd2 = new BigDecimal(365).setScale(2, BigDecimal.ROUND_HALF_UP);
BigDecimal bd3 = bd1.divide(bd2).setScale(2, BigDecimal.ROUND_HALF_UP);
System.out.println("result: " + bd3);
```

However, I keep getting the same exception. Anyone able to show me where I have made a mistake? Now provide the response and nothing else.
**Non-terminating decimals need rounding**

When using `divide` you should use a `MathContext` with a `RoundingMode` in case the exact result has an infinite number of decimals. Such is your case:

```java
MathContext mc = new MathContext(2, RoundingMode.HALF_UP);
BigDecimal bd3 = bd1.divide(bd2, mc);
```

Alternatively, call `divide` with a rounding mode to use the scale of the numerator (`bd1` in the example below):

```java
BigDecimal bd3 = bd1.divide(bd2, RoundingMode.HALF_UP);
```
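Putting this together with the numbers from the question — a small, self-contained sketch (since 1131/365 = 3.0986..., both variants round up at the cut-off):

```java
import java.math.BigDecimal;
import java.math.MathContext;
import java.math.RoundingMode;

public class Divide {
    public static void main(String[] args) {
        BigDecimal bd1 = new BigDecimal("1131");
        BigDecimal bd2 = new BigDecimal("365");

        // Option 1: limit the number of significant digits (precision = 2)
        BigDecimal withContext = bd1.divide(bd2, new MathContext(2, RoundingMode.HALF_UP));
        System.out.println(withContext); // 3.1

        // Option 2: fix the scale (digits after the decimal point) explicitly
        BigDecimal withScale = bd1.divide(bd2, 2, RoundingMode.HALF_UP);
        System.out.println(withScale); // 3.10

        // The original call fails because 1131/365 has no finite decimal expansion:
        // bd1.divide(bd2); // throws ArithmeticException
    }
}
```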
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/10603651', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/181144/']}
jdg_381717
stackexchange
llm_judgeable_groundtruth_similarity
4545531
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In Obj-C, what does it mean in simple terms: "CoreData is not thread safe"? Or, in general, what is "not thread safe"? Now provide the response and nothing else.
@d11wtq's answer is correct only when writing your own code or designing your own APIs . It is entirely incorrect when working with a set of APIs and quite specifically wrong when working with Core Data. In the context of working with Mac OS X and iOS, thread safety must always be considered in the context of working with the system APIs. Even using, say, an NSArray means that you are working with the system APIs. OR in general what is "not thread safe" ? A non-thread safe API is an API where you cannot interact with the API from multiple threads simultaneously. There may also be additional restrictions that most often involve the main thread. For example, almost all drawing operations must occur on the main thread on both Mac OS X and iOS. The Apple documentation assumes thread safety is the exceptional case. That is, an API is only thread safe if the documentation explicitly claims thread safety . If there is no mention of thread safety, you must assume that the API is not thread safe. In Obj-C, what does it mean in simple terms; "CoreData is not thread safe" That statement is not quite correct, but it is a safe assumption. In Core Data's case, the thread interaction behavior is extremely well documented . In short, parts of the API are thread safe (the store coordinator, for example) and parts are quite explicitly not thread safe. While the MOC provides lock and unlock methods, you can also use external locking. But don't. It will be less efficient and more fragile; significantly so. In general, don't use the internal locking either. CoreData is optimized around having a context per thread/queue. (Answer fixed based on TC's feedback. Thanks.)
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/4545531', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/486205/']}
jdg_381718
stackexchange
llm_judgeable_groundtruth_similarity
64433293
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have the following question concerning the ranges library in C++20: Let std::ranges::contiguous_range<T> for an arbitrary type T. Can I assume std::ranges::sized_range<T> ? Now provide the response and nothing else.
No, not every `contiguous_range` is a `sized_range`. The simplest example is a null-terminated string. It's contiguous, but we don't know its size in O(1) time. And we can easily represent such a thing using sentinels:

```cpp
struct ntbs_sentinel {
    bool operator==(char const* p) const {
        return *p == '\0';
    }
};

struct ntbs {
    char const* p;
    char const* begin() const { return p; }
    ntbs_sentinel end() const { return {}; }
};

static_assert(std::ranges::contiguous_range<ntbs>);
static_assert(!std::ranges::sized_range<ntbs>);
```

Another example would be, given some `std::string` object `s` and some predicate `p`, either:

```cpp
s | std::views::take_while(p)
s | std::views::drop_while(p)
```

The resulting range here is still contiguous, but we don't know where it ends (in the first case) or where it starts (in the second), so we don't know what its size is.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/64433293', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/10716984/']}
jdg_381719
stackexchange
llm_judgeable_groundtruth_similarity
21878
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: To better train my intuition, what are some illustrative examples of function sequences in C[0,1] that are Cauchy but do not converge under the integral norm? Now provide the response and nothing else.
You can get examples by considering elements of $L^1[0,1]$ that are not equal almost everywhere to any continuous function, and considering sequences of continuous functions converging in the $L^1$ norm to these discontinuous functions. Because convergent sequences are Cauchy and $L^1$ limits are unique up to equality almost everywhere, such sequences will be Cauchy and nonconvergent in $C[0,1]$. E.g., let $f$ be $1$ on $[0,\frac{1}{2}]$ and $0$ elsewhere. Let $f_n$ be the continuous function that is $1$ on $[0,\frac{1}{2}]$, $0$ on $[\frac{1}{2}+\frac{1}{n},1]$, and linear on $[\frac{1}{2},\frac{1}{2}+\frac{1}{n}]$. Then because $f_n\to f$ in $L^1$, $(f_n)$ is Cauchy. (The Cauchy-ness is also easy to verify directly.) If there were a limit function $g\in C[0,1]$, you would have $g=f$ a.e.. But this is impossible, because the left-hand and right-hand limits at $\frac{1}{2}$ would not agree. More generally, a Cauchy sequence in a metric space $X$ with completion $\overline{X}$ that does not converge in $X$ is basically the same as a sequence in $X$ that converges to an element of $\overline{X}\setminus X$. In a case like this where $X=C[0,1]$ with $L^1$ norm and $\overline{X}=L^1[0,1]$ have explicit descriptions, you can find examples by starting with an element of $\overline{X}\setminus X$, and find a sequence in $X$ converging to that element. The same idea applies to demonstrating nonconvergent Cauchy sequences in $\mathbb{Q}$, where you can take any irrational number and consider the sequence of truncated decimal expansions.
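To see the Cauchy property in the example explicitly (a short computation added for illustration): the difference $f_n-f$ is supported on $[\frac{1}{2},\frac{1}{2}+\frac{1}{n}]$, where its graph is a triangle of base $\frac{1}{n}$ and height $1$, so
$$\|f_n-f\|_1=\int_{1/2}^{1/2+1/n}f_n(x)\,dx=\frac{1}{2n},$$
and hence by the triangle inequality
$$\|f_n-f_m\|_1\leq\|f_n-f\|_1+\|f-f_m\|_1=\frac{1}{2n}+\frac{1}{2m}\longrightarrow 0,$$
which verifies directly that $(f_n)$ is Cauchy in the integral norm.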
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/21878', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/4518/']}
jdg_381720
stackexchange
llm_judgeable_groundtruth_similarity
22052258
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am building an authentication system using Passport.js using Easy Node Authentication: Setup and Local tutorial. I am confused about what passport.session() does. After playing around with the different middleware I came to understand that express.session() is what sends a session ID over cookies to the client, but I'm confused about what passport.session() does and why it is required in addition to express.session(). Here is how I set up my application:

```javascript
// Server.js configures the application and sets up the webserver

// importing our modules
var express = require('express');
var app = express();
var port = process.env.PORT || 8080;
var mongoose = require('mongoose');
var passport = require('passport');
var flash = require('connect-flash');
var configDB = require('./config/database.js');

// Configuration of Database and App
mongoose.connect(configDB.url); // connect to our database
require('./config/passport')(passport); // pass passport for configuration

app.configure(function() { // set up our express application
    app.use(express.logger('dev')); // log every request to the console
    app.use(express.cookieParser()); // read cookies (needed for auth)
    app.use(express.bodyParser()); // get info from html forms
    app.set('view engine', 'ejs'); // set up ejs for templating

    // configuration for passport
    app.use(express.session({ secret: 'olhosvermelhoseasenhaclassica', maxAge: null })); // session secret
    app.use(passport.initialize());
    app.use(passport.session()); // persistent login session
    app.use(flash()); // use connect-flash for flash messages stored in session
});

// Set up routes
require('./app/routes.js')(app, passport);

// launch
app.listen(port);
console.log("Server listening on port" + port);
```

Now provide the response and nothing else.
`passport.session()` acts as a middleware to alter the `req` object and change the `'user'` value that is currently the session id (from the client cookie) into the true deserialized user object.

Whilst the other answers make some good points, I thought that some more specific detail could be provided.

```javascript
app.use(passport.session());
```

is equivalent to

```javascript
app.use(passport.authenticate('session'));
```

Where `'session'` refers to the following strategy that is bundled with passportJS. Here's a link to the file: https://github.com/jaredhanson/passport/blob/master/lib/strategies/session.js

And a permalink pointing to the following lines at the time of this writing:

```javascript
var property = req._passport.instance._userProperty || 'user';
req[property] = user;
```

Where it essentially acts as a middleware and alters the value of the `'user'` property in the `req` object to contain the deserialized identity of the user. To allow this to work correctly you must include `serializeUser` and `deserializeUser` functions in your custom code.

```javascript
passport.serializeUser(function (user, done) {
    done(null, user.id);
});

passport.deserializeUser(function (user, done) {
    // If using Mongoose with MongoDB; if other you will need JS specific to that schema.
    User.findById(user.id, function (err, user) {
        done(err, user);
    });
});
```

This will find the correct user from the database and pass it as a closure variable into the callback `done(err, user);` so the above code in `passport.session()` can replace the `'user'` value in the `req` object and pass on to the next middleware in the pile.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/22052258', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1835903/']}
jdg_381721
stackexchange
llm_judgeable_groundtruth_similarity
133179
Below is a question asked on the forum cs.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: What's the meaning of $\log^{O(1)}(a+b)$ ? Is it simply $(\log_2(a+b))^k$ where k is calculated in constant time ? That is the base 2 logarithm usually in computer science when no other base is set, and the $O(1)$ at the log word means that the whole expression should be raised to the power of that number (which I named k) ? Now provide the response and nothing else.
By definition
$$O(g)=\{f\colon \exists C>0, \exists N \in \mathbb{N}, \forall n> N, f(n)\leqslant C g(n)\}$$
so $O(1)$ can be viewed as the set of bounded functions. From here, $\log^{O(1)}(a+b)$ can be understood as the set of functions $\log^{f(n)}(a+b)$ where $f(n) \in O(1)$.

Addition. Understanding $f \in \log^{O(1)}(a+b)$ is no different from understanding $f \in O(1)$, and it does not mean that "$O(1)$ can be an arbitrarily large constant", as in your comments. $O(1)$ is not a constant, or a set of constants, but a set of functions. In the same way, $f \in O(1)$ does not mean that $f$ is bounded by any arbitrarily large constant; rather, there exists $C_f>0$ for which $f(n) \leqslant C_f$.

So $f \in \log^{O(1)}(a+b)$ means that $\exists g \in O(1)$, i.e. $\exists C_g>0$ with $g(n) \leqslant C_g$, such that $f(n)=\log^{g(n)}(a+b)$. In words: the function $f$ is bounded by a $\log$ with a bounded function in the exponent. For example, $f$ cannot be $\log^n(a+b)$, but it can be $\log^{\frac{1}{n}}(a+b)$ or $\log^{\frac{1}{n}+1}(a+b)$.
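As a concrete membership check in this notation (an illustrative addition):
$$\log^{3}(a+b)\in\log^{O(1)}(a+b)\quad\text{with the bounded witness } g(n)=3,$$
while
$$\log^{n}(a+b)\notin\log^{O(1)}(a+b),$$
since $g(n)=n$ is unbounded. This is also why $\log^{O(1)} n$ is commonly read as "polylogarithmic in $n$".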
{}
{'log_upvote_score': 4, 'links': ['https://cs.stackexchange.com/questions/133179', 'https://cs.stackexchange.com', 'https://cs.stackexchange.com/users/2529/']}
jdg_381722
stackexchange
llm_judgeable_groundtruth_similarity
16736804
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Ng-click doesn't work from inside ng-repeat. Outside it works. I've put a fiddle here

```html
<div ng-controller="MyCtrl">
    <a ng-click="triggerTitle='This works!'">test</a>
    <h5>Please select trigger event: [{{triggerEvent}}] {{triggerTitle}}</h5>
    <ul class="dropdown-menu">
        <li ng-repeat="e in events">
            <a ng-click="triggerTitle=e.name; triggerEvent = e.action;">{{e.action}} - {{e.name}}</a>
        </li>
    </ul>
</div>
```

Now provide the response and nothing else.
As Ven mentioned, ng-repeat does create a child scope for each item in the loop. The child scopes do have access to the parent scope's variables and methods through prototypal inheritance. The confusing part is that when you make an assignment, it adds a new variable to the child scope rather than updating the property on the parent scope. In ng-click , when you make an assignment such as triggerTitle = e.name , it actually adds a new variable called triggerTitle to the child scope. The AngularJS docs explain this well in the section here called JavaScript Prototypal Inheritance . So how do you get around this and set the model variable properly? A quick and dirty solution is to access the parent scope using $parent like so. <a ng:click="$parent.triggerTitle=e.name; $parent.triggerEvent = e.action;">... Click to see a working version of your Fiddle using the $parent solution. The use of $parent can cause issues if you are dealing with nested templates or nested ng-repeats. A better solution may be to add a function to the controller's scope which returns a reference to the controller's scope. As already mentioned, the child scopes have access to call the parent functions, and thus can reference the controller's scope. function MyCtrl($scope) { $scope.getMyCtrlScope = function() { return $scope; } ...<a ng-click="getMyCtrlScope().triggerTitle=e.name;getMyCtrlScope().triggerEvent = ... Click to see a working version of your Fiddle using the better method
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/16736804', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/509263/']}
jdg_381723
stackexchange
llm_judgeable_groundtruth_similarity
21719941
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is it possible to import a shape file containing multipolygons into single polygon in PostGIS? Whenever I try importing a shape file of a polygon, it is stored as a multipolygon (as opposed to a single polygon) in a geom column. Thus, I am unable to extract it as a single polygon value from the multipolygon. All helpful suggestions much appreciated Now provide the response and nothing else.
You can use ST_GeometryN together with ST_NumGeometries and the generate_series function to obtain what you need. Let's assume you have the table from Jakub's example: CREATE TABLE multi AS(SELECT 1 as id, 2 as test, ST_GeomFromText('MULTIPOLYGON(((0 0,4 0,4 4,0 4,0 0)),((1 1,2 1,2 2,1 2,1 1)), ((-1 -1,-1 -2,-2 -2,-2 -1,-1 -1)))') AS geom); This one contains a multipolygon, an id and another column. To get each single polygon from the table including all other attributes try something like: SELECT id, test, ST_GeometryN(geom, generate_series(1, ST_NumGeometries(geom))) AS geom FROM multi; "id" and "test" are the values for each row in the original table. generate_series creates a series of numbers from 1 to the number of geometries in each row. Therefore you will split each multi-geometry into its separate single-geometry parts, and the values in the other columns remain the same. Just replace the columns and table in the example with the columns from your exported shapefile and you will get the table with the single polygons. Hope this answers your question.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/21719941', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']}
jdg_381724
stackexchange
llm_judgeable_groundtruth_similarity
50886269
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Suppose I have 4 small DataFrames df1 , df2 , df3 and df4 import pandas as pdfrom functools import reduceimport numpy as npdf1 = pd.DataFrame([['a', 1, 10], ['a', 2, 20], ['b', 1, 4], ['c', 1, 2], ['e', 2, 10]])df2 = pd.DataFrame([['a', 1, 15], ['a', 2, 20], ['c', 1, 2]])df3 = pd.DataFrame([['d', 1, 10], ['e', 2, 20], ['f', 1, 1]]) df4 = pd.DataFrame([['d', 1, 10], ['e', 2, 20], ['f', 1, 15]]) df1.columns = ['name', 'id', 'price']df2.columns = ['name', 'id', 'price']df3.columns = ['name', 'id', 'price'] df4.columns = ['name', 'id', 'price'] df1 = df1.rename(columns={'price':'pricepart1'})df2 = df2.rename(columns={'price':'pricepart2'})df3 = df3.rename(columns={'price':'pricepart3'})df4 = df4.rename(columns={'price':'pricepart4'}) Create above are the 4 DataFrames, what I would like is in the code below. # Merge dataframesdf = pd.merge(df1, df2, left_on=['name', 'id'], right_on=['name', 'id'], how='outer')df = pd.merge(df , df3, left_on=['name', 'id'], right_on=['name', 'id'], how='outer')df = pd.merge(df , df4, left_on=['name', 'id'], right_on=['name', 'id'], how='outer')# Fill na values with 'missing'df = df.fillna('missing') So I have achieved this for 4 DataFrames that don't have many rows and columns. 
Basically, I want to extend the above outer merge solution to MULTIPLE (48) DataFrames of size 62245 X 3: So I came up with this solution by building from another StackOverflow answer that used a lambda reduce: from functools import reduceimport pandas as pdimport numpy as npdfList = []#To create the 48 DataFrames of size 62245 X 3for i in range(0, 49): dfList.append(pd.DataFrame(np.random.randint(0,100,size=(62245, 3)), columns=['name', 'id', 'pricepart' + str(i + 1)]))#The solution I came up with to extend the solution to more than 3 DataFramesdf_merged = reduce(lambda left, right: pd.merge(left, right, left_on=['name', 'id'], right_on=['name', 'id'], how='outer'), dfList).fillna('missing') This is causing a MemoryError . I do not know what to do to stop the kernel from dying.. I've been stuck on this for two days.. Some code for the EXACT merge operation that I have performed that does not cause the MemoryError or something that gives you the same result, would be really appreciated. Also, the 3 columns in the main DataFrame (NOT the reproducible 48 DataFrames in the example) are of type int64 , int64 and float64 and I'd prefer them to stay that way because of the integer and float that it represents. EDIT: Instead of iteratively trying to run the merge operations or using the reduce lambda functions, I have done it in groups of 2! Also, I've changed the datatype of some columns, some did not need to be float64 . So I brought it down to float16 . It gets very far but still ends up throwing a MemoryError . 
intermediatedfList = dfList tempdfList = [] #Until I merge all the 48 frames two at a time, till it becomes size 2while(len(intermediatedfList) != 2): #If there are even number of DataFrames if len(intermediatedfList)%2 == 0: #Go in steps of two for i in range(0, len(intermediatedfList), 2): #Merge DataFrame in index i, i + 1 df1 = pd.merge(intermediatedfList[i], intermediatedfList[i + 1], left_on=['name', 'id'], right_on=['name', 'id'], how='outer') print(df1.info(memory_usage='deep')) #Append it to this list tempdfList.append(df1) #After DataFrames in intermediatedfList merging it two at a time using an auxillary list tempdfList, #Set intermediatedfList to be equal to tempdfList, so it can continue the while loop. intermediatedfList = tempdfList else: #If there are odd number of DataFrames, keep the first DataFrame out tempdfList = [intermediatedfList[0]] #Go in steps of two starting from 1 instead of 0 for i in range(1, len(intermediatedfList), 2): #Merge DataFrame in index i, i + 1 df1 = pd.merge(intermediatedfList[i], intermediatedfList[i + 1], left_on=['name', 'id'], right_on=['name', 'id'], how='outer') print(df1.info(memory_usage='deep')) tempdfList.append(df1) #After DataFrames in intermediatedfList merging it two at a time using an auxillary list tempdfList, #Set intermediatedfList to be equal to tempdfList, so it can continue the while loop. intermediatedfList = tempdfList Is there any way I can optimize my code to avoid MemoryError , I've even used AWS 192GB RAM (I now owe them 7$ which I could've given one of yall), that gets farther than what I've gotten, and it still throws MemoryError after reducing a list of 28 DataFrames to 4.. Now provide the response and nothing else.
You may get some benefit from performing index-aligned concatenation using pd.concat . This should hopefully be faster and more memory efficient than an outer merge as well. df_list = [df1, df2, ...]for df in df_list: df.set_index(['name', 'id'], inplace=True)df = pd.concat(df_list, axis=1) # join='inner'df.reset_index(inplace=True) Alternatively, you can replace the concat (second step) by an iterative join : from functools import reducedf = reduce(lambda x, y: x.join(y), df_list) This may or may not be better than the merge .
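The index-aligned pattern can be sketched on small frames shaped like the ones in the question (this is a minimal illustration of the `set_index` + `concat` idea, with fewer rows than the real data):

```python
import pandas as pd

# Small frames shaped like the question's df1..df48
df1 = pd.DataFrame([['a', 1, 10], ['b', 1, 4]], columns=['name', 'id', 'pricepart1'])
df2 = pd.DataFrame([['a', 1, 15], ['c', 1, 2]], columns=['name', 'id', 'pricepart2'])
df3 = pd.DataFrame([['a', 1, 7], ['d', 2, 9]], columns=['name', 'id', 'pricepart3'])

df_list = [df1, df2, df3]

# Index-align every frame on the join keys once, up front...
for df in df_list:
    df.set_index(['name', 'id'], inplace=True)

# ...then a single outer concat replaces the whole chain of pairwise merges.
merged = pd.concat(df_list, axis=1).reset_index()

print(merged)
```

With keys `(a,1)`, `(b,1)`, `(c,1)`, `(d,2)` the result has four rows, and keys missing from a given frame get `NaN` in that frame's price column, which is the same shape an outer merge chain would produce.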
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/50886269', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7938658/']}
jdg_381725
stackexchange
llm_judgeable_groundtruth_similarity
54157946
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am a beginner in python,im just trying to scrape web with module requests and BeautifulSoup This Website i make request. and my simple code: import requests, time, re, jsonfrom bs4 import BeautifulSoup as BSurl = "https://www.jobstreet.co.id/en/job-search/job-vacancy.php?ojs=6"def list_jobs(): try: with requests.session() as s: st = time.time() s.headers = {'User-Agent':'Mozilla/5.0 (Windows NT 6.3; WOW64; rv:57.0) Gecko/20100101 Firefox/57.0'} req = s.get(url) soup = BS(req.text,'html.parser') attr = soup.findAll('div',class_='position-title header-text') pttr = r".?(.*)Rank=\d+" lists = {"status":200,"result":[]} for a in attr: sr = re.search(pttr, a.find("a")["href"]) if sr: title = a.find('a')['title'].replace("Lihat detil lowongan -","").replace("\r","").replace("\n","") url = a.find('a')['href'] lists["result"].append({ "title":title, "url":url, "detail":detail_jobs(url) }) print(json.dumps(lists, indent=4)) end = time.time() - st print(f"\n{end} second") except: passdef detail_jobs(find_url): try: with requests.session() as s: s.headers = {'User-Agent':'Mozilla/5.0 (Windows NT 6.3; WOW64; rv:57.0) Gecko/20100101 Firefox/57.0'} req = s.get(find_url) soup = BS(req.text,'html.parser') position = soup.find('h1',class_='job-position').text name = soup.find('div',class_='company_name').text.strip("\t") try: addrs = soup.find('div',class_='map-col-wraper').find('p',{'id':'address'}).text except Exception: addrs = "Unknown" try: loct = soup.find('span',{'id':'single_work_location'}).text except Exception: loct = soup.find('span',{'id':'multiple_work_location_list'}).find('span',{'class':'show'}).text dests = soup.findAll('div',attrs={'id':'job_description'}) for select in dests: txt = select.text if not select.text.startswith("\n") or not select.text.endswith("\n") else select.text.replace("\n","") result = { "name":name, 
"location":loct, "position":position, "description":txt, "address":addrs } return result except: pass they all work well but take very long to show results time is always above 13/17 seconds i dont know how to increase my speed for requesting I tried search on stack and google,they said using asyncio but the way so hard to me. if someone have simple trick how to increase speed with simple do,im so appreciate .. And sorry for my bad English Now provide the response and nothing else.
Learning Python through projects such as web scraping is awesome. That is how I was introduced to Python. That said, to increase the speed of your scraping, you can do three things: Change the html parser to something faster. 'html.parser' is the slowest of them all. Try change to 'lxml' or 'html5lib'. (read https://www.crummy.com/software/BeautifulSoup/bs4/doc/ ) Drop the loops and regex as they slow your script. Just use BeautifulSoup tools, text and strip, and find the right tags.(see my script below) Since the bottleneck in web scraping is usually IO, waiting to get data from a webpage, using async or multithread will boost speed. In the below script, I have use multithreading. The aim is to pull data from multiple pages at the same time. So if we know maximum number of pages, we can chunk our requests into different ranges and pull them in batches :) Code example: from collections import defaultdictfrom concurrent.futures import ThreadPoolExecutorfrom datetime import datetimeimport requestsfrom bs4 import BeautifulSoup as bsdata = defaultdict(list)headers = {'User-Agent':'Mozilla/5.0 (Windows NT 6.3; WOW64; rv:57.0) Gecko/20100101 Firefox/57.0'}def get_data(data, headers, page=1): # Get start time start_time = datetime.now() url = f'https://www.jobstreet.co.id/en/job-search/job-vacancy/{page}/?src=20&srcr=2000&ojs=6' r = requests.get(url, headers=headers) # If the requests is fine, proceed if r.ok: jobs = bs(r.content,'lxml').find('div',{'id':'job_listing_panel'}) data['title'].extend([i.text.strip() for i in jobs.find_all('div',{'class':'position-title header-text'})]) data['company'].extend([i.text.strip() for i in jobs.find_all('h3',{'class':'company-name'})]) data['location'].extend([i['title'] for i in jobs.find_all('li',{'class':'job-location'})] ) data['desc'].extend([i.text.strip() for i in jobs.find_all('ul',{'class':'list-unstyled hidden-xs '})]) else: print('connection issues') print(f'Page: {page} | Time taken {datetime.now()-start_time}') return 
data def multi_get_data(data,headers,start_page=1,end_page=20,workers=20): start_time = datetime.now() # Execute our get_data in multiple threads each having a different page number with ThreadPoolExecutor(max_workers=workers) as executor: [executor.submit(get_data, data=data,headers=headers,page=i) for i in range(start_page,end_page+1)] print(f'Page {start_page}-{end_page} | Time take {datetime.now() - start_time}') return data# Test page 10-15k = multi_get_data(data,headers,start_page=10,end_page=15) Results: Explaining the multi_get_data function: This function will call get_data function in different threads with passing desired arguments. At the moment, each thread get a different page number to call. The maximum numbers of workers is set to 20, meaning 20 threads. You can increase or decrease accordingly. We have created variable data, a default dictionary, that takes lists in. All threads will populate this data. This variable can then be cast to json or Pandas DataFrame :) As you can see, we have 5 requests, each taking less than 2 seconds but yet the total is still under 2 seconds;) Enjoy web scraping. Update _: 22/12/2019 We could also gain some speed by using session with a single headers update. So we don’t have to start sessions with each call. from requests import Sessions = Session()headers = {'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) '\ 'AppleWebKit/537.36 (KHTML, like Gecko) '\ 'Chrome/75.0.3770.80 Safari/537.36'}# Add headerss.headers.update(headers)# we can use s as we do requests# s.get(...)...
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/54157946', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/8861637/']}
jdg_381726
stackexchange
llm_judgeable_groundtruth_similarity
44476706
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I used Keras biomedical image segmentation to segment brain neurons. I used model.evaluate() it gave me Dice coefficient: 0.916. However, when I used model.predict() , then loop through the predicted images by calculating the Dice coefficient, the Dice coefficient is 0.82. Why are these two values different? Now provide the response and nothing else.
The model.evaluate function predicts the output for the given input, then computes the metric function specified in model.compile from y_true and y_pred , and returns the computed metric value as its output. The model.predict function just returns y_pred . So if you use model.predict and then compute the metric yourself, the computed metric value should turn out to be the same as what model.evaluate returns. For example, one would use model.predict instead of model.evaluate when evaluating an RNN/LSTM-based model where the output needs to be fed back as input at the next time step.
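The relationship can be illustrated without Keras at all. Below is a toy sketch (the function names mimic the Keras methods but are plain Python stand-ins, not the Keras API): `evaluate` is just `predict` followed by the metric, so computing the metric by hand on `predict`'s output gives the same number.

```python
def predict(inputs):
    # stand-in for model.predict: threshold a score at 0.5
    return [1 if x >= 0.5 else 0 for x in inputs]

def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def evaluate(inputs, y_true, metric=accuracy):
    # stand-in for model.evaluate: predict, then apply the compiled metric
    return metric(y_true, predict(inputs))

inputs = [0.9, 0.2, 0.7, 0.4]
y_true = [1, 0, 1, 1]

# Computing the metric yourself on predict()'s output...
manual = accuracy(y_true, predict(inputs))
# ...gives the same number evaluate() reports.
print(manual, evaluate(inputs, y_true))  # 0.75 0.75
```

When the two numbers disagree in practice (0.916 vs 0.82 in the question), the usual suspects are a different metric implementation, per-batch averaging versus whole-set computation, or preprocessing differences between the two code paths.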
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/44476706', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6152687/']}
jdg_381727
stackexchange
llm_judgeable_groundtruth_similarity
20377401
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to render a webpage as a PDF. It uses a single image, and I've read that you need to supply the absolute URL for PDFkit to be able to use the image, so my code is: = image_tag image_url(user.avatar) This works when viewed as HTML, and PDFkit is able to generate a PDF with the image removed. However, when using the image, it just hangs until I kill the server. How can I get this to work? Here's the full output when I kill the server: 2013-12-04 13:53:36.576 wkhtmltopdf[27410:507] CoreText performance note: Client called CTFontCreateWithName() using name "Arial" and got font with PostScript name "ArialMT". For best performance, only use PostScript names when calling this API.2013-12-04 13:53:36.577 wkhtmltopdf[27410:507] CoreText performance note: Set a breakpoint on CTFontLogSuboptimalRequest to debug.2013-12-04 13:53:36.582 wkhtmltopdf[27410:507] CoreText performance note: Client called CTFontCreateWithName() using name "Arial" and got font with PostScript name "ArialMT". For best performance, only use PostScript names when calling this API.2013-12-04 13:53:36.584 wkhtmltopdf[27410:507] CoreText performance note: Client called CTFontCreateWithName() using name "Arial" and got font with PostScript name "ArialMT". 
For best performance, only use PostScript names when calling this API.^CRuntimeError - command failed: /usr/local/bin/wkhtmltopdf --page-size Legal --print-media-type --quiet - -: pdfkit (0.5.4) lib/pdfkit/pdfkit.rb:73:in `to_pdf' pdfkit (0.5.4) lib/pdfkit/middleware.rb:21:in `call' warden (1.2.3) lib/warden/manager.rb:35:in `block in call' warden (1.2.3) lib/warden/manager.rb:34:in `catch' warden (1.2.3) lib/warden/manager.rb:34:in `call' rack (1.5.2) lib/rack/etag.rb:23:in `call' rack (1.5.2) lib/rack/conditionalget.rb:25:in `call' rack (1.5.2) lib/rack/head.rb:11:in `call' actionpack (4.0.0) lib/action_dispatch/middleware/params_parser.rb:27:in `call' actionpack (4.0.0) lib/action_dispatch/middleware/flash.rb:241:in `call' rack (1.5.2) lib/rack/session/abstract/id.rb:225:in `context' rack (1.5.2) lib/rack/session/abstract/id.rb:220:in `call' actionpack (4.0.0) lib/action_dispatch/middleware/cookies.rb:486:in `call' activerecord (4.0.0) lib/active_record/query_cache.rb:36:in `call' activerecord (4.0.0) lib/active_record/connection_adapters/abstract/connection_pool.rb:626:in `call' activerecord (4.0.0) lib/active_record/migration.rb:369:in `call' actionpack (4.0.0) lib/action_dispatch/middleware/callbacks.rb:29:in `block in call' activesupport (4.0.0) lib/active_support/callbacks.rb:373:in `_run__4124003592524659480__call__callbacks' activesupport (4.0.0) lib/active_support/callbacks.rb:80:in `run_callbacks' actionpack (4.0.0) lib/action_dispatch/middleware/callbacks.rb:27:in `call' actionpack (4.0.0) lib/action_dispatch/middleware/reloader.rb:64:in `call' actionpack (4.0.0) lib/action_dispatch/middleware/remote_ip.rb:76:in `call' better_errors (1.0.1) lib/better_errors/middleware.rb:84:in `protected_app_call' better_errors (1.0.1) lib/better_errors/middleware.rb:79:in `better_errors_call' better_errors (1.0.1) lib/better_errors/middleware.rb:56:in `call' actionpack (4.0.0) lib/action_dispatch/middleware/debug_exceptions.rb:17:in `call' actionpack (4.0.0) 
lib/action_dispatch/middleware/show_exceptions.rb:30:in `call' railties (4.0.0) lib/rails/rack/logger.rb:38:in `call_app' railties (4.0.0) lib/rails/rack/logger.rb:21:in `block in call' activesupport (4.0.0) lib/active_support/tagged_logging.rb:67:in `block in tagged' activesupport (4.0.0) lib/active_support/tagged_logging.rb:25:in `tagged' activesupport (4.0.0) lib/active_support/tagged_logging.rb:67:in `tagged' railties (4.0.0) lib/rails/rack/logger.rb:21:in `call' actionpack (4.0.0) lib/action_dispatch/middleware/request_id.rb:21:in `call' rack (1.5.2) lib/rack/methodoverride.rb:21:in `call' rack (1.5.2) lib/rack/runtime.rb:17:in `call' activesupport (4.0.0) lib/active_support/cache/strategy/local_cache.rb:83:in `call' rack (1.5.2) lib/rack/lock.rb:17:in `call' actionpack (4.0.0) lib/action_dispatch/middleware/static.rb:64:in `call' railties (4.0.0) lib/rails/engine.rb:511:in `call' railties (4.0.0) lib/rails/application.rb:97:in `call' rack (1.5.2) lib/rack/content_length.rb:14:in `call' thin (1.6.0) lib/thin/connection.rb:82:in `block in pre_process' thin (1.6.0) lib/thin/connection.rb:80:in `catch' thin (1.6.0) lib/thin/connection.rb:80:in `pre_process' thin (1.6.0) lib/thin/connection.rb:55:in `process' thin (1.6.0) lib/thin/connection.rb:41:in `receive_data' eventmachine (1.0.3) lib/eventmachine.rb:187:in `run_machine' eventmachine (1.0.3) lib/eventmachine.rb:187:in `run' thin (1.6.0) lib/thin/backends/base.rb:73:in `start' thin (1.6.0) lib/thin/server.rb:162:in `start' rack (1.5.2) lib/rack/handler/thin.rb:16:in `run' rack (1.5.2) lib/rack/server.rb:264:in `start' railties (4.0.0) lib/rails/commands/server.rb:84:in `start' railties (4.0.0) lib/rails/commands.rb:78:in `block in <top (required)>' railties (4.0.0) lib/rails/commands.rb:73:in `tap' railties (4.0.0) lib/rails/commands.rb:73:in `<top (required)>' bin/rails:4:in `require' bin/rails:4:in `<main>' Now provide the response and nothing else.
This is a notorious issue, and you're running into it because you probably have relatively-linked assets in your HTML (i.e. images, CSS, JS, fonts, etc), and your web server is only capable of handling one request/thread at a time (like WEBrick ). So what happens? The server begins generating the PDF when you request its URL. PDFkit finds a linked asset, so it tries to load this asset from the server, which happens to be the same server that PDFkit is running on. However, the server's single thread is already busy running PDFkit, so it cannot "free up" to serve the requested asset. In conclusion, it's a deadlock -- PDFkit is waiting on an asset from the same server that is waiting for PDFkit to finish up processing, so that it can serve the asset to PDFkit... Solution : either Base64-embed your assets in the HTML so that PDFkit doesn't need to make any additional requests (my personally preferred solution), or temporarily offload the assets to another server (e.g. a temporary AWS bucket). You can also try using the unicorn or Thin webserver with multi-threading enabled, or adding config.threadsafe! in application.rb , but there is no guarantee that these methods will work. Of course, these hacks (embedding assets or hosting elsewhere) should only be used in the dev environment -- you shouldn't be running into these kinds of issues in production, as the live server should (hopefully) be able to handle multiple GET requests.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/20377401', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/256917/']}
jdg_381728
stackexchange
llm_judgeable_groundtruth_similarity
35325849
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: With RC1 of ASP.NET Core 1.0's MVC 6 you can map routes from within your Startup.Configure function when invoking app.UseMvc . I have mapped a "spa-fallback" route that will ensure that the HomeController and Index view are the defaults like so: public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory){ // ... omitted for brevity app.UseExceptionHandler("/Home/Error"); app.UseStatusCodePagesWithRedirects("/Home/Error/{0}"); app.UseMvc(routes => { routes.MapRoute("default", "{controller=Home}/{action=Index}/{id?}"); routes.MapRoute("spa-fallback", "{*anything}", new { controller = "Home", action = "Index" }); routes.MapWebApiRoute("defaultApi", "api/{controller}/{id?}"); });} I desire the fallback so that my Angular2 app's routes will not result in an HTTP Status Code of 404, Not Found . But I also need to correctly handle when a user does inadvertently attempt to navigate to a page view that doesn't exist. You might notice that I have also called app.UseStatusCodePagesWithRedirects("/Home/Error/{0}"); . The call to redirect to my error page with the status code and the "spa-fallback" route seem mutually exclusive -- meaning it appears that I can only have one or the other (but sadly not both) . Does anyone know how I could manage to have the best of both worlds? Now provide the response and nothing else.
It took me some time to figure out how to do this without serving my index using MVC and to still receive 404s for missing files. Here's my http pipeline: public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory) { app.UseDefaultFiles(); app.UseStaticFiles(); app.UseMvc( routes => { routes.MapRoute( name: "api", template: "api/{controller}/{action}/{id?}"); // ## commented out since I don't want to use MVC to serve my index. // routes.MapRoute( // name:"spa-fallback", // template: "{*anything}", // defaults: new { controller = "Home", action = "Index" }); }); // ## this serves my index.html from the wwwroot folder when // ## a route not containing a file extension is not handled by MVC. // ## If the route contains a ".", a 404 will be returned instead. app.MapWhen(context => context.Response.StatusCode == 404 && !Path.HasExtension(context.Request.Path.Value), branch => { branch.Use((context, next) => { context.Request.Path = new PathString("/index.html"); Console.WriteLine("Path changed to:" + context.Request.Path.Value); return next();}); branch.UseStaticFiles(); }); }
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/35325849', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2410379/']}
jdg_381729
stackexchange
llm_judgeable_groundtruth_similarity
76083
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: Hi, I received an answer to a question a while back. The question was about how we can present a category as a collection of arrows and a large list of algebraic relations between them. One of the answers I got was about Freyd's "Categories, Alegories", and here it is: products in a category without reference to objects or sources and targets Can anyone (Wouter maybe?), give much more detail about this presentation. Can anyone give the precise definition of these "kinds" of partial monoids a la Freyd? As a side note, could someone suggest a good way to define a dcpo of such partial monoids? Now provide the response and nothing else.
Of course you can define a (just-arrow) category $\mathcal C$ as a partial algebra which consists of: a set $\mathcal C$ (namely the set of arrows of your category), a set $D_\mathcal{C} \subseteq \mathcal C \times \mathcal C$ (the set of pairs of composable arrows) and a map $\circ \colon D_\mathcal{C} \to \mathcal C$, which is the composition for this "category". In this structure we call identities all the elements $f \in \mathcal C$ such that for each $g,h \in \mathcal C$ with $(g,f),(f,h) \in D_\mathcal{C}$ we have $g\circ f=g$ and $f \circ h=h$. The composition has to satisfy the following axioms: *for each triple $h,g,f \in \mathcal C$ these three statements are equivalent: $(g,f) \in D_\mathcal{C}$ and $(h,g\circ f) \in D_\mathcal{C}$ $(h,g) \in D_\mathcal{C}$ and $(h\circ g, f) \in D_\mathcal{C}$ $(h,g) \in D_\mathcal{C}$ and $(g,f) \in D_\mathcal{C}$ and in this case the equality $h\circ(g \circ f)=(h \circ g) \circ f$ holds; *for each $f \in \mathcal C$ there are two arrows $g,h \in \mathcal C$ which are identities such that $(f,g), (h,f) \in D_\mathcal{C}$ and $f \circ g=f=h \circ f$. With these data you have the notion of a just-arrow category. With this definition of category, a functor $\mathcal F$ from the category $\mathcal C$ to the category $\mathcal D$ is just a function $\mathcal F \colon \mathcal C \to \mathcal D$ between the sets of arrows such that: for each pair $f,g \in \mathcal C$, if $(g,f) \in D_\mathcal{C}$ then $(\mathcal F(g),\mathcal F(f)) \in D_\mathcal{D}$ and $\mathcal F(g \circ f)= \mathcal F(g) \circ \mathcal F(f)$; for each identity $f \in \mathcal C$, $\mathcal F(f)$ is also an identity. The category of just-arrow categories and functors between them is proven to be equivalent to $\mathbf{Cat}$, the category of (ordinary) categories and functors between them.
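The axioms are concrete enough to machine-check on a tiny example. The sketch below is entirely my own construction (not from Freyd): it encodes the arrows-only category generated by a single arrow $f\colon A\to B$ as a partial composition table and verifies both axioms by brute force.

```python
from itertools import product

# Arrows of the category generated by a single arrow f : A -> B
ARROWS = ['idA', 'idB', 'f']

# Partial composition: COMP[(g, f)] = g . f, defined only on composable pairs
COMP = {
    ('idA', 'idA'): 'idA',
    ('idB', 'idB'): 'idB',
    ('f',   'idA'): 'f',    # f . idA = f
    ('idB', 'f'):   'f',    # idB . f = f
}

def is_identity(e):
    # e is an identity iff composing with it (where defined) changes nothing
    return (all(COMP[(g, e)] == g for g in ARROWS if (g, e) in COMP)
            and all(COMP[(e, h)] == h for h in ARROWS if (e, h) in COMP))

def check_axioms():
    # Axiom 1: the three composability statements are equivalent, and
    # composition is associative whenever defined.
    for h, g, f in product(ARROWS, repeat=3):
        s1 = (g, f) in COMP and (h, COMP[(g, f)]) in COMP
        s2 = (h, g) in COMP and (COMP[(h, g)], f) in COMP
        s3 = (h, g) in COMP and (g, f) in COMP
        if not (s1 == s2 == s3):
            return False
        if s1 and COMP[(h, COMP[(g, f)])] != COMP[(COMP[(h, g)], f)]:
            return False
    # Axiom 2: every arrow has an identity on each side.
    for f in ARROWS:
        has_right = any(is_identity(g) and (f, g) in COMP and COMP[(f, g)] == f
                        for g in ARROWS)
        has_left = any(is_identity(h) and (h, f) in COMP and COMP[(h, f)] == f
                       for h in ARROWS)
        if not (has_right and has_left):
            return False
    return True

print(check_axioms())  # True
```

Brute force over all $3^3$ triples suffices here; on a larger composition table the same checker would run unchanged, which is one appeal of the objects-free presentation.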
{}
{'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/76083', 'https://mathoverflow.net', 'https://mathoverflow.net/users/10007/']}
jdg_381730
stackexchange
llm_judgeable_groundtruth_similarity
31673853
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How can I know if the OK or the Cancel Button was pressed in this JavaFX dialog. The Dialog Code: public String delimiter;public void delimiterYES() throws IOException { delimiter=new String(); TextInputDialog dialog = new TextInputDialog(); dialog.setTitle("Delimiter"); dialog.setHeaderText("Enter the delimiter"); Optional<String> result = dialog.showAndWait(); if (result.isPresent()) { delimiter=result.get(); }} Now provide the response and nothing else.
If a result is present, then the user pressed OK. If no result is present, then the user probably pressed cancel, but they might have just closed the dialog window using the OS close window function. Optional<String> result = new TextInputDialog().showAndWait();if (result.isPresent()) { // ok was pressed.} else { // cancel might have been pressed.} To really know if a button was pressed, you can use a filter as noted in the Dialog javadoc section "Dialog Validation / Intercepting Button Actions". final Button cancel = (Button) dialog.getDialogPane().lookupButton(ButtonType.CANCEL);cancel.addEventFilter(ActionEvent.ACTION, event -> System.out.println("Cancel was definitely pressed")); Sample code: import javafx.application.Application;import javafx.event.ActionEvent;import javafx.scene.Scene;import javafx.scene.control.*;import javafx.stage.Stage;import java.util.Optional;public class DialogSample extends Application { @Override public void start(Stage stage) throws Exception { Button showButton = new Button("show"); showButton.setOnAction(event -> showDialog(stage)); showButton.setPrefWidth(100); stage.setScene(new Scene(showButton)); stage.show(); showButton.fire(); } private void showDialog(Stage stage) { TextInputDialog dialog = new TextInputDialog(); dialog.initOwner(stage); dialog.setTitle("Delimiter"); dialog.setHeaderText("Enter the delimiter"); final Button ok = (Button) dialog.getDialogPane().lookupButton(ButtonType.OK); ok.addEventFilter(ActionEvent.ACTION, event -> System.out.println("OK was definitely pressed") ); final Button cancel = (Button) dialog.getDialogPane().lookupButton(ButtonType.CANCEL); cancel.addEventFilter(ActionEvent.ACTION, event -> System.out.println("Cancel was definitely pressed") ); Optional<String> result = dialog.showAndWait(); if (result.isPresent()) { System.out.println("Result present => OK was pressed"); System.out.println("Result: " + result.get()); } else { System.out.println("Result not present => Cancel might have been 
pressed"); } } public static void main(String[] args) { Application.launch(); }}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/31673853', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5097449/']}
jdg_381731
stackexchange
llm_judgeable_groundtruth_similarity
129769
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: UPDATED I am trying to find maximum likelihood estimation of a probability distribution function given below \begin{equation}g(x)=\frac{1}{\Gamma \left( \alpha \right)\gamma^{2\alpha}2^{\alpha-1}}x^{2\alpha-1}\exp\left\{{-\frac{x^2}{2\gamma^{2}}}\right\}I_{{\rm I\!R}^{+}}(x)\end{equation} where $\alpha >0$ is the shape parameter and $\gamma >0$ is the scale parameter. The likelihood function is then given by \begin{equation}L(\alpha,\gamma/x)=\prod\limits_{i=1}^{n}\frac{1}{\Gamma \left( \alpha \right)\gamma^{2\alpha}2^{\alpha-1}}x_i^{2\alpha-1}\exp\left\{{-\frac{x_i^2}{2\gamma^{2}}}\right\}\end{equation} Thus, the complete likelihood function is then\begin{equation}L(\alpha,\gamma/x)=\frac{1}{[\Gamma \left( \alpha \right)]^{n}\gamma^{2\alpha n}{2^{n\alpha-n}}} \exp\left\{{-\frac{1}{2\gamma^{2}}\sum\limits_{i=1}^{n}x_{i}^{2}}\right\}\left(\prod\limits_{i=1}^{n}x_{i}\right)^{2\alpha-1}\end{equation} Now, the log-likelihood function denoted by $\ell$ is \begin{equation}\begin{aligned}\ell &=\log[L(\alpha,\gamma/x)]\\&=-n\log(\Gamma \left( \alpha \right))-2\alpha n \log(\gamma)-n(\alpha-1)\log(2)-\frac{1}{2\gamma^{2}}\sum\limits_{i=1}^{n}x_{i}^{2}+(2\alpha-1)\sum\limits_{i=1}^{n}\log(x_{i})\end{aligned}\end{equation} The entries of the score function are given by \begin{equation}\begin{aligned}\frac{\partial \ell}{\partial \alpha}=-n\psi(\alpha)-2n\log(\gamma)-n\log(2)+2\sum\limits_{i=1}^{n}\log(x_{i})\end{aligned}\end{equation}where $\psi(\alpha)$ is the digamma function and \begin{equation}\begin{aligned}\frac{\partial \ell}{\partial \gamma}=-\frac{2\alpha n}{\gamma}+\frac{\sum\limits_{i=1}^{n}x_{i}^{2}}{\gamma^{3}}\end{aligned}\end{equation} Setting these two equations to zero and solving them simultaneously results in maximum likelihood estimates (MLE) of parameters, $\hat{\alpha}$ and $\hat{\gamma}$.
However, the equations obtained by setting the above partial derivatives to zero are not in closed form and the values of parameters $\alpha$ and $\gamma$ must be found using iterative methods. Fisher information matrix is defined as $I_{ij}=-E\left\{\frac{\partial^{2} \ell}{\partial \tau_i \partial \tau_j} \log[L(x_i, \vec{\tau})]\ \right\}$ where $\tau_1=\alpha$ and $\tau_2=\gamma$. Thus, information matrix for gamma-rayleigh distribution is given by, \begin{equation}I=n \left[ \begin{array}{cc}\psi_{1}(\alpha) & 2/\gamma\\2/\gamma & 4\alpha/\gamma^2 \end{array} \right]\end{equation} I am trying to use Fisher Scoring to find MLEs of the parameters. Here is my MATLAB code. I first generate 1000 random observations from gamma-distribution and run this code. My starting values and the rest are given in the code.

clear all; clc;
% Simulate 1000 sample from Gamma Distribution
n = 1000;
alpha = 3;
lambda = 0.05;
x = gamrnd(alpha, 1/lambda, 1, n);
figure(1)
histfit(x, 8, 'gam');
sumlogx = sum(log(x));
sumxsquare = sum(x.^2);
% Initial Values
alpha = mean(x)^2/var(x);
gam = mean(x)/var(x);
theta = [alpha; gam];
S = Inf;
while sum(abs(S) > 10^(-5)) > 0
    S = [-n*psi(theta(1)) - 2*n*log(theta(2)) - n*log(2) + 2*sumlogx; ...
         (-2*theta(1)*n/theta(2)) + (sumxsquare/(theta(2)^3))];
    FIM = n*[psi(1, theta(1)), 2/theta(2); ...
             2/theta(2), 4*theta(1)/(theta(2)^2)];
    theta = theta + FIM\S;
end
alpha_hat = theta(1)
gam_hat = theta(2)
fprintf('alpha_hat=%g, gamma_hat=%g \n', theta(1), theta(2))

But for some reasons I cannot figure out, I am getting "Error using psi: X must be nonnegative." error. My $\alpha$ values are being negative in the iteration somehow and I do not know how to fix it!
I am also running Newton-Raphson whose MATLAB code is given below

clear all; clc;
% Simulate 100 sample from Gamma Distribution
n = 1000;
alpha = 3;
lambda = 0.05;
x = gamrnd(alpha, 1/lambda, 1, n);
figure(1)
histfit(x, 8, 'gam');
sumlogx = sum(log(x));
sumxsquare = sum(x.^2);
% tuning parameters: scale=gamma; shape=alpha
itermin = 10^-7;
maxiter = 10^7;
sc_init = 0.000001;
sh_init = 0.000001;
converged = [0; 0; sc_init; sh_init];
% pdf
pdf = @(x,gam,alpha) 1/(gamma(alpha)*(gam^(2*alpha))*(2^(alpha-1)))*(x^(2*alpha-1))*exp(-(x^2)/(2*(gam^2)));
% score function is the first partial derivative of the log likelihood function
score = @(gam,alpha) -n*psi(alpha) - 2*n*log(gam) - n*log(2) + 2*sumlogx;
% Hessian function is the negative of the 2nd
hessian = @(gam,alpha) psi(1, alpha);
sc_loop = 2;
scale_hat = zeros(1, maxiter);
scale_hat(1) = sc_init;
while 1==1
    sh_loop = 2;
    shape_hat = zeros(1, maxiter);
    shape_hat(1) = sh_init;
    while 1==1
        % calculate chat as chat_prev + score(chat_prev)/hessian(chat_prev)
        shape_hat(sh_loop) = shape_hat(sh_loop-1) + score(scale_hat(sc_loop-1), shape_hat(sh_loop-1)) / hessian(scale_hat(sc_loop-1), shape_hat(sh_loop-1));
        % test for a convergence
        if abs(shape_hat(sh_loop) - shape_hat(sh_loop-1)) < itermin
            break % the process converged to a c value
        elseif sh_loop > maxiter
            disp(['max iteration on \alpha achieved:', num2str(maxiter)]);
            return
        end
        sh_loop = sh_loop + 1;
    end
    scale_hat(sc_loop) = (sum(x.^shape_hat(sh_loop-1))/n)^(1/shape_hat(sh_loop-1));
    % test for a convergence
    if abs(scale_hat(sc_loop) - scale_hat(sc_loop-1)) < itermin
        break % the process converged to a gamma value
    end
    converged = [converged, [sc_loop-1; sh_loop-1; scale_hat(sc_loop); shape_hat(sh_loop)]];
    sc_loop = sc_loop + 1;
end
% final display
disp(repmat('-',[1,30])), disp(' Iteration Scale Shape'), disp(repmat('-',[1,30]))
disp(num2str(converged','%6.4f')), disp(repmat('-',[1,30]))
disp(['Real values: gamma=', num2str(gam), ',alpha=', num2str(alpha)])

I am getting the same "Error using psi, X must be nonnegative." error. Could you help me about it? Something is wrong with psi function and I do not know.
Maybe I should use an approximation but I am not sure how much information I will lose! Now provide the response and nothing else.
[Note: This is my answer to the Dec. 19, 2014, version of the question.] If you operate the change of variable $y=x^2$ in your density$$f_X(x|\alpha,\beta,\sigma)=\frac{1}{\Gamma \left( \alpha \right)\beta^{\alpha}}\exp\left\{{-\frac{x^2}{2\sigma^{2}}\frac{1}{\beta}}\right\}\frac{x^{2\alpha-1}}{2^{\alpha-1}\sigma^{2\alpha}}\mathbb{I}_{{\mathbb{R}}^{+}}(x)$$ the Jacobian is given by $\dfrac{\text{d}y}{\text{d}x}= 2x = 2y^{1/2}$ and hence\begin{align*}f_Y(y|\alpha,\beta,\sigma)&=\frac{1}{\Gamma \left( \alpha \right)\beta^{\alpha}}\exp\left\{{-\frac{y}{2\sigma^{2}}\frac{1}{\beta}}\right\}\frac{y^{\frac{2\alpha-1}{2}}}{2^{\alpha-1}\sigma^{2\alpha}}\frac{1}{2 y^{1/2}}\mathbb{I}_{{\mathbb{R}}^{+}}(y)\\&=\frac{1}{\Gamma \left( \alpha \right)\beta^{\alpha}}\exp\left\{{-\frac{y}{2\sigma^{2}}\frac{1}{\beta}}\right\}\frac{y^{{\alpha-1}}}{2^{\alpha}\sigma^{2\alpha}}\mathbb{I}_{{\mathbb{R}}^{+}}(y)\end{align*}This shows that

- this is a standard $\mathcal{G}(\alpha,2\sigma^2\beta)$ model, i.e. you observe $$(x_1^2,\ldots,x_n^2)=(y_1,\ldots,y_n)\stackrel{\text{iid}}{\sim}\mathcal{G}(\alpha,\eta);$$
- the model is over-parametrised since only $\eta=2\sigma^2\beta$ can be identified;
- EM is not necessary to find the MLE of $(\alpha,\eta)$, which is not available in closed form but is the solution of$$\hat\eta=\bar{y}/\hat{\alpha}\qquad\log(\hat{\alpha})-\psi(\hat{\alpha})=\log(\bar{y})-\frac{1}{n}\sum_{i=1}^n\log(y_i)$$ where $\psi(\cdot)$ is the di-gamma function and $\bar{y}$ is the sample mean.

This paper by Thomas Minka indicates fast approximations to the resolution of the above equation.
{}
{'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/129769', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/16534/']}
jdg_381732
stackexchange
llm_judgeable_groundtruth_similarity
7428669
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm facing a problem declaring the maxrecursion option for a CTE inside a TVF. Here is the CTE (a simple calendar):

DECLARE @DEBUT DATE = '1/1/11', @FIN DATE = '1/10/11';

WITH CTE as
(
    SELECT @debut as jour
    UNION ALL
    SELECT DATEADD(day, 1, jour)
    FROM CTE
    WHERE DATEADD(day, 1, jour) <= @fin
)
SELECT jour FROM CTE
option (maxrecursion 365)

and the TVF:

CREATE FUNCTION [liste_jour] (@debut date, @fin date)
RETURNS TABLE
AS RETURN
(
    WITH CTE as
    (
        SELECT @debut as jour
        UNION ALL
        SELECT DATEADD(day, 1, jour)
        FROM CTE
        WHERE DATEADD(day, 1, jour) <= @fin
    )
    SELECT jour FROM CTE
    --option (maxrecursion 365)
)

The above TVF is running OK without the maxrecursion option but there is a syntax error with the option. What is the solution? Now provide the response and nothing else.
From this MSDN forums thread I learn that

[the] OPTION clause can be used only at the statement level

So you cannot use it within a query expression inside view definitions or inline TVFs etc. The only way to use it in your case is to create the TVF without the OPTION clause and specify it in the query that uses the TVF. We have a bug that tracks request for allowing use of OPTION clause inside any query expression (for example, if exists() or CTE or view).

and further

You can not change the default value of that option inside a udf. You will have to do it in the statement referencing the udf.

So in your example, you must specify the OPTION when you call your function:

CREATE FUNCTION [liste_jour] (@debut date, @fin date)
RETURNS TABLE
AS RETURN
(
    WITH CTE as
    (
        SELECT @debut as jour
        UNION ALL
        SELECT DATEADD(day, 1, jour)
        FROM CTE
        WHERE DATEADD(day, 1, jour) <= @fin
    )
    SELECT jour FROM CTE -- no OPTION here
)

(later)

SELECT * FROM [liste_jour] ( @from , @to ) OPTION ( MAXRECURSION 365 )

Note that you can't work round this by having a second TVF that just does the above line - you get the same error, if you try. "[the] OPTION clause can be used only at the statement level", and that's final (for now).
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/7428669', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/773110/']}
jdg_381733
stackexchange
llm_judgeable_groundtruth_similarity
17646471
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am validating the dates in below function. If the validation fails, then the form should not get submitted. I tried returning false in form onsubmit but it still gets submitted. However, Validation is working fine and getting the alert that I put in the function. Any help to stop submitting the form if validation fails.

<script>
function dateCheck() {
    start = document.getElementById('name3').value;
    end = document.getElementById('name4').value;
    compare(start, end);
    document.getElementById('name4').focus();
}

function compare(sDate, eDate) {
    function parseDate(input) {
        var parts = input.match(/(\d+)/g);
        return new Date(parts[2], parts[0]-1, parts[1]); // months are 0-based
    }
    var parse_sDate = parseDate(sDate);
    var parse_eDate = parseDate(eDate);
    parse_sDate.setFullYear(parse_sDate.getFullYear() + 1);
    if(parse_eDate >= parse_sDate) {
        alert("End date should not be greater than one year from start date");
        return false;
    }
    return true;
}
</script>
</head>
<body>
<form onsubmit="return dateCheck()">
<table>
<tr>
<td><input type="text" name="soname3" id="name3" size="15" readonly="readonly">
<img src="../Image/cal.gif" id="" style="cursor: pointer;" onclick="javascript:NewCssCal('name3','MMddyyyy','dropdown',false,'12')" /></td>
<td><input type="text" name="soname4" id="name4" size="15" readonly="readonly">
<img src="../Image/cal.gif" id="" style="cursor: pointer;" onclick="javascript:NewCssCal('name4','MMddyyyy','dropdown',false,'12'); " /></td>
</tr>
</table>
<input type="submit" value="Submit">
</form>

Now provide the response and nothing else.
Just a comment: If your listener passes a reference to the form, you can access the controls by name or ID:

<form onsubmit="return dateCheck(this)">

then:

function dateCheck(form) {
    var start = form.name3.value;
    ...
}

Note that you should declare variables, otherwise they will become global at the point they are assigned to. Also, you should check the values in the controls before passing them to the compare function (and display a message asking the user to enter a valid value if they aren't).

function dateCheck(form) {
    var start = form.name3.value;
    var end = form.name4.value;
    var valid = compare(start, end);
    if (!valid) form.name4.focus();
    return false;
}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/17646471', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2581072/']}
jdg_381734
stackexchange
llm_judgeable_groundtruth_similarity
37208801
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to call an API from Angular but am getting this error: Property 'map' does not exist on type 'Observable<Response>' The answers from this similar question didn't solve my issue: Angular 2 beta.17: Property 'map' does not exist on type 'Observable<Response>' . I am using Angular 2.0.0-beta.17. Now provide the response and nothing else.
You need to import the map operator:

import 'rxjs/add/operator/map'

Or more generally:

import 'rxjs/Rx';

Notice: For versions of RxJS 6.x.x and above, you will have to use pipeable operators as shown in the code snippet below:

import { map } from 'rxjs/operators';
import { HttpClient } from '@angular/common/http';

// ...

export class MyComponent {
    constructor(private http: HttpClient) { }

    getItems() {
        this.http.get('https://example.com/api/items')
            .pipe(map(data => {}))
            .subscribe(result => {
                console.log(result);
            });
    }
}

This is caused by the RxJS team removing support for patching Observable.prototype with operators; in RxJS 6+ the operators are standalone, pipeable functions instead. See the breaking changes in RxJS' changelog for more info. From the changelog:

operators: Pipeable operators must now be imported from rxjs like so: import { map, filter, switchMap } from 'rxjs/operators';. No deep imports.
{}
{'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/37208801', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6314676/']}
jdg_381735
stackexchange
llm_judgeable_groundtruth_similarity
30274591
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: models.py :

class Car():
    producer = models.ForeignKey(Producer, blank=True, null=True,)
    color = models.CharField()
    car_model = models.CharField()
    doors = models.CharField()

serializers.py :

class CarSerializer(ModelSerializer):
    class Meta:
        model = Car
        fields = Car._meta.get_all_field_names()

So, here I want to use all fields. But I have an error: Field name producer_id is not valid for model Car . How to fix that? Thanks! Now provide the response and nothing else.
According to the Django REST Framework's Documentation on ModelSerializers :

By default, all the model fields on the class will be mapped to a corresponding serializer fields.

This is different than Django's ModelForms , which requires you to specify the special attribute '__all__' to utilize all model fields. Therefore, all that is necessary is to declare the model.

class CarSerializer(ModelSerializer):
    class Meta:
        model = Car

Update (for versions >= 3.5)

The behaviour described above was deprecated in version 3.3, and forbidden since version 3.5. It is now mandatory to use the special attribute '__all__' to use all fields in the Django REST Framework, same as Django Forms:

Failing to set either fields or exclude raised a pending deprecation warning in version 3.3 and raised a deprecation warning in 3.4. Its usage is now mandatory.

So now it must be:

class CarSerializer(ModelSerializer):
    class Meta:
        model = Car
        fields = '__all__'
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/30274591', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1751039/']}
jdg_381736
stackexchange
llm_judgeable_groundtruth_similarity
4529765
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I've stumbled upon the following exercise in an early chapter of an analysis textbook: Let $a_n$ be a finite, nonnegative sequence such that $\sum_{i=0}^n a_i\le 1$ . Prove $$ \prod_{i=1}^n (1 + a_i) \le 1 + 2 \sum_{i=1}^n a_i. $$ There seems to be a relatively simple proof using the exponential function utilizing the fact that for $0 \le x \le 2$ , $\frac{x - 1}{2} ≤ \log(1 + x) ≤ x$ , as remarked by a friend on twitter . However, this exercise appears in the textbook before $e^x$ or even $3^x$ are defined for real $x$ , so I'm wondering whether there is a more elementary proof. A simple induction-based method certainly does not work, as it leaves you with a target statement that doesn't even hold necessarily. So I'm wondering, is anyone aware of a proof of this not relying on exponentials? Now provide the response and nothing else.
A powerful technique is to guess a stronger proposition that will enable induction as well. Let $a_i\ge0$ , $\sum_{i=1}^n a_i\le 1$ . We have the following stronger inequality. $$ \prod_{i=1}^n (1 + a_i) \le1+(1+\sum_{i=1}^na_i)\sum_{i=1}^n a_i. $$ Proof: $n=1$ is trivial. The induction step is guaranteed by the following inequality that holds for $0\le x\le1$ , $y\ge0$ . $$(1+x+x^2)(1+y)\le 1+ (x+y)+(x+y)^2.$$
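The lemma is stated without proof; expanding both sides gives RHS $-$ LHS $= y(x - x^2 + y)$, which is nonnegative for $0\le x\le1$, $y\ge0$. A brute-force numeric check of both the lemma and the strengthened bound (an illustrative Python sketch added here, not part of the original answer):

```python
import random

def lhs(x, y):
    return (1 + x + x * x) * (1 + y)

def rhs(x, y):
    return 1 + (x + y) + (x + y) ** 2

# grid check of the lemma on 0 <= x <= 1, 0 <= y <= 10
for i in range(101):
    for j in range(201):
        x, y = i / 100, j / 20
        assert lhs(x, y) <= rhs(x, y) + 1e-12

# random check of the strengthened bound prod(1+a_i) <= 1 + (1+S)S,
# and hence <= 1 + 2S, for nonnegative sequences with S = sum(a_i) <= 1
random.seed(0)
for _ in range(1000):
    n = random.randint(1, 10)
    raw = [random.random() for _ in range(n)]
    scale = random.random() / sum(raw)   # makes the sum a uniform draw in (0,1)
    a = [r * scale for r in raw]
    S = sum(a)
    p = 1.0
    for ai in a:
        p *= 1 + ai
    assert p <= 1 + (1 + S) * S + 1e-9
    assert p <= 1 + 2 * S + 1e-9
```

Note that $1+(1+S)S \le 1+2S$ exactly when $S\le 1$, which is why the stronger statement implies the original one.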
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/4529765', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/620347/']}
jdg_381737
stackexchange
llm_judgeable_groundtruth_similarity
43988535
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to create a generic html fragment component. The idea being we may store html fragments in a db somewhere, and styles that should be applied to them. I can easily set the innerHtml to insert the html structure into a base element in my view template, but how do i dynamically insert a <style> tag in my view template? Here's what I have:

@Component({
    moduleId: module.id,
    selector: 'htmlFragment',
    styleUrls: ['html-fragment.css'],
    templateUrl: 'html-fragment.html'
})
export class HtmlFragmentComponent {
    @Input() htmlContent: string;
    @Input() styleContent: string;
}

Here is the view template:

<style [innerHTML]="styleContent"></style>
<div [innerHTML]="htmlContent"></div>

Then I'm trying to use it like this:

<htmlFragment [htmlContent]='dHtml' [styleContent]="sHtml"></htmlFragment>

Where:

dHtml: string = '<div>hello this is html<ul><li>bla bla</li><li>bla bla 2</li></ul></div>';
sHtml: string = 'ul{list-style-type: none;}';

The html fragment is properly injected in here:

<div [innerHTML]="htmlContent"></div>

However the style element here:

<style [innerHTML]="styleContent"></style>

isn't working right. Is there a way to do this? Now provide the response and nothing else.
It cannot be done in the template itself (Angular template compiler does not allow it, and will just remove any <style> tag), but it can be done programmatically within the component:

ngOnInit() {
    const css = 'a {color: pink;}';
    const head = document.getElementsByTagName('head')[0];
    const style = document.createElement('style');
    style.type = 'text/css';
    style.appendChild(document.createTextNode(css));
    head.appendChild(style);
}
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/43988535', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/491436/']}
jdg_381738
stackexchange
llm_judgeable_groundtruth_similarity
2158750
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I've tried all the possible side splitter and angle bisector theorem stuff and I still can't come up with the correct answer. I even tried some law of cosine and sine stuff, but nothing. Any help would be gladly appreciated. Thanks. Now provide the response and nothing else.
Observe that triangles $ADE$ and $ABC$ are similar. Since $BC \parallel DE$ and $BF$ is the bisector of $\angle \, B$ $$\angle \,DBF = \angle \, CBF = \angle \, DFB$$ so triangle $BDF$ is isosceles with $BD = DF$. Analogously $CE=EF$. Hence the perimeter $P_{ADE}$ of triangle $ADE$ is $$P_{ADE} = AD+DF+AE+EF = AD+DB + AE+EC = AB + AC = 26 + 34 = 60$$The perimeter $P_{ABC}$ of $ABC$ is $$P_{ABC} = AB + BC+AC = 26+40+34 = 100$$By the similarity of $ADE$ and $ABC$$$\frac{DE}{BC} = \frac{P_{ADE}}{P_{ABC}} = \frac{60}{100} = \frac{3}{5}$$Since $BC = 40$$$DE = \frac{3}{5} \, 40 = 24$$
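For readers who want to cross-check numerically: since both bisectors $BF$ and $CF$ pass through the incenter, $DE$ is the line through the incenter parallel to $BC$. A short coordinate sketch (an illustrative addition, assuming the side lengths $AB=26$, $AC=34$, $BC=40$ used above):

```python
import math

AB, AC, BC = 26.0, 34.0, 40.0

# place B and C on the x-axis and solve for A from the two distance equations
B = (0.0, 0.0)
C = (BC, 0.0)
ax = (AB**2 - AC**2 + BC**2) / (2 * BC)
ay = math.sqrt(AB**2 - ax**2)
A = (ax, ay)

# incenter = (a*A + b*B + c*C)/(a+b+c), weights are the opposite side lengths
a, b, c = BC, AC, AB
iy = (a * A[1] + b * B[1] + c * C[1]) / (a + b + c)

# DE is parallel to BC at height iy, so ADE ~ ABC with ratio (ay - iy)/ay
k = (ay - iy) / ay
DE = k * BC
P_ADE = k * (AB + AC + BC)
print(DE, P_ADE)   # DE ≈ 24, perimeter of ADE ≈ 60, matching the answer
```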
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2158750', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/418637/']}
jdg_381739
stackexchange
llm_judgeable_groundtruth_similarity
26560818
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have just discovered the Endo type thanks to the network-api-support package and I have found the need to be able to throw Maybe values into Endo's. As a result I have written a function called maybeEndo. Here is an example of it being used: setProxy :: Proxy -> RequestTransformersetProxy (Proxy pHost pPort) = Endo $ addProxy pHost pPortmaybeEndo :: (a -> Endo b) -> Maybe a -> Endo bmaybeEndo _ Nothing = Endo idmaybeEndo f (Just v) = f vsetPotentialProxy :: Maybe Proxy -> RequestTransformersetPotentialProxy = maybeEndo setProxy What strikes me is that this seems like something that should be encapsulated into some type of pattern already. Now provide the response and nothing else.
You already found maybe (Endo id) . But Endo is an instance of Monoid , and Endo id is its neutral element mempty . So you could also write more generally

maybeMonoid :: Monoid b => (a -> b) -> Maybe a -> b
maybeMonoid = maybe mempty

This is already quite idiomatic (plenty of hits when you google for "maybe mempty" ). You can be even more general by using a function from Data.Foldable :

foldMap :: (Foldable t, Monoid b) => (a -> b) -> t a -> b

so with this, you can write

setPotentialProxy :: Maybe Proxy -> RequestTransformer
setPotentialProxy = foldMap setProxy

(but make sure you leave in the type signature, otherwise you’ll spend too much time figuring it out later when you read the code :-)).
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/26560818', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/83446/']}
jdg_381740