<issue_start>username_0: These two middleware functions behave differently and I cannot figure out why. Here, the error will get trapped by try/catch:

```
router.get('/force_async_error/0', async function (req, res, next) {
  try {
    await Promise.reject(new Error('my zoom 0'));
  } catch (err) {
    next(err);
  }
});
```

But here, the error will **not** get trapped by try/catch:

```
router.get('/force_async_error/1', async function (req, res, next) {
  await Promise.reject(new Error('my zoom 1'));
});
```

I thought Express wrapped all middleware functions with try/catch, so I don't see how it would behave differently? I looked into the Express source, and the handler looks like:

```
Layer.prototype.handle_request = function handle(req, res, next) {
  var fn = this.handle;

  if (fn.length > 3) {
    // not a standard request handler
    return next();
  }

  try {
    fn(req, res, next); // shouldn't this trap the async/await error?
  } catch (err) {
    next(err);
  }
};
```

so why doesn't the try/catch there capture the thrown error?<issue_comment>username_1: This is because the call is asynchronous. Take this code:

```js
try {
  console.log('Before setTimeout')

  setTimeout(() => {
    throw new Error('Oups')
  })

  console.log('After setTimeout')
} catch(err) {
  console.log('Caught', err)
}

console.log("Point of non-return, I can't handle anything anymore")
```

If you run it, you should see that the error is triggered after `Point of non-return`. When we're at the `throw` line it's too late: we're outside of `try`/`catch`. At this moment, if an error is thrown, it'll be uncaught.

You can work around this by using `async`/`await` **in the caller** (it doesn't matter for the callee), i.e.:

```js
void async function () {
  try {
    console.log('Before setTimeout')

    await new Promise((resolve, reject) =>
      setTimeout(() => {
        reject(new Error('Oups'))
      })
    )

    console.log('After setTimeout')
  } catch(err) {
    console.log('Caught', err.stack)
  }

  console.log("Point of non-return, I can't handle anything anymore")
}()
```

Finally, this means that for Express to handle async errors, the code would need to change to:

```
async function handle(req, res, next) {
  // [...]
  try {
    await fn(req, res, next); // shouldn't this trap the async/await error?
  } catch (err) {
    next(err);
  }
}
```

**A better workaround**: define a `wrap` function like this:

```
const wrap = fn => (...args) => Promise
  .resolve(fn(...args))
  .catch(args[2])
```

And use it like this:

```
app.get('/', wrap(async () => {
  await Promise.reject('It crashes!')
}))
```

Upvotes: 4 [selected_answer]<issue_comment>username_2: I'm going to add an answer here even though you've already accepted another one, because I think what's going on here can be explained better and this will help others attempting to understand this. In your code here:

```
router.get('/force_async_error/1', async function (req, res, next) {
  await Promise.reject(new Error('my zoom 1'));
});
```

let's discuss what is going on. First, you declared the callback as `async`, which you had to do in order to use `await` in it. An `async` function tells the interpreter to do several important things.

**1. An async function always returns a promise.** The resolved value of the promise will be whatever the function returns.

**2. An async function is internally wrapped with a `try/catch`.** If any exceptions are thrown in the top level scope of the function code, then those exceptions will be caught and will automatically reject the promise that the function returns.
**3. An async function allows you to use `await`.** This is an indicator to the interpreter that it should implement and allow the `await` syntax inside the function. This is tied to the previous two points above, which is why you can't use `await` in just any ol' function. Any uncaught rejections from `await` will also reject the promise that the function returns.

It's important to understand that while the `async/await` syntax allows you to kind of program with exceptions and try/catch like synchronous code, it isn't exactly the same thing. The function still returns a promise immediately, and uncaught exceptions in the function cause that promise to get rejected at some time later. They don't cause a synchronous exception to bubble up to the caller. So, the Express `try/catch` won't see a synchronous exception.

> But here, the error will not get trapped by try/catch
>
> I thought Express wrapped all middleware functions with try/catch, so I don't see how it would behave differently?
>
> so why doesn't the try/catch [in Express] there capture the thrown error?

**This is for two reasons:**

1. The rejected promise is not a synchronous throw, so there's no way for Express to catch it with a try/catch. The function just returns a rejected promise.
2. Express is not looking at the return value of the route handler callback at all (you can see that in the Express code you show). So, the fact that your `async` function returns a promise which is later rejected is just completely ignored by Express. It just does this `fn(req, res, next);` and does not pay attention to the returned promise. Thus the rejection of the promise falls on deaf ears.

There is a somewhat Express-like framework called [Koa](http://koajs.com/) that uses promises a lot, does pay attention to returned promises, and would see your rejected promise. But, that's not what Express does.

---

If you wanted some Koa-type functionality in Express, you could implement it yourself. In order to leave other functionality undisturbed so it can work normally, I'll implement a new method called `getAsync` that does use promises:

```
router.getAsync = function(...args) {
    let fn = args.pop();
    // replace route with our own route wrapper
    args.push(function(req, res, next) {
        let p = fn(req, res, next);
        // if it looks like a promise was returned here
        if (p && typeof p.catch === "function") {
            p.catch(err => {
                next(err);
            });
        }
    });
    return router.get(...args);
}
```

You could then do this:

```
router.getAsync('/force_async_error/1', async function (req, res, next) {
  await Promise.reject(new Error('my zoom 1'));
});
```

And, it would properly call `next(err)` with your error. Or, your code could even just be this:

```
router.getAsync('/force_async_error/1', function (req, res, next) {
  return Promise.reject(new Error('my zoom 1'));
});
```

---

P.S. In a full implementation, you'd probably make async versions of a bunch of the verbs, you'd implement it for middleware, and you'd put it on the router prototype. But this example is to show you how that could work, not to do a full implementation here.
Upvotes: 4 <issue_comment>username_3: Neither of these really answers the question, which, if I understand correctly, is:

***Since the async/await syntax lets you handle rejected "awaits" with non-async style try/catch syntax, why doesn't a failed "await" get handled by Express' try/catch at the top level and turned into a 500 for you?***

I believe the answer is that whatever function in the Express internals calls you would also have to be declared with "async" and invoke your handler with "await" to enable async-catching try/catch to work at that level. Wonder if there's a feature request for the Express team? All they'd need to add is two keywords in two places: on success, do nothing; on exception, hand off to the error-handling stack.

Upvotes: 0 <issue_comment>username_4: Beware that if you don't await or return the promise, it has nothing to do with Express - it just crashes the whole process. For a general solution for detached promise rejections: <https://stackoverflow.com/a/28709667>

Copied from the above answer:

```
process.on("unhandledRejection", function(reason, p){
    console.log("Unhandled", reason, p); // log all your errors, "unsuppressing" them.
    //throw reason; // optional, in case you want to treat these as errors
});
```

Upvotes: 0
<issue_start>username_0: I have a table called [database] with the following structure:

```
ID|text|section|
1 |xxxx| 1 |
2 |xxxx| 2 |
3 |xxxx| 2 |
4 |xxxx| 1 |
5 |xxxx| 4 |
6 |xxxx| 1 |
```

I'm trying to write SQL which returns the row with the second highest ID value for a given section. I.e. to get the row with the highest ID value within section 1 I can do this:

```
select *
from [database]
where [ID] = (
    select max([ID])
    from [database]
    where [section] = 1
)
```

To get the row with the second highest ID value I tried this, but with the data above this returns the row with ID 5, whilst I'd expect it to return the row with ID 4:

```
select *
from [database]
where [ID] = (
    select max([ID])-1
    from [database]
    where [section]=1
)
```

I tried adding an `AND` condition but it doesn't work. I am a newbie to SQL and ASP. I think I first need to filter on `section = 1` and then take the second maximum/highest `id` within this section; something like:

```
if( section == 1 && secMaximum ) {
    .
    .
    .
}
```

<issue_comment>username_1: If you select the two highest `id` values for the section, and then select the lowest of the two, you should get your desired result.

```
SELECT *
FROM (
    SELECT *
    FROM [database] a
    WHERE section = 1
    ORDER BY id DESC
    LIMIT 2
) AS a1
ORDER BY id ASC
LIMIT 1
```

<http://sqlfiddle.com/#!9/b413eb/3>

Upvotes: 1 <issue_comment>username_2: If you need the second maximum, just use `offset`:

```
select a.*
from [database] a
where a.section = 1
order by a.id desc
limit 1, 1;
```

Upvotes: 2 <issue_comment>username_3: Both [Fubar](https://stackoverflow.com/a/49417775/361842) and [GordonLinoff](https://stackoverflow.com/a/49417930/361842)'s answers are excellent and will work if you're using MySql. In the comments you mentioned that you're using SQL Server (Management) Studio, which implies that you're using [Microsoft's SQL Server](https://www.microsoft.com/en-gb/sql-server/sql-server-downloads) rather than [MySQL](https://www.mysql.com/), as you had tagged in your question. NB: These are different products, and whilst the `SQL` language is similar for both, there are differences.

Here's a solution for MS SQL Server:

```
select [Id], [Text], [Section]
from (
    select *
    , Row_Number() over (partition by [Section] order by [Id] desc) MaxOrder
    from [Database]
    where [Section] = 1
) x
where MaxOrder = 2
```

You can try it out here: <http://sqlfiddle.com/#!18/4dfce/2>

Most of the SQL above you'll probably understand. The confusing part is `Row_Number() over (partition by [Section] order by [Id] desc)`, so here's an explanation of that:

* `row_number()` is just an incremental number; i.e. the first row returned is 1, the second 2, and so on until the end of the "partition" (see next point).
* `partition by` creates the partitions; here we're partitioning by `Section`, so for the rows with `Section = 1`, `row_number()` returns 1, 2, 3, etc., then for those rows with `Section = 2`, `row_number()` returns 1, 2, and so on.
* `order by` says how to decide how the numbering occurs within the partition; i.e. which row should get number 1, etc. Here we're ordering by `Id desc`; i.e. the row with the highest `Id` in the `Section` is row 1, the second highest row 2, etc.
If we only ran the statement below on your sample data, we'd get the results below:

```sql
select *
, Row_Number() over (partition by [Section] order by [Id] desc) MaxOrder
from [Database]
```

```
Id | Text | Section | MaxOrder
6  | xxxx | 1       | 1
4  | xxxx | 1       | 2
1  | xxxx | 1       | 3
3  | xxxx | 2       | 1
2  | xxxx | 2       | 2
5  | xxxx | 4       | 1
```

You can see that in action here: <http://sqlfiddle.com/#!18/4dfce/3>

The rest I think you'd understand; but please ask in the comments if you have any questions.

Upvotes: 1 [selected_answer]
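Not part of the thread: on SQL Server 2012 or later, `OFFSET`/`FETCH` gives a shorter route to the same row, against the same sample table:

```sql
-- Skip the highest Id in section 1, then take the next row:
-- the second-highest Id in that section.
SELECT [Id], [Text], [Section]
FROM [Database]
WHERE [Section] = 1
ORDER BY [Id] DESC
OFFSET 1 ROWS FETCH NEXT 1 ROWS ONLY;
```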
<issue_start>username_0: I have a container which has created a default user, which has UID 1000. In my Dockerfile, I am creating the user:

```
RUN groupadd sudo && useradd -G sudo -u 1000 -U ${RUST_USER}
```

Now when I run the container, unless my current user has exactly UID 1000, volume permissions are messed up:

```
docker run -it --rm naftulikay/circleci-lambda-rust:latest \
    -v $PWD:/home/circleci/project \
    .local/bin/build
```

At runtime:

```
error: failed to write /home/circleci/project/Cargo.lock

Caused by:
  failed to open: /home/circleci/project/Cargo.lock

Caused by:
  Permission denied (os error 13)
Exited with code 101
```

This is because the user within the container has UID 1000 and the user outside the container has UID 1001. I'd imagine that since this is already all virtual mappings into kernel namespaces, it would be possible to map internal UIDs to external UIDs from the container. Is there a command line option which will allow me to dynamically remap UIDs as necessary?<issue_comment>username_1: The dynamic mapping of UIDs between the container and host has been requested, but I believe it requires kernel and filesystem changes to implement. Until then, you've got a few options (option 2 is sketched at the end of this thread):

1. Make the host match the container. With host volumes, this isn't easy. But with named volumes, docker will initialize the volume to the contents of the image, including directory and file permissions, making it rather seamless. You would need to adjust your workflow to no longer have direct access to the data in the volume and instead use containers to access your data.
2. Run the container as the host uid. You can mount /etc/passwd into the container as a host volume, and you can start the container as any uid (with `docker run -u` or the `user` entry in a compose file). The only downside is that files in the image may already be owned by the uid used to build the image, so they'll either need to be world readable (potentially writable) or moved to the volume.
3. I've been known to start my container as root with an entrypoint that corrects the uid/gid mismatch based on the file permissions from a volume mount. Then the last step of my entrypoint is to drop permissions to that of the new uid and execute the container application. For an example of an entrypoint that does this, see this [jenkins in docker](https://github.com/sudo-bmitch/jenkins-docker) example of mine that matches the jenkins gid to that of the docker socket mounted from the host.

Upvotes: 2 <issue_comment>username_2: Like [username_1](https://stackoverflow.com/users/596285/bmitch) said, you have many options depending on your system, but keep in mind you may have to restart your running process (php-fpm for example).

Some examples of how to achieve this (you can run the commands outside the container with: docker container exec ...):
* Example 1: `usermod -u 1007 www-data` will update the uid of the user www-data to 1007
* Example 2: `deluser www-data` followed by `adduser -u 1007 -D -S -G www-data www-data` will delete the user www-data and recreate it with the uid 1007
* Get the pid and restart the process. To restart a running process, for example php-fpm, you can do it this way. First get the pid, with one of the following commands: `pidof php-fpm`, `ps -ef | grep -v grep | grep php-fpm | awk '{print $2}'`, or `find /proc -mindepth 2 -maxdepth 2 -name exe -lname '*/php-fpm' -printf %h\\n 2>/dev/null | sed s+^/proc/++`. Then restart the process with the pid(s) you got just before (if your process supports the USR2 signal): `kill -USR2 pid` <-- replace pid with the number you got before

I found that the easiest way is to update the host or to build your container knowing the right uid (not always doable if you work with different environments).

Upvotes: 0
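A minimal sketch of username_1's option 2 above (run the container as the host uid), reusing the image and mount from the question; the exact flags are assumptions about your setup:

```sh
# Start the container as the invoking host user so files created in the
# bind mount stay owned by that user. /etc/passwd is mounted read-only so
# the uid still resolves to a user name inside the container.
docker run -it --rm \
  -u "$(id -u):$(id -g)" \
  -v /etc/passwd:/etc/passwd:ro \
  -v "$PWD:/home/circleci/project" \
  naftulikay/circleci-lambda-rust:latest \
  .local/bin/build
```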
<issue_start>username_0: I have an HTML form:

```
```

I want to submit this form (so the request gets sent to the PHP script) but without redirecting the page. I tried a lot of things like $.post, $.ajax, form.submit with preventDefault, etc., but without resolving my problem.
<issue_start>username_0: I have an arrayList which contains employee information like employeename, grade, designation. I have a multiselect component in my view which returns an array of grades like `[1,2,3]` once we choose `grade1`, `grade2`, `grade3` from the multiselect dropdown. Is there a way to filter my employeelist based on this grades array? Similar to this:

```
this.employeeList.filter(x=> x.grade in (grade1,grade2,grade3));
```

Or is there another way to achieve this? Basically I need to filter my employeelist based on the multiselect values. Please suggest, as I am new to TypeScript. Your help is very much appreciated.<issue_comment>username_1: I'd just use something like:

```
x => [grade1, grade2, grade3].indexOf(x.grade) != -1
```

Upvotes: 0 <issue_comment>username_2: When you deal with regular variables, you can just wrap them into an additional array `[var1, var2, ...]` and use `includes()` for the check:

```js
const grade1 = 1, grade2 = 2, grade3 = 3;

const employeeList = [
  { grade: 1 },
  { grade: 3 },
  { grade: 4 },
  { grade: 8 },
  { grade: 9 }
];

const result = employeeList.filter(x => [grade1,grade2,grade3].includes(x.grade));

console.log(result) // should print the array with objects {"grade": 1} and {"grade": 3}
```

Upvotes: 2
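A variation on username_2's answer for the multiselect case, where the grades arrive directly as an array; the `Employee` shape here is an assumption based on the fields named in the question:

```ts
interface Employee { employeename: string; grade: number; designation: string; }

const employeeList: Employee[] = [
  { employeename: 'Ann', grade: 1, designation: 'Dev' },
  { employeename: 'Bob', grade: 4, designation: 'QA' },
];

// The array the multiselect component emits, e.g. [1, 2, 3].
const selectedGrades: number[] = [1, 2, 3];

// A Set gives O(1) membership checks, so the filter stays fast
// even with many selected grades or a long employee list.
const gradeSet = new Set(selectedGrades);
const filtered = employeeList.filter(e => gradeSet.has(e.grade));
```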
<issue_start>username_0: If I issue `gcloud dataproc clusters list`, 0 clusters are listed:

```
$ gcloud dataproc clusters list
Listed 0 items.
```

However, if I specify the region `gcloud dataproc clusters list --region europe-west1`, I get back a list of clusters:

```
$ gcloud dataproc clusters list --region europe-west1
NAME                WORKER_COUNT  STATUS   ZONE
mydataproccluster1  2             RUNNING  europe-west1-d
mydataproccluster2  2             RUNNING  europe-west1-d
```

I'm guessing that the inability to get a list of clusters without specifying `--region` is a consequence of a decision made by my org's administrators; however, I'm hoping there is a way around it. I can visit <https://console.cloud.google.com/> and see a list of all the clusters in the project. Can I get the same using `gcloud`? Having to visit <https://console.cloud.google.com/> just so I can issue `gcloud dataproc clusters list --region europe-west1` seems a bit of a limitation.<issue_comment>username_1: Having to specify `--region` is how the Dataproc command group in gcloud works. The Developers Console issues list requests against all regions (you could [request](https://cloud.google.com/support/docs/issue-trackers#search_and_create_feature_requests_by_product) that gcloud do the same).

Alternatively, you can use the `global` multiregion (which is the gcloud default). This will interact well with your organization policies. If your organization has region-restricted VM locations, you will be able to create VMs in Europe but will get an error when doing so elsewhere.

Upvotes: 2 <issue_comment>username_2: The underlying regional services are by-design isolated from each other such that there's no single URL that returns the combined list (because that would be a global dependency and failure mode), and unfortunately, at the moment the layout of the gcloud libraries is such that there's no option for specifying a list of regions or shorthand for "all regions" when listing dataproc clusters or jobs.

However, you can work around this by obtaining the list of possible regional stacks from the Compute API:

```
gcloud compute regions list --format="value(name)" | \
  xargs -n 1 gcloud dataproc clusters list --region
```

The only dataproc region that doesn't match up to one of the Compute regions is the special "global" Dataproc region, which is a separate Dataproc service that spans all compute regions. For convenience you can also just add `global` to a for-loop:

```
for REGION in global $(gcloud compute regions list --format="value(name)"); do gcloud dataproc clusters list --region ${REGION}; done
```

Upvotes: 4 [selected_answer]
<issue_start>username_0: I would like to use C++ `std::map` to access the value associated to a given key in log(n) time. Since the keys of a `std::map` are sorted, technically, I can access the keys by the location in the sorted order. I know std::map does not have a random access iterator. Is there any "map like" data structure providing both access through the keys (by using the [] operator) and also providing (read-only) random access through the location of the key in the sorted order? Here is a basic example:

```
my_fancy_map['a'] = 'value_for_a'
my_fancy_map['b'] = 'value_for_b'
assert my_fancy_map.get_key_at_location(0) == 'a'
assert my_fancy_map.get_key_at_location(1) == 'b'
assert my_fancy_map.get_value_at_location(1) == 'value_for_b'
assert my_fancy_map['a'] == 'value_for_a'
```

<issue_comment>username_1: You could just iterate through them:

```
my_fancy_map['a'] = "value_for_a";
my_fancy_map['b'] = "value_for_b";

auto location = std::begin(my_fancy_map);
assert(location->first == 'a');
++location;
assert(location->first == 'b');
assert(location->second == "value_for_b");
assert(my_fancy_map['a'] == "value_for_a");
```

Upvotes: 0 <issue_comment>username_2: You can use Boost.MultiIndex's [ranked indices](http://www.boost.org/libs/multi_index/doc/tutorial/indices.html#rnk_indices):

**`[Live On Coliru](http://coliru.stacked-crooked.com/a/b5e1a65715d0d2ce)`**

```
#include <boost/multi_index_container.hpp>
#include <boost/multi_index/ranked_index.hpp>
#include <boost/multi_index/member.hpp>

using namespace boost::multi_index;

template<typename K, typename V>
using ranked_map = multi_index_container<
  std::pair<K, V>,
  indexed_by<
    ranked_unique<member<std::pair<K, V>, K, &std::pair<K, V>::first>>
  >
>;

#include <string>
#include <cassert>

int main()
{
  ranked_map<std::string, std::string> m;
  m.emplace("a", "value for a");
  m.emplace("b", "value for b");

  assert(m.nth(0)->first == "a");
  assert(m.nth(1)->first == "b");
  assert(m.nth(1)->second == "value for b");
  assert(m.find("a")->second == "value for a");
}
```

Note, however, that `nth` is not O(1) but logarithmic, so ranked indices are not exactly random-access.

**Postscript:** Another alternative with true random access is Boost.Container's [flat associative containers](http://www.boost.org/doc/html/container/non_standard_containers.html#container.non_standard_containers.flat_xxx):

**`[Live On Coliru](http://coliru.stacked-crooked.com/a/5b060201918ccb0b)`**

```
#include <boost/container/flat_map.hpp>

#include <string>
#include <cassert>

int main()
{
  boost::container::flat_map<std::string, std::string> m;
  m["a"] = "value for a";
  m["b"] = "value for b";

  assert(m.nth(0)->first == "a");
  assert(m.nth(1)->first == "b");
  assert(m.nth(1)->second == "value for b");
  assert(m["a"] == "value for a");
}
```

The downside here is that insertion takes linear rather than logarithmic time.

Upvotes: 2 [selected_answer]
<issue_start>username_0: I've been working on this for too long and need some help. I'm trying to create a dictionary using faker. If it were only that simple. Initially the dictionary is flat: a key and item. If the first letter of the key is 'B' or 'M', it will then turn that string into a dictionary with 5 keys, and keep doing that until it finds no keys starting with either of those two letters. I know, there's no recursion happening now. That's why I need help. I'm trying to figure out how to properly recurse rather than hard code the depth.

```
Starting Dictionary:
{
    "Marcia": "https://www.skinner.biz/categories/tags/terms.htm",
    "Nicholas": "https://scott-tran.com/",
    "Christopher": "https://www.ellis.com/",
    "Paul": "https://lopez.com/index/",
    "Jennifer": "https://www.sosa.com/wp-content/main/login.php"
}
```

Marcia should expand to this...

```
Example:
"Marcia": {
    "Alexander": "http://hicks.net/home.html",
    "Barry": {
        "Jared": "https://www.parker-robinson.com/faq.html",
        "Eddie": "https://www.smith-thomas.com/",
        "Ryan": "https://www.phillips.org/homepage/",
        "Mary": {
            "Alex": "http://www.perry.com/tags/explore/post.htm",
            "Joseph": "https://www.hansen.com/main/list/list/index/",
            "Alicia": "https://www.tran.biz/wp-content/explore/posts/",
            "Anna": "http://lee-mclaughlin.biz/search/login/",
            "Kevin": "https://blake.net/main/index/"
        }
        "Evan": "http://carroll.com/homepage.html"
    }
    "Sharon": "https://www.watson.org/categories/app/login/",
    "Hayley": "https://www.parks.com/",
    "William": "https://www.wyatt-ware.com/"
}
```

My code is more manual than dynamic in that I must explicitly know how many levels deep the dictionary goes, rather than dynamically figuring it out. Here's what I have that works to a depth of 2 levels, but I want it to find any key starting with 'B' or 'M' and act on it.

```
import json
from build_a_dictionary import add_dic
from faker import Faker

dic = add_dic(10)
dic1 = {}
dic2 = {}

def build_dic(dic_len):
    dic1 = {}
    fake = Faker()
    if len(dic1) == 0:
        dic1 = add_dic(dic_len)
    print(json.dumps(dic1, indent=4))
    for k, v in dic1.items():
        dic2[k] = add_dic(dic_len)
        for key in dic2[k].keys():
            for f in key:
                if f == 'B' or f == 'M':
                    dic2[k][key] = add_dic(dic_len)
    return dic2
```

Here is the code from add_dic() I wrote:

```
import string, sys, time
from faker import Faker  # had to install with pip

fake = Faker()
dic = {}
dics = {}
key = ""

def add_dic(x):
    dic = {}
    start = time.time()
    if x > 690:
        print("Please select a value under 690")
        sys.exit()
    for n in range(x):
        while len(dic) < x:
            key = fake.first_name()
            if key in dic.keys():
                break
            val = fake.uri()
            dic[key] = val
    end = time.time()
    runtime = end - start
    return dic
```

<issue_comment>username_1: You're just doing it wrong; if you want it to be recursive, write the function as a recursive function. It's essentially a custom (recursive) map function for a dictionary. As for your expected dictionary, I'm not sure how you'd ever get `Faker` to deterministically give you that same output every time. It's random...

Note: There is nothing "dynamic" about this, it's just a recursive map function.
```
from faker import Faker
import pprint

pp = pprint.PrettyPrinter(indent=4)
fake = Faker()

def map_val(key, val):
    if key[0] == 'M' or key[0] == 'B':
        names = [(fake.first_name(), fake.uri()) for i in range(5)]
        return {k : map_val(k, v) for k, v in names}
    else:
        return val

#uncomment below to generate 5 initial names
#names = [(fake.first_name(), fake.uri()) for i in range(5)]
#initial_dict = {k : v for k, v in names}

initial_dict = {
    "Marcia": "https://www.skinner.biz/categories/tags/terms.htm",
    "Nicholas": "https://scott-tran.com/",
    "Christopher": "https://www.ellis.com/",
    "Paul": "https://lopez.com/index/",
    "Jennifer": "https://www.sosa.com/wp-content/main/login.php"
}

dict_2 = {k : map_val(k, v) for k, v in initial_dict.items()}

pp.pprint(dict_2)
```

Output:

```
rpg711$ python nested_dicts.py
{   'Christopher': 'https://www.ellis.com/',
    'Jennifer': 'https://www.sosa.com/wp-content/main/login.php',
    'Marcia': {   'Chelsea': 'http://francis.org/category.jsp',
                  'Heather': 'http://www.rodgers.com/privacy.jsp',
                  'Jaime': 'https://bates-molina.com/register/',
                  'John': 'http://www.doyle.com/author.htm',
                  'Kimberly': 'https://www.harris.org/homepage/'},
    'Nicholas': 'https://scott-tran.com/',
    'Paul': 'https://lopez.com/index/'
}
```

Upvotes: 0 <issue_comment>username_2: Recursion is when a function calls itself; when designing a recursive function, it's important to have an exit condition in mind (i.e. when the recursion will stop).

Let's consider a contrived example to increment a number until it reaches a certain value:

```
def increment_until_equal_to_or_greater_than_value(item, target):
    print('item is', item, end=' ')
    if item < target:
        print('incrementing')
        item += 1
        increment_until_equal_to_or_greater_than_value(item, target)
    else:
        print('returning')
        return item

increment_until_equal_to_or_greater_than_value(1, 10)
```

And the output:

```
item is 1 incrementing
item is 2 incrementing
item is 3 incrementing
item is 4 incrementing
item is 5 incrementing
item is 6 incrementing
item is 7 incrementing
item is 8 incrementing
item is 9 incrementing
item is 10 returning
```

You can see we've defined our recursive part in the `if` statement and the exit condition in the `else`.

I've put together a snippet that shows a recursive function on a nested data structure. It doesn't solve exactly your issue; this way you can learn by dissecting it and making it fit for your use case.

```
# our recursive method
def deep_do_something_if_string(source, something):
    # if source is a dict, iterate through its values
    if isinstance(source, dict):
        for v in source.values():
            # call this method on the value
            deep_do_something_if_string(v, something)
    # if source is a list, tuple or set, iterate through its items
    elif isinstance(source, (list, tuple, set)):
        for v in source:
            deep_do_something_if_string(v, something)
    # otherwise do something with the value
    else:
        return something(source)

# a test something to do with the value
def print_it_out(value):
    print(value)

# an example data structure
some_dict = {
    'a': 'value a',
    'b': [
        {
            'c': 'value c',
            'd': 'value d',
        },
    ],
    'e': {
        'f': 'value f',
        'g': {
            'h': {
                'i': {
                    'j': 'value j'
                }
            }
        }
    }
}

deep_do_something_if_string(some_dict, print_it_out)
```

And the output:

```
value a
value c
value d
value j
value f
```

Upvotes: -1 <issue_comment>username_3: Thank you all for your help. I've managed to figure it out. It now builds a dynamic dictionary, or dynamic JSON, for whatever is needed.
```
import sys, json
from faker import Faker

fake = Faker()

def build_dic(dic_len, dic):
    if isinstance(dic, (list, tuple)):
        dic = dict(dic)
    if isinstance(dic, dict):
        for counter in range(len(dic)):
            for k, v in dic.items():
                if k[0] == 'B' or k[0] == "M":
                    update = [(fake.first_name(), fake.uri()) for i in range(5)]
                    update = dict(update)
                    dic.update({k: update})
    return dic

def walk(dic):
    for key, item in dic.items():
        #print(type(item))
        if isinstance(item, dict):
            build_dic(5, item)
            walk(item)
    return dic

a = build_dic(10, ([(fake.first_name(), fake.uri()) for i in range(10)]))
walk(a)
print(json.dumps(a, indent=4))
```

Upvotes: 0
<issue_start>username_0: I have a producer application that needs unit testing. I don't want to spin up a ZooKeeper and Kafka server for this purpose. Is there a simpler way to test it using Mockito?<issue_comment>username_1: For such testing I've used EmbeddedKafka from the spring-kafka-test library (even though I wasn't using Spring in my app, that proved to be the easiest way of setting up unit tests). Here's an example: <https://www.codenotfound.com/spring-kafka-embedded-unit-test-example.html>

It actually spins up Kafka and ZooKeeper in the same process for you, so you're not really mocking anything out, and so you don't need Mockito for this. I used plain JUnit.

Upvotes: 1 <issue_comment>username_2: If you don't want to start Kafka and ZooKeeper, you can use the mock clients that come with Kafka to fake sending and receiving messages from a Kafka cluster:

* MockProducer: <http://kafka.apache.org/10/javadoc/org/apache/kafka/clients/producer/MockProducer.html>
* MockConsumer: <http://kafka.apache.org/10/javadoc/org/apache/kafka/clients/consumer/MockConsumer.html>

Upvotes: 4 [selected_answer]
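A minimal sketch of the selected answer's `MockProducer` approach (JUnit 4; the class, topic, and record values here are made up for illustration). The mock never contacts a broker, so the test needs no ZooKeeper or Kafka process:

```java
import static org.junit.Assert.assertEquals;

import java.util.List;

import org.apache.kafka.clients.producer.MockProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;
import org.junit.Test;

public class MyProducerTest {

    @Test
    public void sendsOneRecord() {
        // autoComplete=true makes every send() complete successfully at once.
        MockProducer<String, String> producer =
                new MockProducer<>(true, new StringSerializer(), new StringSerializer());

        producer.send(new ProducerRecord<>("my-topic", "key", "value"));

        // history() returns every record "sent" through the mock.
        List<ProducerRecord<String, String>> sent = producer.history();
        assertEquals(1, sent.size());
        assertEquals("value", sent.get(0).value());
    }
}
```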
<issue_start>username_0: I am trying to make a slider, and while the code below to calculate the widths of the slides works, it throws an error in the console. What is the correct way to loop through and add a width to these elements?

Javascript

```
var calcSlideWidth = function(divId){
    let slDiv = document.getElementById(divId);
    let slides = slDiv.getElementsByClassName('slide');
    slDiv.style = "width:"+ 100*slides.length +"%";
    slideWidth = 100/slides.length;

    for (let i = 0; i <= slides.length; i++ ){
        slides[i].style = " width:"+ slideWidth +"%";
        console.log(i);
    }
}

window.onload = function(){
    calcSlideWidth("slider");
}
```

HTML

```
* Slide 1
* Slide 2
* Slide 3
```
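Not from the thread: the console error follows from the loop bound, since `i <= slides.length` runs one index past the end and `slides[i]` is then `undefined` on the final iteration. A likely fix:

```js
// Loop strictly below slides.length; slides[slides.length] does not exist.
for (let i = 0; i < slides.length; i++) {
    slides[i].style.width = slideWidth + "%";
}
```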
<issue_start>username_0: I'm trying to speed up my updates by converting my update/set statement to a merge into/using.

Old version:

```
ALTER SESSION ENABLE PARALLEL DML;
UPDATE /*+ PARALLEL(16) */ TEST_REPORT_2 rep
SET ( title ) = (
    SELECT /*+ PARALLEL(16) */ doctitle.valstr Title
    FROM MV_LLATTRDATA_SHRUNK_V3 doctitle
    WHERE doctitle.id = rep.dataid
      AND doctitle.defid = 3072256
      AND doctitle.attrid = 5
      AND doctitle.vernum = (SELECT MV.MAX_VERNUM
                             FROM MV_LLATTRDATA_MAX_VERSIONS_V1 MV
                             WHERE MV.id = rep.dataid
                               AND defid = 3072256
                               AND attrid = 5)
      AND doctitle.defvern = (SELECT MV.MAX_DEFVERN
                              FROM MV_LLATTRDATA_MAX_VERSIONS_V1 MV
                              WHERE MV.id = rep.dataid
                                AND defid = 3072256
                                AND attrid = 5));
```

New version:

```
MERGE INTO TEST_REPORT_2 REP
USING MV_LLATTRDATA_SHRUNK_V3 doctitle
ON (REP.DATAID = doctitle.ID
    AND doctitle.defid = 3072256
    AND doctitle.attrid = 5
    AND doctitle.vernum = (SELECT MV.MAX_VERNUM
                           FROM MV_LLATTRDATA_MAX_VERSIONS_V1 MV
                           WHERE MV.id = rep.dataid
                             AND defid = 3072256
                             AND attrid = 5)
    AND doctitle.defvern = (SELECT MV.MAX_DEFVERN
                            FROM MV_LLATTRDATA_MAX_VERSIONS_V1 MV
                            WHERE MV.id = rep.dataid))
WHEN MATCHED THEN UPDATE SET TITLE = doctitle.VALSTR;
```

However, I'm getting an error saying: "ORA-01427: single-row subquery returns more than one row".<issue_comment>username_1: One of the select statements of this form:

```
some_column = (select x from y where z)
```

is returning multiple values. A simple fix is:

```
some_column = (select max(x) from y where z)
```

But one way or another you must force this select to return only one value.

Upvotes: 0 <issue_comment>username_2: Perhaps the logic for `defvern` needs to include `defid` and `attrid`?

```
AND doctitle.vernum = (SELECT MV.MAX_VERNUM
                       FROM MV_LLATTRDATA_MAX_VERSIONS_V1 MV
                       WHERE MV.id = rep.dataid
                         AND defid = 3072256
                         AND attrid = 5)
AND doctitle.defvern = (SELECT MV.MAX_DEFVERN
                        FROM MV_LLATTRDATA_MAX_VERSIONS_V1 MV
                        WHERE MV.id = rep.dataid
                          AND defid = 3072256
                          AND attrid = 5)
```

That is how the logic is structured in the `update`.

Upvotes: 1
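Assembling username_2's suggested condition back into the full statement gives the following sketch (untested; table and column names are from the question):

```sql
MERGE INTO TEST_REPORT_2 REP
USING MV_LLATTRDATA_SHRUNK_V3 doctitle
ON (REP.DATAID = doctitle.ID
    AND doctitle.defid = 3072256
    AND doctitle.attrid = 5
    AND doctitle.vernum = (SELECT MV.MAX_VERNUM
                           FROM MV_LLATTRDATA_MAX_VERSIONS_V1 MV
                           WHERE MV.id = REP.DATAID
                             AND defid = 3072256
                             AND attrid = 5)
    -- the defvern subquery now carries the same defid/attrid filters,
    -- so it can no longer return more than one row
    AND doctitle.defvern = (SELECT MV.MAX_DEFVERN
                            FROM MV_LLATTRDATA_MAX_VERSIONS_V1 MV
                            WHERE MV.id = REP.DATAID
                              AND defid = 3072256
                              AND attrid = 5))
WHEN MATCHED THEN UPDATE SET TITLE = doctitle.VALSTR;
```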
<issue_start>username_0: I am using Python 3.6.4 on Windows 10 with Fall Creators Update. I am attempting to read an XML file using the following code:

```
with open('file.xml', 'rt', encoding='utf8') as file:
    for line in file.readline():
        do_something(line)
```

`readline()` is returning a single character on each call, not a complete line. The file was produced on Linux, is definitely encoded as UTF8, has nothing special such as a BOM at the beginning, and has been verified with a hex dump to contain valid data. The line end is `0x0a` since it comes from Linux. I tried specifying `-1` as the argument to `readline()`, which should be the default, without any change in behavior. The file is very large (>240GB) but the problem is occurring at the start of the file. Any suggestions as to what I might be doing wrong?<issue_comment>username_1: `readline()` will return a single line as a string (which you then iterate over). You should probably use `readlines()` instead, as this will give you a list of lines which your for-loop will iterate over, one line at a time. Even better, and more efficient:

```
for line in file:
    do_something(line)
```

Upvotes: 4 [selected_answer]<issue_comment>username_2: readline() returns a string representing a line in the file, while readlines() returns a list in which each item is a line. So it's clear that

```
for line in file.readline()
```

is iterating over a string; that's why you get a single character.

If you want to iterate over the file and avoid jamming your memory, try this:

```
line = '1'
while line:
    line = f.readline()
    if not line:
        break
    do_something(line)
```

or:

```
line = f.readline()
while line:
    do_something(line)
    line = f.readline()
```

By the way, beautifulsoup is a useful package for XML parsing.

Upvotes: 2
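Not mentioned in the thread: on Python 3.8+, an assignment expression collapses username_2's read-and-check loop into the loop condition itself (`do_something` is the placeholder from the question):

```python
with open('file.xml', 'rt', encoding='utf8') as f:
    # := assigns and tests in one step; the loop ends at EOF,
    # where readline() returns the falsy empty string ''.
    while (line := f.readline()):
        do_something(line)
```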
<issue_start>username_0: If I process the input from `stdin` with `scanLeft`, the resulting output is always one line behind my last input:

```
io.Source.stdin
  .getLines
  .scanLeft("START:")((accu, line) => accu + " " + line)
  .foreach(println(_))
```

Results in (my manual inputs are preceded by `>`):

```
> first
START:
> second
START: first
> third
START: first second
```

The sensible output I want is:

```
> first
START: first
> second
START: first second
> third
START: first second third
```

As you can see, the output following the first input line should already contain the string of the first input line. I already tried it using `.scanLeft(...).drop(1).foreach(...)`, but this leads to the following result:

```
> first
> second
START: first
> third
START: first second
```

How do I correctly omit the pure seed to get the desired result?

[UPDATE] For the time being I am content with <NAME>'s nifty workaround. Many thanks for suggesting it. **But of course, if there is any alternative to `scanLeft` that does not send the seed as first item into the following iteration chain, I will prefer that solution.**

**[UPDATE]**

**User *jwvh* understood my objective and provided an excellent solution to it. To round off their suggestion I seek a way of preprocessing the lines before sending them into the accumulation callback. Thus the `readLine` command should not be called in the accumulation callback but in a different chain link I can prepend.**<issue_comment>username_1: **Edit Summary:** Added a `map` to demonstrate that the preprocessing of lines returned by `getLines` is just as trivial.

---

You could move `println` into the body of `scanLeft` itself, to force immediate execution without the lag:

```
io.Source.stdin
  .getLines
  .scanLeft("START:") { (accu, line) =>
    val res = accu + " " + line
    println(res)
    res
  }.foreach{_ => }
```

However, this seems to behave exactly the same as a shorter and more intuitive `foldLeft`:

```
io.Source.stdin
  .getLines
  .foldLeft("START:") { (accu, line) =>
    val res = accu + " " + line
    println(res)
    res
  }
```

Example interaction:

```
first
START: first
second
START: first second
third
START: first second third
fourth
START: first second third fourth
fifth
START: first second third fourth fifth
sixth
START: first second third fourth fifth sixth
seventh
START: first second third fourth fifth sixth seventh
end
START: first second third fourth fifth sixth seventh end
```

---

**EDIT**

You can of course add a `map`-step to preprocess the lines:

```
io.Source.stdin
  .getLines
  .map(_.toUpperCase)
  .foldLeft("START:") { (accu, line) =>
    val res = accu + " " + line
    println(res)
    res
  }
```

Example interaction (typed lowercase, printed uppercase):

```
> foo
START: FOO
> bar
START: FOO BAR
> baz
START: FOO BAR BAZ
```

Upvotes: 2 <issue_comment>username_2: You can get something pretty similar with `Stream.iterate()` in place of `scanLeft()` and `StdIn.readLine` in place of `stdin.getLines`.

```
def input = Stream.iterate("START:"){prev =>
  val next = s"$prev ${io.StdIn.readLine}"
  println(next)
  next
}
```

Since a `Stream` is evaluated lazily you'll need some means to materialize it.

```
val inStr = input.takeWhile(! _.contains("quit")).last
START: one             //after input "one"
START: one two         //after input "two"
START: one two brit    //after input "brit"
START: one two brit quit  //after input "quit"
//inStr: String = START: one two brit
```

---

You actually don't have to give up on the `getLines` iterator if that's a requirement.
```
def inItr = io.Source.stdin.getLines

def input = Stream.iterate("START:"){prev =>
  val next = s"$prev ${inItr.next}"
  println(next)
  next
}
```

---

Not sure if this addresses your comments or not. A lot depends on where possible errors might come from and how they are determined.

```
Stream.iterate(document()){ doc =>
  val line = io.StdIn.readLine  //blocks here
    .trim
    .filterNot(_.isControl)
    //other String or Char manipulations
  doc.update(line)
  /* at this point you have both input line and updated document to play with */
  ...  //handle error and logging requirements
  doc  //for the next iteration
}
```

I've assumed that `.update()` modifies the source document and returns nothing (returns `Unit`). That's the usual signature for an `update()` method. Much of this can be done in a call chain (`_.method1.method2.` etc.) but sometimes that just makes things more complicated. Methods that don't return a value of interest can still be added to a call chain by using something called the [kestrel pattern](https://stackoverflow.com/questions/41815793/filter-and-report-multiple-predicates/41816281#41816281).

Upvotes: 3 [selected_answer]
<issue_start>username_0: The docs for [UIGestureRecognizer](https://developer.apple.com/documentation/uikit/uigesturerecognizer/1620004-reset) reference a `reset` function. However, calling `reset()` on a `UIPanGestureRecognizer`, which is a child of `UIGestureRecognizer`, generates this error message:

> Value of type 'UIPanGestureRecognizer' has no member 'reset'

How do you reset a `UIPanGestureRecognizer` during the "changed" state?<issue_comment>username_1: If the `UIPanGestureRecognizer` is called `recognizer`, here's how you can reset it:

```
recognizer.setTranslation(CGPoint(x: 0, y: 0), in: recognizer.view!)
```

Upvotes: 1 <issue_comment>username_2: Try `import UIKit.UIGestureRecognizerSubclass`.

Upvotes: 0
<issue_start>username_0: I'm relatively new to SQL but have learned some cool stuff. I'm getting results that don't make sense. I've got a query with several subqueries and what-not, but I have a windowed function that isn't working like I'm expecting. The part that isn't working is this (simplified from the 300 line query):

```
SELECT AVG(table.sales_amount)
OVER (PARTITION BY table.month, table.sales_rep, table.department)
FROM table
```

The problem is that when I pull the data non-aggregated, I get a different value (107) than the above returns (95). I've used windowed functions for `COUNT` and `SUM` and they work fine, but `AVG` is acting strangely. Am I missing something about how this works with `AVG`?

The subquery that `table` is a stand-in for looks like:

```
sales_rep, month, department, sales_amount
1, 2017-1, abc, 125.20
1, 2017-2, abc, 120.00
2, 2017-1, def, 100.00
...etc
```

Working out of SQL Server Management Studio.

SOLVED: I did finally figure it out. The results I was joining this subquery to had the sales rep appearing multiple times in a month (selling objects A and B), which caused whoever sold both to be counted twice. Whoops, my bad.
<issue_start>username_0: I added a button to my layout. When I try to write a callback for it, I get the following error:

```
dash.exceptions.NonExistantEventException:
    Attempting to assign a callback with the event "click"
    but the component "get_custom_vndr" doesn't have "click" as an event.
    Here is a list of the available events in "get_custom_vndr": []
```

Here's how I'm adding it to my layout:

```
app_vndr.layout = html.Div([
    html.Button(
        '+',
        id='get_custom_vndr',
        type='submit'
    )
])
```

and here's the callback function that's giving the above error:

```
@app_vndr.callback(
    dash.dependencies.Output('overlay', 'className'),
    events=[dash.dependencies.Event('get_custom_vndr', 'click'),
            dash.dependencies.Event('add_vndr_id_submit', 'click')])
def show_input(b1_n, b2_n):
    if b1_n > 0:
        return ''
    elif b1_n > 0:
        return 'hidden'
```

Did I miss something when I added the button to my layout? Or when I tried to write the callback? I got it working with

```
dash.dependencies.Input('get_custom_vndr', 'n_clicks')
```

but I'd like to use two buttons for the same output, and with the n_clicks event I'd need to try to figure out which button was clicked by comparing the current n_clicks to the previous n_clicks for each button, which seems like a pretty hacky way to do it.<issue_comment>username_1: I am not really sure if I understood your question right, but see if this helps...

> but I'd like to use two buttons for the same output

Easy! Just define the explicit dash component as Output, followed by two buttons as Input. No matter which button is clicked, both of them will trigger the same function. Example:

```
@app.callback(
    dash.dependencies.Output('overlay', 'className'),
    [dash.dependencies.Input('button1', 'n_clicks'),
     dash.dependencies.Input('button2', 'n_clicks')])
def update_output(n_clicks1, n_clicks2):
    # This is where the magic happens
```

> and with the n_clicks event, I'd need to try to figure out which button was clicked

If you want to use two buttons for the same function **and** tell which one is being used, separate the solution above into two functions:

```
@app.callback(
    dash.dependencies.Output('overlay', 'className'),
    [dash.dependencies.Input('button1', 'n_clicks')])
def update_output(n_clicks1):
    # This is where the magic happens for button 1

@app.callback(
    dash.dependencies.Output('overlay', 'className'),
    [dash.dependencies.Input('button2', 'n_clicks')])
def update_output(n_clicks2):
    # This is where the magic happens for button 2
```

Upvotes: -1 <issue_comment>username_2: Dash doesn't allow multiple callbacks for the same Output().

Upvotes: 2 [selected_answer]
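For the "which button was clicked" concern in the question: Dash releases after this thread (0.38+) added `dash.callback_context`, which reports the triggering input and avoids the n_clicks bookkeeping the asker describes. A minimal sketch reusing the question's component ids:

```python
import dash
from dash.dependencies import Input, Output

@app_vndr.callback(
    Output('overlay', 'className'),
    [Input('get_custom_vndr', 'n_clicks'),
     Input('add_vndr_id_submit', 'n_clicks')])
def show_input(b1_n, b2_n):
    ctx = dash.callback_context
    if not ctx.triggered:
        # Nothing has fired yet (initial page load).
        return 'hidden'
    # 'prop_id' looks like 'get_custom_vndr.n_clicks'.
    button_id = ctx.triggered[0]['prop_id'].split('.')[0]
    return '' if button_id == 'get_custom_vndr' else 'hidden'
```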
<issue_start>username_0: I'm very much a beginner in the Docker world, and I'm not able to make two containers communicate using docker-compose. I have two containers:

* Service Registry: a simple Spring Boot application using Netflix Eureka to implement a service registry feature.
* API Gateway: a simple Spring Boot application using Netflix Zuul. This application will periodically try to register itself with the Service Registry application by connecting to a given URL.

All works fine without Docker! And now with docker-compose, the gateway is not able to find the Eureka server URL.

docker-compose.yml file:

```
version: '3.5'
services:
  gateway:
    build:
      context: ../robots-store-gateway
      dockerfile: Dockerfile
    image: robots-store-gateway
    ports:
      - 8000:8000
    networks:
      - robots-net
  serviceregistry:
    build:
      context: ../robots-sotre-serviceregistry
    image: robots-sotre-serviceregistry
    ports:
      - 8761:8761
    networks:
      - robots-net
networks:
  robots-net:
    name: custom_network
    driver: bridge
```

The application.yml file of the gateway is:

```
eureka:
  client:
    service-url:
      default-zone: http://serviceregistry:8761/eureka/
```

I receive this exception:

> com.sun.jersey.api.client.ClientHandlerException: java.net.ConnectException: Connection refused (Connection refused)

I tried different ways to configure the Eureka client, but no luck: it doesn't work. Thanks in advance.<issue_comment>username_1: You didn't set the container name. [Link](https://docs.docker.com/compose/compose-file/#container_name) And the url or ip should be replaced by the container name. Example: mongodb://{container_name}:27017

Upvotes: -1 <issue_comment>username_2: I don't know exactly why, but this works for me:

```
version: '3.5'
services:
  gateway:
    container_name: gateway
    build:
      context: ../robots-store-gateway
      dockerfile: Dockerfile
    image: robots-store-gateway
    ports:
      - 8000:8000
    hostname: gateway
    environment:
      eureka.client.serviceUrl.defaultZone: http://serviceregistry:8761/eureka/
  serviceregistry:
    container_name: serviceregistry
    build:
      context: ../robots-sotre-serviceregistry
    image: robots-sotre-serviceregistry
    ports:
      - 8761:8761
    hostname: serviceregistry
    environment:
      eureka.client.serviceUrl.defaultZone: http://serviceregistry:8761/eureka/
```

Upvotes: 0
<issue_start>username_0: For some projects I work on, it is usually best that we squash/rebase all changes into a single commit. However, I was wondering how this affects the contributions page on GitHub. For example, if I spent 2 months pushing changes to a project I created and then after 2 months decided to rebase it to one single commit, would GitHub remove all the contribution cubes on the map for the past two months?<issue_comment>username_1: The reference page is "[Why are my contributions not showing up on my profile?](https://help.github.com/articles/why-are-my-contributions-not-showing-up-on-my-profile/)"

> Commits will appear on your contributions graph if they meet all of the following conditions:
>
> * The email address used for the commits is associated with your GitHub account.
> * The commits were made in a standalone repository, not a fork.
> * The commits were made:
>   + In the repository's default branch (usually `master`)

So if your rebase affects commits in `master`, chances are your contribution page would reflect that.

Upvotes: 2 <issue_comment>username_2: I saw this still here, so I figured I might as well answer the question. The answer is YES, it will remove the contributions from the graph. It won't do it right away, because commits that are no longer pointed to by anything can technically still be reached for a while, but they are eventually garbage collected and thus removed from your contributions page.

Upvotes: 2
<issue_start>username_0: We have some code that uses the Facebook Open Graph API to display some posts on our home page. It was originally developed by a previous developer, and I rewrote it in ASP.NET MVC for our home page (where before it was PHP which I believe was loaded in an iframe). At that time, I used the app ID and secret that were left to me. This has functioned fine for a couple of years.

This afternoon, we started getting an error back on our site: "Access to this data is temporarily disabled for non-active accounts due to changes we are making to the Facebook Platform". No sweat. I figured I just needed to update our ID and secret. Unfortunately, no one seems to remember the user ID that was in control of that app ID. No sweat. I'll make my own. Unfortunately, any ID and secret I use to access posts, even my own posts on a page totally not related to work, returns the same access error. I can get name or cover or some other fields, but as soon as I request any posts, I get the error.

Here's an example of what I'm trying:

`https://graph.facebook.com/MyCompanyName?fields=cover,name,likes,link,posts.limit(5){created_time,message,link,type,full_picture,picture,source,icon}&access_token=<PASSWORD>|b<PASSWORD>`

I am aware of the status post at <https://developers.facebook.com/status/issues/205942813488872/>, but I think I must be doing something wrong, since I can't even create new app IDs to get posts with. Why does the Facebook Graph API say my account is non-active? Thanks.<issue_comment>username_1: My understanding is that if you're not a production app, they are limiting you for specific reasons. Unclear if that's because of the Cambridge leak, or upgrading the Instagram API. I also received the same error; however, if you are testing, you can hard-code the Graph API Explorer token into your app to continue testing...

```
var data = {
    'accessToken': 'EEAC...',
}

FB.api('/' + id, getData, data, (_response) => {
    console.log(_response);
});
```

Upvotes: 1 <issue_comment>username_2: I had this problem. It was solved automatically. I think it's a Facebook issue.

Upvotes: 0 <issue_comment>username_3: Please read this article: [*<NAME> apologises for Facebook's 'mistakes' over Cambridge Analytica*](https://www.theguardian.com/technology/2018/mar/21/mark-zuckerberg-response-facebook-cambridge-analytica)

A Cambridge University researcher named <NAME> had used an app to extract the information of more than 50 million people, and then transferred it to Cambridge Analytica for commercial and political use. So Facebook is changing its policies so that personal data can be made more secure. Until then you can't do anything about it.

Upvotes: 6 [selected_answer]<issue_comment>username_4: I solved the problem on my website by removing the **events** field from the fetched fields list.

Upvotes: 2 <issue_comment>username_5: For me it works if I leave just one field, "name". If I add the "link" and/or "events" fields, it returns the error.

Upvotes: 0 <issue_comment>username_6: **What is the issue?** This error is due to recent action taken by Facebook. They said: "Access to certain types of API data is paused for non-active accounts due to changes we are making to the Facebook Platform". So if your account is non-active and you have created an App using it, then you may get this error in your Plugin. The Facebook issue link is

**When will it be resolved?** Facebook has temporarily disabled some non-active accounts. As they mentioned, they haven't given any estimated time to fix the issue, but it should get activated soon.
You can find more updates on the [Facebook Event API here](https://xylusthemes.com/error-200-access-to-this-data-is-temporarily-disabled-for-non-active-accounts-due-to-changes-we-are-making-to-the-facebook-platform/)

Upvotes: -1 <issue_comment>username_7: We started seeing this same error message on our platform today. I think there are a few things going on that all tie together:

1. As others have mentioned, there have been rapid and major responses by Facebook to increase data protection and privacy in light of the Cambridge Analytica incident. From what I understand, the bad actors exploited the ability to access the data of Users (via the graph) that the app did not have an active, first-party relationship with. So, sort of like how "6 degrees of separation" would get you the whole planet, the 1 degree of separation on the few hundred thousand Users that connected with the app directly gave the app access to roughly 50 million Users... or something like that. FB is doing what they can to lock that stuff down now, big time.
2. The specific cause of your error is that something you're asking for in the `fields` parameter makes a leap (from either the `myCompany` or the OAuth'd User/App whose `access_token` you are using) to a related item/items that FB now deems must have an "active" first-party/direct relationship with your Company/App/User in order to access. This is why you see the somewhat cryptic "`non-active accounts`" mention. I think they really mean that it's not "active with you or your app". I'm not sure which one of the `fields` you request is at fault, but some trial and error will lead you to it. For us, it was clear: we were asking for the Members of all the Groups that the User had access to. We didn't need that, so we cut it out and the error went away.

Upvotes: 2 <issue_comment>username_8: Right now I am working with the Facebook Open Graph API, and I was getting this error every time I wanted to access the members (and their basic info) of the groups I am an Admin of.

```
{
  "error": {
    "message": "(#200) Access to this data is temporarily disabled for non-active accounts due to changes we are making to the Facebook Platform",
    "type": "OAuthException",
    "code": 200,
    "fbtrace_id": "Byueyj6MtkoIx"
  }
}
```

In between, through trial and error, @JoshChristy was getting all the desired results! And after a couple hours of research we discovered that Facebook recognizes some accounts as "non-active" and some as "active" (I don't know based on what!), because I am pretty much active on Facebook.

So, if you are getting this error, that means you are not active enough for Facebook ;)

Upvotes: 1 <issue_comment>username_9: Facebook today updated the terms and conditions: <https://developers.facebook.com/docs/graph-api/changelog/breaking-changes/?translation&hc_location=ufi#groups-4-4>

Upvotes: 2 <issue_comment>username_10: I was able to get a Facebook Page access token using the method below. Anyone who may **already have** an app which has been reviewed can use that app's details as a temporary fix until Facebook is done with their API enhancements. Meaning you'll have to add the relevant redirect URIs to the reviewed app, as well as use that app's App ID and App Secret. This works for retrieving page feeds and leads; I wasn't able to retrieve conversations.
Also, the permissions I requested were **{ scope: 'ads\_management,ads\_read,manage\_pages' }** Upvotes: 0 <issue_comment>username_11: In our case, we retrieved a page access\_token with a page ID like this: ``` this.call('v2.12/'+pageid, 'GET', {fields: "access_token"}, token) ``` and ended up with the error you mentioned. However, we took the normal approach and all looks good now. <https://developers.facebook.com/docs/facebook-login/access-tokens#pagetokens> Upvotes: 0 <issue_comment>username_12: Same thing I just noticed too, and they kept my lead gen ads running and charging me even though they blocked the data. Luckily, going into Ads Manager directly you can still download the CSV/XLS files. Upvotes: 0
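Several answers above converge on the same workaround: trim the `fields` list down and reintroduce fields one at a time until the failing one shows up. A sketch of that loop using the question's URL — `MyCompanyName` is the question's placeholder, `APP_ID|APP_SECRET` stands in for an app access token, and `posts`/`events`/`link` are the fields reported above as triggers:

```sh
# Step 1: a minimal request that the answers above report still works
curl "https://graph.facebook.com/MyCompanyName?fields=name,cover&access_token=APP_ID|APP_SECRET"

# Step 2: add one field back per run; when error 200 returns, the last
# field added is the one hitting the "non-active accounts" restriction
curl "https://graph.facebook.com/MyCompanyName?fields=name,cover,posts.limit(5){created_time,message}&access_token=APP_ID|APP_SECRET"
```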
2018/03/21
1,894
7,150
<issue_start>username_0: I meet some problems using traefik with docker and I don't know why. With some containers, it works like a charm, and for other ones, I have an error when I try to access them: Bad Gateway (error 502). Here is my traefik.toml : ``` # Service logs (here debug mode) debug = true logLevel = "DEBUG" defaultEntryPoints = ["http", "https"] # Access log filePath = "/var/log/traefik/access.log" format = "common" ################################################################ # Web configuration backend ################################################################ [web] address = ":8080" ################################################################ # Entry-points configuration ################################################################ [entryPoints] [entryPoints.http] address = ":80" [entryPoints.http.redirect] entryPoint = "https" [entryPoints.https] address = ":443" [entryPoints.https.tls] ################################################################ # Docker configuration backend ################################################################ [docker] domain = "domain.tld" watch = true exposedbydefault = false endpoint = "unix:///var/run/docker.sock" ################################################################ # Let's encrypt ################################################################ [acme] email = "<EMAIL>" storageFile = "acme.json" onDemand = false onHostRule = true entryPoint = "https" [acme.httpChallenge] entryPoint = "http" [[acme.domains]] main = "domain.tld" sans = ["docker.domain.tld", "traefik.domain.tld", "phpmyadmin.domain.tld", "perso.domain.tld", "muximux.domain.tld", "wekan.domain.tld", "wiki.domain.tld", "cloud.domain.tld", "email.domain.tld"] ``` Here is my docker-compose.yml (for portainer, which is a container that works) : ``` version: '2' services: portainer: restart: always image: portainer/portainer:latest container_name: "portainer" #Automatically choose 'Manage the Docker instance where Portainer is running' by adding <--host=unix:///var/run/docker.sock> to the command ports: - "9000:9000" networks: - traefik-network volumes: - /var/run/docker.sock:/var/run/docker.sock - ../portainer:/data labels: - traefik.enable=true - traefik.backend=portainer - traefik.frontend.rule=Host:docker.domain.tld - traefik.docker.network=traefik-network - traefik.port=9000 - traefik.default.protocol=http networks: traefik-network: external : true ``` If I go to docker.domain.tld, it works ! and in https, with a valid let's encrypt certificate :) Here is my docker-compose.yml (for dokuwiki, which is a container that does not work) : ``` version: '2' services: dokuwiki: container_name: "dokuwiki" image: bitnami/dokuwiki:latest restart: always volumes: - ../dokuwiki/data:/bitnami ports: - "8085:80" - "7443:443" networks: - traefik-network labels: - traefik.backend=dokuwiki - traefik.docker.network=traefik-network - traefik.frontend.rule=Host:wiki.domain.tld - traefik.enable=true - traefik.port=8085 - traefik.default.protocol=http networks: traefik-network: external: true ``` If I go to wiki.domain.tld, it does not work ! I have a bad gateway error on the browser. I have tried to change the traefik.port to 7443 and the traefik.default.protocol to https but I have the same error. Of course it works when I try to access the wiki with the IP and the port (in http / https). I have the bad gateway error only when I type wiki.domain.tld. 
So, I don't understand why it works for some containers and not for others with the same declaration.<issue_comment>username_1: The traefik port should be the http port of the container, not the published port on the host. It communicates over the docker network, so publishing the port is unnecessary and against the goals of only having a single port published with a reverse proxy to access all the containers. In short, you need: ``` traefik.port=80 ``` --- Since this question has gotten lots of views, the other reason lots of people see a 502 from traefik is placing the containers on a different docker network from the traefik instance, or having a container on multiple networks and not telling traefik which network to use. This doesn't apply in your case since you have the following lines in your compose file that match up with the traefik service's network: ``` services: dokuwiki: networks: - traefik-network labels: - traefik.docker.network=traefik-network networks: traefik-network: external : true ``` Even if you only assign a service to a single network, some actions like publishing a port will result in your service being attached to two different networks (the ingress network being the second). The network name in the label needs to be the external name, which in your case is the same, but for others that do not specify their network as external, it may have a project or stack name prefixed, which you can see in the `docker network ls` output. Upvotes: 7 [selected_answer]<issue_comment>username_2: `traefik.docker.network` must also be the fully qualified network name, either externally defined or prefixed with the stack name. You can alternatively define a default network with `providers.docker.network=traefik-network`, which means you don't have to add the label to every container. Upvotes: 2 <issue_comment>username_3: Verify Apply: ``` firewall-cmd --add-masquerade --permanent ``` FROM: <https://www.reddit.com/r/linuxadmin/comments/7iom6e/what_does_firewallcmd_addmasquerade_do/> > > Masquerading is a fancy term for Source NAT. > > > firewall-cmd in this instance will be adding an iptables rule, > specifically to the POSTROUTING chain in the nat table. > > > You can see what it has actually done by running iptables -t nat -nvL > POSTROUTING. A typical command to manually create a masquerading rule > would be iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE, which > translates to "For packets leaving interface eth0 after they have been > routed, change their source address to the interface address of eth0". > > > This automatically adds a connection tracking entry so that packets > for connections that are masqueraded in this way have their original > address and port information reinstated as they return back through > the system. > > > None of this makes your Linux system into a router; that is separate > behaviour which is enabled (for IPv4) either by doing sysctl -w > net.ipv4.ip\_forward=1 or echo 1 > /proc/sys/net/ipv4/ip\_forward. > > > Routing simply means that the system will dumbly route traffic it receives > according to the destination of that traffic; the iptables NAT stuff > allows you to alter the packets which are emitted after that routing > takes place. > > > This is a really simple overview and there is a lot more complexity > and possibilities available by configuring it in different ways. > > > Upvotes: 0
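Applied to the dokuwiki service from the question, the accepted answer's fix boils down to one label change — point `traefik.port` at the port the service listens on inside the container. A sketch (the `ports:` mappings can then be removed if nothing else needs direct access):

```yaml
labels:
  - traefik.enable=true
  - traefik.backend=dokuwiki
  - traefik.docker.network=traefik-network
  - traefik.frontend.rule=Host:wiki.domain.tld
  - traefik.port=80                 # internal container port, not the published 8085
  - traefik.default.protocol=http
```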
2018/03/21
1,242
4,614
<issue_start>username_0: I am trying to connect to a remote SQL Server from my Spring Boot App. I have the login credentials inside application.properties ``` spring.datasource.url=jdbc:sqlserver://XXXXXXXXX;databaseName=ABC;integratedSecurity=false spring.datasource.username=user spring.datasoruce.password=<PASSWORD> spring.datasource.driver-class-name=com.microsoft.sqlserver.jdbc.SQLServerDriver ``` It gives the error: ``` com.microsoft.sqlserver.jdbc.SQLServerException: Login failed for user 'user'. ClientConnectionId:212427fc-5a34-4b59-944a-cdd3856116e5 at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDatabaseError(SQLServerException.java:216) ~[sqljdbc4-4.0.jar:na] at com.microsoft.sqlserver.jdbc.TDSTokenHandler.onEOF(tdsparser.java:254) ~[sqljdbc4-4.0.jar:na] .... ``` But if I try to connect from SQL Server Management using the exact same login credentials(SQL Server Authentication) it works perfectly. It even works with pymssql. My python code is ``` import pymssql conn = pymssql.connect(host="XXXXXXXXX", user="user", password="<PASSWORD>", database="ABC") ``` I ran a couple of SELECT statements in python and they worked fine. I don't understand why I am unable to connect through Java. Any help is appreciated. Thanks.
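One detail worth checking in the properties above: the password key is misspelled (`spring.datasoruce.password`), so Spring Boot would silently ignore it and attempt the login with an empty password — which would likely produce exactly this "Login failed for user" error even though the same credentials work from Management Studio and pymssql. A corrected sketch, keeping the question's placeholders:

```properties
spring.datasource.url=jdbc:sqlserver://XXXXXXXXX;databaseName=ABC;integratedSecurity=false
spring.datasource.username=user
# note: "datasource", not "datasoruce"
spring.datasource.password=<PASSWORD>
spring.datasource.driver-class-name=com.microsoft.sqlserver.jdbc.SQLServerDriver
```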
2018/03/21
605
2,195
<issue_start>username_0: I am loading an external svg into my Vue application as a Vue Component using the **vue-svg-loader**: <https://www.npmjs.com/package/vue-svg-loader?activeTab=readme>. I modified the loader configuration to make sure the IDs don't get dropped: ``` { test: /\.svg$/, loader: 'vue-svg-loader', // `vue-svg` for webpack 1.x options: { // optional [svgo](https://github.com/svg/svgo) options svgo: { plugins: [ {removeDoctype: true}, {removeComments: true}, {cleanupIDs: false} ] } } } ``` The svg I am trying to load looks something like this: ``` ``` Using the loader, the svg loads successfully, but some of the tags get dropped. The resulting svg looks as follows: ``` ``` "group-2", "group-4" and "group-5" get dropped, but the paths inside are intact. Has anyone else encountered this issue or knows a good solution to this? Thanks!<issue_comment>username_1: I found a solution to this issue and for anyone else who might be having similar problems, here is the solution: Change the loader configuration to the following: ``` { test: /\.svg$/, loader: 'vue-svg-loader', // `vue-svg` for webpack 1.x options: { svgo: { plugins: [ {removeDoctype: true}, {removeComments: true}, {cleanupIDs: false}, {collapseGroups: false}, {removeEmptyContainers: false} ] } } } ``` Other configurations for the loader are available here: <https://github.com/svg/svgo> Upvotes: 3 <issue_comment>username_2: For anyone who uses Vue CLI: ``` *vue.config.js* module.exports = { chainWebpack: (config) => { const svgRule = config.module.rule('svg'); svgRule.uses.clear(); svgRule .use('babel-loader') .loader('babel-loader') .end() .use('vue-svg-loader') .loader('vue-svg-loader') .options({ svgo: { plugins: [ { cleanupIDs: false }, { collapseGroups: false }, { removeEmptyContainers: false }, ], }, }); }, }; ``` You are probably searching for this in order to stop svgo from cleaning up IDs or collapsing groups. Upvotes: 0
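For completeness, this is roughly how an svg processed by the loader is then consumed as a component — a sketch assuming vue-svg-loader's documented import style, with `Diagram.svg` as a hypothetical filename:

```js
import Diagram from './assets/Diagram.svg';

export default {
  name: 'MyView',
  // the imported svg behaves like any other component,
  // so it can be rendered as <Diagram /> in the template
  components: { Diagram },
};
```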
2018/03/21
1,406
5,174
<issue_start>username_0: I have a grid which shows records from a table. On this grid I am using customized pagination and sorting, so I need to use customized column filtering as well. ``` var expression = ExpressionBuilder.Expression(request.Filters); ``` The above code snippet gets the filter condition from the Kendo Grid in the controller as an expression of type `System.Linq.Expressions.Expression<Func<T, bool>>`, and I am converting it to a string and passing it to the DAL code as shown below, ``` string filterExpression = ExpressionBuilder.Expression(request.Filters).ToString(); List<EventModel> eventModelList = new List<EventModel>(); eventModelList = eventComponent.GetEventData(request.PageSize, request.Page, searchstring, sortDirection, sortColumnName, filterExpression, ref recCount); ``` In the DAL I need to convert `filterExpression` from a string back to `System.Linq.Expressions.Expression<Func<T, bool>>` ![Converting to Expression type](https://i.stack.imgur.com/oENPD.png) ``` var res = eventInfo.AsQueryable().Where(filterExpression);//Gets error here lstEventInfo = lstEventInfo.AsQueryable().Where(res); ``` I am getting the error 'cannot convert from string to System.Linq.Expressions.Expression<Func<T, bool>>'. So could anyone tell me how I could convert a string to the `System.Linq.Expressions.Expression<Func<T, bool>>` type in C#?<issue_comment>username_1: Here is a simple example of how to create a Where clause dynamically. ``` public class Mock { public int Id { get; set; } public int ForeignId { get; set; } public decimal Total { get; set; } } class Program { static void Main(string[] args) { var list = new List<Mock>() { new Mock{ Id = 1, ForeignId = 1, Total = 100, }, }; var query = list.AsQueryable(); // t var parameter = Expression.Parameter(typeof(Mock), "t"); // t.Total var propertyExpression = Expression.PropertyOrField(parameter, "Total"); // 100.00M var constant = Expression.Constant(100M, typeof(decimal)); // t.Total == 100.00M var equalExpression = Expression.Equal(propertyExpression, constant); // t => t.Total == 100.00M var lambda = Expression.Lambda(equalExpression, parameter); // calls where. var whereExpression = Expression.Call(typeof(Queryable), "Where", new[] { query.ElementType }, query.Expression, lambda); // add where to query. query = query.Provider.CreateQuery(whereExpression) as IQueryable<Mock>; Console.ReadKey(); } } ``` --- But you can use this <https://github.com/PoweredSoft/DynamicLinq> here is the Nuget package <https://www.nuget.org/packages/PoweredSoft.DynamicLinq/> There is a little sample here of how to do simple web filtering <https://github.com/PoweredSoft/DynamicLinq#how-it-can-be-used-in-a-web-api> You could adapt it to fit your filter expression model. ``` [HttpGet][Route("FindClients")] public IHttpActionResult FindClients(string filterField = null, string filterValue = null, string sortProperty = "Id", int? page = null, int pageSize = 50) { var ctx = new MyDbContext(); var query = ctx.Clients.AsQueryable(); if (!string.IsNullOrEmpty(filterField) && !string.IsNullOrEmpty(filterValue)) query = query.Query(t => t.Contains(filterField, filterValue)).OrderBy(sortProperty); // count. var clientCount = query.Count(); int? pages = null; if (page.HasValue && pageSize > 0) { if (clientCount == 0) pages = 0; else pages = clientCount / pageSize + (clientCount % pageSize != 0 ? 1 : 0); } if (page.HasValue) query = query.Skip((page.Value-1) * pageSize).Take(pageSize); var clients = query.ToList(); return Ok(new { total = clientCount, pages = pages, data = clients }); } ``` --- An alternative is to use DynamicLinq <https://weblogs.asp.net/scottgu/dynamic-linq-part-1-using-the-linq-dynamic-query-library> Upvotes: 2 <issue_comment>username_2: I wrote this code snippet for converting from a string to an Expression. ``` public List<Product> Get(string filter = null) { var p = Expression.Parameter(typeof(Product), "x"); var e = (Expression)DynamicExpressionParser.ParseLambda(new[] { p }, null, filter); var typedExpression = (Expression<Func<Product, bool>>)e; var res = _productDal.GetList(typedExpression); return res; } ``` I used the **System.Linq.Dynamic.Core** namespace for ASP.NET Core. You can use the **System.Linq.Dynamic** namespace for ASP.NET. If you use classic ASP.NET instead of ASP.NET Core, you should write `var e = (Expression)DynamicExpression.ParseLambda(new[] { p }, null, filter);` instead of `var e = (Expression)DynamicExpressionParser.ParseLambda(new[] { p }, null, filter);` And your string parameter (which is named ***filter***) should be like *"(x.ProductID > 10)"* If your string parameter is different, you can use the following code snippet to convert from an expression to a string, to get the same string parameter into the *Get* method. ``` public static string Select(this Grid _grid, Expression<Func<Product, bool>> filter = null) { //filter = {x => (x.ProductID > 1)} BinaryExpression be = filter.Body as BinaryExpression; //be = {(x.ProductID > 1)} return be.ToString(); } ``` Upvotes: 1
2018/03/21
930
3,527
<issue_start>username_0: Having some issues getting strikethrough to work. Currently I'm doing the following: ``` theString.addAttributes([ NSAttributedStringKey.strikethroughStyle: NSUnderlineStyle.styleSingle.rawValue, NSAttributedStringKey.strikethroughColor: UIColor.white ], range: NSMakeRange(0, 1)) ``` It's not showing any sort of strikethrough though. Any ideas? I can't seem to find anything that works.<issue_comment>username_1: I am sharing the latest updates. For Swift 4, iOS 11.3 ``` attributedText.addAttributes([ NSStrikethroughStyleAttributeName: NSUnderlineStyle.styleSingle.rawValue, NSStrikethroughColorAttributeName: UIColor.black], range: textRange) ``` Upvotes: 0 <issue_comment>username_2: I use this to strike through text in an Xcode 9 / iOS 11 / Swift 4 environment ``` let strokeEffect: [NSAttributedStringKey : Any] = [ NSAttributedStringKey.strikethroughStyle: NSUnderlineStyle.styleSingle.rawValue, NSAttributedStringKey.strikethroughColor: UIColor.red, ] let strokeString = NSAttributedString(string: "text", attributes: strokeEffect) ``` Upvotes: 4 <issue_comment>username_3: Swift 4.1 ``` attributedText.addAttributes([ NSAttributedStringKey.strikethroughStyle:NSUnderlineStyle.styleSingle.rawValue, NSAttributedStringKey.strikethroughColor:UIColor.black], range: NSMakeRange(0, attributedText.length)) ``` Upvotes: 2 <issue_comment>username_4: Swift 4.2 ``` let mutPricesString : NSMutableAttributedString = NSMutableAttributedString() var attrPrice_1 : NSAttributedString = NSAttributedString() var attrPrice_2 : NSAttributedString = NSAttributedString() attrPrice_1 = NSAttributedString(string: "14000 KD", attributes: [NSAttributedString.Key.strikethroughStyle: NSUnderlineStyle.single.rawValue, NSAttributedString.Key.foregroundColor: UIColor.lightGray, NSAttributedString.Key.font: ARFont.Regular(fontSize: 15)]) attrPrice_2 = NSAttributedString(string: " 12460 KD", attributes: [NSAttributedString.Key.foregroundColor: UIColor.black, NSAttributedString.Key.font: ARFont.Semibold(fontSize: 17)]) mutPricesString.append(attrPrice_1) mutPricesString.append(attrPrice_2) lbl.attributedText = mutPricesString ``` ANSWER : [![enter image description here](https://i.stack.imgur.com/7Y1AC.png)](https://i.stack.imgur.com/7Y1AC.png) Upvotes: 1 <issue_comment>username_5: **Swift 5.1** ``` let attributedText : NSMutableAttributedString = NSMutableAttributedString(string: "Your Text") attributedText.addAttributes([ NSAttributedString.Key.strikethroughStyle: NSUnderlineStyle.single.rawValue, NSAttributedString.Key.strikethroughColor: UIColor.lightGray, NSAttributedString.Key.font : UIFont.systemFont(ofSize: 12.0) ], range: NSMakeRange(0, attributedText.length)) ``` Upvotes: 3 <issue_comment>username_6: Try this (note this needs to live inside a String extension so `self` is available) ``` extension String { func strikethrough(_ text: String) -> NSAttributedString { // strike through the given substring of self let range = (self as NSString).range(of: text) let attributedString = NSMutableAttributedString(string: self) attributedString.addAttribute(NSAttributedString.Key.strikethroughColor, value: UIColor.gray, range: range) attributedString.addAttribute(NSAttributedString.Key.strikethroughStyle, value: 2, range: range) return attributedString } } ``` ``` label.attributedText = "Hello world".strikethrough("world") ``` Upvotes: 1
2018/03/21
968
3,816
<issue_start>username_0: Thanks for the tip below • You need to go through the string one character at a time (for loop or while loop) When you hit a < you know you have hit a tag, so store the position of this character • Keep going (in a sub-loop, preferably) until you hit a >, that's your end marker • Now check the character immediately before the >. Is it /? • YES: Peek at the top of the stack. Is that string the same as the one between < and />? If yes, pop that item and break out of the subloop (you found a match!). If no, return false from the method - your work is done (the HTML is not valid). • NO: then push the whole string between < and > onto the stack and break out of this subloop, and continue the main loop. As @seesharper suggested, turn the above into psuedocode then into C#. Good luck on your journey learning to program!<issue_comment>username_1: Here's my analysis of the problem (I'm not going to give you the code solution to the problem, as others have pointed out this defeats the purpose of this sort of exercise). I'm also not dealing with inconsistently formatted (but still valid) HTML and open-close tag special cases such as , which are common in real HTML: * You need to go through the string one character at a time (`for` loop or `while` loop) When you hit a `<` you know you have hit a tag, so store the position of this character * Keep going (in a sub-loop, preferably) until you hit a `>`, that's your end marker * Now check the character immediately before the `>`. Is it `/`? * YES: Peek at the top of the stack. Is that string the same as the one between `<` and `/>`? + If yes, pop that item and break out of the subloop (you found a match!). + If no, return false from the method - your work is done (the HTML is not valid). * NO: then push the whole string between `<` and `>` onto the stack and break out of this subloop, and continue the main loop. As @seesharper suggested, turn the above into psuedocode then into C#. Good luck on your journey learning to program! Upvotes: 3 [selected_answer]<issue_comment>username_2: Your current code simply checks that "{open}" is complete, and that whatever the next tag is, is also complete, not that it is paired with its close tag. You need to be operating with strings instead of characters. You're going to read in "{open}" and "{/open}" and you need to operate on them. Start by making a list of your use cases: 1. You start with a close tag - Work out how do identify it is a close tag, and then when you try to pop your empty stack you know it fails your check. 2. You start with an open tag - Work out how to identify it is an open tag, and then push it onto your stack. 3. You find a "complete tag" - one in this format "" - Work out how to identify this type of tag. Do nothing with it, he does not need to be paired once identified correctly. 4. You encounter multiple open tags in succession. push each onto the stack. 5. You encounter a close tag - determine if it is properly paired with the top element of the stack - pop and continue if they are properly paired - fail if they are not. 6. You encounter multiple close tags in succession. Rinse and repeat 5 until tags do not match or you have an empty stack and an unmatched close. You have a lot of good logic in your current code, but it needs to be expanded to properly perform the task assigned. NOTE: I have intentionally not provided code, but some logic to help you toward your solution because this is a homework assignment. 
You will be working almost exclusively with 1) Reading a file. 2) Strings. 3) Stack. Resources for syntax, properties, and methods of each are readily available should you need to look them up. Also, I used the wrong braces because just tags weren't showing up and it was a quick edit. Upvotes: 2
2018/03/21
1,150
4,300
<issue_start>username_0: When calling history.push('/packages') the url is updated but the component will not mount (render) unless the page is reloaded. If I call createHistory({forceRefresh: true}) or manually reload the page the UI is rendered correctly. How can I configure react-router-dom to load the component without explicitly reloading the page or using forceRefresh? ***index.jsx*** ``` import React from 'react' import ReactDOM from 'react-dom' import { BrowserRouter } from 'react-router-dom' import store from './store' import {Provider} from 'react-redux' import App from './App' ReactDOM.render( , document.getElementById('app') ); ``` ***App.jsx*** ``` import React, { Component } from 'react' import { Switch, Route } from 'react-router-dom' import PropTypes from 'prop-types' import { Navbar, PageHeader, Grid, Row, Col } from 'react-bootstrap' import LoginFormContainer from './components/Login/LoginFormContainer' import PackageContainer from './components/Package/PackageContainer' class App extends Component { render() { return ( ### Mythos } /> } /> ) } } export default App ``` ***loginActions.jsx*** ``` import * as types from './actionTypes' import LoginApi from '../api/loginApi' import createHistory from 'history/createBrowserHistory' const history = createHistory() export function loginUser(user) { return function(dispatch) { return LoginApi.login(user).then(creds => { dispatch(loginUserSuccess(creds)); }).catch(error => { throw(error); }); } } export function loginUserSuccess(creds) { sessionStorage.setItem('credentials', JSON.stringify(creds.data)) history.push('/packages') return { type: types.LOGIN_USER_SUCCESS, state: creds.data } } ``` ***PackageContainer.jsx*** ``` import React, { Component } from 'react' import {connect} from 'react-redux' import PropTypes from 'prop-types' import {loadPackages} from '../../actions/packageActions' import PackageList from './PackageList' import ImmutablePropTypes from 'react-immutable-proptypes' import {Map,fromJS,List} from 'immutable' import {withRouter} from 'react-router-dom' class PackageContainer extends Component { constructor(props, context) { super(props, context); } componentDidMount() { this.props.dispatch(loadPackages()); } render() { return ( {this.props.results ? : ### No Packages Available } ); } } PackageContainer.propTypes = { results: ImmutablePropTypes.list.isRequired, }; const mapStateToProps = (state, ownProps) => { return { results: !state.getIn(['packages','packages','results']) ? List() : state.getIn(['packages','packages','results']) }; } PackageContainer = withRouter(connect(mapStateToProps)(PackageContainer)) export default PackageContainer ```<issue_comment>username_1: I assume the issue is that you create a new instance of the `history` object, but `BrowserRouter` doesn't know about the changes that happen inside it. So you should create the history object and export it from `index.jsx`, use `Router` instead of `BrowserRouter`, and pass it as the `history` property; then you can just import it wherever you need it. For example: **index.jsx** ``` import { Router } from 'react-router-dom' import createHistory from 'history/createBrowserHistory' ... export const history = createHistory() ReactDOM.render( , document.getElementById('app') ); ``` Then, in `loginActions` you just import `history` and use the `.push` method as before. 
**loginActions.jsx** ``` import * as types from './actionTypes' import LoginApi from '../api/loginApi' import { history } from './index' export function loginUser(user) { return function(dispatch) { return LoginApi.login(user).then(creds => { dispatch(loginUserSuccess(creds)); }).catch(error => { throw(error); }); } } export function loginUserSuccess(creds) { sessionStorage.setItem('credentials', JSON.stringify(creds.data)) history.push('/packages') return { type: types.LOGIN_USER_SUCCESS, state: creds.data } } ``` Hope it helps. Upvotes: 3 [selected_answer]<issue_comment>username_2: In the App.jsx ``` } /> } /> ``` Now both **PackageContainer** and **LoginFormContainer** have access to the ***history*** object Upvotes: 2
2018/03/21
688
2,443
<issue_start>username_0: I'm very new to JavaScript and I'm trying to learn some basics practicing with it. I've got stuck with this: ``` var name = prompt('enter your name', ''); if( name == null ) { alert('Cancelled'); } else if ( name == 'admin' ) { alert('hi admin'); } else { alert('I don\'t know you'); } ``` If I press esc (or cancel button) I should get 'Cancelled' message, but it's 'I don\'t know you' by some reason. But the fun part is if I'll rename variable to something else, for ex.: ``` var usr = prompt('enter your name', ''); if( usr == null ) { alert('Cancelled'); } else if ( usr == 'admin' ) { alert('hi admin'); } else { alert('I don\'t know you'); } ``` ...It will work just fine. What's wrong? I've tried it in different browsers, I've googled forbidden variable names, but I have no answer. PS: I know that esc or cancel will return empty string in safari, but it happens in all browsers
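A likely explanation, assuming the snippet runs in the top-level scope of a browser script: `var name` at global scope does not create a fresh binding — it resolves to `window.name`, which is always a string, so assigning the `null` returned by a cancelled prompt stores the string `"null"`, and the `name == null` check never matches. A sketch of the difference:

```js
// In a browser's global scope, `name` is window.name and is always a string
var name = null;
console.log(typeof name, name); // "string" "null"

// Any other identifier keeps the null as-is
var usr = null;
console.log(typeof usr, usr);   // "object" null
```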
2018/03/21
1,256
2,878
<issue_start>username_0: I am struggling with regex to extract IP and subnet mask info from an snmp walk output. Here is how the output looks. ``` [, , , , ``` I have organized the output in different lines just so it is easy to understand (actual output in my code is the above one without lines): ``` [, , , , ``` So between each block, we have the subnet mask (value='255.255.255.0') and ip address (oid='iso.3.6.1.2.1.4.21.1.11.**10.10.2.0**') I need to extract that info and save it in an array/list so it will look like this: ``` (10.10.2.0/255.255.255.0, 10.0.0.0/255.255.255.192, and so on ...) ``` I believe regex would be the best solution but despite many hours of research and trying I can't seem to find a solution. Here is what I have so far: ``` # contains SNMP output displayed above print snmp_output str_result = ''.join(str(snmp_output)) regex = re.findall(r"\b\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\b", str_result) print regex ``` It gives me this output: ``` ['255.255.255.0', '3.6.1.2', '172.16.31.10', '1172.16.31.10', '255.255.255.192', '3.6.1.2', '172.16.31.10', '172.16.58.3', '255.255.255.0', and so on ...] ``` I guess the first step would be to get output that only gives me the mask and IP and not the #s in between. Any help will be appreciated. Thanks Damon Edit: ``` for item in system_items: print '{oid}.{oid_index} {snmp_type} = {value}'.format( oid=item.oid, oid_index=item.oid_index, snmp_type=item.snmp_type, value=item.value ```<issue_comment>username_1: You can use the following regexp: ``` value='([\d.]+)'\s*\(oid='iso.*?\.(\d+\.\d+.\d+.\d+)' ``` [Demo](https://regex101.com/r/RXKcx8/1/) And in python: ``` >>> import re >>> str = r"[, , , ," >>> re.findall(r"value='([\d.]+)'\s*\(oid='iso.*?\.(\d+\.\d+.\d+.\d+)'", str) # => [('255.255.255.0', '10.10.2.0'), ('255.255.255.192', '10.0.0.0'), ('255.255.255.0', '10.10.10.0'), ('255.255.255.252', '10.11.0.0')] ``` Then you can take the tuples and format them into strings as you need. Upvotes: 2 <issue_comment>username_2: To get the output as you have it in your question: ``` >>> import re >>> regex = re.compile(r'(?:\d+\.){3}\d+$') >>> tuple('{}/{}'.format(regex.search(item.oid).group(0), item.value) ... for item in system_items) ``` I don't have `PySNMP` installed, but here's a test: ``` >>> class SNMPVariable(object): ... def __init__(self, value, oid): ... self.value = value ... self.oid = oid ... >>> s1 = SNMPVariable('255.255.255.0', 'iso.3.6.1.2.1.4.21.1.11.10.10.2.0') >>> s2 = SNMPVariable('255.255.255.252', 'iso.3.6.1.2.1.4.21.1.11.10.11.0.0') >>> system_items = [s1, s2] >>> tuple('{}/{}'.format(regex.search(item.oid).group(0), item.value) ... for item in system_items) ... ('10.10.2.0/255.255.255.0', '10.11.0.0/255.255.255.252') ``` Upvotes: 2 [selected_answer]
2018/03/21
892
3,213
<issue_start>username_0: I have a directory that contains around 3 million files. Once a day I need to run a process that creates a separate file containing file names from that large directory that have the `.html` extension. Typically, of the 3 million files that are there, 500,000 will have that `.html` extension. I am using the following: ``` find dirname -name "*.html" ``` However, this runs for about 3 hours before it completes. Is there a faster way to do this? Update: I did some testing with Perl and Java. Using Perl to get the contents of this directory and create a file of `.html` names I tried the following (note the times): ``` my @files = # 45 minutes ``` When I tried this with Java using: ``` final File[] files = dirname.listFiles(new FilenameFilter() { @Override public boolean accept(File dir, String name) { return name.endsWith(".html"); } }); ``` How is Java able to do this in 3 minutes as opposed to Perl or any Unix command I can think of?<issue_comment>username_1: You should use "getdents" in place of ls/find. ls and practically every other method of listing a directory (including python os.listdir, find .) rely on libc readdir(). However readdir() only reads 32K of directory entries at a time, which means that if you have a lot of files in the same directory (i.e. 500M of directory entries) it is going to take an insanely long time to read all the directory entries, especially on a slow disk. For directories containing a large number of files, you'll need to dig deeper than tools that rely on readdir(). You will need to use the getdents() syscall directly, rather than helper methods from libc. You can find the C code to list the files using getdents() from [here](http://man7.org/linux/man-pages/man2/getdents.2.html): There are two modifications you will need to make in order to quickly list all the files in a directory. First, increase the buffer size from X to something like 5 megabytes. ``` #define BUF_SIZE 1024*1024*5 ``` Then modify the main loop where it prints out the information about each file in the directory to skip entries with inode == 0. I did this by adding ``` if (dp->d_ino != 0) printf(...); ``` In my case I also really only cared about the file names in the directory so I also rewrote the printf() statement to only print the filename. ``` if(d->d_ino) printf("%s\n", (char *) d->d_name); ``` Compile it (it doesn't need any external libraries, so it's super simple to do) ``` gcc listdir.c -o listdir ``` Now just run ``` ./listdir [directory with insane number of files] ``` Upvotes: 1 <issue_comment>username_2: The default file glob() sorts the file list; that's why it's taking a long time. ``` my @files = # 45 minutes ``` Try reading the directory directly: ``` my @files = (); opendir my $dh, $dirname or die "could not open $dirname: $!\n"; while( my $file = readdir $dh ){ push @files, $file if $file =~ /\.html$/; } closedir $dh or die "could not close $dirname: $!\n"; ``` Upvotes: 2 <issue_comment>username_3: You can use ls like below ``` \ls -U ``` -U do not sort; list entries in directory order Upvotes: 0
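One cheap experiment before resorting to getdents(): if the three million files all sit directly in that one directory, telling find not to descend at all avoids walking anything else, and redirecting straight to the output file avoids terminal overhead. A sketch with standard GNU find options:

```sh
# -maxdepth 1 keeps find from recursing into subdirectories
find dirname -maxdepth 1 -name '*.html' > html_files.txt
```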
2018/03/21
594
1,908
<issue_start>username_0: here's my code : ``` SELECT p.ID, p.post_title, MAX(IF(pm.meta_key = 'featured_image', pm.meta_value, NULL)) AS event_imgID FROM wpgp_posts AS p LEFT JOIN wpgp_postmeta AS pm on pm.post_id = p.ID WHERE p.post_type = 'product' and p.post_status = 'publish' GROUP BY p.ID ``` event\_imgID retrieve the ID from postmeta but the img url is stored in the wpgp\_posts in the guid column How to achieve this ?<issue_comment>username_1: You can join in the table to get more information about the image: ``` SELECT . . . FROM (SELECT p.ID, p.post_title, MAX(CASE WHEN pm.meta_key = 'featured_image' THEN pm.meta_value END) AS event_imgID FROM wpgp_posts p LEFT JOIN wpgp_postmeta pm ON pm.post_id = p.ID WHERE p.post_type = 'product' and p.post_status = 'publish' GROUP BY p.ID ) pi LEFT JOIN wpgp_posts wp ON pi.event_imgID = wp.guid ``` Upvotes: 2 <issue_comment>username_2: I finally found the way to do it. I had to search thumbnail\_id instead of guid : ``` SELECT p1.ID, p1.post_title, MAX(IF(pm.meta_key = 'listing_event_date', pm.meta_value, NULL)) AS event_date, wm2.meta_value as event_img FROM wpgp_posts p1 LEFT JOIN wpgp_postmeta AS pm on (pm.post_id = p1.ID) LEFT JOIN wpgp_postmeta wm1 ON ( wm1.post_id = p1.id AND wm1.meta_value IS NOT NULL AND wm1.meta_key = '_thumbnail_id' ) LEFT JOIN wpgp_postmeta wm2 ON ( wm1.meta_value = wm2.post_id AND wm2.meta_key = '_wp_attached_file' AND wm2.meta_value IS NOT NULL ) WHERE p1.post_status='publish' AND p1.post_type='product' GROUP BY p1.ID, wm2.meta_id ORDER BY event_date ASC ``` Thanks Upvotes: 2 [selected_answer]
2018/03/21
720
2,400
<issue_start>username_0: So I am trying to read a file using a scanner. This file contains data where there are two towns, and the distance between them follows them on each line. So like this: Ebor,Guyra,90 I am trying to get each town individually, allowing for duplicates. This is what I have so far: ``` // Create scanner for file for data Scanner scanner = new Scanner(new File(file)).useDelimiter("(\\p{javaWhitespace}|\\.|,)+"); // First, count total number of elements in data set int dataCount = 0; while(scanner.hasNext()) { System.out.print(scanner.next()); System.out.println(); dataCount++; } ``` Right now, the program prints out each piece of information, whether it is a town name, or an integer value. Like so: Ebor Guyra 90 How can I make it so I have an output like this for each line: Ebor Guyra Thank you!<issue_comment>username_1: Assuming well-formed input, just modify the loop as: ``` while(scanner.hasNext()) { System.out.print(scanner.next()); System.out.print(scanner.next()); System.out.println(); scanner.next(); dataCount += 3; } ``` Otherwise, if the input is not well-formed, check with `hasNext()` before each `next()` call if you need to break the loop there. Upvotes: 2 <issue_comment>username_2: Try it this way: ``` Scanner scanner = new Scanner(new File(file)); int dataCount = 0; while(scanner.hasNextLine()) { String[] line = scanner.nextLine().split(","); for(String e : line) { if (!e.matches("-?\\d+")) System.out.println(e); } System.out.println(); dataCount++; } ``` We go line by line, split each line into an array, and check with a regular expression whether each entry is an integer. ``` -? stands for the negative sign, which may occur zero or one time \\d+ stands for one or more digits ``` Example input: ``` Ebor,Guyra,90 Warsaw,Paris,1000 ``` Output: ``` Ebor Guyra Warsaw Paris ``` Upvotes: 1 <issue_comment>username_3: I wrote a method called intParsable: ``` public static boolean intParsable(String str) { try { Integer.parseInt(str); return true; } catch(Exception e) { return false; } } ``` Then in your while loop I would have: ``` String input = scanner.next(); if(!intParsable(input)) { System.out.print(input); System.out.println(); dataCount++; } ``` Upvotes: 1
2018/03/21
1,202
4,353
<issue_start>username_0: I don't understand how colab works with directories, I created a notebook, and colab put it in /Google Drive/Colab Notebooks. Now I need to import a file (data.py) where I have a bunch of functions I need. Intuition tells me to put the file in that same directory and import it with: import data but apparently that's not the way... I also tried adding the directory to the set of paths but I am specifying the directory incorrectly.. Can anyone help with this? Thanks in advance!<issue_comment>username_1: Colab notebooks are stored on Google Drive. But it is run on another virtual machine. So, you need to copy your data.py there too. Do this to upload data.py through Colab. ``` from google.colab import files files.upload() # choose the file on your computer to upload it then import data ``` Upvotes: 4 <issue_comment>username_2: **To upload Local files from system to collab storage/directory.** ``` from google.colab import files def getLocalFiles(): _files = files.upload() if len(_files) >0: for k,v in _files.items(): open(k,'wb').write(v) getLocalFiles() ``` [![enter image description here](https://i.stack.imgur.com/R1zAm.png)](https://i.stack.imgur.com/R1zAm.png) Upvotes: 3 <issue_comment>username_3: So, here is how I finally solved this. I have to point out however, that in my case I had to work with several files and proprietary modules that were changing all the time. The best solution I found to do this was to use a FUSE wrapper to "link" colab to my google account. I used this particular tool: <https://github.com/astrada/google-drive-ocamlfuse> There is an example of how to set up your environment there, but here is how I did it: ``` # Install a Drive FUSE wrapper. !apt-get install -y -qq software-properties-common python-software-properties module-init-tools !add-apt-repository -y ppa:alessandro-strada/ppa 2>&1 > /dev/null !apt-get update -qq 2>&1 > /dev/null !apt-get -y install -qq google-drive-ocamlfuse fuse # Generate auth tokens for Colab from google.colab import auth auth.authenticate_user() # Generate creds for the Drive FUSE library. from oauth2client.client import GoogleCredentials creds = GoogleCredentials.get_application_default() import getpass !google-drive-ocamlfuse -headless -id={creds.client_id} -secret={creds.client_secret} < /dev/null 2>&1 | grep URL vcode = getpass.getpass() !echo {vcode} | google-drive-ocamlfuse -headless -id={creds.client_id} -secret={creds.client_secret} ``` At this point you'll have installed the wrapper and the code above will generate a couple of links for you to authorize access to your google drive account. The you have to create a folder in the colab file system (remember this is not persistent, as far as I know...) and mount your drive there: ``` # Create a directory and mount Google Drive using that directory. !mkdir -p drive !google-drive-ocamlfuse drive print ('Files in Drive:') !ls drive/ ``` the *!ls* command will print the directory contents so you can check it works, and that's it. You now have all the files you need and you can make changes to them with no further complications. Remember that you may need to restar the kernel to update the imports and variables. Hope this works for someone! 
Upvotes: 1 <issue_comment>username_4: To easily upload a local file you can use the new Google Colab feature: * click on right arrow on the left of your screen (below the Google Colab logo) [![enter image description here](https://i.stack.imgur.com/8UfS1.png)](https://i.stack.imgur.com/8UfS1.png) * select Files tab * click Upload button It will open a popup to choose file to upload from your local filesystem. Upvotes: 4 <issue_comment>username_5: Now google is officially providing support for accessing and working with Gdrive at ease. You can use the below code to mount your drive to Colab: ``` from google.colab import drive drive.mount('/gdrive') %cd /gdrive/My\ Drive/{location you want to move} ``` Upvotes: 5 [selected_answer]<issue_comment>username_6: you can write following commands in colab to mount the drive ``` from google.colab import drive drive.mount('/content/gdrive') ``` and you can download from some external url into the drive through simple linux command wget like this ``` !wget 'https://dataverse.harvard.edu/dataset' ``` Upvotes: 0
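Tying the mount-based answers back to the original question of importing `data.py`: the remaining step is putting the mounted folder on Python's module search path. A sketch assuming the file lives in the `Colab Notebooks` folder mentioned in the question:

```python
from google.colab import drive
drive.mount('/content/gdrive')

import sys
sys.path.append('/content/gdrive/My Drive/Colab Notebooks')

import data  # now resolves to data.py inside that folder
```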
2018/03/21
577
1,766
<issue_start>username_0: I have to write a recursive function (we'll call it `arrow(n)`) which draws an arrow that works like this: ``` arrow(4) ``` Printed output : ``` * ** *** **** *** ** * ``` where `arrow` can only take one parameter like shown above. Is it possible only by using one parameter with recursion? I'm curious because it was a test question and I can't find any solution. Thanks<issue_comment>username_1: No, even when recursing, you'll need two variables (one to keep track of the current count, and another to keep track of the size). You can get cute and use an inner function. ``` def arrow(n): def _arrow(k, n): print('*' * (n - k + 1)) if k > 1: _arrow(k - 1, n) print('*' * (n - k + 1)) _arrow(n, n) ``` ``` arrow(4) # * # ** # *** # **** # *** # ** # * ``` It's essentially the more unreadable equivalent of loops, but hey, that's the nature of exam questions. Upvotes: 0 <issue_comment>username_2: The recursion is in the helper functions, rather than `arrow`, but it's still single-parameter recursion in each case. ``` def arrow_top(n): if n > 0: arrow_top(n-1) print('*' * n) def arrow_bot(n): if n > 0: print('*' * n) arrow_bot(n-1) def arrow(n): arrow_top(n) arrow_bot(n-1) arrow(4) ``` Output: ``` * ** *** **** *** ** * ``` Upvotes: 0 <issue_comment>username_3: This works by setting a global to remember where to iterate to: ``` def arrow(n): # remember max, but only once global top try: top except NameError: top = n n = 1 if n < top: print(n * '*') arrow(n + 1) print(n * '*') elif n == top: print(n * '*') arrow(4) ``` Upvotes: -1
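For what it's worth, a single-parameter solution does exist if the recursion returns rows instead of printing as it goes: an n-arrow is the (n-1)-arrow with every row widened by one star and a single-star row added at each end. A sketch:

```python
def arrow(n):
    def rows(k):
        if k == 1:
            return ['*']
        # wrap the smaller arrow: one-star rows outside, inner rows one star wider
        return ['*'] + ['*' + row for row in rows(k - 1)] + ['*']
    print('\n'.join(rows(n)))

arrow(4)
```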
2018/03/21
1,063
4,331
<issue_start>username_0: My question is rather conceptual. I noticed that there are different packages for the same architecture, like x86-64, but for different OSes. For example, RPM offers different packages for Fedora and OpenSUSE for the same x86-64 architecture: <http://www.rpmfind.net/linux/rpm2html/search.php?query=wget> - not to mention different packages served up by YUM and APT (for Ubuntu), all for x86-64. My understanding is that a package contains binary instructions suitable for a given CPU architecture, so that as long as CPU is of that architecture, it should be able to execute those instructions natively. So why do packages built for the same architecture differ for different OSes?<issue_comment>username_1: These packages contain native binaries that require a particular [Application Binary Interface](https://en.wikipedia.org/wiki/Application_binary_interface) (ABI) to run. The CPU architecture is only one part of the ABI. Different Linux distros have different ABIs and therefore the same binary may not be [compatible](https://en.wikipedia.org/wiki/Binary-code_compatibility) across them. That's why there are different packages for the same architecture, but different OSes. The [Linux Standard Base](https://en.wikipedia.org/wiki/Linux_Standard_Base) project aims at standardizing the ABIs of Linux distros so that it's easier to build portable packages. Upvotes: 2 <issue_comment>username_2: **Considering just different Linux distros:** Besides being compiled against different library versions (as Hadi described), the packaging itself and default config files can be different. Maybe one distro wants `/etc/wget.conf`, while maybe another wants `/etc/default/wget.conf`, or for those files to have different contents. (I forget if wget specifically has a global config file; some packages definitely do, and not just servers like Exim or Apache.) Or different distros could enable / disable different sets of compile-time options. (Traditionally set with `./configure --enable-foo --disable-bar` before `make -j4 && make install`). For `wget`, choices may include which TLS library to compile against (OpenSSL vs. gnutls), not just which version. So ABIs (library versions) are important, but there are other reasons why every distro has their own package of things. --- **Completely different OSes**, like Linux vs. Windows vs. OS X, have different executable file formats. ELF vs. PE vs. Mach-O. All three of those formats contain x86-64 machine code, but the metadata is different. (And OS differences mean you'd want the machine code to do different things. For example, opening a file on Linux or OS X (or any POSIX OS) can be done with an [`int open(const char *pathname, int flags, mode_t mode);`](http://man7.org/linux/man-pages/man2/open.2.html) system call. So the same source code works for both those platforms, although it can still compile to different machine code, or actually in this case very similar machine code to call a libc wrapper around the system call ([OS X and Linux use the same function calling convention](https://stackoverflow.com/questions/2535989/what-are-the-calling-conventions-for-unix-linux-system-calls-on-i386-and-x86-6)), but with a different symbol name. OS X would compile to a call to `_open`, but Linux doesn't prepend underscores to symbol names, so the dynamic linker symbol name would be `open`. The mode constants for `open` might be different. e.g. maybe OS X defines `O_RDWR` as `4`, but maybe Linux defines it as `2`. 
This would be an ABI difference: same source compiles to different machine code, where the program and the library agree on what means what. But **Windows isn't a POSIX system**. The WinAPI function for opening a file is [`HFILE WINAPI OpenFile(LPCSTR lpFileName, LPOFSTRUCT lpReOpenBuff, UINT uStyle);`](https://msdn.microsoft.com/en-us/library/windows/desktop/aa365430(v=vs.85).aspx) If you want to do anything invented more recently than opening / closing files, especially drawing a GUI, things are even less similar between platforms and you will use different libraries. (Or a cross platform GUI library will use a different back-end on different platforms). OS X and Linux both have Unix heritage (real or as a clone implementation), so the low-level file stuff is similar. Upvotes: 3 [selected_answer]
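A tiny illustration of the POSIX point made above — the following compiles unchanged on Linux and OS X, yet the two resulting binaries are not interchangeable (ELF vs. Mach-O containers, `open` vs. `_open` at the symbol level), and it does not build against WinAPI at all:

```c
#include <fcntl.h>
#include <unistd.h>

int main(void) {
    /* Same POSIX source on Linux and OS X; only the executable format
       and the dynamic symbol name for open() differ. */
    int fd = open("data.txt", O_RDONLY);
    if (fd >= 0)
        close(fd);
    return 0;
}
```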
2018/03/21
1,083
3,419
<issue_start>username_0: After a massive amount of headaches, I was able to get this almost functioning. Problem: In the error/output Robocopy appears to be treating $args[4] (ref: $sourcePath) as everysingle IP in the range instead of just one object. I'm assuming the rest of the syntax is correct, because if I switch `$ip = 101..119 | foreach { "192.168.1.$_" }` to `$ip = "192.168.1.101"` everything works correctly. The Robocopy dumps into the console -source as all of the IP addresses in the range from $ip. What am I doing wrong here? ``` ##################################################### #Purpose: to ping an IP range of network locations in a loop until successful. When successful, copy new files from source onto storage. #Each ping and transfer needs to be ran individually and simultaneously due to time constraints. ##################################################### #define the IP range, source path & credentials, and storage path $ip = 101..119 | foreach { "192.168.1.$_" } #$ip = "192.168.1.101" #everything works if I comment above and uncomment here $source = "\\$ip" $sourcePath = "$source\admin\" $dest = "C:\Storage\" $destFull = "$dest$ip\" $username = "user" $password = "<PASSWORD>" #This is how to test connection. Once returns TRUE, then copy new files only from source to destination. #copy all subdirectories & files in restartable mode foreach ($src in $ip){ Start-Job -ScriptBlock { DO {$ping = Test-Connection $args[0] -BufferSize 16 -Count 4 -quiet} until ($ping) net use \\$args[1] $args[2] /USER:$args[3] robocopy $args[4] $args[5] /E /Z } -ArgumentList $src, $source, $password, $username, $sourcePath, $destFull -Name "$src" #pass arguments to Start-Job's scriptblock } #get all jobs in session, supress command prompt until all jobs are complete. Then once complete, get the results. Get-Job | Wait-Job Get-Job | Receive-Job ```<issue_comment>username_1: At the point you create $source, $ip is an array, so $source ends up as a very long string of all the items concatenated: ``` \\192.168.1.101 192.168.1.102 192.168.1.103 192.168.1.104 ... ``` You can see this for yourself, by running just these two lines, then examining the contents of $source: ``` $ip = 101..119 | foreach { "192.168.1.$_" } $source = "\\$ip" ``` This has a knock-on effect to $sourcePath which is used as $args[4] in your call to RoboCopy. You should build your paths inside your foreach loop, where you have access to each IP address ($src) from the $ip collection. Upvotes: 2 [selected_answer]<issue_comment>username_2: Some sources/etc. are different, but that's just due to test environment. I decided to use the [io.path] for the paths since I was running into problems with `$args` when defining variables. Thank you username_1 for the above help. I was completely overlooking that fact. ``` $ScriptBlock = { $source = [io.path]::Combine('\\',$args[0]) $sourcePath = [io.path]::Combine('\\',$args[0],'c$','admin\') $dest = "C:\Storage\" $destFull = [io.path]::Combine($dest,$args[0]) DO {$ping = Test-Connection $args[0] -BufferSize 16 -Count 1 -quiet} until ($ping) net use $source password /USER:user robocopy $sourcePath $destFull /E /Z } $ip = 101..119 | foreach { "192.168.1.$_" } foreach ($dvr in $ip){ Start-Job $ScriptBlock -ArgumentList $dvr } Get-Job | Wait-Job Get-Job | Receive-Job ``` Upvotes: 0
2018/03/21
383
1,482
<issue_start>username_0: We have a private docker registry (Sonatype nexus) which holds all our private docker images. I was looking for an open source vulnerability and security scanner for scanning all the images on the private registry also I want to install the tool on the linux box and also integrate with Jenkins. I came across Twistlock, Anchore, Dagda. None of these seems to provide a tool which can be installed and used without any license. Any inputs?<issue_comment>username_1: You can use Clair: <https://github.com/coreos/clair> Simply follow the instruction: <https://github.com/arminc/clair-scanner#run> replace the date on the clair-db image with either latest or a specific date from <https://hub.docker.com/r/arminc/clair-db/tags/> you can get the scanner binaries here: <https://github.com/arminc/clair-scanner/releases> Upvotes: 3 [selected_answer]<issue_comment>username_2: I too would say to use CoreOS Clair for this purpose. However a few things to consider, 1. How you gonna run Clair i.e. As Docker compose or in your kube cluster? 2. Clair provides Static Scanning only (only the container image, not the running container) 3. What sort of integration tools you are using? If you thinking to integrate with your CI/CD pipelines, the best Clair client I can recommend is Klar because it is much simple and straightforward to use. clairctl can also be used but it gives some hard time when troubleshooting with permission errors. Upvotes: 0
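A sketch of what the clair-scanner setup linked above typically looks like in practice, written from memory of its README — check the repository for current image tags before relying on the exact commands:

```sh
# Start the pre-populated vulnerability database and Clair itself
docker run -d --name clair-db arminc/clair-db:latest
docker run -d --name clair --link clair-db:postgres -p 6060:6060 arminc/clair-local-scan:latest

# Scan an image pulled from the private registry; the host IP lets
# Clair reach back to the scanner's local web server
./clair-scanner --ip <your-host-ip> myregistry.example.com/myimage:tag
```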
2018/03/21
328
1,190
<issue_start>username_0: I need a Regex pattern to validate /A-Za-z/. I want the pattern to have a forward slash required at the start and end of the characters. Ex: /<NAME>/
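A minimal sketch of one pattern that would satisfy this, assuming the characters between the slashes should be one or more ASCII letters (shown in Python, though the pattern itself is portable):

```python
import re

# anchored pattern: a literal '/', one or more letters, a literal '/'
pattern = re.compile(r'^/[A-Za-z]+/$')

print(bool(pattern.match('/Jane/')))    # True  (placeholder name)
print(bool(pattern.match('Jane')))      # False - missing slashes
print(bool(pattern.match('/Ja ne/')))   # False - space is not a letter
```

In flavors where `/` also delimits the regex itself (e.g. JavaScript literals), the inner slashes would need escaping: `/^\/[A-Za-z]+\/$/`.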
2018/03/21
332
1,235
<issue_start>username_0: I'm trying to use input to a variable to create a file, using the input as the filename. The only examples I've seen are print(input). I'm new to Python but trying to write a functional program. Thanks
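A minimal sketch of what this could look like, assuming the goal is simply to create (and optionally write to) a file named by the user:

```python
filename = input("Enter a filename: ")

# open() in 'w' mode creates the file if it does not already exist
with open(filename, "w") as f:
    f.write("Hello, world!\n")

print("Created", filename)
```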
2018/03/21
857
3,430
<issue_start>username_0: I am trying to have my animation ease the screen from black to white to black again and repeat that a certain number of times. Currently, with the code I have, the animation eases from black to white then jumps back to black. Is there any way to run an animation in reverse, or to add an animation that runs after the first animation is completed?

```
override func viewDidAppear(_ animated: Bool) {
    let viewColorAnimator: UIViewPropertyAnimator = UIViewPropertyAnimator.runningPropertyAnimator(
        withDuration: 4.0,
        delay: 0.0,
        options: [.curveEaseInOut],
        animations: {
            UIView.setAnimationRepeatCount(3);
            self.lightView.backgroundColor = .white
    })

    viewColorAnimator.startAnimation()
}
```

I tried adding this block of code to the project but the outcome was the same:

```
viewColorAnimator.addCompletion {_ in
    let secondAnimator = UIViewPropertyAnimator(duration: 4.0, curve: .linear) {
        self.lightView.backgroundColor = .black
    }
    secondAnimator.startAnimation()
}
```

EDIT: I've found out it is because of the setAnimationRepeatCount, because the last of the iterations works properly. How do I run the animation multiple times without the repeat count?<issue_comment>username_1: There's an easy way to run animations. For this method, if you want the animation to reverse and repeat after completing, you can add the .autoreverse and .repeat options. Like this:

```
UIView.animate(withDuration: 4.0, delay: 0, options: [.repeat, .autoreverse], animations: {
    // Animation code here
}, completion: { finished in
    // Do something after completion
})
```

You can put the code you want to execute after the animation in the completion block. And you can use a Timer as a way to run the animation a certain number of times.

Upvotes: 0 <issue_comment>username_2: **Xcode 9.4.1 (Swift 4.1.2)** and **Xcode 10 beta 3 (Swift 4.2)**: Here's the way to do it using **UIViewPropertyAnimator** -- basically, we just add **.repeat** and **.autoreverse** to the options. You were 99% of the way there:

```
override func viewDidAppear(_ animated: Bool) {
    let viewColorAnimator: UIViewPropertyAnimator = UIViewPropertyAnimator.runningPropertyAnimator(
        withDuration: 4.0,
        delay: 0.0,
        options: [.curveEaseInOut, .repeat, .autoreverse],
        animations: {
            UIView.setAnimationRepeatCount(3)
            self.lightView.backgroundColor = .white
    })

    viewColorAnimator.startAnimation()
}
```

Upvotes: 0 <issue_comment>username_3: The documentation says that the options related to direction are ignored. You can check the image attached [here](https://developer.apple.com/documentation/uikit/uiviewpropertyanimator/1648367-runningpropertyanimator). To achieve the reverse animation, create an animator property:

```
private lazy var animator: UIViewPropertyAnimator = { // lazy, so self is available in the closure
    let propertyAnimator = UIViewPropertyAnimator.runningPropertyAnimator(
        withDuration: 4.0,
        delay: 0.0,
        options: [.curveEaseInOut, .autoreverse],
        animations: {
            UIView.setAnimationRepeatCount(3)
            UIView.setAnimationRepeatAutoreverses(true)
            self.lightView.backgroundColor = .white
    })
    return propertyAnimator
}()
```

P.S. we need to mention .autoreverse in the options; otherwise UIView.setAnimationRepeatAutoreverses(true) won't work.

Upvotes: 2
2018/03/21
858
2,920
<issue_start>username_0: While programming an algorithm that makes use of only integer arithmetic, I noticed that Python wasn't taking advantage of it. So I tried the following code to see the effect of an "explicit" declaration:

```
import time

repeat = 1000000

start = time.time()
x = 0
for i in range(repeat):
    x += 1
no_type_time = time.time() - start

start = time.time()
y = int(0)
for i in range(repeat):
    y += 1
int_time = time.time() - start

print('{} - No type'.format(no_type_time))
print('{} - Int'.format(int_time))
```

The code output was the following:

```
0.0692429542542 - No type
0.0545210838318 - Int
```

I assume it has something to do with Python being a dynamically typed language. But when I try to find out the type of the variables, using type(x) and type(y), both output int. Which is curious, because I also ran some tests using x = float(0) and the result is very close to the one with no type "declaration". I'd like to know why it happens and, if possible, to get some reference from the Python documentation explaining it.<issue_comment>username_1: I am not able to reproduce on Linux. Note that:

• real: The actual time spent in running the process from start to finish, as if it was measured by a human with a stopwatch

• user: The cumulative time spent by all the CPUs during the computation

• sys: The cumulative time spent by all the CPUs during system-related tasks such as memory allocation.

```
→ time python type.py

real    0m0.219s
user    0m0.000s
sys     0m0.000s

→ time python without_type.py

real    0m0.133s
user    0m0.000s
sys     0m0.000s
```

Upvotes: 1 <issue_comment>username_2: This is happening because Python caches and reuses some immutable built-in objects, even if they're "stored" as different variables

```
>>> a = 1
>>> id(a)
56188528L
>>> b = int(1)
>>> id(b)
56188528L
```

Python didn't have to allocate any memory or instantiate a new object for the second variable. It's just reusing the immutable integer object that had already been created. If you had put your timing tests in different files and run them separately, or if you had run the `int(1)` test first, you would have seen different results.

Upvotes: -1 <issue_comment>username_3: From the precision on the floats in your `str.format` output (12 significant digits), we can see that you're probably on Python 2. Python 2 creates an explicit list of a million ints when you run `range(repeat)`, which is slow. It also keeps the memory for all of those ints, so `range(repeat)` is less slow the second time. This is most likely the source of the timing difference, not anything to do with calling `int`. On Python 2, it is almost always better to use `xrange` instead of `range`. `xrange` generates ints on demand, avoiding the cost in memory and allocation time of generating a whole list up front:

```
for i in xrange(repeat):
    do_stuff()
```

Upvotes: 4 [selected_answer]
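A quick way to test the accepted explanation (a sketch for Python 2, where the question's timings were taken): rerun the benchmark with `xrange`, which should make the gap between the two loops disappear:

```python
import time

repeat = 1000000

start = time.time()
x = 0
for i in xrange(repeat):  # no list of a million ints is allocated
    x += 1
print('{} - xrange, no type'.format(time.time() - start))

start = time.time()
y = int(0)
for i in xrange(repeat):
    y += 1
print('{} - xrange, int'.format(time.time() - start))
```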
2018/03/21
849
3,431
<issue_start>username_0: I have some code that I do not want included in the jar file based on a condition. My build script looks like

```
plugins {
    id 'java'
    id 'org.springframework.boot' version '2.0.0.RELEASE'
}

sourceSets {
    main {
        java {
            if (project.environment == 'prod') {
                exclude '**/dangerous/**'
            }
            forEach { println it.absolutePath }
        }
    }
}
```

Now, when I run the script with `gradlew clean build bootJar -Penvironment=prod` the absolute paths of everything but the dangerous java files are printed, but they are *still* included in the jar.

If I remove the boot plugin and run the `jar` task, the dangerous class files are *still* included in the jar. `gradlew clean build jar -Penvironment=prod`

```
plugins {
    id 'java'
}

sourceSets {
    main {
        java {
            if (project.environment == 'prod') {
                exclude '**/dangerous/**'
            }
            forEach { println it.absolutePath }
        }
    }
}
```

If I add an `exclude` clause to the jar task, the dangerous files are not printed, *and* they are not included in the jar. `gradlew clean build jar -Penvironment=prod`

```
plugins {
    id 'java'
}

sourceSets {
    main {
        java {
            if (project.environment == 'prod') {
                exclude '**/dangerous/**'
            }
            forEach { println it.absolutePath }
        }
    }
}

jar {
    exclude '**/dangerous/**'
}
```

If I enable the boot plugin, and use the bootJar task (which inherits from the Jar task) (`gradlew clean build bootJar -Penvironment=prod`), I do not see the dangerous files printed, **but the files are still included in the jar**.

```
plugins {
    id 'java'
    id 'org.springframework.boot' version '2.0.0.RELEASE'
}

sourceSets {
    main {
        java {
            if (project.environment == 'prod') {
                exclude '**/dangerous/**'
            }
            forEach { println it.absolutePath }
        }
    }
}

bootJar {
    exclude '**/dangerous/**'
}
```

How can I exclude a java file conditionally with the Spring Boot Gradle Plugin and bootJar task?<issue_comment>username_1: I narrowed down the problem. I didn't put in all of the plugins up above, because I thought the only important ones were Java and Spring Boot. However, my actual code also uses the `protobuf` plugin. If I remove the configuration property `generatedFilesBaseDir`, then it successfully excludes the `dangerous` directory. However, this opens up a new question of, *what the hell is happening?* I was specifying the generated files base dir property so I could reference the generated classes in my source code, but I think I may need to create a different project just for the proto, and add that project as a reference to my main module.

Edit
====

Making a separate project for the protobuf files and referencing it as a project seems to be a viable workaround for this issue.

Upvotes: 0 <issue_comment>username_2: I was having the same issue when I was using 2.0.1.RELEASE. I created the jar using the bootJar option. Add exclude inside it with the file patterns you want to exclude from the executable jar. This worked fine with the Spring 2.0.4.RELEASE version:

```
bootJar {
   exclude("**/dangerous/*")
}
```

Upvotes: 1
2018/03/21
1,490
4,311
<issue_start>username_0: I am reviewing for an exam and have a practice problem that I'm stuck on. I need to write the function `find_sequence(unsigned int num, unsigned int pattern) {}`. I have tried comparing `num & (pattern << i) == (pattern << i)` and other things like that, but it keeps saying there is a pattern when there isn't. I see why it is doing that, but I cannot fix it. The `num` I'm using is `unsigned int a = 82937` and I'm searching for pattern `unsigned int b = 0x05`.

```
Pattern: 00000000000000000000000000000101
Original bitmap: 00000000000000010100001111111001
```

The code so far:

```
int find_sequence(unsigned int num, unsigned int pattern)
{
    for (int i=0; i<32; i++) {
        if ((num & (pattern << i)) == (pattern << i)) {
            return i;
        }
    }
    return -9999;
}

int main()
{
    unsigned int a = 82937;
    unsigned int b = 0x05;

    printf("Pattern: ");
    printBits(b);
    printf("\n");

    printf("Original bitmap: ");
    printBits(a);
    printf("\n");

    int test = find_sequence(a, b);
    printf("%d\n", test);
    return 0;
}
```

Here is what I have so far. This keeps returning 3, and I see why, but I do not know how to avoid it.<issue_comment>username_1: In this case you could make a bitmask that zeroes out all the bits you aren't looking at. So in this case:

```
Pattern: 00000000000000000000000000000101
Bitmask: 00000000000000000000000000000111
```

So in the case of the number you are looking at:

```
Original: 00000000000000010100001111111001
```

If you AND that with the bitmask you end up with

```
Number after &: 00000000000000000000000000000001
```

Now compare the new number with your pattern to see if they are equal. Then right-shift (>>) the original number:

```
Original:      00000000000000010100001111111001
Right shifted: 00000000000000001010000111111100
```

And repeat the & and compare to check the next window of 3 bits in the sequence.

Upvotes: -1 [selected_answer]<issue_comment>username_2:

```
for (int i=0; i<32; i++) {
    if ((num & (pattern << i)) == (pattern << i))
```

is bad:

- it works only when the pattern consists entirely of `1` bits
- at the end of the loop you generate `pattern << 31`, which is `0` when the pattern is even. The condition will then hold every time.

Knowing the length of the pattern would simplify the loop above; just go until `32 - length`. When not given by the API, the length can be calculated either by a `clz()` function or manually by looping over the bits.

Now, you can generate the mask as `mask = (1u << length) - 1u` (note: you have to handle the `length == 32` case in a special way) and write

```
for (int i=0; i < (32 - length); i++) {
    if ((num & (mask << i)) == (pattern << i))
```

or

```
for (int i=0; i < (32 - length); i++) {
    if (((num >> i) & mask) == pattern)
```

Upvotes: 0 <issue_comment>username_3: `((num & (pattern << i)) == (pattern << i))` won't give you the desired results. Let's say your pattern is `0b101` and the value is `0b1111`, then

```
0101 pattern
1111 value
&
----
0101 pattern
```

Even though the value does not contain the pattern `0b101`, the check would return true. You've got to create a mask where all bits of the pattern (up to its most significant bit) are 1 and the rest are 0. So for the pattern `0b101` the mask must be `0b111`. So first you need to calculate the position of the most significant bit of the pattern, then create the mask, and then you can apply (bitwise AND) the mask to the value.
If the result is the same as the pattern, then you've found your pattern:

```
int find_sequence(unsigned int num, unsigned int pattern)
{
    unsigned int copy = pattern;

    // checking edge cases
    if(num == 0 && pattern == 0)
        return 0;

    if(num == 0)
        return -1;

    // calculating msb of pattern
    int msb = -1;
    while(copy)
    {
        msb++;
        copy >>= 1;
    }

    printf("msb of pattern at pos: %d\n", msb);

    // creating mask
    unsigned int mask = (1U << (msb + 1)) - 1;

    int pos = 0;
    while(num)
    {
        if((num & mask) == pattern)
            return pos;

        num >>= 1;
        pos++;
    }

    return -1;
}
```

Using this function I get the value 14, which is where your `0b101` pattern is found in `a`.

Upvotes: 0
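A minimal sketch of the accepted mask-and-shift idea for the question's fixed 3-bit pattern (the mask width here is an assumption tied to this particular pattern; a general version would derive it from the pattern as username_3 does):

```c
#include <stdio.h>

/* Slide a 3-bit window across num; return the bit offset where pattern
   first appears, or -1 if it never does. */
int find_sequence(unsigned int num, unsigned int pattern)
{
    const unsigned int mask = 0x7u;          /* 0b111, covers a 3-bit pattern */
    for (int pos = 0; pos <= 32 - 3; pos++) {
        if (((num >> pos) & mask) == pattern)
            return pos;
    }
    return -1;
}

int main(void)
{
    printf("%d\n", find_sequence(82937u, 0x5u)); /* prints 14 for this input */
    return 0;
}
```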
2018/03/21
529
1,687
<issue_start>username_0: How do I convert a string array: ``` var names = [ "Bob", "Michael", "Lanny" ]; ``` into an object like this? ``` var names = [ {name:"Bob"}, {name:"Michael"}, {name:"Lanny"} ]; ```<issue_comment>username_1: Super simple [`Array.prototype.map()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/map) job ``` names.map(name => ({ name })) ``` That is... map each entry (`name`) to an object with key "name" and value `name`. ```js var names = [ "Bob", "Michael", "Lanny" ]; console.info(names.map(name => ({ name }))) ``` --- Silly me, I forgot the most important part ``` names.map(name => name === 'Bob' ? 'Saab' : name) .map(name => ({ name })) ``` Upvotes: 6 [selected_answer]<issue_comment>username_2: You can do this too: ``` var names = [ "Bob", "Michael", "Lanny" ]; var objNames = [] names.forEach(name => { objNames.push({ name }) }) ``` Using ES6 you can set `name` and it is equal to `name: name` Upvotes: 2 <issue_comment>username_3: you can use the map function. In general, list.map(f) will produce a new list where each element at position i is the result of applying f to the element at the same position in the original list. For example: ``` names.map(function(s) { return {name: s} }); ``` Upvotes: 2 <issue_comment>username_4: Use the Array.map() function to map the array to objects. The map() function will iterate through the array and return a new array holding the result of executing the function on each element in the original array. Eg: ``` names = names.map(function(ele){return {"name":ele}}); ``` Upvotes: 2
2018/03/21
1,394
3,919
<issue_start>username_0: I am trying to perform some analysis on data. I got a csv file and I converted it into a pandas dataframe. The data looks like this. It has several columns, but I am trying to draw the x-axis as the date column. The pandas dataframe looks like this:

```
print (df.head(10))
   cus-id        date  value_limit
0   10173  2011-06-12          455
1   95062  2011-09-11          455
2  171081  2011-07-05          212
3  122867  2011-08-18          123
4  107186  2011-11-23          334
5  171085  2011-09-02          376
6  169767  2011-07-03           34
7   80170  2011-03-23           34
8  154178  2011-10-02           34
9    3494  2011-01-01           34
```

I am trying to plot the date data because there are multiple values for the same date. For this purpose I am trying to plot the x-axis ticks as dates. Since the minimum date in the date column is 2011-01-01 and the maximum date is 2012-04-20, I tried something like this:

```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import datetime
import matplotlib.dates as mdates

df = pd.read_csv('rio_data.csv', delimiter=',')
print (df.head(10))

d = []
for dat in df.date:
    # print (dat)
    d.append(datetime.strptime(df['date'], '%Y-%m-%d'))
days = dates.DayLocator()
datemin = datetime(2011, 1, 1)
datemax = datetime(2012, 4, 20)

fig = plt.figure()
ax = fig.add_subplot(111)
ax.xaxis.set_major_locator(days)
ax.set_xlim(datemin, datemax)
ax.set_ylabel('Count values')
```

But I am getting this error:

```
AttributeError: 'DataFrame' object has no attribute 'date'
```

I am trying to draw the date as the x-axis; it should look like this.

[![enter image description here](https://i.stack.imgur.com/HE7h1.png)](https://i.stack.imgur.com/HE7h1.png)

Can someone help me draw the x-axis as the date column? I would be grateful.<issue_comment>username_1: You missed a **'** on line 12. It causes the SyntaxError. This should correct the error:

```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import datetime
import matplotlib.dates as mdates

df = pd.read_csv('rio_data.csv', delimiter=',')
print (df.head(10))

d = []
for dat in df.date:
    # print (dat)
    d.append(datetime.strptime(df['date'], '%Y-%m-%d'))
days = dates.DayLocator()
datemin = datetime(2011, 1, 1)
datemax = datetime(2012, 4, 20)

fig = plt.figure()
ax = fig.add_subplot(111)
ax.xaxis.set_major_locator(days)
ax.set_xlim(datemin, datemax)
ax.set_ylabel('Count values')
```

Upvotes: 2 <issue_comment>username_2: ### Set the index as a `datetime dtype`

If you set the index to the datetime series by converting the dates with `pd.to_datetime(...)`, matplotlib will handle the x axis for you.

Here is a minimal example of how you might deal with this visualization. Plot directly with `pandas.DataFrame.plot`, which uses `matplotlib` as the default backend.

### Simple example:

```
import pandas as pd
import matplotlib.pyplot as plt

date_time = ["2011-09-01", "2011-08-01", "2011-07-01", "2011-06-01", "2011-05-01"]

# convert the list of strings to a datetime and .date will remove the time component
date_time = pd.to_datetime(date_time).date
temp = [2, 4, 6, 4, 6]

DF = pd.DataFrame({'temp': temp}, index=date_time)

ax = DF.plot(x_compat=True, rot=90, figsize=(6, 5))
```

### This will yield a plot that looks like the following:

[![enter image description here](https://i.stack.imgur.com/XQimY.png)](https://i.stack.imgur.com/XQimY.png)

### Setting the index makes things easier

The important note is that setting the DataFrame index to the datetime series allows matplotlib to deal with the x axis on time series data without much help.
[Follow this link for a detailed explanation of spacing axis ticks (specifically dates)](https://stackoverflow.com/questions/12608788/changing-the-tick-frequency-on-x-or-y-axis-in-matplotlib/36229671#36229671)

Upvotes: 4 [selected_answer]
2018/03/21
363
1,090
<issue_start>username_0: wkhtmltopdf does not render images from other sites. I have discovered that many developers suggest just prefixing an image file with:

```
file://
```

and adding a full path. But this approach does not fit my needs. I need to render images from other sites, because I have a separate image provider service. Also, I have tried the --images flag, and I also tried with google.com. But I still get a pdf without any images.

```
xvfb-run wkhtmltopdf 'https://google.com' '/home/project/src/uploads/google.pdf'
```

P.S. I use wkhtmltopdf 0.12.3.2.<issue_comment>username_1: Adding `-a -s "-screen 0 640x480x16"` as arguments for `xvfb-run` is indeed what solved it for me as well. So, as already suggested above, this does render images:

`xvfb-run -a -s "-screen 0 640x480x16" wkhtmltopdf google.com test.pdf`

Upvotes: 3 <issue_comment>username_2: Yes, this solved my problem. Here is my full command:

```
exec('xvfb-run -a -s "-screen 0 640x480x16" -- /usr/bin/wkhtmltopdf --quiet --enable-local-file-access --page-size Letter /tmp/html.html /tmp/pdf.pdf');
```

Upvotes: 0
2018/03/21
1,077
4,162
<issue_start>username_0: I've been trying to use the JavaScript version of the Eclipse Paho MQTT client to access the Google IoT Core MQTT Bridge, as suggested here: <https://cloud.google.com/iot/docs/how-tos/mqtt-bridge>

However, whatever I do, any attempt to connect with known good credentials (working with other clients) results in this connection error:

```
errorCode: 7, errorMessage: "AMQJS0007E Socket error:undefined."
```

Not much to go on there, so I'm wondering if anyone has ever been successful connecting to the MQTT Bridge via JavaScript with Eclipse Paho, the client implementation suggested by Google in their documentation. I've gone through their troubleshooting steps, and things seem to be on the up and up, so no help there either. <https://cloud.google.com/iot/docs/troubleshooting>

I have noticed that in their docs they have sample code for Java/Python, etc., but not JavaScript, so I'm wondering if it's simply not supported and their documentation just fails to mention it. I've simplified my code to just use the 'Hello World' example in the Paho documentation, and as far as I can tell I've done things correctly (including using my device path as the ClientID, the JWT token as the password, specifying an 'unused' userName field and explicitly requiring MQTT v3.1.1).

In the meantime I'm falling back to polling via their HTTP bridge, but that has obvious latency and network traffic shortcomings.

```
// Create a client instance
client = new Paho.MQTT.Client("mqtt.googleapis.com", Number(8883), "projects/[my-project-id]/locations/us-central1/registries/[my registry name]/devices/[my device id]");

// set callback handlers
client.onConnectionLost = onConnectionLost;
client.onMessageArrived = onMessageArrived;

// connect the client
client.connect({
    mqttVersion: 4,           // maps to MQTT V3.1.1, required by IOTCore
    onSuccess: onConnect,
    onFailure: onFailure,
    userName: 'unused',       // suggested by Google for this field
    password: '[<PASSWORD>]'  // working JWT token
});

function onFailure(resp) {
    console.log(resp);
}

// called when the client connects
function onConnect() {
    // Once a connection has been made, make a subscription and send a message.
    console.log("onConnect");
    client.subscribe("World");
    message = new Paho.MQTT.Message("Hello");
    message.destinationName = "World";
    client.send(message);
}

// called when the client loses its connection
function onConnectionLost(responseObject) {
    if (responseObject.errorCode !== 0) {
        console.log("onConnectionLost:" + responseObject.errorMessage);
    }
}

// called when a message arrives
function onMessageArrived(message) {
    console.log("onMessageArrived:" + message.payloadString);
}
```<issue_comment>username_1: I'm a Googler (but I don't work in Cloud IoT). Your code looks good to me and it should work. I will try it for myself this evening or tomorrow and report back to you.

I've spent the past day working on a Golang version of the samples published in Google's documentation. Like you, I was disappointed to not see all Google's regular languages covered by samples.

Are you running the code from a browser or is it running on Node.JS? Do you have a package.json (if Node) that you would share too, please?

**Update**

Here's a Node.JS (JavaScript but non-browser) version that connects to Cloud IoT, subscribes to `/devices/${DEVICE}/config` and publishes to `/devices/${DEVICE}/events`.
<https://gist.github.com/username_1/65ad8890d5f58eae9612632d594af2de>

* Place all the files in the same directory
* Replace the values in `index.js` with the location of Google's CA and your key
* Replace the [[YOUR-X]] values in `config.json`
* Use "npm install" to pull the packages
* Use `node index.js`

You should be able to pull messages from the Pub/Sub subscription and you should be able to send config messages to the device.

Upvotes: 3 [selected_answer]<issue_comment>username_2: Short answer is no. Google Cloud IoT Core doesn't support WebSockets.

All the JavaScript MQTT libraries use WebSockets, because JavaScript is restricted to performing HTTP requests and WebSocket connections only.

Upvotes: 1
2018/03/21
834
3,067
<issue_start>username_0: A structure TsMyStruct is given as a parameter to some functions:

```
typedef struct
{
    uint16_t inc1;
    uint16_t inc2;
}TsMyStruct;

void func1(TsMyStruct* myStruct)
{
    myStruct->inc1 += 1;
}

void func2(TsMyStruct* myStruct)
{
    myStruct->inc1 += 2;
    myStruct->inc2 += 3;
}
```

func1 is called in a non-interrupt context and func2 is called in an interrupt context. The call stack of func2 has an interrupt vector as its origin. The C compiler does not know func2 can be called (but the code isn't considered "unused" code, as the linker needs it in the interrupt vector table memory section), so some code reading myStruct->inc2 outside func2 can possibly be optimized, preventing myStruct->inc2 from being reloaded from RAM. This is true for C basic types, but is it true for the inc2 structure member or some array...? Is it true for function parameters? As a general rule, can I say "every memory zone (of basic type? or not?) modified in interrupt context and read elsewhere must be declared as volatile"?<issue_comment>username_1: No, `volatile` is not enough. You have to set an optimization barrier both for the compiler (which **can** be `volatile`) and for the processor. E.g. when a CPU core writes data, this can go into some cache and won't be visible to another core. Usually, you need some locking in your code (spin locks, or a mutex). Such functions usually contain an optimization barrier, so you do not need a `volatile`. Your code is racy; with proper locking it would look like

```
void func1(TsMyStruct* myStruct)
{
    lock();
    myStruct->inc1 += 1;
    unlock();
}

void func2(TsMyStruct* myStruct)
{
    lock();
    myStruct->inc1 += 2;
    myStruct->inc2 += 3;
    unlock();
}
```

and the `lock()` + `unlock()` functions contain optimization barriers (e.g. `__asm__ __volatile__("" ::: "memory")` or just a call to a global function) which will cause the compiler to reload `myStruct`. For nitpicking: `lock()` and `unlock()` are expected to do the right thing (e.g. disable irqs). Real world implementations would be e.g. `spin_lock_irqsave()` + `spin_unlock_irqrestore()` in Linux.

Upvotes: 0 <issue_comment>username_2: Yes, any memory that is used both inside and outside of an interrupt handler should be `volatile`, including structs and arrays, and pointers passed as function parameters.

Assuming that you are targeting a single-core device, you do not need additional synchronization. Still, you have to consider that `func1` could be interrupted anywhere, which may lead to inconsistent results if you're not careful. For instance, consider this:

```
void func1(volatile TsMyStruct* myStruct)
{
    myStruct->inc1 += 1;
    if (myStruct->inc1 == 4) {
        print(myStruct->inc1); // assume "print" exists
    }
}

void func2(volatile TsMyStruct* myStruct)
{
    myStruct->inc1 += 2;
    myStruct->inc2 += 3;
}
```

Since interrupts are asynchronous, this could print numbers different from 4. That would happen, for instance, if `func1` is interrupted after the check but before the `print` call.

Upvotes: 2
2018/03/21
4,235
16,084
<issue_start>username_0: I would like to prevent my application from changing its orientation and force the layout to stick to "portrait". In main.dart, I put:

```
void main(){
  SystemChrome.setPreferredOrientations([
    DeviceOrientation.portraitUp,
    DeviceOrientation.portraitDown
  ]);
  runApp(new MyApp());
}
```

but when I use the Android Simulator rotate buttons, the layout "follows" the new device orientation... How could I solve this? Thanks<issue_comment>username_1: Import `package:flutter/services.dart`, then put the `SystemChrome.setPreferredOrientations` inside the `Widget build()` method.

Example:

```
class MyApp extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    SystemChrome.setPreferredOrientations([
      DeviceOrientation.portraitUp,
      DeviceOrientation.portraitDown,
    ]);
    return new MaterialApp(...);
  }
}
```

**Update**

This solution might not work for some iOS devices, as mentioned in the updated Flutter documentation in Oct 2019. There, the advice is to fix the orientation by setting UISupportedInterfaceOrientations in Info.plist like this:

```
<key>UISupportedInterfaceOrientations</key>
<array>
    <string>UIInterfaceOrientationPortrait</string>
</array>
```

For more information <https://github.com/flutter/flutter/issues/27235#issuecomment-508995063>

Upvotes: 9 <issue_comment>username_2: @boeledi, if you want to "lock" the device orientation and not allow it to change as the user rotates their phone, this was easily set as below:

```dart
// This did not work as required
void main() {
  SystemChrome.setPreferredOrientations([
    DeviceOrientation.portraitUp,
  ]);
  runApp(new MyApp());
}
```

> You have to wait until **`setPreferredOrientations`** is done and then
> start the app

```dart
// This will always work for locking the screen orientation.
void main() async {
  WidgetsFlutterBinding.ensureInitialized();
  await SystemChrome.setPreferredOrientations([
    DeviceOrientation.portraitUp,
  ]);
  runApp(new MyApp());
}
```

Upvotes: 7 <issue_comment>username_3: ### iOS:

Calling `SystemChrome.setPreferredOrientations()` doesn't work for me, and I had to change the `Device Orientation` in the [Xcode project](https://developer.apple.com/library/archive/technotes/tn2244/_index.html#//apple_ref/doc/uid/DTS40009012-CH1-STATUS_BAR) as follows:

[![enter image description here](https://i.stack.imgur.com/hswoe.png)](https://i.stack.imgur.com/hswoe.png)

### Android:

Set the [`screenOrientation`](https://developer.android.com/guide/topics/manifest/activity-element#screen) attribute to `portrait` for the main activity in the file `android/app/src/main/AndroidManifest.xml` as follows:

[![enter image description here](https://i.stack.imgur.com/grDaZ.png)](https://i.stack.imgur.com/grDaZ.png)

Upvotes: 7 <issue_comment>username_4: The 'setPreferredOrientations' method returns a Future object. Per the documentation, a Future represents some value that will be available somewhere in the future. That's why you shall wait until it's available and then move on with the application. Hence, the 'then' method shall be used, which, per its definition, "registers callbacks to be called when the Future completes".
Hence, you shall use this code:

```
void main() {
  SystemChrome.setPreferredOrientations([DeviceOrientation.portraitUp]).then((_) {
    runApp(new App());
  });
}
```

Also, the following file must be imported: 'package:flutter/services.dart'

Upvotes: 5 <issue_comment>username_5: First of all, import this in the **main.dart** file

```
import 'package:flutter/services.dart';
```

Then don't copy-paste; rather, see (remember) and write the below code in the **main.dart** file.

To force **portrait** mode:

```
void main() {
  SystemChrome.setPreferredOrientations(
      [DeviceOrientation.portraitUp, DeviceOrientation.portraitDown])
    .then((_) => runApp(MyApp()),
  );
}
```

To force **landscape** mode:

```
void main() {
  SystemChrome.setPreferredOrientations(
      [DeviceOrientation.landscapeLeft, DeviceOrientation.landscapeRight])
    .then((_) => runApp(MyApp()),
  );
}
```

Upvotes: 4 <issue_comment>username_6: Open android/app/src/main/AndroidManifest.xml and add the following line in the MainActivity:

```
android:screenOrientation="portrait"
```

If you have this:

```
<activity
    android:name=".MainActivity"
    ...>
```

You should end up with something like this:

```
<activity
    android:name=".MainActivity"
    android:screenOrientation="portrait"
    ...>
```

This works for Android. On iOS, you will have to change this from the Xcode page: <https://i.stack.imgur.com/hswoe.png> (as username_3 said)

Upvotes: 6 <issue_comment>username_7: `setPreferredOrientation` returns a `Future`, so it is asynchronous. The most readable approach is to define `main` as asynchronous:

```dart
Future main() async {
  await SystemChrome.setPreferredOrientations([DeviceOrientation.portraitUp]);
  return runApp(new MyApp());
}
```

Upvotes: 3 <issue_comment>username_8: Put the `WidgetsFlutterBinding.ensureInitialized()`, else you will get an error while building.

```
import 'package:flutter/services.dart';

void main() async {
  WidgetsFlutterBinding.ensureInitialized();
  await SystemChrome.setPreferredOrientations(
    [DeviceOrientation.portraitUp],
  ); // To turn off landscape mode
  runApp(MainApp());
}
```

Upvotes: 6 <issue_comment>username_9: As of newer Flutter versions, along with setting the preferred orientation, we need to add one extra line, i.e.

```
WidgetsFlutterBinding.ensureInitialized();
```

So the working code for this is:

```
import 'package:flutter/services.dart';

void main() {
  WidgetsFlutterBinding.ensureInitialized();
  SystemChrome.setPreferredOrientations([
    DeviceOrientation.portraitUp,
    DeviceOrientation.portraitDown
  ]);
  runApp(MyApp());
}
```

Upvotes: 1 <issue_comment>username_10: Try

```
void main() async {
  WidgetsFlutterBinding.ensureInitialized();
  await SystemChrome.setPreferredOrientations(
      [DeviceOrientation.portraitUp, DeviceOrientation.portraitDown]);
  runApp(MyApp());
}
```

You can also change the screen orientation settings in the Android manifest and iOS Info.plist file.

Upvotes: 2 <issue_comment>username_11: Import `import 'package:flutter/services.dart';`

Then include the lines of code below in your `main.dart` file, in your main method, like so:

```
WidgetsFlutterBinding.ensureInitialized();
SystemChrome.setPreferredOrientations([
  DeviceOrientation.portraitDown,
  DeviceOrientation.portraitUp,
]);
runApp(myApp());
```

Upvotes: 3 <issue_comment>username_12: Below is the official example of the Flutter team.
<https://github.com/flutter/samples/blob/master/veggieseasons/lib/main.dart>

```
import 'package:flutter/services.dart' show DeviceOrientation, SystemChrome;

void main() {
  WidgetsFlutterBinding.ensureInitialized();
  SystemChrome.setPreferredOrientations([
    DeviceOrientation.portraitUp,
    DeviceOrientation.portraitDown,
  ]);
  runApp(HomeScreen());
}
```

Upvotes: 2 <issue_comment>username_13: Just add the following lines of code in the main.dart file.

```
SystemChrome.setPreferredOrientations([
  DeviceOrientation.portraitUp,
]);
```

and remember to import the services.dart file. An example is given below!

```
import 'package:flutter/material.dart';
import 'package:flutter/services.dart';

void main() {
  runApp(MyApp());
}

class MyApp extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    SystemChrome.setPreferredOrientations([
      DeviceOrientation.portraitUp,
    ]);
    return MaterialApp(
      home: Scaffold(
        body: Center(child: Text("A Flutter Example!")),
      ),
    );
  }
}
```

Upvotes: 3 <issue_comment>username_14: If you are editing this from the Xcode interface, it will not be changed for iPad. For the iPad section, you must edit the Info.plist file from Android Studio. You can see it in the array list as "~ipad". There are two sections available in Info.plist; you have to manually edit it for both iPhone and iPad.

Upvotes: 0 <issue_comment>username_15: This solution has worked for me on two different projects for Android. Can't guarantee it will work for iOS too. I've read all the previous answers and I think the best and simplest solution is to do it like this:

Android:
--------

This way you avoid all the "await"s, "async"s and "then"s that might mess around with your code

```dart
/// this is in main.dart

main(){
  WidgetsFlutterBinding.ensureInitialized();
  runApp(
    MaterialApp(
      initialRoute: '/root',
      routes: {
        '/root': (context) => MyApp(),
      },
      title: "Your App Title",
    ),
  );
}

class MyApp extends StatelessWidget {
  @override
  Widget build(BuildContext context) {

    /// ENFORCE DEVICE ORIENTATION PORTRAIT ONLY
    SystemChrome.setPreferredOrientations([
      DeviceOrientation.portraitUp,
      DeviceOrientation.portraitDown
    ]);

    /// RUN THE APP
    return MaterialApp(
      home: HomeScreen(),
    );
  }
}
```

iOS:
----

I think [this answer's update](https://stackoverflow.com/a/50884081/9185411) is your best bet for iOS.

!! DISCLAIMER !! I did a little bit of research and apparently, the documentation [here](https://api.flutter.dev/flutter/services/SystemChrome/setPreferredOrientations.html) specifically says:

```
setPreferredOrientations method
Limitations:
"This setting will only be respected on iPad if multitasking is disabled."
```

[Here](https://github.com/flutter/flutter/issues/27235#issuecomment-507984206) is how you can disable multitasking (AKA turn on the "Requires Full Screen" option) from Xcode

Another possible fix for iOS
----------------------------

You might also try [this one](https://github.com/flutter/flutter/issues/27235#issuecomment-508995063) out. I haven't tried it personally, since I don't have an iPad or Xcode on my PC, but it's worth a shot

Upvotes: 1 <issue_comment>username_16: For people that are reading this question now: the easiest way I found, which worked on both Android & iOS devices.
```
void main() {
  WidgetsFlutterBinding.ensureInitialized();
  SystemChrome.setPreferredOrientations(
    [
      DeviceOrientation.portraitUp,
      DeviceOrientation.portraitDown,
    ],
  ).then((val) {
    runApp(YourAppName());
  });
}
```

Upvotes: 3 <issue_comment>username_17: It's possible to enable the `requires fullscreen` option in the iOS case.

```
void main() async {
  WidgetsFlutterBinding.ensureInitialized();
  SystemChrome.setPreferredOrientations([
    DeviceOrientation.landscapeRight,
    DeviceOrientation.landscapeLeft,
  ]).then((_) {
    runApp(MediaQuery(
        data: MediaQueryData(),
        child: MaterialApp(debugShowCheckedModeBanner: false, home: LoginPage())));
  });
}
```

[![enter image description here](https://i.stack.imgur.com/EYTBN.png)](https://i.stack.imgur.com/EYTBN.png)

Upvotes: 2 <issue_comment>username_18: To make `SystemChrome.setPreferredOrientations` work on iPad, enable "Requires full screen" in the Xcode project editor, or simply add the following lines to /ios/Runner/Info.plist in your project folder.

```
<key>UIRequiresFullScreen</key>
<true/>
```

Upvotes: 0 <issue_comment>username_19: This is the simple way:

```
SystemChrome.setPreferredOrientations(
    [DeviceOrientation.portraitUp, DeviceOrientation.portraitDown]);
```

Upvotes: 0 <issue_comment>username_20: > **You have two options for Android**

***1. Write it in main***

```
main() {
  WidgetsFlutterBinding.ensureInitialized();
  SystemChrome.setPreferredOrientations([DeviceOrientation.portraitUp])
      .then((_) {
    runApp(MyApp());
  });
}
```

***2. Set it natively in the `AndroidManifest.xml` file***

[![enter image description here](https://i.stack.imgur.com/StqJo.png)](https://i.stack.imgur.com/StqJo.png)

> **You also have two options for iOS**

***1. From `info.plist`***

Add these lines to your `info.plist` file:

```
<key>UISupportedInterfaceOrientations</key>
<array>
    <string>UIInterfaceOrientationPortrait</string>
</array>
```

***2. From Runner***

Open your `Runner.xcworkspace` from the iOS folder in Xcode. Click on Runner, not Pods. You can find this option under General > Deployment Info. Just check what you want.

[![enter image description here](https://i.stack.imgur.com/Ojeyx.jpg)](https://i.stack.imgur.com/Ojeyx.jpg)

Upvotes: 3 <issue_comment>username_21: **Flutter orientation lock: portrait only**

In Flutter we have SystemChrome.setPreferredOrientations() ([see documentation](https://api.flutter.dev/flutter/services/SystemChrome/setPreferredOrientations.html)) to define the preferred orientation. This is exactly the method you can find in all those articles about setting up orientation in Flutter. Let's see how we can use it:

```
import 'package:flutter/material.dart';
import 'package:flutter/services.dart';

void main() {
  // We need to call it manually,
  // because we are going to call setPreferredOrientations()
  // before the runApp() call
  WidgetsFlutterBinding.ensureInitialized();

  // Then we set up the preferred orientations,
  // and only after that finishes do we run our app
  SystemChrome.setPreferredOrientations([DeviceOrientation.portraitUp])
      .then((value) => runApp(MyApp()));
}
```

**iOS**

Open the project in Xcode `(ios/Runner.xcworkspace)`, choose `Runner` in the project navigator, select Target `Runner`, and on the tab `General` in the section `Deployment Info` we can set up Device Orientation:

[![enter image description here](https://i.stack.imgur.com/7rJ0d.png)](https://i.stack.imgur.com/7rJ0d.png)

Also, we can do it manually without opening Xcode at all — just edit `ios/Runner/Info.plist`.
Open it as a text file, find the key `UISupportedInterfaceOrientations`, and leave only the desired values:

```
<key>UISupportedInterfaceOrientations</key>
<array>
    <string>UIInterfaceOrientationPortrait</string>
</array>
```

**Android**

To define the orientation for `Android` we need to edit the `AndroidManifest`. Open `android/app/src/main/AndroidManifest.xml` and add a screenOrientation attribute to the `main` activity:

```
<activity
    android:name=".MainActivity"
    android:screenOrientation="portrait"
    ...>
```

And that's all. [Here](https://github.com/greymag/flutter_orientation_lock_example) is the repository with an example app. Hope this was helpful.

Upvotes: 4 <issue_comment>username_22: To force portrait on iPad, you also have to modify the key `UISupportedInterfaceOrientations~ipad` in `ios/Runner/Info.plist`:

```
<key>UISupportedInterfaceOrientations~ipad</key>
<array>
    <string>UIInterfaceOrientationPortrait</string>
    <string>UIInterfaceOrientationPortraitUpsideDown</string>
</array>
```

This can also be done through Xcode.

The above will work locally. In order to upload to the App Store through Xcode, you also have to add the following to `Info.plist`. This can also be done by checking "Requires full screen" under Target->General->Deployment Info.

```
<key>UIRequiresFullScreen</key>
<true/>
```

Otherwise you'll get an error while uploading.

Upvotes: 2 <issue_comment>username_23: If your camera screen still rotates in some cases (iPhone 14 Plus, in my case) after setting

```
SystemChrome.setPreferredOrientations([DeviceOrientation.portraitUp]);
```

in the app startup Dart file, then add the code below where you initiate the camera controller.

```
await cameraController.lockCaptureOrientation(DeviceOrientation.portraitUp);
```

Upvotes: 0 <issue_comment>username_24: One scenario where `SystemChrome.setPreferredOrientations()` won't work (even on iPhone) is where - for whatever reason - you have put your `FlutterViewController` inside a `UINavigationController`. In this case, as is expected iOS behaviour, it is the `UINavigationController` which is responsible for handling orientation changes, not the `FlutterViewController` (which is why `SystemChrome.setPreferredOrientations()` won't work). To force a given orientation in this case, you should therefore subclass the `UINavigationController` and specify the orientations you wish to support:

```swift
class MyNavigationController: UINavigationController {
    override var supportedInterfaceOrientations: UIInterfaceOrientationMask { .portrait }
}
```

Upvotes: 0
2018/03/21
473
1,506
<issue_start>username_0: I have this code

```
#Ask for word
w = input("Type in a word to create acronym with spaces between the words:")
#Separate the words to create acronym
s = w.split(" ")
letter = s[0]
#print answer
print(s.upper(letter))
```

And I know that I need a for loop to loop over the words to get the first letter of each word, but I can't figure out how to do it. I tried many different approaches but I kept getting errors.<issue_comment>username_1: Try this. It prints a concatenated version of the first letter of each word.

```
w = input("Type in a word to create acronym with spaces between the words:")
print(''.join([e[0] for e in w.split()]).upper())
```

Upvotes: 2 <issue_comment>username_2:

```
for word in w.split(" "):
    first_letter = word[0]
    print(first_letter.upper())
```

Upvotes: 1 <issue_comment>username_3: Try this

```
w = input("Type a phrase with a space between the words:")
w_up_split = w.upper().split()

acronym = ""
for i in w_up_split:
    acronym += (i[0])
print(acronym)
```

Upvotes: 2 <issue_comment>username_4: In the code that you gave, you are taking the first word in a list of words.

```
s = w.split(" ")
letter = s[0]
```

If someone input 'Hi how are you' then s would equal

```
s == ["Hi", "how", "are", "you"]
```

And then letter would equal the first element of s, which would be "Hi". You want to go through each word and take its first letter:

```
acronym = []
for x in s:
    acronym.append(x[0])
print("".join(acronym).upper())
```

Would get what you want.

Upvotes: 1
2018/03/21
1,149
4,164
<issue_start>username_0: **The issue was different and was with how the expected output was handling the json. Apologies for wasting your time.**

I have a python dictionary object which I'm trying to return in a string format so that another function can do a string comparison. I have no control over the other function, so it's my responsibility to return in the requested format.

Right now I have this myfunction(params) which returns (json.dumps(dictionary object))

```
'{"Engineering": {"employees": 3, "employees_with_outside_friends": 2}, "HR": {"employees": 1, "employees_with_outside_friends": 1}, "Business": {"employees": 1, "employees_with_outside_friends": 1}, "Directors": {"employees": 1, "employees_with_outside_friends": 0}}'
```

But I want the return to look like the regular string which gets printed when I run print(dict)

```
{"Engineering": {"employees": 3, "employees_with_outside_friends": 2}, "HR": {"employees": 1, "employees_with_outside_friends": 1}, "Business": {"employees": 1, "employees_with_outside_friends": 1}, "Directors": {"employees": 1, "employees_with_outside_friends": 0}}
```

Sample Code:

```
def get_actual_output():
    dict_ = dict({"Engineering": {"employees": 3, "employees_with_outside_friends": 2},"HR": {"employees": 1, "employees_with_outside_friends": 1},"Business": {"employees": 1, "employees_with_outside_friends": 1},"Directors": {"employees": 1, "employees_with_outside_friends": 0}})
    return(json.dumps(dict_))

expected_output = '{"Engineering": {"employees": 3, "employees_with_outside_friends": 2},"HR": {"employees": 1, "employees_with_outside_friends": 1},"Business": {"employees": 1, "employees_with_outside_friends": 1},"Directors": {"employees": 1, "employees_with_outside_friends": 0}}'

get_actual_output() == expected_output   # returns False
```

Adding some more details: when I print the return of the above function and the expected values, this is what I get:

```
>>> actual_output
'{"Business": {"employees": 1, "employees_with_outside_friends": 1}, "Directors": {"employees": 1, "employees_with_outside_friends": 0}, "Engineering": {"employees": 3, "employees_with_outside_friends": 2}, "HR": {"employees": 1, "employees_with_outside_friends": 1}}'

>>> test['expected_output']
{'Business': {'employees': 1, 'employees_with_outside_friends': 1},
 'Directors': {'employees': 1, 'employees_with_outside_friends': 0},
 'Engineering': {'employees': 3, 'employees_with_outside_friends': 2},
 'HR': {'employees': 1, 'employees_with_outside_friends': 1}}
```<issue_comment>username_1: Your example with `get_actual_output() == expected_output` doesn't match just because you're missing whitespace before some keys. For example: `...s": 2},"HR": {"em...`, where the actual output is `...s": 2}, "HR": {"em...`. You can't get the format you want out of `json.dumps`, because you have space before some keys, but not other ones. You can get rid of all the spaces with `json.dumps(..., separators=(',', ': '))`, but that's likely just confusing the situation some more. Ideally, you should parse the string response and compare that instead. This way, you'll also avoid the issue of the order of the elements not being guaranteed.

Upvotes: 1 <issue_comment>username_2: JSON is different from the Python dictionary representation. When you print a dictionary, Python doesn't use JSON syntax to generate the string representation of the dictionary; dict objects have their own implementation of `__str__`, which is invoked when calling str or when printing (print calls str on the object).
str(dict_) produces a string that is generated by `__str__` and does not follow JSON syntax. It is similar, but not the same; these are two different syntaxes. Also, JSON only allows double quotes for strings, while Python allows both double and single quotes.

Doing simple processing on the JSON to make it look like a Python repr of a dict (like removing/replacing quotes) is a bad idea; you will have issues when dealing with the representation of complex objects, or even floats.

If you want a string that looks like the string generated by printing the dict, use str(dict_).

Upvotes: 0
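A minimal sketch of the parse-and-compare approach username_1 suggests, which sidesteps both the whitespace and the key-ordering problems (reusing get_actual_output from the question):

```python
import json

expected = {
    "Engineering": {"employees": 3, "employees_with_outside_friends": 2},
    "HR": {"employees": 1, "employees_with_outside_friends": 1},
    "Business": {"employees": 1, "employees_with_outside_friends": 1},
    "Directors": {"employees": 1, "employees_with_outside_friends": 0},
}

# compare parsed structures instead of raw strings
print(json.loads(get_actual_output()) == expected)  # True
```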
2018/03/21
523
2,148
<issue_start>username_0: Let's say you have code like this.

```
function openNewTab(url) {
    const newTab = window.open(url, 'windowNameHehe');
};
```

Now every time you call this function it will open a new tab. However, if the window is already opened, it will go to it and refresh/reload it. How do I stop it from reloading/refreshing and just bring it to view? Doing something like newTab.focus() or adding the boolean (true/false) as the 4th parameter of window.open isn't working for me either. Any help?
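A minimal sketch of one common workaround, assuming all tabs are opened from the same page: cache the window reference and only call window.open when there is no live window yet (note that some browsers may still ignore focus() for background tabs):

```javascript
const openedTabs = {}; // url -> window reference

function openNewTab(url) {
    const existing = openedTabs[url];
    if (existing && !existing.closed) {
        existing.focus(); // bring it to view without reloading
        return existing;
    }
    openedTabs[url] = window.open(url, 'windowNameHehe');
    return openedTabs[url];
}
```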
2018/03/21
590
1,879
<issue_start>username_0: I am trying to pull a specific char from a string and convert it to an int. I have tried the following code, but I am unclear why it doesn't work, nor can I find a way to do the conversion.

```
int value = 0;
std::string s = "#/5";
value = std::atoi(s[2]); // want value == 5
```<issue_comment>username_1: You should read the manual page for `atoi()` more carefully. The actual prototype is:

```
int atoi(const char *string)
```

You are attempting to pass a single character rather than a pointer to a character array. You could, instead, use:

```
value = std::atoi(s.c_str() + 2);
```

or alternatively:

```
value = std::atoi(&s[2]);
```

Both of these pass a pointer to the character rather than the character itself.

Upvotes: 0 <issue_comment>username_2: The argument to `std::atoi` must be `char*`, but `s[2]` is `char`. You need to use its address. And to get a valid C string from a `std::string`, you need to use the [`c_str()`](http://en.cppreference.com/w/cpp/string/basic_string/c_str) method.

```
value = std::atoi(&(s.c_str()[2]));
```

You should have gotten an error saying that the argument wasn't of the correct type.

Upvotes: 0 <issue_comment>username_3: You can write:

```
std::string s = "#/5";
std::string substring = s.substr(2, 1);
int value = std::stoi(substring);
```

Using the `substr` method of `std::string` to pull out the substring that you want to parse as an integer, and then using `stoi` (which takes a `std::string`) instead of `atoi` (which takes a `const char *`).

Upvotes: 3 [selected_answer]<issue_comment>username_4: You can create a `std::string` from one char and use `std::stoi` to convert it to an integer.

```
#include <iostream>
#include <string>

using namespace std;

int main()
{
    int value = 0;
    string s = "#/5";
    value = stoi(string(1, s[2])); //conversion
    cout << value;
}
```

Upvotes: 2
2018/03/22
1,189
3,816
<issue_start>username_0: Problem in parsing/identifying double-quoted strings in a big expression.

```
use strict;
use Marpa::R2;
use Data::Dumper;

my $grammar = Marpa::R2::Scanless::G->new({
    default_action => '[values]',
    source => \(<<'END_OF_SOURCE'),

:start ::= expression

expression ::= expression OP expression
expression ::= expression COMMA expression
expression ::= func LPAREN PARAM RPAREN
expression ::= PARAM

PARAM ::= STRING | REGEX_STRING

:discard ~ sp
sp ~ [\s]+
COMMA ~ [,]
STRING ~ [^ \/\(\),&:\"~]+
REGEX_STRING ~ yet to identify
OP ~ ' - ' | '&'
LPAREN ~ '('
RPAREN ~ ')'
func ~ 'func'

END_OF_SOURCE
});

my $recce = Marpa::R2::Scanless::R->new({grammar => $grammar});
```

`my $input1 = "func(foo)&func(bar)";` -> able to parse it properly by parsing foo and bar as STRING lexemes.

`my $input2 = "\"foo\"";` -> Here, I want to parse foo as a REGEX_STRING lexeme. A REGEX_STRING is something which is enclosed in double quotes.

`my $input3 = "func(\"foo\") - func(\"bar\")";` -> Here, func should be taken as the func LEXEME, ( should be LPAREN, ) should be RPAREN, foo as REGEX_STRING, - as OP, and the same for func(\"bar\")

`my $input4 = "func(\"foo\")";` -> Here, func should be taken as the func LEXEME, ( should be LPAREN, ) should be RPAREN, foo as REGEX_STRING

```
print "Trying to parse:\n$input\n\n";
$recce->read(\$input);
my $value_ref = ${$recce->value};
print "Output:\n".Dumper($value_ref);
```

What did I try:

**1st method:**

My REGEX_STRING should be something like: `REGEX_STRING ~ '\"([^:]*?)\"'`

If I try putting the above REGEX_STRING in the code with the input expression `my $input4 = "func(\"foo\")";`, I get an error like:

```
Error in SLIF parse: No lexeme found at line 1, column 5
* String before error: func(
* The error was at line 1, column 5, and at character 0x0022 '"', ...
* here: "foo")
Marpa::R2 exception
```

**2nd method:**

Tried including a rule like:

```
PARAM ::= STRING | REGEX_STRING

REGEX_STRING ::= '"' QUOTED_STRING '"'

STRING ~ [^ \/\(\),&:\"~]+
QUOTED_STRING ~ [^ ,&:\"~]+
```

The problem here is: input is given using `my $input4 = "func(\"foo\")";`. So here it gives an error because there are now two ways to parse this expression: either the whole thing between the double quotes, which is func(\"foo\"), is taken as QUOTED_STRING, or func is taken as the func LEXEME, and so on.

Please help me fix this.<issue_comment>username_1:

```
use 5.026;
use strictures;
use Data::Dumper qw(Dumper);
use Marpa::R2 qw();

my $grammar = Marpa::R2::Scanless::G->new({
    bless_package => 'parsetree',
    source => \<<'',
        :default ::= action => [values] bless => ::lhs
        lexeme default = bless => ::name latm => 1
        :start ::= expression
        expression ::= expression OP expression
        expression ::= expression COMMA expression
        expression ::= func LPAREN PARAM RPAREN
        expression ::= PARAM
        PARAM ::= STRING | REGEXSTRING
        :discard ~ sp
        sp ~ [\s]+
        COMMA ~ [,]
        STRING ~ [^ \/\(\),&:\"~]+
        REGEXSTRING ::= '"' QUOTEDSTRING '"'
        QUOTEDSTRING ~ [^ ,&:\"~]+
        OP ~ ' - ' | '&'
        LPAREN ~ '('
        RPAREN ~ ')'
        func ~ 'func'

});
# say $grammar->show_rules;

for my $input (
    'func(foo)&func(bar)', '"foo"', 'func("foo") - func("bar")', 'func("foo")'
) {
    my $r = Marpa::R2::Scanless::R->new({
        grammar => $grammar,
        # trace_terminals => 1
    });
    $r->read(\$input);
    say "# $input";
    say Dumper $r->value;
}
```

Upvotes: 2 [selected_answer]<issue_comment>username_2: The 2nd method posted in the question worked for me. I just had to include:

```
lexeme default = latm => 1
```

in my code.

Upvotes: 0
<issue_start>username_0: I have a dataset I'm working on, and one of the columns contains multiple features that are separated by a comma. The number of features in each observation varies.

```
df <- data.frame(x=c("a", "a,b,c", "a,c", "b,c", "", "b"))

      x
1     a
2 a,b,c
3   a,c
4   b,c
5      
6     b
```

I want to split this into multiple logical columns like this:

```
  a b c
1 1 0 0
2 1 1 1
3 1 0 1
4 0 1 1
5 0 0 0
6 0 1 0
```

where each column would represent whether the observation contained that string in the original column. How can this be achieved? Is there a way to do it without specifying the output columns? For example, what if an observation contains:

```
"a,b,d"
```

How can I do it in a way that captures all unique features of the original column?<issue_comment>username_1: First split each item into the list `s` and compute the unique levels `levs`. Then use `outer` to create the desired matrix `tab` and add column names.

```
s <- strsplit(as.character(df$x), ",")
levs <- unique(unlist(s))
tab <- outer(s, levs, Vectorize(function(x, y) y %in% x)) + 0
colnames(tab) <- levs
```

giving:

```
> tab
     a b c
[1,] 1 0 0
[2,] 1 1 1
[3,] 1 0 1
[4,] 0 1 1
[5,] 0 0 0
[6,] 0 1 0
```

Upvotes: 1 <issue_comment>username_2:

```
d=strsplit(as.character(df$x),",")

> m=xtabs(z~x+y,data.frame(x=rep(df$x,lengths(d)),y=unlist(d),z=1))
> as.data.frame.matrix(m)
      a b c
      0 0 0
a     1 0 0
a,b,c 1 1 1
a,c   1 0 1
b     0 1 0
b,c   0 1 1
```

Upvotes: 0
<issue_start>username_0: Appreciate your help. I need to split a column filled with delimited values into columns named after its delimited values, with each of these new columns filled with either 1 or 0 depending on whether the value is found.

```
state <- c('ACT', 'ACT|NSW|NT|QLD|SA|VIC', 'ACT|NSW|NT|QLD|TAS|VIC|WA',
           'ACT|NSW|NT|SA|TAS|VIC', 'ACT|NSW|QLD|VIC', 'ACT|NSW|SA',
           'ACT|NSW|NT|QLD|TAS|VIC|WA|SA', 'NSW', 'NT', 'NT|SA', 'QLD',
           'SA', 'TAS', 'VIC', 'WA')
df <- data.frame(id = 1:length(state), state)

  id                     state
1  1                       ACT
2  2     ACT|NSW|NT|QLD|SA|VIC
3  3 ACT|NSW|NT|QLD|TAS|VIC|WA
4  4     ACT|NSW|NT|SA|TAS|VIC
...
```

The desired state is a data frame with the same dimensions plus the additional columns based on state, populated with a 1 or 0 depending on the rows.

tq, James<issue_comment>username_1: You can do something like this:

```
library(tidyr)
library(dplyr)

df %>%
  separate_rows(state) %>%
  unique() %>%           # in case you have duplicated states for a single id
  mutate(exist = 1) %>%
  spread(state, exist, fill=0)

#   id ACT NSW NT QLD SA TAS VIC WA
#1   1   1   0  0   0  0   0   0  0
#2   2   1   1  1   1  1   0   1  0
#3   3   1   1  1   1  0   1   1  1
#4   4   1   1  1   0  1   1   1  0
#5   5   1   1  0   1  0   0   1  0
#6   6   1   1  0   0  1   0   0  0
#7   7   1   1  1   1  1   1   1  1
#8   8   0   1  0   0  0   0   0  0
#9   9   0   0  1   0  0   0   0  0
#10 10   0   0  1   0  1   0   0  0
#11 11   0   0  0   1  0   0   0  0
#12 12   0   0  0   0  1   0   0  0
#13 13   0   0  0   0  0   1   0  0
#14 14   0   0  0   0  0   0   1  0
#15 15   0   0  0   0  0   0   0  1
```

* `separate_rows` splits `state` and converts the data frame to long format;
* add a constant value column for reshaping purposes;
* use `spread` to transform the result to wide format.

Upvotes: 4 [selected_answer]<issue_comment>username_2: Here is a `base R` option to split the 'state' column by `|`, convert the `list` of vectors into a two-column `data.frame` (`stack`), get the frequency with `table` and `cbind` with the first column of 'df':

```
cbind(df[1], as.data.frame.matrix(table(stack(setNames(strsplit(as.character(df$state),
    "[|]"), df$id))[2:1])))
#   id ACT NSW NT QLD SA TAS VIC WA
#1   1   1   0  0   0  0   0   0  0
#2   2   1   1  1   1  1   0   1  0
#3   3   1   1  1   1  0   1   1  1
#4   4   1   1  1   0  1   1   1  0
#5   5   1   1  0   1  0   0   1  0
#6   6   1   1  0   0  1   0   0  0
#7   7   1   1  1   1  1   1   1  1
#8   8   0   1  0   0  0   0   0  0
#9   9   0   0  1   0  0   0   0  0
#10 10   0   0  1   0  1   0   0  0
#11 11   0   0  0   1  0   0   0  0
#12 12   0   0  0   0  1   0   0  0
#13 13   0   0  0   0  0   1   0  0
#14 14   0   0  0   0  0   0   1  0
#15 15   0   0  0   0  0   0   0  1
```

Upvotes: 2
<issue_start>username_0: I have const data, and I need to filter the cards array by the obj field company, for example by company: 'Symu.co'.

```
const data = {
  lanes: [
    { id: 'lane1', header: 'Quened', label: '',
      cards: [
        {id: 'Task1', title: 'Wordpress theme', company: 'Symu.co', price: 1500, user: 'michelle'},
      ]
    },
    { id: 'lane2', header: 'Planning', label: '',
      cards: [
        {id: 'Task2', title: 'Landing page', company: 'Google', price: 1500, user: 'jolene'},
        {id: 'Task3', title: 'New website', company: 'Symu.co', price: 1500, user: 'lyall'},
        {id: 'Task4', title: 'Dashboard', company: 'Microsoft', price: 1500, user: 'john'},
        {id: 'Task5', title: 'Mobile App', company: 'Facebook', price: 1500, user: 'dominic'},
      ]
    },
    { id: 'lane3', header: 'Design', label: '',
      cards: [
        {id: 'Task6', title: 'New Logo', company: 'Google', price: 1500, user: 'michelle'},
        {id: 'Task7', title: 'New website', company: 'JCD.pl', price: 1500, user: 'dominic'},
        {id: 'Task8', title: 'New website', company: 'Themeforest', price: 1500, user: 'john'},
        {id: 'Task9', title: 'Dashboard', company: 'JCD.pl', price: 1500, user: 'jolene'},
      ]
    },
    { id: 'lane4', header: 'Development', label: '()',
      cards: [
        {id: 'Task10', title: 'Mobile App', company: 'Facebook', price: 1500, user: 'john'},
        {id: 'Task11', title: 'New website', company: 'Symu.co', price: 1500, user: 'michelle'},
        {id: 'Task12', title: 'Dashboard', company: 'Google', price: 1500, user: 'dominic'},
      ]
    },
    { id: 'lane5', header: 'Testing', label: '()',
      cards: [
        {id: 'Task13', title: 'Landing page', company: 'JCD.pl', price: 1500, user: 'lyall'},
      ]
    },
    { id: 'lane6', header: 'Production', label: '()',
      cards: [
        {id: 'Task14', title: 'Landing page', company: 'Google', price: 1500, user: 'jolene'},
        {id: 'Task15', title: 'New website', company: 'Themeforest', price: 1500, user: 'michelle'},
        {id: 'Task16', title: 'Dashboard', company: 'Facebook', price: 1500, user: 'lyall'},
      ]
    },
    { id: 'lane7', header: 'Completed', label: '()',
      cards: [
        {id: 'Task17', title: 'Mobile App', company: 'Facebook', price: 1500, user: 'john'},
        {id: 'Task18', title: 'New website', company: 'Symu.co', price: 1500, user: 'michelle'},
      ]
    },
  ]
};
```

My code, but I don't know how to do it right:

```
for(let i = 0; i < data.lanes.length; i++) {
  var changeCards = data.lanes[i].cards.filter(function(obj) {
    return obj.company === 'Symu.co';
    console.log(changeCards);
  });
}
```

In the output I need to get a new array, for example:

```
var newArray = {
  lanes: [
    { id: 'lane1', header: 'Quened', label: '',
      cards: [
        {id: 'Task1', title: 'Wordpress theme', company: 'Symu.co', price: 1500, user: 'michelle'},
      ]
    },
    { id: 'lane2', header: 'Planning', label: '',
      cards: [
        {id: 'Task3', title: 'New website', company: 'Symu.co', price: 1500, user: 'lyall'}
      ]
    },
    { id: 'lane3', header: 'Design', label: '', cards: [] },
    { id: 'lane4', header: 'Development', label: '()',
      cards: [
        {id: 'Task11', title: 'New website', company: 'Symu.co', price: 1500, user: 'michelle'},
      ]
    },
    { id: 'lane5', header: 'Testing', label: '()', cards: [] },
    { id: 'lane6', header: 'Production', label: '()', cards: [] },
    { id: 'lane7', header: 'Completed', label: '()',
      cards: [
        {id: 'Task18', title: 'New website', company: 'Symu.co', price: 1500, user: 'michelle'},
      ]
    },
  ]
};
```

But I don't know how to do it. Help please! Thanks a lot<issue_comment>username_1:

```
const filteredCards = cards.filter(({ company }) => company === 'Google');
```

<https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/filter>

Upvotes: 0 <issue_comment>username_2: You can use the functions `reduce`, `forEach` and `filter` to build the desired output.

```js
const data = { lanes: [
  { id: 'lane1', header: 'Quened', label: '', cards: [
    {id: 'Task1', title: 'Wordpress theme', company: 'Symu.co', price: 1500, user: 'michelle'}]},
  { id: 'lane2', header: 'Planning', label: '', cards: [
    {id: 'Task2', title: 'Landing page', company: 'Google', price: 1500, user: 'jolene'},
    {id: 'Task3', title: 'New website', company: 'Symu.co', price: 1500, user: 'lyall'},
    {id: 'Task4', title: 'Dashboard', company: 'Microsoft', price: 1500, user: 'john'},
    {id: 'Task5', title: 'Mobile App', company: 'Facebook', price: 1500, user: 'dominic'}]},
  { id: 'lane3', header: 'Design', label: '', cards: [
    {id: 'Task6', title: 'New Logo', company: 'Google', price: 1500, user: 'michelle'},
    {id: 'Task7', title: 'New website', company: 'JCD.pl', price: 1500, user: 'dominic'},
    {id: 'Task8', title: 'New website', company: 'Themeforest', price: 1500, user: 'john'},
    {id: 'Task9', title: 'Dashboard', company: 'JCD.pl', price: 1500, user: 'jolene'}]},
  { id: 'lane4', header: 'Development', label: '()', cards: [
    {id: 'Task10', title: 'Mobile App', company: 'Facebook', price: 1500, user: 'john'},
    {id: 'Task11', title: 'New website', company: 'Symu.co', price: 1500, user: 'michelle'},
    {id: 'Task12', title: 'Dashboard', company: 'Google', price: 1500, user: 'dominic'}]},
  { id: 'lane5', header: 'Testing', label: '()', cards: [
    {id: 'Task13', title: 'Landing page', company: 'JCD.pl', price: 1500, user: 'lyall'}]},
  { id: 'lane6', header: 'Production', label: '()', cards: [
    {id: 'Task14', title: 'Landing page', company: 'Google', price: 1500, user: 'jolene'},
    {id: 'Task15', title: 'New website', company: 'Themeforest', price: 1500, user: 'michelle'},
    {id: 'Task16', title: 'Dashboard', company: 'Facebook', price: 1500, user: 'lyall'}]},
  { id: 'lane7', header: 'Completed', label: '()', cards: [
    {id: 'Task17', title: 'Mobile App', company: 'Facebook', price: 1500, user: 'john'},
    {id: 'Task18', title: 'New website', company: 'Symu.co', price: 1500, user: 'michelle'}]}]};

function filterBy(target) {
  return data.lanes.reduce((a, {id, header, label, cards}) => {
    a.lanes.push({id, header, label, cards: cards.filter(({company}) => company === target)});
    return a;
  }, {lanes: []});
}

var result = filterBy('Symu.co');
console.log(result);
```

```css
.as-console-wrapper { max-height: 100% !important; top: 0; }
```

Upvotes: 2 [selected_answer]<issue_comment>username_3:

```js
var data = { lanes: [
  { id: 'lane1', header: 'Quened', label: '', cards: [
    { id: 'Task1', title: 'Wordpress theme', company: 'Symu.co', price: 1500, user: 'michelle' }]},
  { id: 'lane2', header: 'Planning', label: '', cards: [
    { id: 'Task2', title: 'Landing page', company: 'Google', price: 1500, user: 'jolene' },
    { id: 'Task3', title: 'New website', company: 'Symu.co', price: 1500, user: 'lyall' },
    { id: 'Task4', title: 'Dashboard', company: 'Microsoft', price: 1500, user: 'john' },
    { id: 'Task5', title: 'Mobile App', company: 'Facebook', price: 1500, user: 'dominic' }]},
  { id: 'lane3', header: 'Design', label: '', cards: [
    { id: 'Task6', title: 'New Logo', company: 'Google', price: 1500, user: 'michelle' },
    { id: 'Task7', title: 'New website', company: 'JCD.pl', price: 1500, user: 'dominic' },
    { id: 'Task8', title: 'New website', company: 'Themeforest', price: 1500, user: 'john' },
    { id: 'Task9', title: 'Dashboard', company: 'JCD.pl', price: 1500, user: 'jolene' }]},
  { id: 'lane4', header: 'Development', label: '()', cards: [
    { id: 'Task10', title: 'Mobile App', company: 'Facebook', price: 1500, user: 'john' },
    { id: 'Task11', title: 'New website', company: 'Symu.co', price: 1500, user: 'michelle' },
    { id: 'Task12', title: 'Dashboard', company: 'Google', price: 1500, user: 'dominic' }]},
  { id: 'lane5', header: 'Testing', label: '()', cards: [
    { id: 'Task13', title: 'Landing page', company: 'JCD.pl', price: 1500, user: 'lyall' }]},
  { id: 'lane6', header: 'Production', label: '()', cards: [
    { id: 'Task14', title: 'Landing page', company: 'Google', price: 1500, user: 'jolene' },
    { id: 'Task15', title: 'New website', company: 'Themeforest', price: 1500, user: 'michelle' },
    { id: 'Task16', title: 'Dashboard', company: 'Facebook', price: 1500, user: 'lyall' }]},
  { id: 'lane7', header: 'Completed', label: '()', cards: [
    { id: 'Task17', title: 'Mobile App', company: 'Facebook', price: 1500, user: 'john' },
    { id: 'Task18', title: 'New website', company: 'Symu.co', price: 1500, user: 'michelle' }]}]};

var filter = c => data.lanes.reduce((a, v) =>
  (a.push({ ...v, cards: v.cards.filter(v => v.company == c) }), a), []);

console.log( filter('Symu.co') );
```

Upvotes: 0
<issue_start>username_0: If a function's side effects are inherent within the design, how do I develop such a function? For instance, if I wanted to implement a function like http.get("url"), and I stubbed the side effects as a service with dependency injection, it would look like:

```
var http = {
    "get": function( url, service ) {
        return promise(function( resolve ) {
            service( url ).then(function( Response ) {
                resolve( Response );
            });
        });
    }
}
```

...but I would then need to implement the service, which is identical to the original http.get(url) and therefore would have the same side effects, and therefore put me in a development loop. Do I have to mock a server to test such a function, and if so, what part of the TDD development cycle does that fall under? Is it integration testing, or is it still unit testing?

Another example would be a model for a database. If I'm developing code that works with a database, I'll design an interface, abstract a model implementing that interface, and pass it into my code using dependency injection. As long as my model implements the interface, I can use any database and easily stub its state and responses to implement TDD for other functions which interact with a database. What about that model, though? It's going to interact with a database; it seems like that side effect is inherent within the design, and abstracting it away puts me into a development loop when I go to implement that abstraction. How do I implement the model's methods without being able to abstract them away?<issue_comment>username_1: If you are writing a unit test on a module like that, focus on that module itself, not on the dependency. For example, how is it supposed to react to a db/service being down, throwing an exception/error, returning null data, returning good data, etc.? That's why you mock them and return different values or set different behavior, like throwing an exception.

Upvotes: 1 <issue_comment>username_2:

> In the TDD how do you write tests for code that inherently have side effects?

I don't think I've seen a particularly clear answer for this anywhere; the closest is probably [GOOS](http://a.co/cUXQYSl) -- the "London" school of TDD tends to be focused outside in.

But broadly, you need to have a sense that side effects belong in the [imperative shell](https://www.destroyallsoftware.com/screencasts/catalog/functional-core-imperative-shell). They are usually implemented within an infrastructure component. So you'll typically want a higher level abstraction that you can pass to the functional part of your system.

For example, reading the system clock is a side effect, producing a time since epoch value. Most of your system shouldn't care where the time comes from, so the abstraction of reading the clock *should be an input to the system*.

Now, it can feel like "turtles all the way down" -- how do you test your interaction with the infrastructure? [Kent Beck describes](https://stackoverflow.com/a/153565/54734) a stopping condition:

> I get paid for code that works, not for tests, so my philosophy is to test as little as possible to reach a given level of confidence....

I tend to lean on [Hoare's observation](https://en.wikiquote.org/wiki/C._A._R._Hoare#The_Emperor's_Old_Clothes):

> There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies

Once you get down to an implementation of a side effect that is *obviously correct*, you stop worrying about it. When you are staring at a side effect, and the implementation is not obviously correct, you start looking for ways to pull the hard part back into the functional core, further isolating the side effect.

The actual testing of the side effects typically happens when you start wiring all of the components together. Because of the side effects, these tests are typically slower; because they share mutable state, you often need to ensure that they are running sequentially.

Upvotes: 4 [selected_answer]
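To make the clock example concrete, here is a minimal Python sketch (the names are illustrative, not from the answer above): the side effect is passed in as an input, so the functional core can be tested without touching the real clock, and only the one-line default is left to "obviously correct" inspection.

```python
import time

def seconds_since(start, now_fn=time.time):
    # Pure given now_fn: all of the logic is testable without a real clock.
    return now_fn() - start

# Production code uses the real clock by default:
elapsed = seconds_since(start=0)

# A test injects a fake clock instead of mocking the OS:
assert seconds_since(100, now_fn=lambda: 103) == 3
```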
<issue_start>username_0: I have 365 daily netCDF files for the year 1980. These files are located in a folder that has data from multiple years (1979-2013). When I open the 1980 files using

```
files = glob.glob("/mnt/nfs/home/solomon/Data/CFSR/NetCDFs_1979-2013/Subset/data_1980*")
ds = xarray.open_mfdataset(files, engine="netcdf4")
```

the time stamp seems to be incorrect. When I print out the time, I get the following:

```
ds.time.sortby("time")

Out[28]:
array(['1979-01-07T00:00:00.000000000', '1979-01-07T03:00:00.000000000',
       '1979-01-07T06:00:00.000000000', ..., '2013-12-23T18:00:00.000000000',
       '2013-12-23T21:00:00.000000000', '2013-12-24T00:00:00.000000000'],
      dtype='datetime64[ns]')
Coordinates:
  * time     (time) datetime64[ns] 1979-01-07 1979-01-07T03:00:00 ...
Attributes:
    standard_name: time
    axis: T
```

To check if the other files in the folder are being read, I changed the contents of the folder (i.e. I removed the 2012 files), but I still get the same time series as before. I am not sure what is wrong!

```
Out[29]:
array(['1979-01-07T00:00:00.000000000', '1979-01-07T03:00:00.000000000',
       '1979-01-07T06:00:00.000000000', ..., '2013-12-23T18:00:00.000000000',
       '2013-12-23T21:00:00.000000000', '2013-12-24T00:00:00.000000000'],
      dtype='datetime64[ns]')
Coordinates:
  * time     (time) datetime64[ns] 1979-01-07 1979-01-07T03:00:00 ...
Attributes:
    standard_name: time
    axis: T
```

The netCDF data has the following metadata (using ncdump -h):

```
svimal@lettenmaierlab06:/mnt/nfs/home/solomon/Data/CFSR/NetCDFs_1979-2013/Subset$ ncdump -h data_19800530.nc
netcdf data_19800530 {
dimensions:
        lon = 503 ;
        lat = 170 ;
        time = UNLIMITED ; // (1 currently)
variables:
        double lon(lon) ;
                lon:standard_name = "longitude" ;
                lon:long_name = "longitude" ;
                lon:units = "degrees_east" ;
                lon:axis = "X" ;
        double lat(lat) ;
                lat:standard_name = "latitude" ;
                lat:long_name = "latitude" ;
                lat:units = "degrees_north" ;
                lat:axis = "Y" ;
        double time(time) ;
                time:standard_name = "time" ;
                time:units = "hours since 1999-5-16 00:00:00" ;
                time:calendar = "standard" ;
                time:axis = "T" ;
        float air_temp(time, lat, lon) ;
                air_temp:long_name = "air temperuature (C)" ;
                air_temp:_FillValue = -9.99e+08f ;
                air_temp:missing_value = -9.99e+08f ;
        float vp(time, lat, lon) ;
                vp:long_name = "vapor pressure (kPa)" ;
                vp:_FillValue = -9.99e+08f ;
                vp:missing_value = -9.99e+08f ;
        float pressure(time, lat, lon) ;
                pressure:long_name = "pressure (kPa)" ;
                pressure:_FillValue = -9.99e+08f ;
                pressure:missing_value = -9.99e+08f ;
        float windspd(time, lat, lon) ;
                windspd:long_name = "wind (m/s)" ;
                windspd:_FillValue = -9.99e+08f ;
                windspd:missing_value = -9.99e+08f ;
        float shortwave(time, lat, lon) ;
                shortwave:long_name = "downward shortwave (W/m^2)" ;
                shortwave:_FillValue = -9.99e+08f ;
                shortwave:missing_value = -9.99e+08f ;
        float longwave(time, lat, lon) ;
                longwave:long_name = "downward longwave (W/m^2)" ;
                longwave:_FillValue = -9.99e+08f ;
                longwave:missing_value = -9.99e+08f ;
        float precip(time, lat, lon) ;
                precip:long_name = "precipitation (mm/hr)" ;
                precip:_FillValue = -9.99e+08f ;
                precip:missing_value = -9.99e+08f ;

// global attributes:
                :CDI = "Climate Data Interface version ?? (http://mpimet.mpg.de/cdi)" ;
                :Conventions = "CF-1.4" ;
                :history = "Tue Mar 20 14:36:48 2018: ncea -d lat,41.375,83.625 -d lon,181.375,306.875 /mnt/nfs/home/solomon/Data/CFSR/NetCDFs_1979-2013/data_19800530.nc /mnt/nfs/home/solomon/Data/CFSR/NetCDFs_1979-2013/Subset/data_19800530.nc\n",
                        "Tue Mar 20 14:36:44 2018: cdo -f nc import_binary /mnt/nfs/home/solomon/Data/CFSR/CFSR-LAND_Global_0.25deg_data_changed.ctl /mnt/nfs/home/solomon/Data/CFSR/NetCDFs_1979-2013/data_19800530.nc" ;
                :CDO = "Climate Data Operators version 1.7.0 (http://mpimet.mpg.de/cdo)" ;
                :NCO = "\"4.5.4\"" ;
                :nco_openmp_thread_number = 1 ;
}
```

The time attribute says

```
time = UNLIMITED ; // (1 currently)
```

I am not sure what this means; could this be the issue?<issue_comment>username_1: Are you *sure* that your use of `glob.glob()` is only returning netCDF files with times in 1980? I would suggest a spot-check with an explicit loop (skipping `open_mfdataset`):

```
files = glob.glob("/mnt/nfs/home/solomon/Data/CFSR/NetCDFs_1979-2013/Subset/data_1980*")
for path in files:
    ds = xarray.open_dataset(path, engine="netcdf4")
    print(path, ds.time.values)
```

Side note: it's best to pass the glob string directly into `open_mfdataset()` rather than explicitly calling `glob.glob()`. It is a little more succinct, and xarray also calls `sorted()` on glob strings that it parses, rather than relying on the platform-specific order of the list returned by `glob.glob()`.

Upvotes: 3 [selected_answer]<issue_comment>username_2: Thanks @username_1! The issue was with my files. The file names that start with '1980' contained data from other years. It happened because I modified the same input control file to create multiple netCDF files in parallel, using:

```
cdo -f nc import_binary CFSR-LAND_Global_0.25deg_data_changed.ctl data_19800530.nc
```

Making unique ctl files per parallel thread fixed the issue.

Upvotes: -1
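As the side note above suggests, the glob pattern can go straight into `open_mfdataset`, which sorts the matches itself. A minimal sketch using the poster's path (adjust to your layout), with a quick check that the combined time axis stays inside 1980:

```python
import xarray as xr

ds = xr.open_mfdataset(
    "/mnt/nfs/home/solomon/Data/CFSR/NetCDFs_1979-2013/Subset/data_1980*",
    engine="netcdf4",
)
# Sanity check: both endpoints of the time axis should fall in 1980.
print(ds.time.min().values, ds.time.max().values)
```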
<issue_start>username_0: Is there a way to get a message's content from its ID? If so, how can I do that? I've read through the documentation but I found nothing. The documentation says you can get a `discord.Message`'s ID by using `.id`, but I don't have the `discord.Message` object. Thanks.<issue_comment>username_1: In the rewrite branch of discord.py you can use the `get_message()` coroutine (documentation found [here](https://discordpy.readthedocs.io/en/rewrite/api.html#discord.TextChannel.get_message)) to find a message using its ID. I'm not sure if there's a way to do it in 0.16.12.

Upvotes: 0 <issue_comment>username_2: If you don't have the channel ID but only the message ID, you just need to loop through your accessible channels and check which one(s)\* have the message that matches your ID.

```
from discord import NotFound

for channel in client.get_all_channels():
    try:
        msg = await channel.get_message(id)
    except NotFound:
        continue
    print(msg.content)

# where `id` is the message id and `client` is the bot or user
```

\* The message ID isn't unique to the system; that is why the only way is to loop through all available channels, since a message ID is only unique to its channel. It is possible (quite unlikely though) to match two distinct messages with the same ID from two different channels.

Upvotes: 1
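Building on the loop above, a sketch wrapping it into a reusable coroutine. This assumes the rewrite branch as it existed at the time, where text channels exposed a `get_message()` coroutine (later releases renamed it to `fetch_message()`); `message_id` is a placeholder name.

```python
import discord

async def find_message(client, message_id):
    """Search every text channel the client can read for a message id."""
    for channel in client.get_all_channels():
        if not isinstance(channel, discord.TextChannel):
            continue  # voice/category channels hold no message history
        try:
            return await channel.get_message(message_id)
        except (discord.NotFound, discord.Forbidden):
            continue  # not in this channel, or no permission to read it
    return None  # no accessible channel contains that id
```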
<issue_start>username_0: A few years ago, a poster asked how to add the regression line equation and R2 on ggplot graphs, at the link below.

[Adding Regression Line Equation and R2 on graph](https://stackoverflow.com/questions/7549694/adding-regression-line-equation-and-r2-on-graph)

The top solution was this:

```
lm_eqn <- function(df){
    m <- lm(y ~ x, df);
    eq <- substitute(italic(y) == a + b %.% italic(x)*","~~italic(r)^2~"="~r2,
         list(a = format(coef(m)[1], digits = 2),
              b = format(coef(m)[2], digits = 2),
             r2 = format(summary(m)$r.squared, digits = 3)))
    as.character(as.expression(eq));
}

p1 <- p + geom_text(x = 25, y = 300, label = lm_eqn(df), parse = TRUE)
```

I am using this code and it works great. However, I was wondering if it is at all possible to make this code put the R2 value and regression line equation on separate lines, instead of separated by a comma.

Instead of like this
![Instead of like this](https://i.stack.imgur.com/EO4G2.png)

Something like this
![Something like this](https://i.stack.imgur.com/Pi85N.gif)

Thanks in advance for your help!<issue_comment>username_1: EDIT: In addition to splitting out the equation, I have fixed the sign of the intercept value. Setting the RNG with `set.seed(2L)` gives a positive intercept; the example below produces a negative one. I also fixed the overlapping text in the `geom_text`.

```
set.seed(3L)
library(ggplot2)
df <- data.frame(x = c(1:100))
df$y <- 2 + 3 * df$x + rnorm(100, sd = 40)

lm_eqn <- function(df){
    # browser()
    m <- lm(y ~ x, df)
    a <- coef(m)[1]
    a <- ifelse(sign(a) >= 0,
                paste0(" + ", format(a, digits = 4)),
                paste0(" - ", format(-a, digits = 4)))
    eq1 <- substitute( paste( italic(y) == b, italic(x), a ),
                       list(a = a,
                            b = format(coef(m)[2], digits = 4)))
    eq2 <- substitute( paste( italic(R)^2 == r2 ),
                       list(r2 = format(summary(m)$r.squared, digits = 3)))
    c( as.character(as.expression(eq1)),
       as.character(as.expression(eq2)))
}

labels <- lm_eqn(df)

p <- ggplot(data = df, aes(x = x, y = y)) +
    geom_smooth(method = "lm", se = FALSE, color = "red", formula = y ~ x) +
    geom_point() +
    geom_text(x = 75, y = 90, label = labels[1], parse = TRUE, check_overlap = TRUE) +
    geom_text(x = 75, y = 70, label = labels[2], parse = TRUE, check_overlap = TRUE)
print(p)
```

[![enter image description here](https://i.stack.imgur.com/uD58h.png)](https://i.stack.imgur.com/uD58h.png)

Upvotes: 4 [selected_answer]<issue_comment>username_2: The [`ggpmisc`](https://github.com/aphalo/ggpmisc) package has the `stat_poly_eq` function, which is built specifically for this task (but not limited to linear regression). Using the same `data` as @username_1 posted, we can add the equation and R2 separately but give `label.y.npc` different values. `label.x.npc` is adjustable if desired.

```r
library(ggplot2)
library(ggpmisc)
#> For news about 'ggpmisc', please, see https://www.r4photobiology.info/

set.seed(21318)
df <- data.frame(x = c(1:100))
df$y <- 2 + 3*df$x + rnorm(100, sd = 40)

formula1 <- y ~ x

ggplot(data = df, aes(x = x, y = y)) +
  geom_point() +
  geom_smooth(method = "lm", se = FALSE, formula = formula1) +
  stat_poly_eq(aes(label = paste(..eq.label.., sep = "~~~")),
               label.x.npc = "right", label.y.npc = 0.15,
               eq.with.lhs = "italic(hat(y))~`=`~",
               eq.x.rhs = "~italic(x)",
               formula = formula1, parse = TRUE, size = 5) +
  stat_poly_eq(aes(label = paste(..rr.label.., sep = "~~~")),
               label.x.npc = "right", label.y.npc = "bottom",
               formula = formula1, parse = TRUE, size = 5) +
  theme_bw(base_size = 16)
```

![](https://i.stack.imgur.com/111vv.png)

```r
# using `atop`
ggplot(data = df, aes(x = x, y = y)) +
  geom_point() +
  geom_smooth(method = "lm", se = FALSE, formula = formula1) +
  stat_poly_eq(aes(label = paste0("atop(", ..eq.label.., ",", ..rr.label.., ")")),
               formula = formula1, parse = TRUE) +
  theme_bw(base_size = 16)
```

![](https://i.stack.imgur.com/EwmcC.png)

```r
### bonus: including result table
ggplot(data = df, aes(x = x, y = y)) +
  geom_point() +
  geom_smooth(method = "lm", se = FALSE, formula = formula1) +
  stat_fit_tb(method = "lm",
              method.args = list(formula = formula1),
              tb.vars = c(Parameter = "term",
                          Estimate = "estimate",
                          "s.e." = "std.error",
                          "italic(t)" = "statistic",
                          "italic(P)" = "p.value"),
              label.y = "bottom", label.x = "right",
              parse = TRUE) +
  stat_poly_eq(aes(label = paste0("atop(", ..eq.label.., ",", ..rr.label.., ")")),
               formula = formula1, parse = TRUE) +
  theme_bw(base_size = 16)
```

![](https://i.stack.imgur.com/ombBJ.png)

Created by the [reprex package](https://reprex.tidyverse.org) (v0.3.0)

Upvotes: 3
<issue_start>username_0: In WooCommerce I have enabled the WooCommerce EU VAT plugin and created a required custom checkout select field "Customer type" with 2 choices:

* Individual
* Business

Now I am trying to show and enable the EU VAT field for:

* order amounts up to `500` only,
* `'customer_type'` set to `'business'` only,
* countries: **Denmark** and **Finland** only.

Here is my code:

```
add_filter('woocommerce_checkout_fields', 'add_eu_vat_to_checkout');
function add_eu_vat_to_checkout() {
    if ( is_admin() && ! defined( 'DOING_AJAX' ) || ! is_checkout() ) return;

    $customer_type_value = WC()->session->get( 'customer_type' );
    $subtotal = $wc_cart->subtotal;
    $minimum_order_subotal = 500;

    if ($customer_type_value == 'Business' && $minimum_order_subtotal > 500) {
        add_filter( 'woocommerce_eu_vat_number_country_codes', 'woo_custom_eu_vat_number_country_codes' );
        function woo_custom_eu_vat_number_country_codes( $vat_countries ) {
            // only show field for users in Denmark and Finland
            return array( 'DK', 'FI' );
        }
    }
}
```

Any help is appreciated.<issue_comment>username_1: Finally, no need for Ajax. Try the following (tested with the WooCommerce EU VAT plugin):

```
// Add "Customer type" checkout field (For testing) - To be removed if you got it
add_filter( 'woocommerce_checkout_fields', 'add_custom_checkout_fields', 20, 1 );
function add_custom_checkout_fields( $fields ) {
    // Get the "Customer type" if user is logged in
    if( is_user_logged_in() )
        $value = get_user_meta( get_current_user_id(), 'customer_type', true );

    $fields['billing']['customer_type'] = array(
        'type'     => 'select',
        'label'    => __('Customer type', 'woocommerce'),
        'options'  => array(
            ''           => __('Please, select your type'),
            'individual' => __('Individual'),
            'business'   => __('Business'),
        ),
        'required' => true, // required
        'class'    => array('form-row-wide'),
        'clear'    => true,
    );

    // Set the "Customer type" if is not empty (from user meta data)
    if( ! empty($value) )
        $fields['billing']['customer_type']['default'] = $value;

    return $fields;
}

// Enabling EU VAT for ('DK' and 'FI') when cart amount is up to 500
add_filter( 'woocommerce_eu_vat_number_country_codes', 'woo_custom_eu_vat_number_country_codes', 20, 1 );
function woo_custom_eu_vat_number_country_codes( $vat_countries ) {
    // HERE below your settings
    $countries  = array( 'DK', 'FI' );
    $min_amount = 500;

    $cart_items_amount = WC()->cart->cart_contents_total;

    // Avoiding errors on admin and on other pages
    if( is_admin() || WC()->cart->is_empty() ) return $countries;

    // Show EU VAT field for cart amount up to 500 & users in Denmark and Finland
    return $cart_items_amount >= $min_amount ? $countries : array();
}

add_action( 'wp_footer', 'custom_checkout_jquery_script', 30, 1 );
function custom_checkout_jquery_script( $checkout ) {
    if( !is_checkout() ) return; // Only checkout
    ?>
    <script>
    (function($){
        var a = 'select[name=customer_type]',
            b = 'business',
            i = 'individual',
            bc = '#billing_company_field',
            lbc = 'label[for=billing_company]',
            lr = lbc + ' > .required',
            r = '<abbr class="required" title="required">*</abbr>',
            vat = '#vat_number_field';

        // On start (once DOM is loaded)
        $('label[for=vat_number]').append(r); // Mark EU VAT required

        // Hide EU VAT if not business, and other needed things
        if( b != $(a).val() && $(vat).length ) {
            $(vat).fadeOut('fast'); // Hide EU VAT field
            // If is an individual we hide the company field
            if( i == $(a).val()) $(bc).fadeOut(); // Hide company
        // Mark company field as required
        } else if( b == $(a).val() && $(vat).length ) {
            $(lbc).append(r); // Company required
        }

        // On "Customer Type" live event
        $('form.checkout').on('change', a, function(e){
            e.preventDefault();
            // Show EU VAT and company for "business", with other needed things
            if( b == $(a).val() ){
                if( $(vat).length ) $(vat).fadeIn(); // Show EU VAT field
                $(lbc).append(r); // Company required
                $(bc).fadeIn();   // Show company
            } else if( i == $(a).val()) { // For "individual"
                if( $(vat).length ) $(vat).fadeOut(); // Hide EU VAT field
                $(lr).remove();   // Remove company required
                $(bc).fadeOut();  // Hide company
            } else { // Nothing selected
                if( $(vat).length ) $(vat).fadeOut(); // Hide EU VAT field
                $(lr).remove();   // Remove company required
                $(bc).fadeIn();   // Show company
            }
        });
    })(jQuery);
    </script>
    <?php
}

// Update Order and User meta data for "Customer Type"
add_action('woocommerce_checkout_create_order', 'before_checkout_create_order', 20, 2);
function before_checkout_create_order( $order, $data ) {
    // Set customer type in the order and in user_data
    if( ! empty($_POST['customer_type']) ){
        // Update Order meta data for 'customer_type'
        $order->update_meta_data( '_customer_type', sanitize_key( $_POST['customer_type'] ) );

        // Update User meta data for 'customer_type'
        if( $order->get_user_id() > 0 )
            update_user_meta( $order->get_user_id(), 'customer_type', sanitize_key( $_POST['customer_type'] ) );
    }
}
```

*Code goes in the function.php file of your active child theme (or active theme)*. Tested and works.

Upvotes: 3 [selected_answer]<issue_comment>username_2: I can confirm that this still works on the latest WooCommerce Storefront theme. It would be simpler if we could just show the user a checkbox instead of a drop-down asking **"Buying as a business?"**, and if checked, those two fields would appear. I changed this portion of the code, but the JavaScript has to be updated as well; hope someone can supplement it further.

```
$fields['billing']['customer_type'] = array(
    'type'  => 'checkbox',
    'label' => __('Checkbox label', 'woocommerce'),
    'class' => array('form-row-wide'),
    'clear' => true,
);
```

Upvotes: 0
<issue_start>username_0: I have made a program that allows the user to enter the year and team that they are on. It prints the values to a data sheet. When the user clicks a CommandButton, the code prints the values to a calendar. My question is: can this be made smarter?

```
If Worksheets("DATA").Range("B2").Value = "2018" And Worksheets("DATA").Range("B3").Value = "Team 3" Then

    'January
    Worksheets("Sheet1").Range("J4:J34").Copy
    Worksheets("2018").Range("D3:D33").PasteSpecial xlValues
    'February
    Worksheets("Sheet1").Range("J35:J62").Copy
    Worksheets("2018").Range("H3:H33").PasteSpecial xlValues
    'March
    Worksheets("Sheet1").Range("J63:J93").Copy
    Worksheets("2018").Range("L3:L33").PasteSpecial xlValues
    'April
    Worksheets("Sheet1").Range("J94:J123").Copy
    Worksheets("2018").Range("P3:P33").PasteSpecial xlValues
    'May
    Worksheets("Sheet1").Range("J124:J154").Copy
    Worksheets("2018").Range("T3:T33").PasteSpecial xlValues
    'June
    Worksheets("Sheet1").Range("J155:J184").Copy
    Worksheets("2018").Range("X3:X33").PasteSpecial xlValues
    'July
    Worksheets("Sheet1").Range("J185:J215").Copy
    Worksheets("2018").Range("AB3:AB33").PasteSpecial xlValues
    'August
    Worksheets("Sheet1").Range("J216:J246").Copy
    Worksheets("2018").Range("AF3:AF33").PasteSpecial xlValues
    'September
    Worksheets("Sheet1").Range("J247:J276").Copy
    Worksheets("2018").Range("AJ3:AJ33").PasteSpecial xlValues
    'October
    Worksheets("Sheet1").Range("J277:J307").Copy
    Worksheets("2018").Range("AN3:AN33").PasteSpecial xlValues
    'November
    Worksheets("Sheet1").Range("J308:J337").Copy
    Worksheets("2018").Range("AR3:AR33").PasteSpecial xlValues
    'December
    Worksheets("Sheet1").Range("J338:J368").Copy
    Worksheets("2018").Range("AV3:AV33").PasteSpecial xlValues

End If
```

On the Sheet1 sheet, the dates are listed in column C.

[![2018](https://i.stack.imgur.com/3fiug.jpg)](https://i.stack.imgur.com/3fiug.jpg)
[![Sheet1](https://i.stack.imgur.com/vRhmR.jpg)](https://i.stack.imgur.com/vRhmR.jpg)
[![Userformdata](https://i.stack.imgur.com/rG0dq.jpg)](https://i.stack.imgur.com/rG0dq.jpg)<issue_comment>username_1: You can try to make it easier to update the ranges to be copied (mapping):

---

```
Option Explicit

Public Sub CopyData()
    Const START_ROW = 3

    If ThisWorkbook.Worksheets("DATA").Range("B2").Value = "2018" And _
       ThisWorkbook.Worksheets("DATA").Range("B3").Value = "Team 3" Then

        Dim yr As Object, ws1 As Worksheet, ws2 As Worksheet

        Set ws1 = ThisWorkbook.Worksheets("Sheet1")
        Set ws2 = ThisWorkbook.Worksheets("2018")
        Set yr = CreateObject("Scripting.Dictionary")

        yr("J4:J34")    = "D"   'Jan
        yr("J35:J62")   = "H"   'Feb
        yr("J63:J93")   = "L"   'Mar
        yr("J94:J123")  = "P"   'Apr
        yr("J124:J154") = "T"   'May
        yr("J155:J184") = "X"   'Jun
        yr("J185:J215") = "AB"  'Jul
        yr("J216:J246") = "AF"  'Aug
        yr("J247:J276") = "AJ"  'Sep
        yr("J277:J307") = "AN"  'Oct
        yr("J308:J337") = "AR"  'Nov
        yr("J338:J368") = "AV"  'Dec

        Dim mnth As Variant, arr As Variant, toRng As String

        For Each mnth In yr
            arr = ws1.Range(mnth)
            toRng = yr(mnth) & START_ROW & ":" & yr(mnth) & UBound(arr) + START_ROW - 1
            ws2.Range(toRng) = arr
        Next mnth
    End If
End Sub
```

---

This is not ideal because there are still hard-coded values for all ranges, but the columns are not the same size and I can't see the pattern for that.

Upvotes: 2 [selected_answer]<issue_comment>username_2: Because date and time in Excel are stored as a number of days, the source row can be found with:

```
=Date(2018, Column() / 4, Row()) - Date(2018, 1, -1)
```

and the source column index with:

```
=Match(Data!B3 & "*", '2018'!3:3, 0)
```

and combined in VBA:

```
y = [DATA!B2]
Sheet1.[3:33 (D:D,H:H,L:L,P:P,T:T,X:X,AB:AB,AF:AF,AJ:AJ,AN:AN,AR:AR,AV:AV)].Formula = _
    "=If(C3, Index('" & y & "'!$A:$Z, Date(" & y & ", Column() / 4, Row()) - Date(" & y _
    & ", 1, -1), " & Evaluate("Match(DATA!B3 & ""*"", '" & y & "'!3:3, 0)") & " ), """")"
```

Upvotes: 0
<issue_start>username_0: I did a bit of research and I think the best way to auto-fire a macro is to use the AutoExec method in Access. I believe the script below will do the job.

```
Option Compare Database

'------------------------------------------------------------
' AutoExec
'
'------------------------------------------------------------
Function AutoExec()
On Error GoTo AutoExec_Err

    DoCmd.RunCommand acCmdWindowHide
    MsgBox "Welcome to the client billing application!", vbOKOnly, "Welcome"
    DoCmd.OpenTable "Orders", acViewNormal, acEdit

AutoExec_Exit:
    Exit Function

AutoExec_Err:
    MsgBox Error$
    Resume AutoExec_Exit

End Function
```

My question now is: what is the best way to trigger the event of opening the Access DB? Unfortunately the Windows Task Scheduler has been turned off by my IT department (gotta love it). I'm thinking there must be a way to get Outlook to open the Access DB as a Task, or some such thing. I experimented with a few ideas, but haven't been able to get anything working. Does anyone here have any idea how to do this?

To add a bit more color: basically I want to auto-import data from a remote SQL Server database into Access. As you may have guessed, the SQL Server Agent has been disabled too. I am trying to run this job as a daily process, using Outlook, because that's really all I have available right now.<issue_comment>username_1: I would normally recommend Windows Task Scheduler, but as you said, you don't have access to that (I'd still consider other alternatives for that - i.e. a third-party scheduler or having IT add a scheduled task for you). But if you must...

You can use an event in Outlook VBA to trigger code when a recurring Task reaches its reminder. In that event, you can open your Access database.

Caveats:

* You need to lower macro security in Outlook. You may not be allowed to do this, and at the very least you should consider the ramifications of this.
* The processing in Access will block Outlook while it runs.
* The Task must have a reminder to trigger. The code below hides the reminder popup, but without setting a reminder, the event doesn't run.

This code must be in the `ThisOutlookSession` module within the Outlook VBA IDE:

```
Private WithEvents m_reminders As Outlook.Reminders

Private Sub Application_Startup()
    Set m_reminders = Application.Reminders
End Sub

Private Sub m_reminders_BeforeReminderShow(Cancel As Boolean)
    Dim reminderObj As Reminder
    For Each reminderObj In m_reminders
        If reminderObj.Caption = "MyDailyAccessImport" Then
            Dim accessApp As Object
            Set accessApp = CreateObject("Access.Application")
            accessApp.Visible = True
            accessApp.OpenCurrentDatabase "C:\Foo\MyDatabase.accdb"
            Cancel = True
            Exit For
        End If
    Next
End Sub
```

Then, in your database, use an `AutoExec` macro to do the processing you require.

Upvotes: 2 [selected_answer]<issue_comment>username_2: *For your IT team:*

- Microsoft TechNet: **[Why You Shouldn’t Disable The Task Scheduler Service in Windows](https://blogs.technet.microsoft.com/askpfeplat/2013/07/14/why-you-shouldnt-disable-the-task-scheduler-service-in-windows-7-and-windows-8/)**

*...on the other hand, in I.T.'s defence:*

- Slashdot discussion: **[Why Everyone Hates the IT Department](https://it.slashdot.org/story/11/11/26/2113231/why-everyone-hates-the-it-department)**

*...and for you:*

- Stack Exchange: **[How a developer can ask for more freedom from IT policies](https://workplace.stackexchange.com/questions/35893/as-a-developer-how-can-i-ask-for-more-freedom-when-confronted-with-a-tight-it-s?utm_medium=organic&utm_source=google_rich_qa&utm_campaign=google_rich_qa)**

---

Your question reminds me of [one of many of] my side projects, which I've been meaning to use as an example for a Q&A I'm planning on writing, *"Data entry to Access via SMS text messages"*. It also reminded me of countless debates (battles?) of days gone by. Forgive me as I go slightly off-topic in a rant, *"Developers vs I.T."...*

---

I can't speak for everyone's situation, but in my opinion, some jobs are worse than others for different departments defeating each other's work by doing their own jobs, and I figure there's a clear correlation: basically, *"the larger or more 'government-associated' the company is, the bigger the headaches"*.

Unfortunately, unreasonable restrictions placed on competent developers, mandated by off-site management, can result in a bigger security hole than the "privilege" would have.

[![IT vs I.T.](https://i.stack.imgur.com/ehkfK.jpg)](https://i.stack.imgur.com/ehkfK.jpg)

---

*Auto-execute a procedure in Access*
------------------------------------

*Back to your question,* you have a few options to auto-run procedures in Access:

* Create an **`AutoExec` Macro** with a single command: `RunCode`. It will automatically run when the DB is opened. Instead of using the macro interface, you can just have a single command that runs a VBA function.

  [![AutoExec](https://i.stack.imgur.com/nLwlT.png)](https://i.stack.imgur.com/nLwlT.png)

  Note that it has to be a `Function` (not a `Sub`) and the function must use zero parameters. ([More info](https://support.office.com/en-us/article/create-a-macro-that-runs-when-you-open-a-database-98ba1508-dcc6-4e0f-9698-a4755e548124))

* Set a Startup Form in File > Options > Current Database, and then set the form's `Form_Open` procedure to call your code. ([More info](https://support.office.com/en-us/article/set-the-default-form-that-appears-when-you-open-an-access-database-94961011-392f-4c3b-8dbc-e5d5adbff1df))

* If the database is usually going to be open, you could use a form's Timer events to schedule one-time or recurring tasks. I can't recall whether Access behaves like Excel, which (if you don't cancel a pending event before closing the application) will automatically re-open the application. ([More info.](https://msdn.microsoft.com/en-us/vba/access-vba/articles/form-timer-event-access))

* Include the `/X:` switch when opening Access to specify the name of a macro to execute. For example, on the command line, or in a `.BAT` batch file or a desktop shortcut:

  `C:\Program Files (x86)\Microsoft Office\root\Office16\MSACCESS.EXE /x:myMacro`

* Include the `/CMD:` switch when opening Access to specify custom options. Then, within VBA, you can test the value of the switch, if any, with the `Command` function. For example:

  **Command Line:** `MSACCESS.EXE /cmd:Take actions one and two`

  **VBA:**

  ```
  If InStr(Command, "two") <> 0 Then
      'do action two
  End If
  ```

  Note that spaces are *allowed*, and therefore the `/CMD` must be the last switch used. More about Office's [command-line switches](https://support.office.com/en-us/article/command-line-switches-for-microsoft-office-products-079164cd-4ef5-4178-b235-441737deb3a6#ID0EAABAAA=Access) and [Access's `Command` function](https://msdn.microsoft.com/en-us/vba/language-reference-vba/articles/command-function).

***...I think I'm missing another way (besides Task Scheduler), but maybe it will come to me. Obviously, third-party applications are another choice that I won't get into because I've never tried them.***

[![monitored](https://i.stack.imgur.com/EQRwu.jpg)](https://i.stack.imgur.com/EQRwu.jpg)

---

*Run Access from Outlook*
-------------------------

You can set rules for emails with an action of `Run a Script`, which will call an Outlook VBA procedure, which can in turn open another file as necessary. Outlook [tasks](https://msdn.microsoft.com/en-us/vba/outlook-vba/articles/taskitem-object-outlook) and calendar [appointments](https://msdn.microsoft.com/en-us/vba/outlook-vba/articles/appointmentitem-object-outlook) can also be coerced into running a script. With these two options, you can set it up so your code runs at certain times, intervals, or even on certain actions occurring, to any extent of complexity you desire.

> **For example you could:**
>
> *"Start procedure X when an email is received, but only if:*
>
> * *it was sent from my cellphone (as a [text-to-email](https://email2sms.info/)), and,*
> * *it contains a specific keyword in the subject line."*

(hence the basis for my "data entry via text messages" project I mentioned earlier!)

---

More Information:
-----------------

* **[Getting Things Done – Outlook Task Automation with PowerShell](http://www.leeholmes.com/blog/2007/03/01/getting-things-done-outlook-task-automation-with-powershell/)**
* **[How to Auto Create New Tasks When Receiving Specific Emails in Outlook](https://www.datanumen.com/blogs/auto-create-new-tasks-receiving-specific-emails-outlook/)**
* **[How can you schedule automatic emails in Outlook 2013 for every weekday at a fixed time?](https://www.quora.com/How-can-you-schedule-automatic-emails-in-Outlook-2013-for-every-weekday-at-a-fixed-time)**
* MSDN: **[Working with Outlook Events](https://msdn.microsoft.com/en-us/vba/outlook-vba/articles/working-with-outlook-events)**
* MSDN: **[Getting Started: Interaction between Office Applications](https://msdn.microsoft.com/en-us/vba/office-shared-vba/articles/getting-started-with-vba-in-office#interaction-between-office-applications)**
* MSDN: **[Using Visual Basic for Applications in Outlook](https://msdn.microsoft.com/en-us/vba/outlook-vba/articles/using-visual-basic-for-applications-in-outlook)**

[![IT at work](https://i.stack.imgur.com/6AMKx.png)](https://i.stack.imgur.com/6AMKx.png)

Upvotes: 2
<issue_start>username_0: I'm relatively new to this, but here's what I'm trying to do. I have a Raspberry Pi Zero connected to a Raspberry Pi camera, and I'm streaming this video from the Raspberry Pi wirelessly via uv4l. I use this command:

```
sudo uv4l -f -k --sched-fifo --mem-lock --driver raspicam --auto-video_nr --encoding h264 --width 1080 --height 720 --enable-server on
```

I'm able to access this stream in a web browser by looking at the Pi's IP address.

Now what I'd like to do is to view the video stream in OpenCV. When I click on the button, I want to load an image... rather: each time a frame arrives, I want to decode "sample" frames and display them. Here is my code. Note I'm running Python 3.5 and OpenCV 3:

```
Streaming http://192.168.1.84:8080/stream
Traceback (most recent call last):
  File "videoStream.py", line 17, in <module>
    bytes+=stream.read('1024')
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/http/client.py", line 448, in read
    b = bytearray(amt)
TypeError: string argument without an encoding
```

is the error I run into with the following code:

```
import cv2
import urllib.request
import numpy as np
import sys

host = "192.168.1.84:8080"
if len(sys.argv) > 1:
    host = sys.argv[1]

hoststr = 'http://' + host + '/stream'
print('Streaming ' + hoststr)

stream = urllib.request.urlopen(hoststr)

bytes = ''
while True:
    bytes += stream.read('1024')
    a = bytes.find('\xff\xd8')
    b = bytes.find('\xff\xd9')
    if a != -1 and b != -1:
        jpg = bytes[a:b+2]
        bytes = bytes[b+2:]
        i = cv2.imdecode(np.fromstring(jpg, dtype=np.uint8), cv2.CV_LOAD_IMAGE_COLOR)
        cv2.imshow(hoststr, i)
        if cv2.waitKey(1) == 27:
            exit(0)
```

I'm not sure how to resolve this issue, or if there's perhaps a better approach to viewing this video stream in OpenCV.<issue_comment>username_1: I can't really verify your code as I don't have a streaming setup with me. But first, I think `stream.read('1024')` should be `stream.read(1024)`. The `1024` is the size of the buffer in bytes, not a string of 1024.

Secondly, `urllib.request.urlopen().read()` returns a bytes object, so your code may later have a decoding problem when it hits the line `np.fromstring(jpg, dtype=np.uint8)`, as `np.fromstring` is expecting `jpg` as a string, but the type of `jpg` is bytes. You would need to convert it to a string like this:

```
np.fromstring(jpg.decode('utf-8'), dtype=np.uint8)
```

Upvotes: 0 <issue_comment>username_2: Just replace

```
bytes=''
```

with

```
bytes=bytearray()
```

and

```
bytes+=stream.read('1024')
```

with

```
bytes+=bytearray(stream.read(1024))
```

Upvotes: 0 <issue_comment>username_3: Try this out. Change

```
bytes=''
while True:
    bytes+=stream.read('1024')
    a = bytes.find('\xff\xd8')
    b = bytes.find('\xff\xd9')
```

to

```
bytes=b''
while True:
    bytes+=stream.read(1024)
    a = bytes.find(b'\xff\xd8')
    b = bytes.find(b'\xff\xd9')
```

and use

```
i = cv2.imdecode(np.fromstring(jpg, dtype=np.uint8), cv2.IMREAD_COLOR)
```

This worked for me in Python 3.5, cv2 version 4.0.0.

Upvotes: 1
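Putting username_3's fixes together, a complete sketch (the URL is the one from the question; `np.frombuffer` replaces the deprecated `np.fromstring`). Note this only works if the endpoint actually serves MJPEG: the `\xff\xd8`/`\xff\xd9` markers delimit JPEG frames, so an H.264 stream would instead need something like `cv2.VideoCapture` opened on the URL.

```python
import urllib.request

import cv2
import numpy as np

stream = urllib.request.urlopen('http://192.168.1.84:8080/stream')
buf = b''
while True:
    buf += stream.read(1024)
    start = buf.find(b'\xff\xd8')   # JPEG start-of-image marker
    end = buf.find(b'\xff\xd9')     # JPEG end-of-image marker
    if start != -1 and end != -1:
        jpg = buf[start:end + 2]    # one complete JPEG frame
        buf = buf[end + 2:]         # keep the remainder for the next frame
        frame = cv2.imdecode(np.frombuffer(jpg, dtype=np.uint8), cv2.IMREAD_COLOR)
        cv2.imshow('stream', frame)
        if cv2.waitKey(1) == 27:    # Esc quits
            break
cv2.destroyAllWindows()
```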
<issue_start>username_0: I am writing a script that takes user input of however many characters, and I want to put each one of the characters into its own list to be manipulated later.

```
input = AVI
```

Output:

```
A = ['A']
V = ['V']
I = ['I']
```

I was able to get it into a single list like this: `['A','V','I']`, but that becomes too confusing for what I want to do later.<issue_comment>username_1: If you really *want* to have a variable named after itself, use a [mapping](https://docs.python.org/3/tutorial/datastructures.html#dictionaries) thus:

```
>>> s='AVI'
>>> {e:e for e in s}
{'A': 'A', 'I': 'I', 'V': 'V'}
```

Then access it like so:

```
>>> di={e:e for e in s}
>>> di['A']
'A'
```

Or, enumerate for an index (thanks username_2):

```
>>> {n:e for n,e in enumerate(s)}
{0: 'A', 1: 'V', 2: 'I'}
```

Upvotes: 2 <issue_comment>username_2: Consider using `dict` with `enumerate`. Now you can even retrieve your letters by location.

```
x = input('Input a string:\n')  # User inputs 'AVI'
d = dict(enumerate(x))          # {0: 'A', 1: 'V', 2: 'I'}
```

I struggle to see how you know *which* letters to access when the user is inputting a string, so it doesn't make sense to name your variables (or, here, keys) after the letters themselves.

Upvotes: 3 <issue_comment>username_3: I think you need to use a for loop, write if-else conditions inside it, and use the `append()` method to insert any character you want into your lists.

Upvotes: 0
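If the goal really is one list per character, keyed by the character itself, a dict of single-element lists is a minimal sketch (note the assumption that duplicate characters share one entry, which may or may not be what you want):

```python
s = "AVI"
lists = {c: [c] for c in s}   # {'A': ['A'], 'V': ['V'], 'I': ['I']}
lists['V'].append('extra')    # each character's list can grow independently
print(lists['V'])             # ['V', 'extra']
```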
<issue_start>username_0: I am using JavaFX in Eclipse to create a GUI application. I use SceneBuilder to edit the graphical parts. The GUI is linked with a Controller class. I have a button in an AnchorPane, and no other elements. What I want to happen is: when I click on the button, I want to load an image "sample.png" from the filesystem, create a new ImageView, and display it. Each time I click, I want to create a new ImageView next to the previous one and display "sample.png" on it. I know how to load the image and display it in an ImageView, but I'm not able to figure out the part where I need to dynamically create new ImageViews and place them next to the existing ones. Any pointers/ideas are appreciated :)<issue_comment>username_1: You could create a GridPane and use:

```
gridpane.add(imageview, col, row);
```

This will add the ImageView to the indicated cell. Or you may want to do:

```
RowConstraints row = new RowConstraints();
gridpane.getRowConstraints().add(row);
gridpane.getChildren().add(imageview);
```

This will create a new row and then add the view to the pane.

Upvotes: 0 <issue_comment>username_2: First, create a pane where you want the images to appear. It sounds like a FlowPane would be ideal for your situation. Then just add a new ImageView to the pane whenever you click the button.

```
btnAddImage.setOnAction(event -> {
    paneImages.getChildren().add(new ImageView("filename"));
});
```

Upvotes: 1
<issue_start>username_0: I wrote this code that computes the time since a sign change (from positive to negative or vice versa) in data frame columns.

```
df = pd.DataFrame({'x': [1, -4, 5, 1, -2, -4, 1, 3, 2, -4, -5, -5, -6, -1]})

for column in df.columns:
    days_since_sign_change = [0]
    for k in range(1, len(df[column])):
        last_different_sign_index = np.where(np.sign(df[column][:k]) != np.sign(df[column][k]))[0][-1]
        days_since_sign_change.append(abs(last_different_sign_index - k))

    df[column + '_days_since_sign_change'] = days_since_sign_change
    df[column + '_days_since_sign_change'][df[column] < 0] = df[column + '_days_since_sign_change'] * -1
    # this final stage allows the "days_since_sign_change" column to also indicate if the sign changed
    # from - to positive or from positive to negative.

In [302]: df
Out[302]:
    x  x_days_since_sign_change
0   1                         0
1  -4                        -1
2   5                         1
3   1                         2
4  -2                        -1
5  -4                        -2
6   1                         1
7   3                         2
8   2                         3
9  -4                        -1
10 -5                        -2
11 -5                        -3
12 -6                        -4
13 -1                        -5
```

***Issue***: with large datasets (150,000 \* 50,000), the Python code is extremely slow. How can I speed this up?<issue_comment>username_1: You can surely do this without a loop. Create a sign column with -1 if the value in x is less than 0 and 1 otherwise. Then group that sign column by the difference in the value in the current row vs the previous one, and take the cumulative sum.

```
df['x_days_since_sign_change'] = (df['x'] > 0).astype(int).replace(0, -1)
df.iloc[0,1] = 0

df.groupby((df['x_days_since_sign_change'] != df['x_days_since_sign_change'].shift()).cumsum()).cumsum()

      x  x_days_since_sign_change
0     1                         0
1    -4                        -1
2     5                         1
3     6                         2
4    -2                        -1
5    -6                        -2
6     1                         1
7     4                         2
8     6                         3
9    -4                        -1
10   -9                        -2
11  -14                        -3
12  -20                        -4
13  -21                        -5
```

Upvotes: 3 [selected_answer]<issue_comment>username_2: You can use `cumcount`:

```
s=df.groupby(df.x.gt(0).astype(int).diff().ne(0).cumsum()).cumcount().add(1)*df.x.gt(0).replace({True:1,False:-1})
s.iloc[0]=0
s

Out[645]:
0     0
1    -1
2     1
3     2
4    -1
5    -2
6     1
7     2
8     3
9    -1
10   -2
11   -3
12   -4
13   -5
dtype: int64
```

Upvotes: 2
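A sketch folding the two answers into one reusable per-column function (the helper name is mine, not from the answers); it reproduces the expected output above while staying vectorized, so it scales to wide frames via `apply`:

```python
import numpy as np
import pandas as pd

def days_since_sign_change(s):
    sign = np.sign(s).replace(0, 1)        # assumption: treat 0 as positive
    run = sign.ne(sign.shift()).cumsum()   # label each run of constant sign
    out = s.groupby(run).cumcount().add(1) * sign
    out.iloc[0] = 0                        # first row has no prior sign to compare
    return out

df = pd.DataFrame({'x': [1, -4, 5, 1, -2, -4, 1, 3, 2, -4, -5, -5, -6, -1]})
result = df.apply(days_since_sign_change).add_suffix('_days_since_sign_change')
print(df.join(result))
```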
<issue_start>username_0: Take, for instance, the method below.

```
<T extends MyInterface> void myFunction(Class<T>... types) {
    // Do some stuff that requires T to implement MyInterface
}
```

Now, given the method call below, assuming `MyClass1` and `MyClass2` both implement `MyInterface`:

```
myFunction(MyClass1.class, MyClass2.class)
```

I get the following error.

> Incompatible equality constraint: MyClass1 and MyClass2

How do I make this work? More specifically, how would one use a variadic parameter of class types implementing an interface in Java?<issue_comment>username_1: Having a single `T` means that `T` has one fixed value, which means that all of the `Class<T>...` parameters must be the exact same type. The compiler cannot infer `T == MyInterface` because `Class<MyClass1>` is not a subclass of `Class<MyInterface>`.

You want to allow each parameter to have a different type. That requires a different signature:

```
void myFunction(Class<? extends MyInterface>... types)
```

There's no need for `T` at all.

Upvotes: 2 <issue_comment>username_2: You've declared `T` to have an upper bound of `MyInterface`. By passing in `MyClass1.class` and `MyClass2.class`, the compiler must infer `MyInterface` for `T`. However, the type of the parameter `types` is `Class<T>...`, restricting what is passed in to `MyInterface.class` and no subtypes.

Depending on the "stuff" you're doing, you can place a wildcard upper bound on the type of `types` to get it to compile.

```
void myFunction(Class<? extends T>... types) {
```

Upvotes: 3 [selected_answer]<issue_comment>username_3: With the first parameter, the type variable `T` is set to `MyClass1`. `MyClass1` implements the interface, so the constraint `extends MyInterface` is fulfilled. Of course a `Class<MyClass2>` is not a `Class<MyClass1>`, and that's why you get the error.

Upvotes: 1
<issue_start>username_0: I have an XML file in S3 that contains the schema for my table called sample:

```
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
```

I already have a script like `sample.filter( x => x.contains("DATA_TYPE") || x.contains("ID"))`, and I would need to get each pair of values for (ID, DATA_TYPE), so the final output should be like:

```
("APPLICATION_ID","NUMBER"),("DESCRIPTIVE_FLEXFIELD_NAME","VARCHAR2"),etc.
```

Can anyone help me with this? Thanks!!!
2018/03/22
316
984
<issue_start>username_0: This sounds silly, but is there a way to create an empty array inside a Gremlin traversal? For the query below:

```
g.V().has('person','name', 'marko').project('a', 'b').by().by()
```

I want to project `b` as an empty array. I have tried:

```
g.V().has('person','name', 'marko').project('a', 'b').by().by(constant("").fold())
```

But `constant("").fold()` is not actually empty: `constant("").fold().count()` returns 1. This applies to `constant(null).fold()` as well.<issue_comment>username_1: Is this what you are looking for?

```
g.withSideEffect('x',[]).V().has('person','name','marko').project('a','b').by(select('x')).by('name')

==>[a:[],b:marko]
```

Upvotes: 3 <issue_comment>username_2: An empty array/collection would actually be a `fold()` of nothing. You'll get nothing if you filter everything, hence:

```
g.V().has('person','name','marko').
  project('a', 'b').
    by().
    by(__.not(identity()).fold())
```

Upvotes: 3 [selected_answer]
2018/03/22
538
2,217
<issue_start>username_0: I have my React Router V4 routes structured this way:

```
const isAuthenticated = () => {
  let hasToken = localStorage.getItem("jwtToken");
  if (hasToken) return true;
  return false;
};

const AuthenticatedRoute = ({ component: Component, ...rest }) =>
  isAuthenticated()
    ? <Route {...rest} component={Component} />
    : <Route {...rest} render={() => <Redirect to="/registration" />} />;

class App extends Component {
  render() {
    return (

    );
  }
}

export default App;
```

As you see in the code, if not authenticated I want to redirect to a server page. The page is another React application that manages user registration and is located on the server, but in another route tree: `/registration`

What I've tried with no success: all of the redirects go to a page in the current application.

What would be the solution for this?<issue_comment>username_1: I implemented it like so:

```
const AuthenticatedRoute = ({ component: Component, ...rest }) => (
  <Route {...rest} render={props => (
    isAuthenticated()
      ? (<Component {...props} />)
      : (window.location = "http://your_full_url")
  )} />
);
```

Upvotes: 2 [selected_answer]<issue_comment>username_2: I had a create-react-app project with react router, and the problem that when entering a path '/path/*' it loaded the page as if it were in the same react router, even though I had configuration for using another react project with an express server.

The problem was that the service worker was running in the background, and for any route inside '/' it was using cache. The solution for me was to modify the 'registerServiceWorker.js', so that when you are inside the path, you ignore the service worker.

I didn't get too deep in learning about service workers, but here is the code I used:

```
export default function register() {
  ...
  window.addEventListener('load', () => {
    const swUrl = `${process.env.PUBLIC_URL}/service-worker.js`;
    if (window.location.href.match(/.*\/path\/.*/)) {
      // Reload the page and unregister if in path
      navigator.serviceWorker.ready.then(registration => {
        registration.unregister().then(() => {
          window.location.reload();
        });
      });
    } else if (isLocalhost) {
  ...
```

When inside the path, it unsubscribes the service worker and reloads the page.

Upvotes: 2
2018/03/22
501
2,063
<issue_start>username_0: I want a redis cluster in which every redis instance can access the other instances' data, i.e. data should be replicated among themselves (without the master-slave concept).

I'm trying to set up a redis `ReplicaSet` in K8s. I tried setting `slave-read-only no` in the config, with which the pods are restarting continuously.

Update1
-------

I used the <https://github.com/kubernetes/examples/tree/master/staging/storage/redis> example to set up a cluster, which is master-slave + Redis Sentinel. But my application can't access sentinel to find out which redis instance is the master. That's why I don't want to use sentinel.
2018/03/22
926
3,504
<issue_start>username_0: I have a Json file in the following format:

```
"Adorable Kitten": {"layout": "normal","name": "<NAME>","manaCost": "{W}","cmc": 1,"colors": ["White"],"type": "Host Creature — Cat","types": ["Host","Creature"],"subtypes": ["Cat"],"text": "When this creature enters the battlefield, roll a six-sided die. You gain life equal to the result.","power": "1","toughness": "1","imageName": "adorable kitten","colorIdentity": ["W"]}
```

and I am using the following code to put it into a list:

```
using (StreamReader r = new StreamReader(filepath))
{
    string json = r.ReadToEnd();
    List<Item> items = JsonConvert.DeserializeObject<List<Item>>(json);
    textBox1.Text = items[0].name.ToString();
}

public class Item
{
    public string layout;
    public string name;
    public string manaCost;
    public string cmc;
    public string[] colors;
    public string type;
    public string[] types;
    public string[] subtypes;
    public string text;
    public string power;
    public string toughness;
    public string imageName;
    public string[] colorIdentity;
}
```

Visual Studio is telling me that the "Adorable Kitten" part of the Json cannot be deserialized. Normally I would get rid of that portion of the code, but it is an excerpt from a file that is nearly 40000 lines long, so removing that for each item would be impractical. Additionally, when I removed "Adorable Kitten" while troubleshooting, I got a similar error for "layout". The error says that I need to either put it into a Json array or change the deserialized type so that it is a normal .Net type. Can anyone point out what I'm doing wrong?<issue_comment>username_1: If your example is really what you are doing then you're simply deserializing to the wrong type. Right now your code would work for the following:

```
[{"layout": "normal","name": "<NAME>","manaCost": "{W}","cmc": 1,"colors": ["White"],"type": "Host Creature — Cat","types": ["Host","Creature"],"subtypes": ["Cat"],"text": "When this creature enters the battlefield, roll a six-sided die. You gain life equal to the result.","power": "1","toughness": "1","imageName": "adorable kitten","colorIdentity": ["W"]}]
```

Notice that it is a single JSON object inside a JSON array. This corresponds to the type you are deserializing to (`List<Item>`).

The example you posted of your JSON file isn't valid JSON (unless there are curly braces around the whole thing you left out), so you need to fix the file. If you really want there to be a list of Items in the JSON, then wrapping everything in a single array will be the correct way to represent that.

Upvotes: 3 [selected_answer]<issue_comment>username_2: First check that the JSON you're receiving is valid JSON; apparently the one you're receiving is wrong. You can check it at <https://jsonlint.com>

Second, create a model for the JSON; you can do it here <http://json2csharp.com>

```
public class AdorableKitten
{
    public string layout { get; set; }
    public string name { get; set; }
    public string manaCost { get; set; }
    public int cmc { get; set; }
    public List<string> colors { get; set; }
    public string type { get; set; }
    public List<string> types { get; set; }
    public List<string> subtypes { get; set; }
    public string text { get; set; }
    public string power { get; set; }
    public string toughness { get; set; }
    public string imageName { get; set; }
    public List<string> colorIdentity { get; set; }
}
```

Don't forget about the getters and setters on your model.

Upvotes: 0
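Since the file apparently maps card names to card objects, another option (a sketch, assuming the whole file is one JSON object keyed by card name) is to deserialize into a dictionary instead of a list, so none of the 40,000 lines need editing:

```csharp
using System.Collections.Generic;
using System.IO;
using Newtonsoft.Json;

string json = File.ReadAllText(filepath);
// Keys are card names like "Adorable Kitten"; values are the Item objects.
var cards = JsonConvert.DeserializeObject<Dictionary<string, Item>>(json);
textBox1.Text = cards["Adorable Kitten"].name;
```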
2018/03/22
784
2,656
<issue_start>username_0: I'm trying to create a CUBESET function in Excel, but I don't know how to filter it using multiple criteria **within the same dimension**. This is what I have so far working with one criteria. --- Example 1: ``` =CUBESET("ThisWorkbookDataModel","{[Facebook].[Bucket (C)].[All].[DPA]*[Facebook].[AudienceType (C)].children}","Bucket") ``` Example 2: *with date in cell C3* ``` =CUBESET("ThisWorkbookDataModel","{[Facebook].[Week End].[All].["&TEXT($C$3,"m/d/yyyy")&"]*[Facebook].[Campaign (C)].children}","Campaign Breakout - Weekly") ``` --- And this is what I've tried to do with two criteria, but with no luck. Example 1: ``` =CUBESET("ThisWorkbookDataModel","FILTER( [Facebook].[AudienceType (C)].children,[Facebook].[Week End].[All].["&TEXT($C$3,"m/d/yyyy")&"] && [Facebook].[Bucket (C)].[All].[DPABroadAudience])","Bucket") ``` Example 2: ``` =CUBESET("ThisWorkbookDataModel","FILTER( [Facebook].[AudienceType (C)].children,AND([Facebook].[Week End].[All].["&TEXT($C$3,"m/d/yyyy")&"],[Facebook].[Bucket (C)].[All].[DPABroadAudience]))","Bucket") ``` Example 3: ``` =CUBESET("ThisWorkbookDataModel","{[Facebook].[AudienceType (C)].children *[Facebook].[Week End].[All].["&TEXT($C$3,"m/d/yyyy")&"] * [Facebook].[Bucket (C)].[All].[DPABroadAudience]})","Bucket") ``` --- Btw - while I only need two criteria right now, it would be great to see a solution that would work for 2+ criteria.<issue_comment>username_1: Please try: ``` =CUBESET("ThisWorkbookDataModel","EXISTS( [Facebook].[AudienceType (C)].children,([Facebook].[Week End].[All].["&TEXT($C$3,"m/d/yyyy")&"], [Facebook].[Bucket (C)].[All].[DPABroadAudience]) )","Bucket") ``` Since both filters are in the same Facebook dimension the `EXISTS` function should work. Feel free to add additional filters from the Facebook dimension. If you need to filter by other dimensions (not the Facebook dimension) then you will need to do the following. Choose a measure which will determine which AudienceTypes exist with the filters. ``` =CUBESET("ThisWorkbookDataModel","NONEMPTY( [Facebook].[AudienceType (C)].children,([Measures].[Your Measure], [Facebook].[Week End].[All].["&TEXT($C$3,"m/d/yyyy")&"], [Facebook].[Bucket (C)].[All].[DPABroadAudience], [Other Dimension].[Column Z].[All].[Your Filter]) )","Bucket") ``` Upvotes: 4 [selected_answer]<issue_comment>username_2: I found this approach did the trick for me: ``` =CUBESET("myDataSource","{[Dimensions].[CostCentre].[New Retail],[Dimensions].[CostCentre].[Used Retail]}","My Caption") ``` The key part is fully qualifying each item, separated by a comma, within curly braces. Upvotes: 0
2018/03/22
1,256
4,694
<issue_start>username_0: I'm trying to alter a data source for a set of Reporting Services reports, but I can't get the PowerShell to work for them. I'd appreciate any help :)

```
$server = "http://My/ReportServer/"

$dataSource = Get-RsDataSource -Path "/Data Sources/NewDataSource" -ReportServerUri $server

$reports = Get-RsCatalogItems -RsFolder "/Testing/NewDataSOurce" -ReportServerUri $server -Recurse | Where-Object {$_.TypeName -eq "Report"}

$reports | ForEach-Object {

    $reportDataSource = Get-RsItemDataSource -RsItem $_.Path -ReportServerUri $server
    $reportPath = $_.Path

    if ($reportDataSource.Name -eq "OldDataSource") {
        Set-RsItemDataSource -RsItem $reportPath -DataSource $dataSource -ReportServerUri $server
    }
}
```<issue_comment>username_1: I wrote a function to do what you are talking about for setting data sources. Here is what I have... Unfortunately I don't have an SSRS instance any longer. The full script / module is in a gist on my GitHub account. I'll paste my gist URLs at the bottom of this thread.

The function that I'm pulling the snippet out of is called Deploy-NativeSSRS. I used this module plus a driver script to push items that had been checked out of TFS, so they could in turn be parsed and then pushed to SSRS during CI activities.

```
$reports = New-ReportObject -files (Get-ChildItem -Path $reportPath -Filter $reportExtension)

foreach($report in (($reports | Where-Object{$_.datasourcename -eq $datasourceName}).filename))
{
    $fileExt = $reportExtension.trim('*')
    $status = Set-SSRSDataSourceInfoNative -ReportName ($report.trim($fileext)) -reportPath $documentLibrary -DataSourceName $datasourceName -DataSourcePath "$dataSourceTarget/$datasourceName" -reportWebService $webservice
    write-output "The following $report datasource was updated to $datasourcename"
}

function set-SSRSDataSourceInfoNative
{
    param
    (
        [parameter(mandatory)]
        [string]$Reportname, #with no extension SSRS has no name for the file in native mode
        [parameter(mandatory)]
        [string]$reportPath,
        [parameter(mandatory)]
        [string]$DataSourceName,
        [parameter(mandatory)]
        [string]$DataSourcePath,
        [parameter(mandatory)]
        [uri]$reportWebService,
        [System.Management.Automation.PSCredential]$Credentials
    )

    if ($Credentials)
    {$reportProxy = new-webserviceproxy -uri $reportWebService -Credential $credentials -namespace 'SSRSProxy' -class 'ReportService2010'}
    else
    {$reportProxy = new-webserviceproxy -uri $reportWebService -UseDefaultCredential -namespace 'SSRSProxy' -class 'ReportService2010'}
    $f = $ReportName.ToLower()
    try
    {
        $dataSources = $reportProxy.GetItemDataSources("$reportpath/$reportname")
    }
    catch
    {
        "Error was $_"
        $line = $_.InvocationInfo.ScriptLineNumber
        "Error was in Line $line"
        "ReportName: $reportname"
        "ReportPath: $reportpath"
    }
    $proxyNameSpace = $dataSources.gettype().Namespace
    $dc = $reportProxy.GetDataSourceContents($DataSourcePath)
    if ($dc)
    {
        $d = $dataSources | Where-Object {$_.name -like $DataSourceName }
        $newDataSource = New-Object ("$proxyNameSpace.DataSource")
        $newDataSource.Name = $datasourcename
        $newDataSource.Item = New-Object ("$proxyNamespace.DataSourceReference")
        $newDataSource.Item.Reference = $DatasourcePath
        $d.item = $newDataSource.item
        $reportProxy.SetItemDataSources("$reportpath/$f", $d)
        $set = ($reportproxy.GetItemDataSources("$reportPath/$f")).name
        write-verbose "$reportname set to data source $set"
        $returnobj = 'success'
    }
    $returnobj
}
```

<https://gist.github.com/crshnbrn66/40c6be436e7c2e69b4de5cd625ce0902>

<https://gist.github.com/crshnbrn66/b10e43ef0dadf7f4eeae620428b2cdd9>

Upvotes: 1
<issue_comment>username_2: Here's something that works with the Power BI Report Server REST API:

```
[string] $uri = "https://xxx/Reports"
$session = New-RsRestSession -ReportPortalUri $uri

$reports = Get-RsRestFolderContent -WebSession $session -RsFolder / -Recurse | Where-Object {$_.Type -eq "PowerBIReport"}

$reports | ForEach-Object {

    $dataSources = Get-RsRestItemDataSource -WebSession $session -RsItem $_.Path | Where-Object {$_.ConnectionString -eq "yyy;zzz"}

    #$dataSources[0].DataModelDataSource.AuthType = 'Windows'
    $dataSources[0].DataModelDataSource.Username = 'domain\user'
    $dataSources[0].DataModelDataSource.Secret = '<PASSWORD>'

    Set-RsRestItemDataSource -WebSession $session -RsItem $_.Path -RsItemType 'PowerBIReport' -DataSources $dataSources
}
```

Upvotes: 0
2018/03/22
598
2,101
<issue_start>username_0: I'm trying to progress through the Spotify developer API tutorial but when I try to access the user login page I get this error. I've triple checked that the URI in the code matches the one on MyApplications page but it still won't work. Here's the script, ``` var express = require('express'); // Express web server framework var request = require('request'); // "Request" library var querystring = require('querystring'); var cookieParser = require('cookie-parser'); var client_id = id; var client_secret = secret; var redirect_uri = "http://localhost:8888/callback"; ``` [Image of error code and MyApplications page](https://i.stack.imgur.com/nEzeo.png) I'm not sure what I'm doing wrong but I've been going over it for hours now, can someone help?<issue_comment>username_1: You need your redirect URIs to be *exactly* the same. The URI you have registered in the Dashboard is <http://localhost:8888/callback/> with a trailing slash. The version you use in your code does not have the trailing slash. Just change your redirect\_uri to be: ``` var redirect_uri = "http://localhost:8888/callback/"; ``` You can verify that this works with this example authorize URL I made: <https://accounts.spotify.com/en/authorize?client_id=df5c5a57b94a4817ae3ac4760c701983&redirect_uri=http:%2F%2Flocalhost:8888%2Fcallback%2F&scope=streaming%20user-read-birthdate%20user-read-private%20user-modify-playback-state&response_type=token&show_dialog=true> Upvotes: 4 <issue_comment>username_2: I just needed to restart my Node server! Steps to fix: 1. Ensure your redirect\_uri has a trailing slash after `callback`. Mine is: `http://localhost:8888/callback/` 2. Ensure your project in your [dashboard](https://developer.spotify.com/dashboard/applications) has the **EXACT** same URL as the one in step 1 under the 'redirect URI' section. Make sure to press the green 'ADD' button to the right and the 'SAVE' button at the bottom. 3. Save your file and **RESTART YOUR NODE SERVER**. this may seem trivial. But took me 30 minutes until I finally tried restarting it. Upvotes: 3
2018/03/22
849
3,080
<issue_start>username_0: I have code on my website that uses a SweetAlert2 popup to let users request songs:

```
$('#request-song').click(async function() {
    const { value: song } = await swal({
        title: "Request a Song (please note song request won't be played unless we are live)",
        input: 'text',
        inputPlaceholder: 'Enter Artist - Song Name',
    });

    if (song) {
        $.post("functions/request.php", {request: song}, function(data) {
            console.log(data);
        });
        swal({type: 'success', title: 'Success!'});
    }
});
```

But when I add another input it will only read the second one. How do I add another input so listeners can include their name/username for shoutouts?<issue_comment>username_1: Note your selector `#request-song`, which is an **ID** selector. An ID should be unique in a single web page, therefore the selector only returns the first matching element (I guess you are using jQuery).

To select multiple elements, try to use a class or another type of selector instead of an ID selector.

For more information about CSS selectors, take a look at the [MDN page](https://developer.mozilla.org/en-US/docs/Web/CSS/CSS_Selectors).

Upvotes: 0 <issue_comment>username_2: From [SweetAlert2's documentation](https://sweetalert2.github.io/#multiple-inputs):

> 
> Multiple inputs aren't supported, you can achieve them by using html and preConfirm parameters. Inside the preConfirm() function you can return (or, if async, resolve with) the custom result:
> 
> 

```
const {value: formValues} = await swal({
  title: 'Multiple inputs',
  html:
    '<input id="swal-input1" class="swal2-input">' +
    '<input id="swal-input2" class="swal2-input">',
  focusConfirm: false,
  preConfirm: () => {
    return [
      document.getElementById('swal-input1').value,
      document.getElementById('swal-input2').value
    ]
  }
})

if (formValues) {
  swal(JSON.stringify(formValues))
}
```

Adapting this to your code:

```js
$("#request-song").click(async function() {
  const {value: songRequest} = await swal({
    title: 'Request a Song',
    html:
      '<input id="song" class="swal2-input">' +
      '<input id="listener" class="swal2-input">',
    preConfirm: () => ({
      song: $('#song').val(),
      listener: $('#listener').val()
    })
  });

  if (songRequest) swal(`${songRequest.listener} requests ${songRequest.song}`);
});
```

```html
<button id="request-song">Request a Song</button>
```

The `preConfirm` property contains a function that returns the object that will eventually be returned from your `swal` call. Before, `swal().value` was a string: the name of the requested song. Now, it's an object: `{song: 'the song I want to hear', listener: 'me'}`. You can pass this object to `$.post` and modify `request.php` to handle it:

```
if (songRequest) {
    $.post("functions/request.php", {request: songRequest}, function(data) {
        console.log(data);
    });
    swal({type: 'success', title: 'Success!'});
}
```

Or, if you don't want to modify the PHP, you can convert the object to a string and pass it that way:

```
if (songRequest) {
    $.post("functions/request.php", {request: JSON.stringify(songRequest)}, function(data) {
        console.log(data);
    });
    swal({type: 'success', title: 'Success!'});
}
```

Upvotes: 3
2018/03/22
877
3,147
<issue_start>username_0: I have one table with two fields that I would like to update, where each field has different conditions, as follows.

```
one table: TableA

first field1: QtyToGenerate1

if QtyToGenerate1 = 0 then
    QtyToGenerate1 = QtyOrdered
Else
    QtyToGenerate1 = QtyOrdered - QtyGenerated1

Second Field2: QtyToGenerate2

if QtyToGenerate2 = 0 then
    QtyToGenerate2 = QtyOrdered
Else
    QtyToGenerate2 = QtyOrdered - QtyGenerated2
```

Knowing there are a lot of ways to do this, I would appreciate it if you could give me the 'update' and the 'if' clause together, because this is what I was trying to do. Any other simpler logic will also be appreciated, and I would not mind a few hints on the thinking method. Thanks
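A minimal sketch of the usual single-statement approach, folding each if/else into a CASE expression (table and column names taken from the question; standard SQL, so it should work in most dialects):

```sql
UPDATE TableA
SET QtyToGenerate1 = CASE WHEN QtyToGenerate1 = 0
                          THEN QtyOrdered
                          ELSE QtyOrdered - QtyGenerated1 END,
    QtyToGenerate2 = CASE WHEN QtyToGenerate2 = 0
                          THEN QtyOrdered
                          ELSE QtyOrdered - QtyGenerated2 END;
```

Each CASE only reads columns that the other assignment doesn't change, so the two updates can safely live in one statement.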
2018/03/22
956
2,606
<issue_start>username_0: So I'm gathering 3-month intervals using Date_Year and Date_month columns, but having some issues.

For example, the first group is Oct-Dec 2016, which is fine. The second group is the one I'm having trouble with: Nov-Dec 2016 along with Jan 2017.

Here is the sample code:

```
SELECT
[2016_Oct_Dec] = SUM(CASE WHEN date_year = 2016 AND date_month IN (10,11,12) THEN Sales_amt END )
, [2016/17_Nov_Jan] = SUM(CASE WHEN (date_year = 2016 AND date_month IN (11,12)) AND ((date_year = 2017 AND date_month = 1)) THEN Sales_amt END )
From Sales
```

I tried to create two conditions (Nov-Dec 2016) and (Jan 2017), but no luck. Any help appreciated.<issue_comment>username_1: `date_year` cannot be `2016` AND `2017`. Try using an `OR` instead of `AND`:

```
SUM(CASE WHEN (date_year = 2016 AND date_month IN (11,12))
           OR (date_year = 2017 AND date_month = 1) THEN Sales_amt END )
```

Here is a [[DEMO]](http://rextester.com/SZYHL54400)

---

Test setup:

```
WITH Sales AS(
SELECT *
FROM (VALUES
('2016','08',100),
('2016','10',100), --Capture in 1st CASE
('2016','11',100), --Capture in 1st CASE --Capture in 2nd CASE
('2016','12',100), --Capture in 1st CASE --Capture in 2nd CASE
('2017','1',100),  --Capture in 2nd CASE
('2017','1',100),  --Capture in 2nd CASE
('2017','1',100),  --Capture in 2nd CASE
('2018','2',100)) T(date_year, date_month, sales_amt))
--Expect 300, 500

--Instead use OR
SELECT
[2016_Oct_Dec] = SUM(CASE WHEN date_year = 2016 AND date_month IN (10,11,12) THEN Sales_amt END )
, [2016/17_Nov_Jan] = SUM(CASE WHEN (date_year = 2016 AND date_month IN (11,12)) OR ((date_year = 2017 AND date_month = 1)) THEN Sales_amt END )
From Sales
```

[![enter image description here](https://i.stack.imgur.com/2TwTx.png)](https://i.stack.imgur.com/2TwTx.png)

Upvotes: 3 [selected_answer]<issue_comment>username_2: 

```
SELECT
[2016_Oct_Dec] = SUM(CASE WHEN date_year = 2016 AND date_month IN (10,11,12) THEN Sales_amt END )
, [2016/17_Nov_Jan] = SUM(CASE WHEN (date_year = 2016 AND date_month IN (11,12)) OR ((date_year = 2017 AND date_month = 1)) THEN Sales_amt END )
From Sales
```

`date_year` cannot be 2016 and 2017. It should be 2016 or 2017 in [2016/17_Nov_Jan].

Upvotes: 1
2018/03/22
672
2,559
<issue_start>username_0: I have this code in Angular:

```
this.provider.getUids()
  .then(uidObservable =>
    this.uidsSubscription$ = uidObservable.subscribe((uids: string[]) => {
      console.log('res', uids); // array of uids
      const uidsSet = new Set(uids); // deletes duplicates
      uidsSet.forEach(uid => {
        this.userObservables.push(this.otherProvider.getSharedUserData(uid)); // this is the code I need to change
      });
    }))
```

When I trigger the observable to emit a new value, **this.userObservables** will contain duplicate values.

### Example

Suppose I have only one uid: **AAAAA**

Then **this.userObservables** contains only an observable for **AAAAA**

Now, when I trigger the observable to emit a new value **BBBBB**,

**this.userObservables** will contain observables for

**AAAAA**

**AAAAA**

**BBBBB**

### Question

Is there a way to prevent this sort of behavior? I want to have an array of observables that emit like

**AAAAA**

**BBBBB**<issue_comment>username_1: I found a solution, although not the best from my point of view. I reset the array of observables at the beginning of the subscribe method.

```
this.provider.getUids()
  .then(uidObservable =>
    this.uidsSubscription$ = uidObservable.subscribe((uids: string[]) => {
      console.log('res', uids); // array of uids
      const uidsSet = new Set(uids); // deletes duplicates
      this.userObservables = [];
      uidsSet.forEach(uid => {
        this.userObservables.push(this._ShareListProvider.getSharedUserData(uid)); // this is the code I need to change
      });
    }))
```

Upvotes: 0 <issue_comment>username_2: You can do it like this:

```
this.uidsSubscription$ = Observable.fromPromise(this.provider.getUids())
  .flatMap(obs$ => obs$)
  .map(uids => [...new Set(uids)])
  .map(uids => uids.map(uid => this.otherProvider.getSharedUserData(uid)))
  .subscribe(observables => this.userObservables = observables);
```

However, `provider.getUids` returning a `Promise<Observable<string[]>>` sounds really weird and should probably be redesigned. Also note that it's unusual to suffix a subscription with `$`. The typical convention is to suffix observables that way, not subscriptions.

And, lastly, it *also* strikes me as odd that this observable only assigns an array of observables to some array. You'd typically want to continue the chain here so you subscribe to them in whatever fashion you need. Overall, the design of this code looks strange, but without more information this is just an XY problem.

Upvotes: 3 [selected_answer]
2018/03/22
1,156
4,055
<issue_start>username_0: In my Dockerfile I need to use command substition to add some environment variables. I want to set ``` ENV PYTHONPATH /usr/local/$(python3 -c 'from distutils import sysconfig; print(sysconfig.get_python_lib())') ``` but it doesn't work. The result is ``` foo@bar:~$ echo $PYTHONPATH /usr/local/$(python3 -c from distutils import sysconfig; print(sysconfig.get_python_lib())) ``` What's wrong?<issue_comment>username_1: What went wrong --------------- The `$( ... )` [command substitution](http://www.tldp.org/LDP/abs/html/commandsub.html) you attempted is for Bash, whereas the Dockerfile is not Bash. So docker doesn't know what to do with that, it's just plain text to docker, docker just spews out what you wrote as-is. Recommendation -------------- To avoid hard-coding values into a Dockerfile, and instead, to dynamically change a setting or custom variable as `PYTHONPATH` during the build, perhaps the `ARG ...` , `--build-arg` docker features might be most helpful, in combination with `ENV ...` to ensure it persists. Within your Dockerfile: ``` ARG PYTHON_PATH_ARG ENV PYTHONPATH ${PYTHON_PATH_ARG} ``` In Bash where you build your container: ```bash python_path="/usr/local$(python3 -c 'from distutils import sysconfig; print(sysconfig.get_python_lib())')" docker build --build-arg PYTHON_PATH_ARG=$python_path . ``` Explanation ----------- According to documentation, [`ARG`](https://docs.docker.com/engine/reference/builder/#arg): > > The `ARG` instruction defines a variable that users can pass at build-time to the builder with the docker build command using the `--build-arg =` flag. > > > So, in Bash we first: ``` python_path="/usr/local$(python3 -c 'from distutils import sysconfig; print(sysconfig.get_python_lib())')" ``` * `$(...)` Bash command substitution is used to dynamically put together a Python path value * this value is stored temporarily in a Bash variable `$python_path` for clarity ``` docker build --build-arg PYTHON_PATH_ARG=$python_path . ``` * Bash variable `$python_path` value is passed to docker's `--build-arg PYTHON_PATH_ARG` Within the Dockerfile: ``` ARG PYTHON_PATH_ARG ``` * so `PYTHON_PATH_ARG` stores the value from `--build-arg PYTHON_PATH_ARG...` `ARG` variables are not equivalent to `ENV` variables, so we couldn't merely do `ARG PYTHONPATH` and be done with it. According to documentation about [Using arg variables](https://docs.docker.com/engine/reference/builder/#using-arg-variables): > > `ARG` variables are not persisted into the built image as `ENV` variables are. > > > So finally: ``` ENV PYTHONPATH ${PYTHON_PATH_ARG} ``` * We use Dockerfile's `${...}` convention to get the value of `PYTHON_PATH_ARG`, and save it to your originally named `PYTHONPATH` environment variable Differences from original code ------------------------------ You originally wrote: ``` ENV PYTHONPATH /usr/local/$(python3 -c 'from distutils import sysconfig; print(sysconfig.get_python_lib())') ``` I re-wrote the Python path finding portion as a Bash command, and tested on my machine: ```bash $ python_path="/usr/local/$(python3 -c 'from distutils import sysconfig; print(sysconfig.get_python_lib())')" $ echo $python_path /usr/local//usr/lib/python3/dist-packages ``` Notice there is a double forward slash `... local//usr ...` , not sure if that will break anything for you, depends on how you use it in your code. 
Instead, I changed it to: ```bash $ python_path="/usr/local$(python3 -c 'from distutils import sysconfig; print(sysconfig.get_python_lib())')" ``` Result: ``` $ echo $python_path /usr/local/usr/lib/python3/dist-packages ``` So this new code will have no double forward slashes. Upvotes: 4 [selected_answer]<issue_comment>username_2: You should use `ARG` if possible. But sometimes you really need to use command substitution for a dynamic variable. As long as you put all the commands in the same `RUN` statement, then you can still access the value. ``` RUN foo=$(date) && \ echo $foo ``` Upvotes: 2
2018/03/22
1,584
3,082
<issue_start>username_0: I need to create a function of five variables,

* a (multiplier)
* n (sample size)
* c (increment with default 0)
* m (modulus)
* x0 (initial seed value)

I need to generate a sequence of random numbers with the equation

* x_i = (a * x_{i-1} + c) mod m, i = 1, 2, ..., n

as in the vector x = (x_1, ..., x_n).

My attempt:

```
my.unif1 <- function(n, a, c = 0, m, x = x[0]) {
  while(n > 0) {
    x[n] <- (a*x[n-1]+c)%%m
  }
}
```<issue_comment>username_1: It sounds like you want to learn more about Linear Congruential Generators. Here's a resource that will probably help you solve your code problem:

<https://qualityandinnovation.com/2015/03/03/a-linear-congruential-generator-lcg-in-r/>

```
lcg <- function(a, c, m, run.length, seed) {
  x <- rep(0, run.length)
  x[1] <- seed
  for (i in 1:(run.length-1)) {
    x[i+1] <- (a * x[i] + c) %% m
  }
  U <- x/m # scale all of the x's to
           # produce uniformly distributed
           # random numbers between [0,1)
  return(list(x=x, U=U))
}

> z <- lcg(6,7,23,20,5)
> z
$x
 [1]  5 14 22  1 13 16 11  4  8  9 15  5 14 22  1 13 16 11
[19]  4  8

$U
 [1] 0.21739130 0.60869565 0.95652174 0.04347826 0.56521739
 [6] 0.69565217 0.47826087 0.17391304 0.34782609 0.39130435
[11] 0.65217391 0.21739130 0.60869565 0.95652174 0.04347826
[16] 0.56521739 0.69565217 0.47826087 0.17391304 0.34782609
```

Upvotes: 3 [selected_answer]<issue_comment>username_2: This could help:

```
my.fct.1 <- function(x, multiplier, increment, modulus){
  increment <- ifelse(missing(increment), 0, increment) # setting the default increment to 0
  newval <- (multiplier*x + increment) %% modulus
  return(newval)
}

my.fct.2 <- function(x0, n, multiplier, increment, modulus){
  if(n == 1){
    val <- my.fct.1(x = x0, multiplier = multiplier, increment = increment, modulus = modulus)
    vec <- c(x0, val)
    return(vec)
  }

  if(n > 1){
    vec <- my.fct.2(x0 = x0, n = n-1, multiplier = multiplier, increment = increment, modulus = modulus)
    val <- vec[length(vec)]
    newval <- my.fct.1(x = val, multiplier = multiplier, increment = increment, modulus = modulus)
    newvec <- c(vec, newval)
    return(newvec)
  }
}
```

`my.fct.2` does the required; the arguments are pretty much self-explanatory. Watch out though, because it is a recursive function (which can affect speed among other things).

And here are some examples of such generated sequences:

```
> my.fct.2(3, 9, 7, -1, 4)
 [1] 3 0 3 0 3 0 3 0 3 0

> my.fct.2(1, 9, 2, 1, 13)
 [1]  1  3  7  2  5 11 10  8  4  9

> my.fct.2(0, 17, 5, 3, 7)
 [1] 0 3 4 2 6 5 0 3 4 2 6 5 0 3 4 2 6 5

# and here the arguments set to cross check it against @username_1's answer
> my.fct.2(5, 20, 6, 7, 23)
 [1]  5 14 22  1 13 16 11  4  8  9 15  5 14 22  1 13 16 11  4  8  9

U <- my.fct.2(5, 20, 6, 7, 23)/23
> U
 [1] 0.21739130 0.60869565 0.95652174 0.04347826 0.56521739 0.69565217 0.47826087 0.17391304
 [9] 0.34782609 0.39130435 0.65217391 0.21739130 0.60869565 0.95652174 0.04347826 0.56521739
[17] 0.69565217 0.47826087 0.17391304 0.34782609 0.39130435
```

Upvotes: 1
2018/03/22
1,121
3,634
<issue_start>username_0: I am trying to convert airport **GeoCoordinate** data i.e. [**IATA Code**, **latitude**, **longitude**] to **Gremlin** **Vertex** in an **Azure Cosmos DB Graph API** project. **Vertex** conversion is mainly done through an **Asp.Net Core 2.0 console application** using **CSVReader** to stream and convert data from a **airport.dat** (csv) file. This process involves converting over 6,000 lines... So for example, in original **airport.dat** source file, the **Montreal Pierre Elliott Trudeau International Airport** would be listed using a similar model as below: ``` 1,"Montreal / Pierre Elliott Trudeau International Airport","Montreal","Canada","YUL","CYUL",45.4706001282,-73.7407989502,118,-5,"A","America/Toronto","airport","OurAirports" ``` Then if I define a **Gremlin** **Vertex** creation query in my cod as followed: ``` var gremlinQuery = $"g.addV('airport').property('id', \"{code}\").property('latitude', {lat}).property('longitude', {lng})"; ``` then when the console application is launched, the **Vertex** conversion process would be generated successfully in exact similar fashion: ``` 1 g.addV('airport').property('id', "YUL").property('latitude', 45.4706001282).property('longitude', -73.7407989502) ``` Note that in the case of **Montreal Airport** (which is located in N.A not in the Far East...), the **longitude** is properly formatted with **minus** (**-**) **prefix**, though this seems to be lost underway when doing a query on Azure Portal. ``` { "id": "YUL", "label": "airport", "type": "vertex", "properties": { "latitude": [ { "id": "13a30a4f-42cc-4413-b201-11efe7fa4dbb", "value": 45.4706001282 } ], "longitude": [ { "id": "74554911-07e5-4766-935a-571eedc21ca3", "value": 73.7407989502 <---- //Should be displayed as -73.7407989502 } ] } ``` This is a bit awkward. If anyone has encountered a similar issue and was able to fix it, then I'm fully open to suggestion. Thanks<issue_comment>username_1: According to your description, I just executed Gremlin query on my side and I could retrieve the inserted Vertex as follows: [![enter image description here](https://i.stack.imgur.com/DoLr6.png)](https://i.stack.imgur.com/DoLr6.png) Then, I just queried on Azure Portal and retrieved the record as follows: [![enter image description here](https://i.stack.imgur.com/nAjzm.png)](https://i.stack.imgur.com/nAjzm.png) Per my understanding, you need to check the execution of your code and verify the response of your query to narrow down this issue. Upvotes: 1 <issue_comment>username_2: Thank you for your suggestion, though problem has now been solved in my case. What was previously suggested as a working answer scenario [and voted 1...] has long been settled in case of **.Net 4.5.2** [& .**Net 4.6.1**] version used in combination with **Microsoft.Azure.Graph 0.2.4 -preview**. The issue of my question didn't really concern that and may have been a bit more subtle... Perhaps I should have put a bit more emphasis on the fact that the issue was mainly related to **Microsoft.Azure.Graph 0.3.1 -preview** used in **Core 2.0** + **dotnet CLI** scenario. According to following Graph - **Multiple issues with parsing of numeric constants in the graph gremlin query #438** comments on **Github**, <https://github.com/Azure/azure-documentdb-dotnet/issues/438> there are indeed some fair reasons to believe that the issue was a bug with **Microsoft.Azure.Graph 0.3.1 -preview**. I chose to use **Gremlin.Net** approach instead and managed to get the proper result I expected. 
Upvotes: 1 [selected_answer]
2018/03/22
522
1,915
<issue_start>username_0: There is an example from the documentation, but it is not clear how to use it in practice:

```
class Result<T> {
    constructor(public wasSuccessful: boolean, public error: T) { }

    public clone(): Result<T> {
        ...
    }
}

let r1 = new Result(false, 'error: 42');
```
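A short sketch of how the generic parameter plays out in practice (the `clone` body is my own filler, since the original elides it):

```typescript
class Result<T> {
  constructor(public wasSuccessful: boolean, public error: T) {}

  public clone(): Result<T> {
    // A shallow copy that keeps the same type parameter.
    return new Result<T>(this.wasSuccessful, this.error);
  }
}

// T can be written explicitly or inferred from the constructor argument:
const r1 = new Result<string>(false, 'error: 42'); // Result<string>
const r2 = new Result(false, 404);                 // inferred as Result<number>
const copy: Result<string> = r1.clone();           // clone preserves T
```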
2018/03/22
510
2,236
<issue_start>username_0: Goal - send data to Google Analytics (don't care about Firebase Analytics).

In my app I am replacing the GTM SDK (v3) with a newer version, the Firebase SDK (v5), and wondering if I can pass an object as an event parameter, i.e.

```
[FIRAnalytics logEventWithName:@"share_image" parameters:@{
    @"mediaItem": @{
        @"title": title,
        @"url": url
    }
}];
```

I need *mediaItem* to be an object (dictionary) with two keys (*title* and *url*, which are both strings).

Now when I pass this, I can access the object and its properties in **GTM** using something like *{{mediaItemDataLayerVar}}.title*; however, the debug console of my app throws a warning that I should only send NSNumber or NSString as event parameters. The documentation page says the same.

While it obviously works (passing an *NSDictionary*), the warning gets me worried, as this may stop working in future releases.

Does anyone have a similar problem? How have you dealt with it?<issue_comment>username_1: The Firebase Analytics SDK won't accept data structures other than string or number. They might pass through to GTM, but Analytics won't log such parameters, and thus you won't see them in the dashboard.

You can log a more complicated data structure in Analytics (see [enhanced ecommerce](https://developers.google.com/tag-manager/ios/v5/enhanced-ecommerce)), which allows you to pass an **array** of parameters if that's what you want. See the link for examples.

Upvotes: 0 <issue_comment>username_2: Had the same situation. After a little digging - passing dictionaries as event key parameters to GTM via the Firebase+GTM SDK works without any issues. Any attempt to get more information from Google, or a roadmap on what we can expect in the future as a result of merging the Firebase and GTM SDKs, wasn't successful.

Basically, you can do that at your own risk, but there is a chance that this will be officially supported in the next versions of the Firebase+GTM SDK, or replaced with a similar approach to the one used for Enhanced ECommerce.

While dictionaries work - I could not get arrays to work.

Upvotes: 2 [selected_answer]
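If the nested shape isn't strictly required on the GTM side, a safer sketch is to flatten the dictionary into plain string parameters, which stays within the documented NSString/NSNumber contract (the flattened key names here are my own, not from the question or the SDK):

```objectivec
// Only NSString values, so no "unsupported parameter type" warning is expected.
[FIRAnalytics logEventWithName:@"share_image" parameters:@{
    @"media_item_title": title,
    @"media_item_url": url
}];
```

The trade-off is that any GTM variable reading `{{mediaItemDataLayerVar}}.title` would need to be repointed at the flattened keys.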
2018/03/22
671
2,385
<issue_start>username_0: `webpack-serve`: <https://github.com/webpack-contrib/webpack-serve>

`webpack-dev-server`: <https://github.com/webpack/webpack-dev-server>

They both state they're a dev server for webpack. How are they different?<issue_comment>username_1: I have not used webpack-serve, but from looking at the documents my initial take is that it is a relatively new repo (7 releases against webpack-dev-server's 70+) and the focus is to use the browser's native WebSocket to fetch assets instead of a polling mechanism. My guess is that this would make webpack-serve work better and faster in dev mode than webpack-dev-server.

This made me curious about it. I will give it a try on my current setup and will try to get back with findings.

Upvotes: 1 <issue_comment>username_2: 

```
+-----------------+--------------------------+----------------------+
|                 | webpack-dev-server       | webpack-serve        |
+-----------------+--------------------------+----------------------+
| Initial release | 23 Dec 2014              | 12 Feb 2018          |
| Total releases  | 74                       | 7                    |
| Github stars    | 3449                     | 231                  |
| Lines of code   | 28301                    | 16075                |
| Under the hood  | Express.js (22047 lines) | Koa.js (8913 lines)  |
| API             | not aligned              | API first            |
| Mode            | active (support, update) | deprecated (Mar 2018)|
| Total           | works slower but supports| fast alternative     |
|                 | old browsers             |                      |
+-----------------+--------------------------+----------------------+
```

**Sources**

* Official repos
* [Webpack-serve Up a Side of Whoop-Ass](http://shellscape.org/2018/02/12/webpack-serve-up-whoopass)
* [I investigated webpack-serve which seems to be the successor of webpack-dev-server](http://yami-beta.hateblo.jp/entry/2018/03/07/191730)
* [Gloc](https://chrome.google.com/webstore/detail/github-gloc/kaodcnpebhdbpaeeemkiobcokcnegdki) (Chrome extension for line counting. All strings are considered)
* [Total section](https://github.com/webpack/webpack-dev-server#project-in-maintenance)

<https://www.reddit.com/r/javascript/comments/7pg2rq/webpackdevserver_is_now_in_maintenance_mode/dsgwxjd/?st=jf286v37&sh=0336089c>

Upvotes: 5 [selected_answer]
2018/03/22
850
3,391
<issue_start>username_0: I currently have a dataframe in R that was cleaned in order to get informative parts of some URLS. It refuses to print the first element when I request to print the whole dataframe. The dataframe looks like this: ``` print(my_data[1,]) #provided for clarity [1] c("https: 1073 Levels: ... Zloc-60-Qt-WeatherShield-Storage-Box-Clear ``` its a very long list.... ``` print(mydata) 549818028 311 Quilted- Northern-Ultra-Plush-24-Double-Rolls-Toilet-Paper-Bath-Tissue 312 49883627 313 Great-Value-Bath-Tissue-Ultra-Strong-24-Double-Rolls 314 910596048 315 Quilted- Northern-Ultra-Soft-Strong-Bathroom-Tissue-2-Ply-White-12-rolls 316 170741025 317 Great-Value-1000-Sheets-Bath-Tissue-12-Rolls 318 32631328 319 Great-Value-Bath-Tissue-Everyday-Soft-24-Double-Rolls 320 118420428 321 Great-Value-Bath-Tissue-Ultra-Strong-12-Double-Rolls 322 935578946 ``` Things seem to be ok but when I print any element I have this extra snippet of text on the bottom: ``` > print(jacks_new_list[315,]) [1] Quilted-Northern-Ultra-Soft-Strong-Bathroom-Tissue-2-Ply-White- 12-rolls 1073 Levels: ... Zloc-60-Qt-WeatherShield-Storage-Box-Clear ``` I'm trying to remove this snippet that now appears on each element line "1073 Levels: ... Zloc-60-Qt-WeatherShield-Storage-Box-Clear" I've tried to get rid of it using grep with no success so far. I also can't decide if there's actually a new line there or not because I'm not seeing one actually written in the text anywhere. Eventually this will be a two column list of titles with their corresponding number. So the numbers need to be legible and junk free so they can be used later. FYI the three digit numbers are the indices, and are not part of the string info in the dataframe element<issue_comment>username_1: You're printing a factor, this is the normal output. Here is an example: ``` iris$Species <- as.factor(iris$Species) print(iris$Species[1]) ``` Convert the variable factor to character: ``` iris$Species <- as.character(iris$Species) print(iris$Species[1]) ``` The real question is why you even care what the printout looks like. Upvotes: 2 [selected_answer]<issue_comment>username_2: Do this to remove the extra information about levels of factor: ``` print(as.character(jacks_new_list[315,])) ``` Upvotes: 0
2018/03/22
1,223
4,686
<issue_start>username_0: I've built a modal in ReactJS which needs to be triggered by clicking an `<a>` to add the `.active` class to the modal. Once the class is active as `newsletterModal active`, the `onClick={this.toggle.bind(this)}` is successful in removing the `active` class, but how can I add the `active` class from within my `footer`?

In Newsletter.js

```
import React from 'react'
import PropTypes from 'prop-types';
import Link from 'gatsby-link';

class Newsletter extends React.Component {
  constructor(props) {
    super(props);
    this.state = {addClass: false}
  }

  toggle() {
    this.setState({addClass: !this.state.addClass});
  }

  render() {
    let toggleModal = ["newsletterModal"];
    if(this.state.addClass) {
      toggleModal.push('active');
    }
    return(
      <div className={toggleModal.join(' ')}>
        <a href="javascript:;" onClick={this.toggle.bind(this)}>...</a>
      </div>
    );
  }
}

export default Newsletter
```

In Footer.js (where I want the link to add the class set within Newsletter.js)

```
import React from 'react'
import PropTypes from 'prop-types';
import Link from 'gatsby-link'

const Footer = (props) => (
  <a href="javascript:;">Newsletter</a>
)

export default Footer
```

index.js where both are called to the template - please note that I have a class being added to show the menu too from within this file. Perhaps it is possible to combine my `is-menu-visible` with `newsletterModal active`?

```
import React from 'react'
import PropTypes from 'prop-types';
import Helmet from 'react-helmet'
import { Link, withPrefix } from 'gatsby-link'

import '../assets/scss/main.scss'

import Header from '../components/global/Header'
import Menu from '../components/global/Menu'
import Newsletter from '../components/global/Newsletter'
import Footer from '../components/global/Footer'

class Template extends React.Component {
  constructor(props) {
    super(props)
    this.state = {
      isMenuVisible: false,
      loading: 'is-loading'
    }
    this.handleToggleMenu = this.handleToggleMenu.bind(this)
  }

  componentDidMount () {
    this.timeoutId = setTimeout(() => {
      this.setState({loading: ''});
    }, 100);
  }

  componentWillUnmount () {
    if (this.timeoutId) {
      clearTimeout(this.timeoutId);
    }
  }

  handleToggleMenu() {
    this.setState({
      isMenuVisible: !this.state.isMenuVisible
    })
  }

  render() {
    const { children } = this.props

    return (
      {children()}
    )
  }
}

Template.propTypes = {
  children: PropTypes.func
}

export default Template
```<issue_comment>username_1: You need to pass the `toggle()` method to your component

```
import React from 'react'
import PropTypes from 'prop-types';
import Link from 'gatsby-link';
import Footer from './Footer';

class Newsletter extends React.Component {
  constructor(props) {
    super(props);
    this.state = {addClass: false}
  }

  toggle() {
    this.setState({addClass: !this.state.addClass});
  }

  render() {
    let toggleModal = ["newsletterModal"];
    if(this.state.addClass) {
      toggleModal.push('active');
    }
    return(
      <div className={toggleModal.join(' ')}>
        <Footer onClick={this.toggle.bind(this)} />
      </div>
    );
  }
}

export default Newsletter;
```

In your Footer.js file use the onClick function passed in as a prop

```
import React from 'react'
import PropTypes from 'prop-types';
import Link from 'gatsby-link'

const Footer = (props) => (
  <a href="javascript:;" onClick={props.onClick}>Newsletter</a>
)

export default Footer
```

Upvotes: 0 <issue_comment>username_2: It can be done by adding `showActive` state in the `Template` class.

Add a callback function in `Footer`:

```
const Footer = props => (
  <a href="javascript:;" onClick={props.onClick}>Newsletter</a>
);
```

Instead of `setState` inside, read `addClass` from `props`:

```
class Newsletter extends Component {
  render() {
    let toggleModal = ["newsletterModal"];
    if (this.props.addClass) {
      toggleModal.push("active");
    }
    return (
      <div className={toggleModal.join(' ')}>This part should update!</div>
    );
  }
}
```

Add an event handler for the footer `onClick` event:

```
const Header = props => (
  <h1>Main Page Title</h1>
)

class Template extends Component {
  constructor(props) {
    super(props);
    this.state = {
      showActive: false
    };
  }

  toggleClass = () => {
    this.setState(prevState => ({ showActive: !prevState.showActive }));
  };

  render() {
    return (
      <div>
        <Header />
        <Newsletter addClass={this.state.showActive} />
        <Footer onClick={this.toggleClass} />
      </div>
    );
  }
}
```

**Note:** If you have multiple components interacting with each other, it's better to consider using a state manager, such as **Redux** or **MobX**

**Update:** I updated my code, so it can run standalone as a complete demo. Here is the [codesandbox demo link](https://codesandbox.io/s/r798zr8ky4)

Upvotes: 3 [selected_answer]
2018/03/22
275
808
<issue_start>username_0: In this code snippet the output I get is 24. Why is that?

```
int data[] = { 5, 6, 7, 1, 4, 0 };
int n = sizeof(data);
cout << n << endl;
```<issue_comment>username_1: `sizeof` returns 24 because you have 6 integers, each taking 4 bytes on your platform.

Upvotes: 4 [selected_answer]<issue_comment>username_2: First of all, you must remember that arrays and pointers are different.

In the case of an array, `sizeof()` returns the size of the whole array, which is 24 bytes in your example, as you have 6 elements of `int` and each is 4 bytes.

Now look at this code snippet:

```
int *data = { 5, 6, 7, 1, 4, 0 };
int n = sizeof(data);
```

In this case, `sizeof()` will return the size of a pointer, not an array. A pointer is 4 bytes in a 32-bit app and 8 bytes in a 64-bit app.

Upvotes: 1
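A small sketch contrasting the two cases (it assumes a typical platform with 4-byte `int` and 8-byte pointers, so the exact numbers may differ elsewhere):

```cpp
#include <iostream>

void f(int data[]) {                     // array parameter decays to int*
    std::cout << sizeof(data) << '\n';   // 8: size of the pointer, not the array
}

int main() {
    int data[] = {5, 6, 7, 1, 4, 0};
    std::cout << sizeof(data) << '\n';                   // 24: 6 * sizeof(int)
    std::cout << sizeof(data) / sizeof(data[0]) << '\n'; // 6: element count
    f(data);
}
```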
2018/03/22
1,894
6,032
<issue_start>username_0: If I have an array such as:

```
["Adambb", "Andrebw", "Bob", "Billy", "Sandrab", "Xaviercb"]
```

And I type in a search box (for example) "B", how can I reorder the array in JavaScript with the results that match the string closest (also alphabetized), first? For example, typing "B" into the search box would reorder the array to:

```
["Billy", "Bob", "Adambb", "Andrebw", "Sandrab", "Xaviercb"]
```

I want the array to reorder much like any search system should work. For some reason I cannot find this answer anywhere online. Either I am not formulating my question right or I just can't find anything similar to my question.<issue_comment>username_1: An alternative is to check if the strings start with the entered value, and make a decision about the order.

* When `a` and `b` start with the entered value, make a string comparison.
* If `a` starts with the entered value and `b` doesn't, then place `a` at the beginning.
* If `b` starts with the entered value and `a` doesn't, then place `b` at the beginning.
* Otherwise, make the default string comparison.

```js
var array = [{id: "157", tag: "Adambb", course: "Adambb - Calculus I"}, {id: "158", tag: "Andrebw", course: "Andrebw - Ca I"}, {id: "159", tag: "Bob", course: "Bob - Cass I"}, {id: "160", tag: "Billy", course: "Billy - uus I"}, {id: "161", tag: "Sandrab", course: "Sandrab - allus I"}, {id: "162", tag: "Xaviercb", course: "Xaviercb - Cal I"}];
var input = 'Sa'; // Changed to illustrate the behavior.
var sorted = array.sort((a, b) => {
  if (a.course.startsWith(input) && b.course.startsWith(input)) return a.course.localeCompare(b.course);
  else if (a.course.startsWith(input)) return -1;
  else if (b.course.startsWith(input)) return 1;

  return a.course.localeCompare(b.course);
});
console.log(sorted);
```

```css
.as-console-wrapper { max-height: 100% !important; top: 0; }
```

An alternative is to check if the strings contain the entered value and make a decision about the order.

* When `a` and `b` contain the entered value, make a string comparison.
* If `a` contains the entered value and `b` doesn't, then place `a` at the beginning.
* If `b` contains the entered value and `a` doesn't, then place `b` at the beginning.
* Otherwise, make the default string comparison.

```js
var array = [{id: "157", tag: "Adambb", course: "Adambb - Calculus I"}, {id: "158", tag: "Andrebw", course: "Andrebw - Ca I"}, {id: "159", tag: "Bob", course: "Bob - Cass I"}, {id: "160", tag: "Billy", course: "Billy - uus I"}, {id: "161", tag: "Sandrab", course: "Sandrab - allus I"}, {id: "162", tag: "Xaviercb", course: "Xaviercb - Cal I"}];
var input = 'r'; // Changed to illustrate the behavior.
var sorted = array.sort((a, b) => {
  if (a.course.indexOf(input) !== -1 && b.course.indexOf(input) !== -1) return a.course.localeCompare(b.course);
  else if (a.course.indexOf(input) !== -1) return -1;
  else if (b.course.indexOf(input) !== -1) return 1;

  return a.course.localeCompare(b.course);
});
console.log(sorted);
```

```css
.as-console-wrapper { max-height: 100% !important; top: 0; }
```

Upvotes: 3 [selected_answer]<issue_comment>username_2: Assuming you want to do this in JavaScript, you can use a compare function, which you can pass as a parameter to the Array.sort method.
```
var compareFunction = function(a, b) {
  if (a.indexOf('B') < b.indexOf('B')) {
    return 1
  } else {
    return -1
  }
}

var arrToSort = ["Adambb", "Andrebw", "Bob", "Billy", "Sandrab", "Xaviercb"];

console.log(arrToSort.sort(compareFunction));
```

Upvotes: 0 <issue_comment>username_3: This will get you started. It gets the *first* character char codes (ASCII) and sorts by the distance to this code. You will want to improve it by doing this for all characters in the string, not just the first character (zeroth index). Bear in mind it doesn't take into account lowercase letters; you can use String.toLowerCase() to improve this.

```js
let arr = ["Adambb", "Andrebw", "Bob", "Billy", "Sandrab", "Xaviercb"];
let arrCodes = arr.map(s=>({code:s.charCodeAt(0), value:s}));
let searchInput = document.getElementById("search-input");

searchInput.addEventListener("change", function(){
  let val = this.value;
  if(val){
    const code = val.charCodeAt(0);
    let sorted = arrCodes.sort(function(a, b){
      let distA = Math.abs(a.code-code);
      let distB = Math.abs(b.code-code);
      return distA-distB;
    });
    final = sorted.map(s=>s.value);
    console.log("sorted:", final);
  }
}, false);
```

```css
.as-console-wrapper { max-height: calc(100vh - 50px) !important; }
```

Upvotes: 0 <issue_comment>username_4: Here's a sample snippet that does the filtered sorting. It might require some more refinement. Also, I did not consider the casing in this solution, so it is case sensitive.

So basically in your sort function you evaluate the following conditions:

1. x starts with the filter value but not y;
2. x does not start with the filter value but y does;
3. both x and y start with the filter value, so you want to start comparing the parts of the strings left after removing the filter value;
4. both x and y don't start with the filter value; you want to retain their position based on the normal sorting.

```html
<button onclick="myFunction()">Try it</button>
<input id="textFilter" type="text">
<p id="demo"></p>

<script>
var names = ["Adambb", "Andrebw", "Bob", "Billy", "Sandrab", "Xaviercb"];
document.getElementById("demo").innerHTML = names;

function myFunction() {
  var filterValue = document.getElementById("textFilter").value;
  // sort with a function
  names.sort(function(x, y) {
    if (x.startsWith(filterValue) && !y.startsWith(filterValue)) {
      return -1;
    } else if (!x.startsWith(filterValue) && y.startsWith(filterValue)) {
      return 1;
    } else if (x.startsWith(filterValue) && y.startsWith(filterValue)) {
      var x2 = x.substring(filterValue.length);
      var y2 = y.substring(filterValue.length);
      return x2.localeCompare(y2);
    } else {
      return x.localeCompare(y);
    }
  });
  document.getElementById("demo").innerHTML = names;
}
</script>
```

Upvotes: 0
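Pulling the approaches together, a compact sketch that reproduces the exact ordering the question asks for: rank by where the query first appears (case-insensitively, earlier is better), then alphabetically; non-matches sort last:

```js
// Rank each entry by match position; Infinity pushes non-matches to the end.
function searchSort(arr, query) {
  const q = query.toLowerCase();
  const rank = s => {
    const i = s.toLowerCase().indexOf(q);
    return i === -1 ? Infinity : i;
  };
  // Copy first so the original array is left untouched.
  return [...arr].sort((a, b) => rank(a) - rank(b) || a.localeCompare(b));
}

console.log(searchSort(["Adambb", "Andrebw", "Bob", "Billy", "Sandrab", "Xaviercb"], "B"));
// → ["Billy", "Bob", "Adambb", "Andrebw", "Sandrab", "Xaviercb"]
```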
2018/03/22
1,967
6,102
<issue_start>username_0: I have two tables:

```
1-Identity(id, ref1, ref2, address)
2-details(ref1, ref2, amount, u_no, u_date)
```

I want to extract each id with the sum of amount for the rows having the highest u_date and highest u_no. I tried the below:

```
Select I.id, d.amount
From identity I
Inner Join (select ref1, ref2, sum(amount) as amount
            From details d
            where (ref1, ref2, u_no, u_date) In (select ref1, ref2, max(u_no) as u_no, max(u_date) as u_date
                                                 from details
                                                 group By ref1, ref2)
            Group By ref1, ref2) d
On I.ref1 = d.ref1 And I.ref2 = d.ref2;
```

But I am getting the same id with multiple amounts. [Table details and expected output](https://i.stack.imgur.com/70JWh.png) Can someone please help me with this. Thanks.
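As an editorial sketch rather than a posted answer (untested; it assumes a database with window functions, such as Oracle, SQL Server, or MySQL 8, and that "highest" means the latest `u_date` with ties broken by `u_no`), one way to pick a single top row per `(ref1, ref2)` group is `ROW_NUMBER()`:

```sql
-- Sketch only: rank each (ref1, ref2) group by u_date, then u_no,
-- keep the top-ranked row of each group, and sum those amounts per id.
SELECT i.id, SUM(d.amount) AS amount
FROM identity i
JOIN (
    SELECT ref1, ref2, amount,
           ROW_NUMBER() OVER (PARTITION BY ref1, ref2
                              ORDER BY u_date DESC, u_no DESC) AS rn
    FROM details
) d
  ON i.ref1 = d.ref1
 AND i.ref2 = d.ref2
WHERE d.rn = 1
GROUP BY i.id;
```

If all tied top rows should be summed instead of just one per group, `RANK()` can replace `ROW_NUMBER()`.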
2018/03/22
582
1,884
<issue_start>username_0: When compiling a program that uses strptime with the following:

```
gcc http_server.c -g -std=c11 -o http_server
```

I run into this warning:

```
warning: implicit declaration of function 'strptime'; did you mean 'strftime'? [-Wimplicit-function-declaration]
```

When I run the program I get a segmentation fault. Upon further debugging I found that it fails at the strptime() line. I have `time.h` included in the file. I am also using gcc 7.2.0 as stated in the title. Any help would be appreciated as I'm at a loss. Here is the line in my code:

```
const char TIME_FORMAT[] = "%a, %d %b %Y %H:%M:%S GMT\r\n";
char date[255];
strcpy(date, token + 19);
strptime(date, TIME_FORMAT, request->if_modified_since);
```<issue_comment>username_1: Fixed the segmentation fault. Unlike `strftime()`, you need to allocate memory for the tm structure. I added the following:

```
request->if_modified_since = (struct tm*) malloc( sizeof(struct tm) );
```

However, I'm still getting the pesky warning at compile time. I'll award the answer to whoever helps me solve that. Upvotes: 0 <issue_comment>username_2: Use `-D_XOPEN_SOURCE=700` on the compiler command line. Just `-D_XOPEN_SOURCE` is equivalent to `-D_XOPEN_SOURCE=1` and that won't get `strptime()` declared. You could use 500 or 600 instead of 700; you shouldn't need to. You could also use `-std=gnu11` instead of `-std=c11` and then `strptime()` would be exposed, with or without the `-D_XOPEN_SOURCE=700`. You could also think about using a header to ensure the correct POSIX defines are in use; that's what I do. See `posixver.h`, which is available on GitHub in my [SOQ](https://github.com/jleffler/soq) (Stack Overflow Questions) repository as file `posixver.h` in the [src/libsoq](https://github.com/jleffler/soq/tree/master/src/libsoq) sub-directory. Upvotes: 3 [selected_answer]
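An editorial aside on the accepted answer: the same feature-test macro can be defined in the source file instead of on the command line, as long as it appears before any system header. A minimal sketch (the sample date string is invented):

```c
/* Defining _XOPEN_SOURCE in the source works like -D_XOPEN_SOURCE=700,
 * but it must appear before the first #include or it has no effect. */
#define _XOPEN_SOURCE 700

#include <stdio.h>
#include <time.h>

int main(void) {
    struct tm tm = {0};
    /* strptime() is now declared, so no implicit-declaration warning */
    if (strptime("Thu, 22 Mar 2018 12:00:00", "%a, %d %b %Y %H:%M:%S", &tm) != NULL)
        printf("parsed year: %d\n", tm.tm_year + 1900);
    return 0;
}
```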
2018/03/22
962
3,421
<issue_start>username_0: Given the following C API generated by Kotlin/Native:

```
#ifndef KONAN_bar_H
#define KONAN_bar_H
#ifdef __cplusplus
extern "C" {
#endif
typedef struct {
  /* Service functions. */
  void (*DisposeStablePointer)(bar_KNativePtr ptr);
  void (*DisposeString)(const char* string);
  bar_KBoolean (*IsInstance)(bar_KNativePtr ref, const bar_KType* type);

  /* User functions. */
  struct {
    struct {
      void (*foo)(const char* string);
    } root;
  } kotlin;
} bar_ExportedSymbols;

extern bar_ExportedSymbols* bar_symbols(void);
#ifdef __cplusplus
} /* extern "C" */
#endif
#endif /* KONAN_bar_H */
```

how can I access the native function `foo` from C# using P/Invoke? I have been going through the [Marshal API](https://msdn.microsoft.com/en-us/library/system.runtime.interopservices.marshal(v=vs.110).aspx) and have tried several different ways to marshal the native object (like `Marshal.PtrToStructure` after returning an `IntPtr` from the `extern` call), but I know I have a fundamental misunderstanding of how to marshal native objects, and things are even more complex when given a nested struct like above. I have been going through [this guide](https://www.codeproject.com/Articles/66245/Marshaling-with-Csharp-Chapter-1-Introducing-Marsh.aspx) trying to learn how to marshal complex objects, but this particular use-case doesn't seem to be covered. After some hours trying to squeeze anything out of this, here is the current state of my application:

```
public class TestExtern
{
    [UnmanagedFunctionPointer( CallingConvention.StdCall )]
    public delegate void foo( string @string );

    [DllImport( "bar" )]
    private static extern BarSymbols bar_symbols();

    private void Start()
    {
        var barSymbols = bar_symbols();
        var kotlin = barSymbols.kotlin;
        var root = kotlin.root;
        var fooDelegate = Marshal.GetDelegateForFunctionPointer<foo>( root.instance );
        fooDelegate( "Testing" ); // Access Violation
    }

    [StructLayout( LayoutKind.Sequential )]
    public struct BarSymbols
    {
        public Kotlin kotlin;
    }

    [StructLayout( LayoutKind.Sequential )]
    public struct Kotlin
    {
        public Root root;
    }

    [StructLayout( LayoutKind.Sequential )]
    public struct Root
    {
        public IntPtr instance;
    }
}
```

Thanks in advance.
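As an editorial sketch rather than a posted answer, two details of the C header stand out: `bar_symbols()` returns a *pointer* (`bar_ExportedSymbols*`) while the C# declaration returns the struct by value, and the three service function pointers that precede `kotlin` in the C struct are missing from `BarSymbols`, which shifts every field offset. The code below is an untested assumption of how those two points might be addressed (`FooDelegate` and `TestExternSketch` are invented names; `Cdecl` is assumed because the export is plain C):

```csharp
using System;
using System.Runtime.InteropServices;

public class TestExternSketch
{
    // Plain C exports normally use Cdecl; StdCall could corrupt the stack on x86.
    [UnmanagedFunctionPointer(CallingConvention.Cdecl)]
    public delegate void FooDelegate(string s);

    // The C function returns bar_ExportedSymbols*, i.e. a pointer.
    [DllImport("bar", CallingConvention = CallingConvention.Cdecl)]
    private static extern IntPtr bar_symbols();

    [StructLayout(LayoutKind.Sequential)]
    public struct BarSymbols
    {
        // The service function pointers come first in the C struct;
        // without them, `kotlin` would be read from the wrong offset.
        public IntPtr DisposeStablePointer;
        public IntPtr DisposeString;
        public IntPtr IsInstance;
        public Kotlin kotlin;
    }

    [StructLayout(LayoutKind.Sequential)]
    public struct Kotlin { public Root root; }

    [StructLayout(LayoutKind.Sequential)]
    public struct Root { public IntPtr foo; }

    public static void Main()
    {
        IntPtr p = bar_symbols();                            // pointer, not struct
        var symbols = Marshal.PtrToStructure<BarSymbols>(p); // read the struct from it
        var foo = Marshal.GetDelegateForFunctionPointer<FooDelegate>(symbols.kotlin.root.foo);
        foo("Testing");
    }
}
```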
2018/03/22
1,156
4,314
<issue_start>username_0: I'm trying to make a game of tic-tac-toe. I have one class, Player, and one nested class, Check, in it. This is my full code:

```
class Player: # Player class
    '''this is a class for the player in a tic-tac-toe game'''

    def __init__(self, counter): # initialize
        self.counter = counter # set player counter

    def place(self, x, y): # functions directly interfering with data can global
        '''this function helps with placing counters on the grid'''
        global grid
        grid[y][x] = self.counter

    class Check:
        '''checks if a player wins or loses'''

        def check_vertical(self, grid): # checking functions avoid globaling
            '''check if there are three counters in a row vertically'''
            for row in range(3):
                if grid[row][0] and grid[row][1]\
                   and grid[row][2] == self.counter:
                    return True

        def check_horiontal(self, grid): # checking functions avoid globaling
            '''check if there are three counters in a row horizontally'''
            for column in range(3):
                if grid[0][column] and grid[1][column]\
                   and grid[2][column] == self.counter:
                    return True

        def check_diagonal(self, grid): # checking functions avoid globaling
            '''check if there are three counters in a row diagonally'''
            if grid[0][0] and grid[1][1] and grid[2][2] or\
               grid[0][2] and grid[1][1] and grid[2][0] == self.counter:
                return True

        def check_all(self, grid):
            '''check if there are three counters in a row in any direction'''
            return (self.check_vertical(self, grid) or\
                    self.check_horizontal(self, grid) or\
                    self.check_diagonal(self, grid))
```

So, when I try to test it in the shell:

```
>>> player = Player("O")
>>> player.Check.check_all(tic_tac_toe_grid)
```

Python throws an error:

```
Traceback (most recent call last):
  File "", line 1, in
    a.Check.check_all(grid)
TypeError: check_all() missing 1 required positional argument: 'grid'
```

Python thinks that `self` is a required argument. What's wrong with my code?<issue_comment>username_1: `Check` is a nested class in `Player` and you can access it using `Player.Check` or using `Player` instances - in your case `player`. A nested class behaves just the same as a normal class. Its methods still require an instance. The `self` parameter to the method tells you that it operates on instances. If you don't need any state on that class, you have two options:

1. Move all methods from `Check` to its own module and place the functions in there without any class. Or...
2. Make all methods in `Check` static (`@staticmethod`). This is rather an antipattern though.

Upvotes: 0 <issue_comment>username_2: None of this has anything to do with `Check` being a nested class. First, Python thinks that `self` is a required argument because it is. You explicitly declared it as a parameter:

```
def check_all(self, grid):
```

When you call methods normally, on an instance of a class, like `thingy.method(foo)`, Python will turn that `thingy` into the `self`. But you're not calling the method on an instance of `Check`, you're calling it on `Check` itself. That's legal, but unusual. And when you do it, you need to explicitly pass an instance to be the `self`. And there's your real problem—you don't even *have* an instance. And presumably you created the class, with attributes and an `__init__` method and everything, because you needed instances of that class. (If you *don't* need a class, then you should get rid of the class and just make the functions methods of `Player`, or top-level functions.)
So, just getting rid of this error is easy:

```
player.Check().check_all(tic_tac_toe_grid)
```

But what you almost certainly *want* to do is to create a `Check()` instance somewhere that you can use when you need it. Does each Player own a single Check? Then something like this:

```
class Player:
    def __init__(self, counter): # initialize
        self.counter = counter # set player counter
        self.check = Player.Check() # Check is nested, so qualify the name
```

And then you can use it:

```
player.check.check_all(tic_tac_toe_grid)
```

I don't actually know if that's what your object model is supposed to be, but hopefully you do. Upvotes: 3 [selected_answer]
2018/03/22
596
1,912
<issue_start>username_0: I can't get the sorting to work on this multidimensional array; I need to sort it from lowest to highest by the "packagenumber" value. This is my array:

[![enter image description here](https://i.stack.imgur.com/pOjfR.png)](https://i.stack.imgur.com/pOjfR.png) [![enter image description here](https://i.stack.imgur.com/pIAvG.png)](https://i.stack.imgur.com/pIAvG.png)

I'm trying with usort:

```
uasort($data, function($a, $b) {
    return strcmp($data['packagenumber'], $data['packagenumber']);
});
```<issue_comment>username_1: Try:

```
usort($data, function($a, $b) {
    return $a['packagenumber'] > $b['packagenumber'];
});
```

Upvotes: 1 <issue_comment>username_2: First, you should use usort, as you don't need to maintain the indexes of your array. Then, something like this should work:

```
function int_compare($a, $b) {
    return $a['packagenumber'] - $b['packagenumber'];
}
usort($data, 'int_compare');
```

Upvotes: 2 [selected_answer]<issue_comment>username_3: If you, like myself lots of times, find yourself in a situation where you need to do this kind of sorting inside a class and need to define the comparing function as a member of the class, you can then call it like this:

```
function mainFunction() {
    $array = array(
        ['id' => 5, 'name' => 'Name', 'packagenumber' => 2],
        ['id' => 6, 'name' => 'Another', 'packagenumber' => 3],
        ['id' => 7, 'name' => 'Again', 'packagenumber' => 1]);

    usort($array, array($this, 'int_compare'));
}

function int_compare($a, $b) {
    return $a['packagenumber'] - $b['packagenumber'];
}
```

`$this` in this case is a reference to the class containing the comparing function. HIH

Upvotes: 1 <issue_comment>username_4: You can extract a column, sort it, then sort the original on that:

```
array_multisort(array_column($data, 'packagenumber'), $data);
```

Upvotes: 0
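An editorial note, not from the thread: callbacks like the first answer's that return a boolean technically violate `usort()`'s contract (it expects a negative, zero, or positive integer), and PHP 8 emits a deprecation notice for boolean returns. On PHP 7+, the spaceship operator gives a contract-correct comparator in one line, assuming numeric `packagenumber` values:

```php
<?php
// The spaceship operator returns -1, 0, or 1, which is what usort() expects.
usort($data, function ($a, $b) {
    return $a['packagenumber'] <=> $b['packagenumber'];
});
```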
2018/03/22
793
2,686
<issue_start>username_0: I am new to Angular. I keep coming across constructors, templates, selectors, etc., and I have no idea what they are. I do know $scope, $rootScope, modules, and controllers. With these I have written code for ngFor. I believe before ngFor we had ngRepeat in Angular 1; please correct me if I'm wrong.

```
Collections angular.module('moduleA') .controller('SubjectController',function($scope,$rootscope){ $scope.todos = [ { id: 1, title: 'Learn AngularJS', description: 'Learn,Live,Laugh AngularJS', done: true }, { id: 2, title: 'Explore hibernate', description: 'Explore and use hibernate instead of jdbc', done: true }, { id: 3, title: 'Play with spring', description: 'spring seems better have a look', done: false }, { id: 4, title: 'Try struts', description: 'No more labour work..use struts', done: false }, { id: 5, title: 'Try servlets', description: 'Aah..servlets stack seems cool..why dont u try once', done: false } ]; });
| # | Title | Description | Done? | | --- | --- | --- | --- | | {{todo.id}} | {{todo.title}} | {{todo.description}} | {{todo.done}} |
```<issue_comment>username_1: You **`cannot`** use **`ngFor`** with AngularJS; you can use **[`ng-repeat`](https://docs.angularjs.org/api/ng/directive/ngRepeat)** to iterate over a collection.

DEMO

```js
angular.module('moduleA',[])
.controller('SubjectController',function($scope){
  $scope.todos = [
    { id: 1, title: 'Learn AngularJS', description: 'Learn,Live,Laugh AngularJS', done: true },
    { id: 2, title: 'Explore hibernate', description: 'Explore and use hibernate instead of jdbc', done: true },
    { id: 3, title: 'Play with spring', description: 'spring seems better have a look', done: false },
    { id: 4, title: 'Try struts', description: 'No more labour work..use struts', done: false },
    { id: 5, title: 'Try servlets', description: 'Aah..servlets stack seems cool..why dont u try once', done: false }
  ];
});
```

```html
Collections | # | Title | Description | Done? | | --- | --- | --- | --- | | {{todo.id}} | {{todo.title}} | {{todo.description}} | {{todo.done}} |
```

Upvotes: 3 <issue_comment>username_2: As others mentioned, `ngFor` is not a part of AngularJS. However, you can create a function on `$rootScope` that simply generates a dummy array for `ngRepeat`, as follows:

```
/**
 * Generates a range for ngRepeat
 * @param start
 * @param stop
 * @param increment
 */
$rootScope.generateRange = function(start, stop, increment) {
    let a = [];
    for (; start < stop; start += increment) {
        a.push(start); // push() avoids the holes that a[start] = start would leave
    }
    return a;
};
```

You can use it like this in your views:

```
<div ng-repeat="n in generateRange(0, 10, 1)">{{ n }}</div>
```

Upvotes: 2
2018/03/22
688
2,320
<issue_start>username_0: Is there a way to put a delay/sleep before running the next loop? I can't use `$.ajax` with `async: false` because the loader won't show up for each row. I need to start the next loop iteration only after the $.post request is done.

**Code:**

```
$( ".ids:checked" ).each(function() {
    var Id = $(this).val();
    $(".modal_close_btn").hide();

    if(y==count){
        last_request = 1;
    }else{
        last_request = 0;
    }

    $.post("db/delete_test.php", {Id:Id,last_request:last_request}, function(data){
        $("td#Id_"+Id).html(data.message);
        if(x==count){
            $(".modal_close_btn").show();
        }
        x++;
    },"json");

    y++;
});
```
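A common pattern for this, sketched here as an editorial example rather than a posted answer (the helper name `deleteNext` is invented; the selectors and URL come from the question), is to drop the `.each()` loop and let each `$.post` success callback start the next request:

```js
// Sketch: process the checked boxes one at a time; the next $.post only
// fires after the previous response arrives, so each row's loader is visible.
var $checked = $(".ids:checked");

function deleteNext(i) {
    if (i >= $checked.length) {
        $(".modal_close_btn").show();   // all rows done
        return;
    }
    var id = $checked.eq(i).val();
    var last = (i === $checked.length - 1) ? 1 : 0;

    $.post("db/delete_test.php", { Id: id, last_request: last }, function (data) {
        $("td#Id_" + id).html(data.message);
        deleteNext(i + 1);              // recurse only once this request is done
    }, "json");
}

$(".modal_close_btn").hide();
deleteNext(0);
```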
2018/03/22
331
894
<issue_start>username_0: I am having a hard time trying to display a number and would appreciate any help or suggestions.

```
tmp <- train[train$label == 0,]
tmp # has V17015 V17021... values.

m <- matrix(tmp[1,1:784], ncol = 28, nrow = 28)
m # m has 28 by 28 and all zeroes

m_numbers <- apply(m, 2, rev)
m_numbers # I got [[28]][[28]] 0

image(1:28, 1:28, z = m_numbers, col = gray.colors(256))
```

I get "'z' must be a matrix", and when I do `m_numbers <- as.matrix(m_numbers)` I get "'z' must be numeric or logical". Thanks for the help.<issue_comment>username_1: Try: `m_numbers <- unlist(apply(m, 2, rev))`. `z` needs to be numeric or a matrix. Upvotes: 0 <issue_comment>username_2: I had the same issue: the values in the matrix were integers, and I converted them to numeric with `mode(m_numbers) = "numeric"`. This should fix the error

> 'z' must be numeric or logical

Upvotes: 2
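Pulling the two answers together, an editorial sketch (assuming the same `train` and `tmp` as in the question): `tmp[1, 1:784]` is a one-row data frame, i.e. a list, so `matrix()` builds a matrix of list elements; flattening it to a numeric vector first gives `image()` the numeric matrix it expects.

```r
# Sketch: unlist() flattens the one-row data frame into a plain vector,
# and as.numeric() guarantees the values are numeric rather than integer/list.
m <- matrix(as.numeric(unlist(tmp[1, 1:784])), ncol = 28, nrow = 28)
m_numbers <- apply(m, 2, rev)   # still a 28 x 28 numeric matrix
image(1:28, 1:28, z = m_numbers, col = gray.colors(256))
```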
2018/03/22
913
3,052
<issue_start>username_0: I initialized the map like:

```
var map = new Map();
```

When I do `console.log(map)`, I get:

```
testCreateBadAppointmentRequest: { name: 'testCreateBadAppointmentRequest', time: 0.02926, status: 'Passed' },
testAppointmentRequestAPI: { name: 'testAppointmentRequestAPI', time: 0.051030000000000006, status: 'Passed' },
```

I want to sort this map on the time attribute of the value. How do I do that in Node.js? Is there a ready-made sort function to do so?<issue_comment>username_1: You'll have to create a new Map object, since a Map object iterates its elements in insertion order.

```js
const inputMap = new Map([
  ['testCreateBadAppointmentRequest', { name: 'testCreateBadAppointmentRequest', time: 0.02926, status: 'Passed' }],
  ['testAppointmentRequestAPI', { name: 'testAppointmentRequestAPI', time: 0.051030000000000006, status: 'Passed' }],
  ['another', { name: 'name', time: 0.0001, status: 'Passed' }]
]);

const sortedMap = new Map([...inputMap.entries()].sort((entryA, entryB) => entryB[1].time - entryA[1].time));

for (const value of sortedMap.values()) {
  console.log(value)
}
```

Upvotes: 0 <issue_comment>username_2: You will need to convert the `Map` to an `Array` first, then use the built-in `sort` and provide a callback:

```
const sorted = Array.from(map).sort(function(a, b) {
  // Array.from(map) yields [key, value] pairs, so the value is a[1] / b[1]
  if (a[1].time < b[1].time) return -1;
  if (a[1].time > b[1].time) return 1;
  return 0;
});
```

Upvotes: 2 <issue_comment>username_3: [Map](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Map) order is determined by insertion order.

> It should be noted that a Map which is a map of an object, especially a dictionary of dictionaries, will only map to the object's insertion order—which is random and not ordered.

Convert the map to an array with [Array.from](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/from) or by using the [spread operator](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Spread_syntax) on the map iterable. Then sort the array:

```js
const map = new Map()
map.set('testCreateBadAppointmentRequest', { name: 'testCreateBadAppointmentRequest', time: 0.02926, status: 'Passed' });
map.set('testAppointmentRequestAPI', { name: 'testAppointmentRequestAPI', time: 0.051030000000000006, status: 'Passed' });

// convert map to array
console.log('array', [...map.entries()]);
const array = Array.from(map);

// sort (inverse sort to change your current sort)
array.sort((x, y) => y[1].time - x[1].time);
console.log('sorted', array);

// create new map with object pairs in the desired order:
const timeSortedMap = new Map(array);
console.log('sorted map', [...timeSortedMap]);
```

Upvotes: 1
2018/03/22
898
2,328
<issue_start>username_0: I am trying to return a dataframe containing all of the values in each column that are less than -1.5, with both the column header and the row name. I basically have everything worked out, except for the final step: when I replace a column of column numbers with the corresponding column names from the original dataframe, and multiple values come from the same column, the new column-name values are listed as "column name.1". I have searched around and found out that make.unique appears to do a similar thing, but I never called that function.

```
A <- c(0.6, -0.5, 0.1, 1.6, -1.6, 0.4, -1.6)
B <- c(0.7, -2.1, -0.3, 1.1, 2.1, -1.7, 1.1)
DF <- as.data.frame(cbind(A, B))
colnames(DF) <- c("010302A620300302000", "010803A110100069000")
rownames(DF) <- c("1996", "1997", "1998", "1999", "2000", "2001", "2002")
```

So my original dataframe looks something like this:

```
     010302A620300302000 010803A110100069000
1996                 0.6                 0.7
1997                -0.5                -2.1
1998                 0.1                -0.3
1999                 1.6                 1.1
2000                -1.6                 2.1
2001                 0.4                -1.7
2002                -1.6                 1.1
```

In order to get the relevant values for each row:

```
DF.new <- as.data.frame(which(DF <= -1.5, arr.ind = T, useNames = TRUE))
DF.new <- as.data.frame(setDT(DF.new, keep.rownames = TRUE)[])
DF.new$SUID <- colnames(DF[, DF.new[ ,3]])
```

This brings me to the problem: how do I use the colnames function so that the resulting SUID column does not append ".1" to repeated values, like I see here:

```
    rn row col                   SUID
1 2000   5   1    010302A620300302000
2 2002   7   1  010302A620300302000.1
3 1997   2   2    010803A110100069000
4 2001   6   2  010803A110100069000.1
```

Thanks in advance!<issue_comment>username_1: A quick and dirty fix is `DF.new$SUID <- floor(DF.new$SUID)` to remove the decimals. Upvotes: 0 <issue_comment>username_2: Subset your column names from a character vector rather than columns from a new data frame, like this:

```
DF.new$SUID <- colnames(DF)[DF.new[ ,3]]
```

instead of this:

```
DF.new$SUID <- colnames(DF[, DF.new[ ,3]])
```

Upvotes: 3 [selected_answer]
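To see why the accepted answer works, here is a small editorial illustration using the `DF` defined above: subsetting the data frame's *columns* runs duplicated names through `make.unique()` (the source of the unexpected ".1"), while subsetting the *names vector* repeats them verbatim.

```r
# Duplicate column selection on the data frame deduplicates the names...
colnames(DF[, c(1, 1)])   # "010302A620300302000" "010302A620300302000.1"
# ...whereas indexing the names vector itself repeats them unchanged.
colnames(DF)[c(1, 1)]     # "010302A620300302000" "010302A620300302000"
```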