| column | dtype | min | max |
| --- | --- | --- | --- |
| date | string (length) | 10 | 10 |
| nb_tokens | int64 | 60 | 629k |
| text_size | int64 | 234 | 1.02M |
| content | string (length) | 234 | 1.02M |
2018/03/21
692
2,435
<issue_start>username_0: I have the following method: ``` increaseCount() { console.log(this); this.setState(prevState => ({ // X count: prevState.count + 1 })); this.setState(function(prevState) { // Y return { count: prevState.count + 1 }; }); this.setState({ count: this.state.count + 1 }, function() { // Z return this.setState({ count: this.state.count + 1 }); }); } ``` X and Y will increment the counter by two when used together. Z will increment the counter by 2 when used alone Why doesn't the counter increment by 4 when X,Y, and Z are used? Also, if I place block Z above X and Y it works as expected and increments by 4. Could someone explain what's going on?<issue_comment>username_1: Because setState is async. Let's say you have initial count 0 when you have all three of them running line by line. The `this.state.count` value block Z gets is actually 0 instead of 2. Upvotes: 0 <issue_comment>username_2: You can think of setState as an async call. When you use `this.setState((prevState) => {})` you pass it a function that will update the state based on its previous value. On the other hand when you use it like `this.setState({count:this.state.count+1})` you are passing an actual value to the object. That said, setState actions will update the state in the order they are called but they will not wait for the previous call to finish when they are actually called. So in your case X and Y will update the state based on the previous value (so 0 -> 1, then 1->2) however Z is already called with a value of `{count: this.sate.count + 1}` which might be `{count: 0 + 1}` at the time it is called. The second parameter in Z is a callback that is called once the first update is finished. This way `this.state.count` will be 1 already when it is called, resulting 2 again. If you place Z on the top, it will increment `count` to 2 as stated above, then the next two setState calls will increase it based on the actual `prevState` at the time they are called. Illustration: ``` In the js code: InitialState: count = 0; X called -> (count: prevState + 1); Y called -> (count: prevState + 1); Z called -> (count: 0 + 1); Meanwhile asynchronously: InitialState: count = 0 X -> prevState=0, count = 0 + 1; Y -> prevState=1, count = 1 + 1; Z -> count = 1; then called again for (count: this.state.count + 1) which is now (count: 1 + 1) ``` Upvotes: 2 [selected_answer]
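A toy model of the batching described above makes this concrete. This is not React's real implementation, only a sketch of an update queue in which functional updaters see the pending state while the object form uses whatever value was captured when it was queued:

```js
// Simplified update queue -- NOT React's actual code.
function runBatch(initialState, queue) {
  let state = Object.assign({}, initialState);
  for (const update of queue) {
    const partial = typeof update === 'function' ? update(state) : update;
    state = Object.assign({}, state, partial);
  }
  return state;
}

const capturedCount = 0; // this.state.count at the moment Z was queued

const result = runBatch({ count: 0 }, [
  prev => ({ count: prev.count + 1 }), // X: sees the pending value (0 -> 1)
  prev => ({ count: prev.count + 1 }), // Y: sees X's result (1 -> 2)
  { count: capturedCount + 1 },        // Z: object built from the stale 0
]);

console.log(result); // { count: 1 }
```

Z's second-argument callback then runs after the batch and bumps the count from 1 to 2, which matches the observed total of +2 instead of +4.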
2018/03/21
1,459
4,927
<issue_start>username_0: I've build a vue.js web app for an insurance brokerage where every agent has their own website that is generated from their profiles. This is what the link looks like in my vue-router index file" ``` { path: '/agents/:id', name: 'AgentSite', component: AgentSite }, ``` Everything works great EXCEPT that the urls are getting too long to fit on some business cards. I would like to change the URLs to be like this: ``` { path: '/:id', name: 'AgentSite', component: AgentSite }, ``` However, then every other bit of dynamic content in the app loads our agent website template (AgentSite). Quotes, Clients, Policies... they won't load properly. Is there a way to remove the "/agents" from the URLs without messing up the rest of our application? I could shorten it to "/a/:id but that ends up being more confusing than it's worth. Thanks! EDIT: a couple of people have mentioned solutions that work when the agent id is a number. That's a great idea except that we have built agent "slugs" to use instead. On the agent website layout: ``` created() { console.log(this.$route.params.id); this.$store.dispatch("getAgentFromSlug", this.$route.params.id); } ``` and in the store: ``` getAgentFromSlug({commit}, payload){ const db = firebase.database(); db.ref("users/").orderByChild("slug").equalTo(payload).once("value", (snap) => { console.log(snap.val()); var info = snap.val(); commit("setAgentSiteInfo", info[Object.keys(info)[0]]) }) } ``` So, our route Id is really a slug.<issue_comment>username_1: Considering `id`s are numbers, you could use: ``` { path: '/:id(\\d+)', name: 'AgentSite', component: AgentSite }, ``` Which only matches if `id` is made only of numbers. > > **Update:** A couple of people have mentioned solutions that work when the agent id is a number. That's a great idea except that we have built agent "slugs" to use instead. > > > If the names can conflict with existing routes, **declare the agent route last**. From the [**Matching Priority** docs](https://router.vuejs.org/en/essentials/dynamic-matching.html#matching-priority) (emphasis mine): > > Matching Priority > ================= > > > Sometimes the same URL may be matched by multiple routes. In such a > case the matching priority is determined by the order of route > definition: **the earlier a route is defined, the higher priority it > gets**. > > > In other words, declare like: ```js routes: [ { path: '/', component: HomePage }, { path: '/quotes', component: Quotes }, { path: '/clients', component: Clients }, { path: '/:id', component: AgentSite, props: true } ] ``` See **[CodeSandbox demo Here](https://codesandbox.io/s/kokoqxvl87?module=%2Frouter%2Findex.js)**. Handling 404 pages ================== > > Would I then declare the 404 page route above or below the "`AgentSite`" in your example? `{ path: "*", component: PageNotFound }` > > > The `AgentSite` route would **match any URL not matched previously**, so you'll have to handle the 404s inside the `AgentSite` component. First, declare the 404 route **after** the `AgentSite`: ```js routes: [ // ... (other routes) { path: "/:id", component: AgentSite, props: true }, { path: ":path", name: "404", component: p404, props: true } ] ``` Then, inside `AgentSite`, get the agent `:id`, check if it is a known agent and, if not, redirect to the `404` route **by name** (otherwise it would match agent again). 
```js export default { props: ["id"], data() { return { availableAgents: ["scully", "bond", "nikita"] }; }, created() { let isExistingAgent = this.availableAgents.includes(this.id); if (!isExistingAgent) { this.$router.push({ name: "404", params: { path: this.$route.fullPath.substring(1) } }); } } }; ``` The **[CodeSandbox demo Here](https://codesandbox.io/s/kokoqxvl87?module=%2Frouter%2Findex.js)** already contains this handling. Upvotes: 3 [selected_answer]<issue_comment>username_2: You can use regex matching if you `:id` has a specific format ([example from vue-router repository](https://github.com/vuejs/vue-router/blob/dev/examples/route-matching/app.js)). For example, if your `:id` is a number: ``` const routes = [ { path: '/:id(\\d+)', component: Foo }, { path: '/bar', component: Bar } ] ``` ```js const Foo = { template: 'foo' } const Bar = { template: 'bar' } const routes = [ { path: '/:id(\\d+)', component: Foo }, { path: '/bar', component: Bar } ] const router = new VueRouter({ routes }) const app = new Vue({ router }).$mount('#app') ``` ```css .router-link-active { color: red; } ``` ```html Hello App! ========== Go to Foo Go to Bar ``` Upvotes: 0
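As an alternative to checking the slug inside `AgentSite`'s `created()` hook, the same redirect can live in a per-route `beforeEnter` guard. The sketch below is not from the answers above; `knownSlugs` is a hypothetical stand-in for the Firebase lookup the real app performs:

```js
const HomePage  = { template: '<div>home</div>' };
const AgentSite = { props: ['id'], template: '<div>agent {{ id }}</div>' };
const NotFound  = { props: ['path'], template: '<div>404: {{ path }}</div>' };

const knownSlugs = new Set(['scully', 'bond', 'nikita']); // hypothetical data

const router = new VueRouter({
  routes: [
    { path: '/', component: HomePage },
    {
      path: '/:id',
      component: AgentSite,
      props: true,
      beforeEnter(to, from, next) {
        // Unknown slug -> go to the named 404 route instead of rendering AgentSite.
        if (knownSlugs.has(to.params.id)) next();
        else next({ name: '404', params: { path: to.params.id } });
      }
    },
    { path: '/:path', name: '404', component: NotFound, props: true }
  ]
});

new Vue({ router }).$mount('#app');
```

Because the redirect targets the 404 route by name, it is not re-matched by the catch-all `/:id` path.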
2018/03/21
1,288
4,239
<issue_start>username_0: I was recommended to use a hasManyThrough relationship to access all of a user's books in all their lists. So if a user has 3 lists and 3 books in each, I can access all 9 books at once. This works fine thanks to this answer on SO -> <https://stackoverflow.com/a/49386095/9277589> How would I go about deleting the books when I delete the list? Currently the list deletes, but the books don't. This is how I access the books in my view. `$user->WatchedBooks` I have tried this ``` public function destroy($id){ $user = Auth::user(); $Watchlist = Watchlists::where('id', $id)->first(); if($Watchlist->user_id == Auth::id()){ $Watchlist->delete(); return redirect('watchlist'); } else { return redirect('watchlist'); } if($user->WatchedBooks == $user){ return redirect('watchlist'); } else { return redirect('watchlist'); } } } ```
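A hedged sketch of the usual approach to the question above: delete the child rows before deleting the parent list. The `books()` relationship name is an assumption -- substitute whatever relation actually links a `Watchlists` row to its books (or, alternatively, add `onDelete('cascade')` to the foreign key in the migration):

```php
public function destroy($id)
{
    $watchlist = Watchlists::findOrFail($id);

    if ($watchlist->user_id !== Auth::id()) {
        return redirect('watchlist');
    }

    // Assumed relationship name -- delete the children first, then the list.
    $watchlist->books()->delete();
    $watchlist->delete();

    return redirect('watchlist');
}
```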
2018/03/21
968
2,441
<issue_start>username_0: Maybe a silly question, I am trying to print numbers in a loop in such a way that they are multiples of 10. This is very easy as long as the timestep in the loop is multiple of 10. This is how I do it: ``` time = 0. timestep = 2. while time <= 100.: if int(round(time)) % 10 == 0: print time time += timestep ``` which gives me an output of: ``` 0.0 10.0 20.0 30.0 40.0 50.0 60.0 70.0 80.0 90.0 100.0 ``` And if I use a timestep = 1, I get a similar output. My problem is that now my timestep is given as a function of another variable, and is a float with many decimals. For one case, for instance, the timestep turns out to be 1.31784024239, and if I try to do a similar loop, the numbers I get are not that uniform anymore. For example, I get: ``` 0.0 19.7676036358 30.310325575 39.5352072717 50.0779292108 69.8455328467 80.3882547858 89.6131364825 ``` My question is if there is any trick so that my output is printed uniformly - every, let's say, 10 days? it doesn't have to be exactly ten, but I would like to have a point, for example, between 0 and 19 (around 10) and another one around 60, since theres a jump from 50.07 to 69.84. I don't know if it is possible, but any ideas will really be helpful as many of my timesteps are floats with many decimals.<issue_comment>username_1: Remember the last time you printed a line, and print another line as soon as the decade changes: ``` time = 0. lasttime = -1. timestep = 3. while time <= 100.: if time // 10 != lasttime // 10: print time lasttime = time time += timestep ``` Result: ``` $ python x.py 0.0 12.0 21.0 30.0 42.0 51.0 60.0 72.0 81.0 90.0 ``` Upvotes: 1 <issue_comment>username_2: Here's a simple solution that finds the steps that are nearest to a given series of multiples: ``` def stepper(timestep, limit=100.0, multiple=10.0): current = multiples = 0.0 while current <= limit: step = current + timestep if step >= multiples: if multiples - current > step - multiples: yield step else: yield current multiples += multiple current = step for step in stepper(1.31784024239): print step ``` Output: ``` 0.0 10.5427219391 19.7676036358 30.310325575 39.5352072717 50.0779292108 60.6206511499 69.8455328467 80.3882547858 89.6131364825 100.155858422 ``` Upvotes: 3 [selected_answer]
2018/03/21
647
2,584
<issue_start>username_0: I have several classes that exhibit a inheritance structure: ``` public class BaseClass { Guid ID {get;set;} } public class LeafType : BaseClass{ /* omitted */} public class OtherLeafType : BaseClass{ /* omitted */} public class Node : BaseClass { public List FirstLeaves {get;set;} public List SecondLeaves {get;set;} public ???? AllLeaves {get;} //Returns all items in both FirstLeaves and SecondLeaves } ``` In the example above, `Node` has two collections, whose elements derive from `BaseClass`. Does .Net have a collection that can combine these two collections and automatically update when either `FirstLeaves` or `SecondLeaves` changes? I have found the class System.Windows.Data.CompositeCollection, but it is in PresentationFramework, which to me indicates that it is intended for UI purposes. My class `Node` lives in an assembly that has nothing to do with the UI, so `CompositeCollection` looks like a bad fit. Is there any other class that would serve a similar purpose? Update 1: Looking at the answers so far, it seems that my question was not clearly formulated: `CompositeCollection` [Enables multiple collections and items to be displayed as a single list](https://msdn.microsoft.com/en-us/library/system.windows.data.compositecollection(v=vs.110).aspx), but I was wondering if the .Net framework supplies a type with similar functionality that is not related to the GUI. If not, then I will roll my own solution, which looks very much like the answer by @<NAME><issue_comment>username_1: I believe concating one list to another may not work in your case as they are declared as different classes (even though they inherit the `Base` class). I would return a newly combined list. ``` public List AllLeaves { get { List l = new List(); l.AddRange(FirstLeaves); l.AddRange(SecondLeaves); return l; } } ``` Upvotes: 0 <issue_comment>username_2: I recommend using an iterator. It's not a collection but can converted to a collection via Linq's ToList() extension method. The iterator provides a live view of the collection contents. You'll need to test what happens if the underlying collections are mutated while you're iterating through the IEnumerable. But generally this is considered bad practice. ``` public IEnumerable AllLeaves { get { foreach (LeafType firstLeaf in FirstLeaves) { yield return firstLeaf; } foreach (OtherLeafType secondLeaf in SecondLeaves) { yield return secondLeaf; } } } public List AllLeavesList() { return AllLeaves.ToList(); } ``` Upvotes: 2 [selected_answer]
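For reference, the same live view can be written with LINQ's `Concat`, relying on `IEnumerable<T>` covariance; this sketch also spells out the generic type arguments that the snippets above lost in formatting:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class BaseClass { public Guid ID { get; set; } }
public class LeafType : BaseClass { }
public class OtherLeafType : BaseClass { }

public class Node : BaseClass
{
    public List<LeafType> FirstLeaves { get; set; } = new List<LeafType>();
    public List<OtherLeafType> SecondLeaves { get; set; } = new List<OtherLeafType>();

    // Lazy view over both lists: enumerating it reflects any later additions,
    // just like the iterator in the accepted answer.
    public IEnumerable<BaseClass> AllLeaves => FirstLeaves.Concat<BaseClass>(SecondLeaves);
}
```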
2018/03/21
315
1,011
<issue_start>username_0: I have defined a simple class but receive the error `AttributeError: 'NoneType' object has no attribute 'bar'` when trying to use it. What am I doing wrong? ``` def Foo(): a = 0 def bar(self): return self.a f = Foo() f.bar() # error ```<issue_comment>username_1: This is the syntax to define a *function* ``` def Foo(): ``` to define a *class* you'd say ``` class Foo: ``` Upvotes: 3 <issue_comment>username_2: Your problem ------------ `Foo` is defined with `def`, which makes it a **function**, not a class. If you want `Foo` to contain functions, then it needs to be a **class**. Solution -------- Use the `class` keyword to turn `Foo` into a class: ``` class Foo(): ``` Upvotes: 2 <issue_comment>username_3: If you're trying to create a class, try: ``` class Foo: def bar(self): return self.a ``` Upvotes: 2 <issue_comment>username_4: ``` class Foo(): a = 0 def bar(self): return self.a f = Foo() f.bar() ``` Upvotes: 1
2018/03/21
448
1,570
<issue_start>username_0: I am looking to stop the audio once a STOP button is clicked. the way i have it now only stops the audio for a moment then continues to play again. Here is my code ``` let x = 500; $("#buttonStart").click(()=>{ setInterval(function() {$('audio')[0].play();}, x); }) $("#buttonStop").click(()=>{ // I WANT TO STOP THE AUDIO COMPLETELY HERE $('audio')[0].pause(); }) ```<issue_comment>username_1: You could have a public boolean that said if the music is supposed to be played or not. ``` var musicOn = false; let x = 500; $("#buttonStart").click(()=>{ musicOn = true; setInterval(function() { if(musicOn==true){ $('audio')[0].play(); }, x); } }) $("#buttonStop").click(()=>{ // I WANT TO STOP THE AUDIO COMPLETELY HERE musicOn = false; }) ``` Maybe a lazy solution, but would definitely work Upvotes: 0 <issue_comment>username_2: try something like this ``` var task; $("#buttonStart").click(()=>{ task = setInterval(function() {$('audio')[0].play();}, x); }) $("#buttonStop").click(()=>{ // I WANT TO STOP THE AUDIO COMPLETELY HERE clearInterval(task); }) ``` Upvotes: 1 <issue_comment>username_3: You can set your play interval to a variable, then when you click the STOP button, you can clear the interval with clearInterval. ``` let x = 500; let playing; $("#buttonStart").click(()=>{ playing = setInterval(function() {$('audio')[0].play();}, x); }); $("#buttonStop").click(()=>{ clearInterval(playing); }); ``` Upvotes: 0
2018/03/21
526
1,759
<issue_start>username_0: I'm trying to filter a list with two commands - Filter by Attribute then filter by "not disconnected" and "logged in in the last 90 days." Trying something like this, but it's not working. > > get-mailbox -filter 'ExtensionCustomAttribute1 -eq $null'| > Get-MailboxStatistics -filter {DisconnectDate -eq $null -and > LastLogonTime -gt (get-date).adddays(-90)} > > > When I run; > > get-mailbox -filter 'ExtensionCustomAttribute1 -eq $null' | > Get-MailboxStatistics > > > I get the first part of the results with the info I'm looking for - I just can't filter this list further. ie adding *-filter* does not work. The result is: ![Results of code](https://i.stack.imgur.com/NmePR.png)<issue_comment>username_1: Technet documentation specifies that filters must be done using single quotes, not curly braces. ``` -Filter 'DisconnectDate -ne $null' ``` Is the example given here: <https://technet.microsoft.com/en-us/library/bb124612(v=exchg.160).aspx> I've run into this issue before when filtering AD users. I have to assume that they changed their standard at some time during the past. If you don't have success with this, you can try using where-object to filter to the right... i.e.: ``` Get-MailboxStatistics | ?{$_.DisconnectDate -ne $null} ``` Obviously use the former if it works. Upvotes: 0 <issue_comment>username_2: I'd say the best solution to your problem is to filter via where-object as stated earlier. The below should work out for you, it has not been tested but should be correct. ``` Get-Mailbox -Filter 'ExtensionCustomAttribute1 -eq $null' | Get-MailboxStatistics | where {$_.DisconnectDate -eq $null -and $_.LastLogonTime -gt (Get-Date).AddDays(-90)} ``` Upvotes: 2 [selected_answer]
2018/03/21
1,270
5,295
<issue_start>username_0: Is it possible to compile and instantiate Kotlin class at runtime? I'm talking about something like that but using Kotlin API: [How do I programmatically compile and instantiate a Java class?](https://stackoverflow.com/questions/2946338/how-do-i-programmatically-compile-and-instantiate-a-java-class) As example: I'm getting full class definition as String: ``` val example = "package example\n" + "\n" + "fun main(args: Array) {\n" + " println(\"Hello World\")\n" + "}\n" ``` And then inserting it into some class.kt and running it so I'm getting "Hello World" printed in console at runtime.<issue_comment>username_1: You might want to look at Kotlin Scripting, see <https://github.com/andrewoma/kotlin-script> Alternatively, you'll need to write your own `eval(kotlin-code-string-here)` method which will dump the text inside `blah.kt` file for example, compile it using an external Kotlin compiler into `blah.class` then dynamically load those classes into the runtime using the Java Classloader doing something like this: ``` MainClass.class.classLoader.loadClass("com.mypackage.MyClass") ``` This might be very slow and unreliable. Another no so great option is to make use of Rhino and run JavaScript inside your Kotlin code. So once again, you'll have an `eval(kotlin-code-string-here)` method which will dump the content to a `blah.kt` file, then you would use a Kotlin2JS compiler to compile it to JavaScript and directly execute the JavaScript inside Kotlin using Rhino which is not great either. Another option is to make use of Kotlin Scripting or an external Kotlin compiler (in both cases, the Kotlin compiler will have to start up) and doing something like this will also allow you to execute dynamically, albeit, only on Unix systems. ``` Runtime.getRuntime().exec(""" "kotlin code here" > blah.kts | sh""") ``` I'm not aware of a clean solution for this, Kotlin was not designed to be run like like PHP / JavaScript / Python which just interprets text dynamically, it has to compile to bytecode first before it can do anything on the JVM; so in each scenario, you will need to compile that code first in one way or another, whether to bytecode or to javascript and in both cases load it into you application using the Java Classloader or Rhino. Upvotes: 2 <issue_comment>username_2: Please check [this solution](https://github.com/s1monw1/KtsRunner) for dependencies, jar resources, etc. Code below isn't enough for successful execution. However, to compile dynamic class you can do the following: ``` val classLoader = Thread.currentThread().contextClassLoader val engineManager = ScriptEngineManager(classLoader) setIdeaIoUseFallback() // hack to have ability to do this from IntelliJ Idea context val ktsEngine: ScriptEngine = engineManager.getEngineByExtension("kts") ktsEngine.eval("object MyClass { val number = 123 } ") println(ktsEngine.eval("MyClass.number")) ``` **Please note: there is code injection possible here. Please be careful and use dedicated process or dedicated ClassLoader for this.** Upvotes: 2 <issue_comment>username_3: KotlinScript can be used to compile Kotlin source code (e.g. to generate a jar file that can then be loaded). Here's a Java project which demonstrates this (code would be cleaner in Kotlin): <https://github.com/alexoooo/sample-kotlin-compile/blob/main/src/main/java/io/github/alexoooo/sample/compile/KotlinCompilerFacade.java> Note that the code you provide would be generated as a nested class (inside the script). 
Here is a Kotlin version: ``` @KotlinScript object KotlinDynamicCompiler { //----------------------------------------------------------------------------------------------------------------- const val scriptClassName = "__" const val classNamePrefix = "${scriptClassName}$" private val baseClassType: KotlinType = KotlinType(KotlinDynamicCompiler::class.java.kotlin) private val contextClass: KClass<*> = ScriptCompilationConfiguration::class.java.kotlin //----------------------------------------------------------------------------------------------------------------- fun compile( kotlinCode: String, outputJarFile: Path, classpathLocations: List, classLoader: ClassLoader ): String? { Files.createDirectories(outputJarFile.parent) val scriptCompilationConfiguration = createCompilationConfigurationFromTemplate( baseClassType, defaultJvmScriptingHostConfiguration, contextClass ) { jvm { val classloaderClasspath: List = classpathFromClassloader(classLoader, false)!! val classpathFiles = classloaderClasspath + classpathLocations.map { it.toFile() } updateClasspath(classpathFiles) } hostConfiguration(ScriptingHostConfiguration (defaultJvmScriptingHostConfiguration) { jvm { compilationCache( CompiledScriptJarsCache { \_, \_ -> outputJarFile.toFile() } ) } }) } val scriptCompilerProxy = ScriptJvmCompilerIsolated(defaultJvmScriptingHostConfiguration) val result = scriptCompilerProxy.compile( kotlinCode.toScriptSource(KotlinCode.scriptClassName), scriptCompilationConfiguration) val errors = result.reports.filter { it.severity == ScriptDiagnostic.Severity.ERROR } return when { errors.isEmpty() -> null else -> errors.joinToString(" | ") } } } ``` Upvotes: 0
2018/03/21
832
3,131
<issue_start>username_0: using `urwid`, I'm trying to separate the highlight/walk and cursor functionality of a `Pile` widget. How can I use `up/down` to change which widget is highlighted, while keeping the cursor in a different widget?<issue_comment>username_1: If you really need this, it probably makes sense to write your own widgets -- maybe based on some classes extending urwid.Text and urwid.Button There is no real "highlight" feature in the widgets that come with urwid, there is only a "focus" feature, and it doesn't seem to be easy to decouple the focus highlight from the focus behavior. You probably want to implement your own widgets with some sort of secondary highlighting. Upvotes: 1 <issue_comment>username_2: The default `focus` behavior couples the cursor with attribute (highlighting) behavior. The example below shows one way to decouple these, where a list of `SelectableIcons` retains the highlight feature, while the cursor is moved to a separate `Edit` widget. It does this via: * overriding the `keypress` method to update the focus where the cursor is not * wrapping each `SelectableIcon` in `AttrMap` that change their `attribute` based on their `Pile's` `focus_position` * after changing the SelectableIcon attributes, the focus (cursor) is set back to the `Edit` widget via `focus_part='body'` * `self._w = ...` is called to update all widgets on screen There may be more concise ways of doing this, but this should be rather flexible. ``` import urwid def main(): my_widget = MyWidget() palette = [('unselected', 'default', 'default'), ('selected', 'standout', 'default', 'bold')] urwid.MainLoop(my_widget, palette=palette).run() class MyWidget(urwid.WidgetWrap): def __init__(self): n = 10 labels = ['selection {}'.format(j) for j in range(n)] self.header = urwid.Pile([urwid.AttrMap(urwid.SelectableIcon(label), 'unselected', focus_map='selected') for label in labels]) self.edit_widgets = [urwid.Edit('', label + ' edit_text') for label in labels] self.body = urwid.Filler(self.edit_widgets[0]) super().__init__(urwid.Frame(header=self.header, body=self.body, focus_part='body')) self.update_focus(new_focus_position=0) def update_focus(self, new_focus_position=None): self.header.focus_item.set_attr_map({None: 'unselected'}) try: self.header.focus_position = new_focus_position self.body = urwid.Filler(self.edit_widgets[new_focus_position]) except IndexError: pass self.header.focus_item.set_attr_map({None: 'selected'}) self._w = urwid.Frame(header=self.header, body=self.body, focus_part='body') def keypress(self, size, key): if key == 'up': self.update_focus(new_focus_position=self.header.focus_position - 1) if key == 'down': self.update_focus(new_focus_position=self.header.focus_position + 1) if key in {'Q', 'q'}: raise urwid.ExitMainLoop() super().keypress(size, key) main() ``` Upvotes: 3 [selected_answer]
2018/03/21
382
1,257
<issue_start>username_0: I have this code ``` select column from Table start with supervisor_id='555555' connect by prior employee_id=supervisor_id ``` So this query gives me the result for all the employees that have `55555` as their `supervisor_id` and any other employees that have those `employee_id` as their `supervisor_id`, but I also want the person with the `user_id` = `555555` to show up in my result too, and he has to show up in the same column as if he is part of the result of the query. Any help is appreciated. Thank you<issue_comment>username_1: You could union the supervisor into the query: ``` select column from Table where user_id='555555' union all select column from Table start with supervisor_id='555555' connect by prior employee_id=supervisor_id ``` I'm assuming that the person with `user_id = '555555'` comes from `Table` also and you want to display the same `column` - if not substitute those changes you need into the above SQL. Upvotes: 0 <issue_comment>username_2: Put an OR condition and it should work. See demo: <http://sqlfiddle.com/#!4/7ef7f9/1> ``` Select distinct * from table1 start with supervisor_id='555555' or employee_id='555555' connect by prior employee_id=supervisor_id ``` Upvotes: 1
2018/03/21
1,009
3,342
<issue_start>username_0: This code is for printing a table and it's working fine but the problem is that when I click on **print table button** it prints the table but when I clicked it again it again prints the same table below I want it to not work again until the new input values are given. once it should print table and then don't until new values are given. I also want it to be more responsive. ```html Multiplication Table .mystyle { width: 100%; padding: 25px; background-color: coral; color: white; font-size: 25px; box-sizing: border-box; } MultiplicationTable ------------------- Table Number: Initial Number: Ending Number: Print Table Add Alternate Row Style |Add Hover Effect | | | // function myFunction() // { // var text = ""; // var Number = document.getElementById("TN").value; // var T; // var I = document.getElementById("IN").value; // var E = document.getElementById("EN").value; // for (T = I; T <= E; T++) { // text += Number + "\*" + T + "=" + Number\*T + "<br>"; // } // document.getElementById("MT").innerHTML = text; // } // function generateTable() // { // //var myVar = prompt("A number?", ""); // var myVar = document.forms.multTables.x.value; // var myString = "<tr><th>"+ myVar + " times tables</th></tr>"; // for (i=1; i<=myVar; i++) // { // myString += "<tr><td>"; // myString += i+ " x " +myVar+ " = " +(i\*myVar)+ "\n"; // myString += "</td></tr>"; // } // document.getElementById('t').innerHTML = myString; // return false; // } function myTable() { var Number = document.getElementById("TN").value; var T; var I = document.getElementById("IN").value; var E = document.getElementById("EN").value; var temp=""; for (T = I; T <= E; T++) { temp+="<tr><td>"+Number+"</td><td>\*</td><td>" + T + "</td><td>=</td><td>" + Number\*T +"</td></tr>"; } $("#displayTables").append(temp); } function Bordertoggle() { var element = document.getElementById("displayTables"); element.classList.toggle("table-bordered"); var change = document.getElementById("Bordertoggle"); if (change.innerHTML == "Add Alternate Row Style") { change.innerHTML = "Remove Alternate Row Style"; } else { change.innerHTML = "Add Alternate Row Style"; } } function Hovertoggle() { var element = document.getElementById("displayTables"); element.classList.toggle("table-hover"); var change = document.getElementById("Hovertoggle"); if (change.innerHTML == "Add Hover Effect") { change.innerHTML = "Remove Hover Effect"; } else { change.innerHTML = "Add Hover Effect"; } } ```<issue_comment>username_1: You could union the supervisor into the query: ``` select column from Table where user_id='555555' union all select column from Table start with supervisor_id='555555' connect by prior employee_id=supervisor_id ``` I'm assuming that the person with `user_id = '555555'` comes from `Table` also and you want to display the same `column` - if not substitute those changes you need into the above SQL. Upvotes: 0 <issue_comment>username_2: Put an OR condition and it should work. See demo: <http://sqlfiddle.com/#!4/7ef7f9/1> ``` Select distinct * from table1 start with supervisor_id='555555' or employee_id='555555' connect by prior employee_id=supervisor_id ``` Upvotes: 1
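A sketch of one way to get the behaviour the question asks for -- build the table once per set of inputs -- using the element ids (`TN`, `IN`, `EN`, `displayTables`) from the markup above; the old rows are replaced rather than appended, and repeated clicks with unchanged inputs do nothing:

```js
let lastInputs = null;

function myTable() {
  const number = document.getElementById("TN").value;
  const start  = document.getElementById("IN").value;
  const end    = document.getElementById("EN").value;
  const key    = number + "|" + start + "|" + end;

  if (key === lastInputs) return;   // same inputs as the last click -> ignore
  lastInputs = key;

  let rows = "";
  for (let t = Number(start); t <= Number(end); t++) {
    rows += "<tr><td>" + number + "</td><td>*</td><td>" + t +
            "</td><td>=</td><td>" + number * t + "</td></tr>";
  }
  $("#displayTables").html(rows);   // replace the previous table body
}
```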
2018/03/21
809
2,542
<issue_start>username_0: I am wanting my image to start essentially look as if it is fading out once the image has been scrolled down to the halfway point. [This is a great example.](https://www.aktivwebsolutions.com/solutions) I am using waypoints to come up with the trigger point, which is working fine. I can't figure out how to apply the opacity, especially when it comes to fading out the image the further down you scroll and then to fade back in as the user scrolls up. Do I use an overlay or apply an `opacity: 0` like I have in my snippet? Does anyone have any idea how to do this? ```js $('#servMain').waypoint(function() { $('#servMain').addClass('fadeOpacity'); console.log('scrolled into view'); }, { offset: '-30%' }); ``` ```css #servMain { width: 100%; height: 1000px; background-image: url("https://upload.wikimedia.org/wikipedia/commons/e/e0/Long_March_2D_launching_VRSS-1.jpg"); background-repeat: no-repeat; background-size: 100% 100%; position: relative; border: 1px solid red; } .fadeOpacity { opacity 0; -webkit-transition: opacity 1s ease-in-out; -moz-transition: opacity 1s ease-in-out; -o-transition: opacity 1s ease-in-out; transition: opacity 1s ease-in-out; } ``` ```html ```<issue_comment>username_1: you've got a typo in your css ( forgot semi-column ) ``` .fadeOpacity { opacity : 0; ``` check this : <https://jsfiddle.net/h5mj5ezh/> Upvotes: 2 <issue_comment>username_2: You can't do this with Waypoints, but you can write something yourself. Here is an example of the javascript needed to set the opacity depending on the scroll (I stole this code from the website you linked): ``` $(document).scroll(function (t) { var $main = $("#servMain"); var h = window.innerHeight; var r = (h - window.scrollY) / (h - 400); if (r >= 0) { $main.css("opacity", r); } }); ``` They 400 is like the offset I believe so you can adjust that to your needs. ```js $(document).scroll(function (t) { var $main = $("#servMain"); var h = window.innerHeight; var r = (h - window.scrollY) / (h - 100); if (r >= 0) { $main.css("opacity", r); } }); ``` ```css #servMain { width: 100%; height: 1000px; background-image: url("https://upload.wikimedia.org/wikipedia/commons/e/e0/Long_March_2D_launching_VRSS-1.jpg"); background-repeat: no-repeat; background-size: 100% 100%; position: relative; border: 1px solid red; } ``` ```html ``` Upvotes: 3 [selected_answer]
2018/03/21
1,242
4,472
<issue_start>username_0: Hello I´m trying to write program which is counting all characters in given string using HashMap and then it prints result on console like: {a=2, s=2, k=1, m=1, o=1} So far I have something like this: ``` public void result(String sentence) { int value; HashMap mp = new HashMap(); for (int i = 0; i < sentence.length(); i++) { if (mp.containsKey(sentence.charAt(i))) { value = mp.get(sentence.charAt(i)); value++; mp.put(sentence.charAt(i), value); } else { mp.put(sentence.charAt(i), 1); } } System.out.print(mp); } ``` I want to ask how can I ignore spaces, capital letter and punctuation in that given string. So it will not show in result? I hope someone will help me thank you!<issue_comment>username_1: I would recommend filtering out the characters you don't want before passing it into the `HashMap`. For example, ``` sentence = sentence.replaceAll("[^a-z]", ""); ``` Would remove anything other than a lowercase letter, and ``` sentence = sentence.replaceAll("[^a-z0-9]", ""); ``` Would leave you with lower-case letters and numbers. If you want to convert uppercase letters to lowercase instead of ignoring them, then first use ``` sentence = sentence.toLowerCase(); ``` Upvotes: 4 [selected_answer]<issue_comment>username_2: You could check the ascii value of the current character and if it's in the given range 'a' (97) and 'z' (122) you would add it to the map, otherwise you ignore it. ``` if (mp.containsKey(sentence.charAt(i))) { ... } else if(sentence.charAt(i) >= 'a' || sentence.charAt(i) <= 'z') { mp.put(sentence.charAt(i), 1); } else { System.out.println("Ignoring - " + sentence.charAt(i)); } ``` This would prevent any extra traversing of the sentence and clean up before having to create your map. O(1) Upvotes: 0 <issue_comment>username_3: This is how I'd approach the task: ```java public static void main(String[] args) { String source = "Here comes another challenger!"; Map characterCounts = countCharacters(source); System.out.println("Source string \"" + source + "\" gives map:\n" + characterCounts); } public static Map countCharacters(String source) { Map characterCounts = new HashMap<>(64); source.chars().map(LetterCounter::lowercaseCharacter).filter( LetterCounter::isCharactedCounted).forEach(c -> characterCounts.merge((char) c, 1, (o, n) -> o + 1)); return characterCounts; } public static int lowercaseCharacter(int characterValue) { char character = (char) characterValue; return (int) Character.toLowerCase(character); } public static boolean isCharactedCounted(int character) { if (character >= 'a' && character <= 'z') { return true; } return false; } ``` Running the example `main` method gives the output: ```none Source string "Here comes another challenger!" gives map: {a=2, c=2, e=6, g=1, h=3, l=2, m=1, n=2, o=2, r=3, s=1, t=1} ``` The `countCharacters` method is creating an `IntStream` of the characters found within the given String, but this is just an elegant/compact/lazy way of doing exactly the same thing you're doing with a for loop. The `Stream.map` method calls `lowercaseCharacter(int)` to turn the `int` character value into a lowercase character `int` value. Then the `filter` method is discarding all the characters which cause the method `isCharactedCounted` to return `false` because we don't want those. Then `forEach` is used to process every character from the string which does interest us (the code in `forEach` is called each time the character is found in the string). 
The `Map.merge` method is just an elegant/compact/lazy way of doing exactly what you're already doing with your get,++,put code. Then the finished map is returned. You can customise the `isCharacterCounted` method to suit your needs. Note that this method takes an `int` primitive and not a `char` (because the `String.chars()` method returns a stream of `int` and not a stream of `char`). However, in Java a `char` is really just an `int` anyway, so the two are basically interchangeable, so long as you remember to use the correct type cast when needed. (Note that in the call to `Map.merge` we have to cast the `int c` into `(char) c` because the map is expecting a `Character` and an `int` cannot be auto-boxed into a `Character`. And we need to create the utility method `lowercaseCharacter` to convert an `int` character value into a lowercase `int` character value.) Upvotes: 0
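Putting the accepted answer together with the original loop gives a short, runnable version (iteration order of a `HashMap` is not guaranteed):

```java
import java.util.HashMap;
import java.util.Map;

public class LetterCounter {
    public static Map<Character, Integer> result(String sentence) {
        // Fold case first, then drop everything that is not a lowercase letter.
        String cleaned = sentence.toLowerCase().replaceAll("[^a-z]", "");
        Map<Character, Integer> counts = new HashMap<>();
        for (int i = 0; i < cleaned.length(); i++) {
            counts.merge(cleaned.charAt(i), 1, Integer::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        System.out.println(result("As a sample, OK?")); // e.g. {a=3, s=2, e=1, k=1, l=1, m=1, o=1, p=1}
    }
}
```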
2018/03/21
1,908
8,294
<issue_start>username_0: I stumbled on this problem that I am not able to solve properly. Here is some explanation. **Code** I have these Product classes: ``` public abstract class Product { public int BaseParam {get;set;} } public class SpecificProductA : Product { public int ParamA {get;set;} } public class SpecificProductB : Product { public int ParamB {get;set;} } ``` And I have these Consumer classes: ``` public interface IConsumer { void Consume(Product product); } public class ConcreteConsumerA : IConsumer { public void Consume(Product product) { /* I need ParamA of SpecificProductA */ } } public class ConcreteConsumerB : IConsumer { public void Consume(Product product) { /* I need ParamB of SpecificProductB */ } } ``` **Problem** I need the concrete implementations of the IConsumer interface to access specific parts of the Product. ConcreteConsumerA will only be able to consume ProductA and ConcreteConsumerB can only consume ProductB. This breaks the nice abstraction that I had with Consumer & Product. **Solution 1: Casting** The first and obvious thing that can be done is **casting** the product instance to the specific product. It works but it is not ideal as I rely on the runtime to throw any errors if anything is wrong with the type. **Solution 2: Breaking the inheritance of the product classes** The other solution has been to **break the Product inheritance** to something like this: ``` public class Product { public int BaseParam {get;set;} public SpecificProductA ProductA {get;set;} public SpecificProductB ProductB {get;set;} } public class SpecificProductA { public int ParamA {get;set;} } public class SpecificProductB { public int ParamB {get;set;} } ``` **Solution 3: Generics** I can also make the IConsumer interface **generic** like this: ``` public interface IConsumer where TProduct: Product { void Consume(Product product); } public class ConcreteConsumerA : IConsumer { public void Consume(SpecificProductA productA) { /\* I now have access to ParamA of SpecificProductA \*/ } } public class ConcreteConsumerB : IConsumer { public void Consume(SpecificProductB productB) { /\* I now have access to ParamA of SpecificProductB \*/ } } ``` However, like cancer, this generic interface is now spreading into the whole program which is not ideal either. I am not certain what is wrong here and which rule has been broken. Maybe it is a design issue that needs to be changed. Is there a better solution that the ones provided to solve this problem?<issue_comment>username_1: If `ConcreteConsumerA` *requires* a `SpecificConfigurationA` to do its work, and not any `Configuration` instance then *it should accept `SpecificConfigurationA`*, and not `Configuration`. Accepting any type of configuration and then just erroring at runtime when the caller doesn't know that you have requirements you haven't provided is just asking for bugs. For your second solution you make a configuration object that simply has all of the information any consumer would ever need, so that no consumer can be given a configuration object that lacks what they need. If that's entirely feasible for you, then that's great. There's no way for any consumer to ever have an invalid object; it will always work just fine. If you *can't* unify the objects, and there need to be different types of specific implementations, where different consumers can only handle certain types of configurations, then the final solution is the only real option. 
It of course ensures that you can't ever provide a configuration value of an improper type. While it may be more *code* than just not having the types keep track of this information, that doesn't mean it's more *work*. If the types weren't keeping track *for* you as to which of these consumers require which types of configurations then *you'd have to be keeping track of it somehow*, and if you got it wrong, instead of figuring it out immediately, due to your program not compiling, you wouldn't find out until that improper situation actually came up in testing and you got an invalid cast exception. This is all the more problematic if the situation is uncommon, rather than a bug that happens in all situations, resulting in you missing it in your testing and it only being found by customers later. Upvotes: 2 <issue_comment>username_2: If generic spreading is something you want to avoid, you can mitigate the runtime errors of option 1 giving the consumer a way to know if he's passing along the right types: ``` public interface IConsumer { bool TryConsume(Product product); } public class ConcreteConsumerA : IConsumer { public bool TryConsume(Product product) { if (product is SpecificProductA a) { //consume a return true; } return false; } } ``` Upvotes: 2 <issue_comment>username_3: I have found a solution that solves my problem: the Visitor Pattern. The trick was to find another abstraction (called here `ICommonInterface`) between my `IConsumer` and my `Product` and let the visitors deals with the details. ``` public interface IProductVisitor { ICommonInterface Visit(SpecificProductA productA); ICommonInterface Visit(SpecificProductB productB); } /* The purpose of this abstract class is to minimize the impact of the changes if I had to support another SpecificProductC. 
*/ public abstract class ProductVisitor : IProductVisitor { public virtual ICommonInterface GetCommonInterface(SpecificProductA productA) { throw new NotImplementedException(); } public virtual ICommonInterface GetCommonInterface(SpecificProductB productB) { throw new NotImplementedException(); } } public sealed class SpecificProductAVisitor : ProductVisitor { public override ICommonInterface GetCommonInterface(SpecificProductA productA) { /* This guy will deal with ParamA of SpecificProductA */ return new ImplACommonInterface(productA); } } public sealed class SpecificProductBVisitor : ProductVisitor { public override ICommonInterface GetCommonInterface(SpecificProductB productB) { /* This guy will deal with ParamB of SpecificProductB */ return new ImplBCommonInterface(productB); } } ``` Then I have to allow the new `IProductVisitor` on the `Product` classes: ``` public abstract class Product { public int BaseParam { get; set; } public abstract ICommonInterface Visit(IProductVisitor productVisitor); } public class SpecificProductA : Product { public int ParamA {get;set;} public override ICommonInterface Visit(IProductVisitor productVisitor) { /* Forwards the SpecificProductA to the Visitor */ return productVisitor.GetCommonInterface(this); } } public class SpecificProductB : Product { public int ParamB {get;set;} public override ICommonInterface Visit(IProductVisitor productVisitor) { /* Forwards the SpecificProductB to the Visitor */ return productVisitor.GetCommonInterface(this); } } ``` Each `IConsumer` implementations can now do the following without having the need to cast anything: ``` public interface IConsumer { void Consume(Product product); ICommonObject Visit(IProductVisitor productVisitor); } public class ConcreteConsumerA : IConsumer { public void Consume(Product product) { /* The logic that needs for ParamA of SpecificProductA is now pushed into the Visitor. */ var productAVisitor = new SpecificProductAVisitor(); ICommonInterface commonInterfaceWithParamA = product.GetCommonInterface(productAVisitor); } } public class ConcreteConsumerB : IConsumer { public void Consume(Product product) { /* The logic that needs for ParamB of SpecificProductB is now pushed into the Visitor. */ var productBVisitor = new SpecificProductBVisitor(); ICommonInterface commonInterfaceWithParamB = product.GetCommonInterface(productBVisitor); } } ``` Upvotes: 1 [selected_answer]
2018/03/21
1,311
5,048
<issue_start>username_0: I'm currently trying to learn constraints and styling programmatically in `Swift`. I'm also trying to maintain clean and modularized code by splitting up code that relates to "styling". I simply have my `LoginViewController`: ``` import UIKit class LoginViewController: UIViewController { var loginView: LoginView! override func viewDidLoad() { super.viewDidLoad() loginView = LoginView(frame: CGRect.zero) self.view.addSubview(loginView) // AutoLayout loginView.autoPinEdgesToSuperviewEdges(with: UIEdgeInsets.zero, excludingEdge: .bottom) } override var preferredStatusBarStyle: UIStatusBarStyle { return .lightContent } } ``` Then my `LoginView`: ``` import UIKit class LoginView: UIView { var shouldSetupConstraints = true var headerContainerView: UIView! override init(frame: CGRect) { super.init(frame: frame) // Header Container View headerContainerView = UIView(frame: CGRect.zero) headerContainerView.backgroundColor = UIColor(red:0.42, green:0.56, blue:0.14, alpha:1.0) // #6B8E23 headerContainerView.translatesAutoresizingMaskIntoConstraints = false self.addSubview(headerContainerView) headerContainerView.topAnchor.constraint(equalTo: self.superview!.topAnchor) } required init?(coder aDecoder: NSCoder) { super.init(coder: aDecoder) } override func updateConstraints() { if(shouldSetupConstraints) { // AutoLayout constraints shouldSetupConstraints = false } super.updateConstraints() } } ``` Where I am getting stuck is with just simply trying to add this `headerContainerView` to the top of my `superview`. I want to be able to add it so it pins itself to the top, left and right of the `superview` and only 1/3 of the `superview's` height. I continue to try and reference the `superview` with no success and I cannot find a solution that helps me understand on the internet. Any suggestions on how I can complete this? Thank you for taking the time for those that respond. **NOTE**: I did start out using `PureLayout` which is really nice. However, I am an individual that likes to understand what is going on behind the scenes or at least how to write the code at its base level. You can see that I am using a `PureLayout` function in my `LoginViewController`, but I am looking to change that. I would prefer a solution that doesn't add a third party library.<issue_comment>username_1: ``` headerContainerView.snp.makeConstraints { (make) in make.top.equalTo(self) make.leading.and.trailing.equalTo(self) make.height.equalTo(self.frame.height/3) } ``` With [SnapKit](https://github.com/SnapKit/SnapKit "SnapKit"). 
Upvotes: 0 <issue_comment>username_2: With SnapKit, you can do the following: ``` override func viewDidLoad() { super.viewDidLoad() loginView = LoginView(frame: CGRect.zero) self.view.addSubview(loginView) // AutoLayout loginView.snp.makeConstraints { (make) in make.left.equalTo(view.snp.left) make.right.equalTo(view.snp.right) make.top.equalTo(view.snp.top) make.height.equalTo(view.snp.height).multipliedBy(1/3) } } ``` Upvotes: 0 <issue_comment>username_3: Here `self` in the custom `UIView` class is the parent view of `headerContainerView` so , You can add this , Also I recommend to learn constraints first without 3rd party libraries to fully understand the concept as you will learn a lot from seeing conflicts and other things , once done , shift to libraries ``` override init(frame: CGRect) { super.init(frame: frame) // Header Container View headerContainerView = UIView(frame: CGRect.zero) headerContainerView.backgroundColor = UIColor(red:0.42, green:0.56, blue:0.14, alpha:1.0) // #6B8E23 headerContainerView.translatesAutoresizingMaskIntoConstraints = false self.addSubview(headerContainerView) headerContainerView.topAnchor.constraint(equalTo: self.topAnchor).isActive = true headerContainerView.leadingAnchor.constraint(equalTo: self.leadingAnchor).isActive = true headerContainerView.trailingAnchor.constraint(equalTo: self.trailingAnchor).isActive = true headerContainerView.heightAnchor.constraintEqualToAnchor(self.heightAnchor, multiplier:1.0/3.0, constant: 0.0).active = true } ``` // loginView layout ``` override func viewDidLoad() { super.viewDidLoad() loginView = LoginView(frame: CGRect.zero) self.view.addSubview(loginView) loginView.translatesAutoresizingMaskIntoConstraints = false loginView.topAnchor.constraint(equalTo: self.view.topAnchor).isActive = true loginView.leadingAnchor.constraint(equalTo: self.view.leadingAnchor).isActive = true loginView.trailingAnchor.constraint(equalTo: self.view.trailingAnchor).isActive = true loginView.bottomAnchor.constraint(equalTo: self.view.bottomAnchor).isActive = true ``` Upvotes: 2 [selected_answer]
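One small note on the accepted answer above: the height line mixes the pre-Swift-3 spelling (`constraintEqualToAnchor(..., multiplier:, constant:).active`) with the newer `isActive` style used on the other lines. In current Swift that constraint would read:

```swift
headerContainerView.heightAnchor
    .constraint(equalTo: self.heightAnchor, multiplier: 1.0 / 3.0)
    .isActive = true
```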
2018/03/21
976
2,233
<issue_start>username_0: I have a list of strings that look liked this ``` "Heartbeats::hype-wss://vps-gb2.nsa.drw:7026" "Heartbeats::hype-wss://vps-de7.nsa.drw:7026" "Heartbeats::hype-wss://vps-gb3.nsa.drw:7006" "Heartbeats::hype-wss://vps-de2.nsa.drw:7043" "Heartbeats::hype-wss://vps-jp2.nsa.drw:7060" "Heartbeats::hype-wss://vps-jp2.nsa.drw:7071" "Heartbeats::hype-wss://vps-de3.nsa.drw:7055" "Heartbeats::hype-wss://vps-de3.nsa.drw:7066" "Heartbeats::hype-wss://vps-gb2.nsa.drw:7005" ``` I would like to get the substrings `gb2`, `de7`, `gb3` etc that are after `vps-` and also the four digit numbers at the end. Is there a clean way to do this in regex? Thanks in advance<issue_comment>username_1: You can easily do like: ``` vps-([^.]+) ``` which means: * After `vps-` * `(` match * `[^.]` anything that is not a dot * `+` one or more times For the number at the end you can than expand with: ``` vps-([^.]+).*:(\d+)$ ``` * `.*` any character 0 or more times until... * `:` column and... * `(\d+)` Match one or more integers * `$` followed by end of string Upvotes: 0 <issue_comment>username_2: This can work for you: ``` .*vps-([^.]*)[^:]*:(\d*) ``` You must get the group[0] and group[1]. You can test it here: <https://regex101.com/#javascript> Upvotes: 0 <issue_comment>username_3: Using those samples, this is an alternative `/(\w+\d)/g` **Explanation:** <https://regex101.com/r/64MOHt/1> ```js console.log("Heartbeats::hype-wss://vps-gb2.nsa.drw:7026".match(/(\w+\d)/g)) console.log("Heartbeats::hype-wss://vps-de7.nsa.drw:7026".match(/(\w+\d)/g)) console.log("Heartbeats::hype-wss://vps-gb3.nsa.drw:7006".match(/(\w+\d)/g)) console.log("Heartbeats::hype-wss://vps-de2.nsa.drw:7043".match(/(\w+\d)/g)) console.log("Heartbeats::hype-wss://vps-jp2.nsa.drw:7060".match(/(\w+\d)/g)) console.log("Heartbeats::hype-wss://vps-jp2.nsa.drw:7071".match(/(\w+\d)/g)) console.log("Heartbeats::hype-wss://vps-de3.nsa.drw:7055".match(/(\w+\d)/g)) console.log("Heartbeats::hype-wss://vps-de3.nsa.drw:7066".match(/(\w+\d)/g)) console.log("Heartbeats::hype-wss://vps-gb2.nsa.drw:7005".match(/(\w+\d)/g)) ``` ```css .as-console-wrapper { max-height: 100% !important; top: 0; } ``` Upvotes: 3 [selected_answer]
2018/03/21
892
2,418
<issue_start>username_0: I am trying to aggregate strings, but limited to only the preceding rows, not the whole partition. Does anyone know how to do this in Redshift? What I am trying to achieve is the `appended_event_namespace` column below. [![enter image description here](https://i.stack.imgur.com/ZvI6V.png)](https://i.stack.imgur.com/ZvI6V.png) This is what I've tried so far. ``` LISTAGG(event_namespace, '/') WITHIN GROUP (ORDER BY tstamp_true) OVER (PARTITION BY acct_id) AS appended_event_namespace ``` This results in the full `ApplicationLaunch/CategoryBrowse/NotificationCenter/UserProfile` aggregation on every single row instead of what is in the desired screenshot. The difficulty is in getting it to only append up to the current row since there doesn't seem to be a frame-clause for Redshift's LISTAGG(). Thanks for any ideas that may help.
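Since Redshift's window `LISTAGG` accepts no frame clause, one common workaround is to build the running list with a self-join that only brings in rows at or before the current one, then aggregate with the plain (non-window) `LISTAGG`. The table name below is an assumption, and it assumes one row per `(acct_id, tstamp_true)`:

```sql
-- Assumed table name "events"; column names taken from the query above.
SELECT
    a.acct_id,
    a.tstamp_true,
    a.event_namespace,
    LISTAGG(b.event_namespace, '/') WITHIN GROUP (ORDER BY b.tstamp_true)
        AS appended_event_namespace
FROM events a
JOIN events b
  ON  b.acct_id = a.acct_id
  AND b.tstamp_true <= a.tstamp_true
GROUP BY a.acct_id, a.tstamp_true, a.event_namespace
ORDER BY a.acct_id, a.tstamp_true;
```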
2018/03/21
465
1,364
<issue_start>username_0: I have this script using the npm module [`node-schedule`](https://www.npmjs.com/package/node-schedule). I think I set it to run every 6 hours, but it runs every minute for the whole hour when the hour is 0, 6, 12 or 18, and it should only run once. I could dirty fix it with a bool, but that ain't my style. A cronjob in Linux is not an option either, it needs to run cross-platform ``` let schedule = require('node-schedule'); let j = schedule.scheduleJob('* */6 * * *', function() { do smt }); ```<issue_comment>username_1: This will run every minute. Change cron schedule to `0 */6 * * *`, to only run it when the minute is 0. Upvotes: 2 [selected_answer]<issue_comment>username_2: You have to do: ``` let schedule = require('node-schedule'); let j = schedule.scheduleJob('0 0 */6 * * *', function() { do smt }); ``` It will run at second 0, minute 0, every 6 hours. They use this [format](https://github.com/node-schedule/node-schedule#cron-style-scheduling) ``` * * * * * * ┬ ┬ ┬ ┬ ┬ ┬ │ │ │ │ │ | │ │ │ │ │ └ day of week (0 - 7) (0 or 7 is Sun) │ │ │ │ └───── month (1 - 12) │ │ │ └────────── day of month (1 - 31) │ │ └─────────────── hour (0 - 23) │ └──────────────────── minute (0 - 59) └───────────────────────── second (0 - 59, optional) ```
2018/03/21
994
3,426
<issue_start>username_0: I'm adapting [this tutorial here](https://github.com/Hvass-Labs/TensorFlow-Tutorials/blob/master/02_Convolutional_Neural_Network.ipynb) so I can train a ConvNet in my own set of images. So I made this function to try and get batches, though it does not create batches (yet): ``` def training_batch(batch_size): images = trainpaths for i in range(len(images)): # converting the path to an image image = mpimg.imread(images[i]) images[i] = image # Create batches X, Y = images, trainlabels return X, Y ``` And this function is called here: ``` def optimize(num_iterations): global total_iterations for i in range(total_iterations, total_iterations + num_iterations): # Get a batch of training examples. # x_batch now holds a batch of images and # y_true_batch are the true labels for those images. x_batch, y_true_batch = training_batch(train_batch_size) # Put the batch into a dict with the proper names # for placeholder variables in the TensorFlow graph. feed_dict_train = {x: x_batch, y_true: y_true_batch} # Run the optimizer using this batch of training data. # TensorFlow assigns the variables in feed_dict_train # to the placeholder variables and then runs the optimizer. session.run(optimizer, feed_dict=feed_dict_train) (...) ``` thing is if I run this if I run this code I get ``` Traceback (most recent call last): File "scr.py", line 405, in optimize(1) File "scr.py", line 379, in optimize session.run(optimizer, feed\_dict=feed\_dict\_train) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 905, in run run\_metadata\_ptr) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1116, in \_run str(subfeed\_t.get\_shape()))) ValueError: Cannot feed value of shape (2034, 218, 178, 3) for Tensor u'x:0', which has shape '(?, 116412)' ``` Can someone shine some light on how to fix this?<issue_comment>username_1: Adding the following line: ``` x_batch = x_batch.reshape((-1, 218 * 178 * 3)) ``` should fix the error. However, since you're building a convolutional neural network, you'll need spatial information of the images anyway. So I'd suggest you change your `x` placeholder to shape `(None, 218, 178, 3)`, rather than `(None, 116412)` instead. The `x_batch` conversion in this case will not be necessary. Upvotes: 2 [selected_answer]<issue_comment>username_2: you need to reshape the input to `(?, 116412)` ``` def optimize(num_iterations): global total_iterations for i in range(total_iterations, total_iterations + num_iterations): # Get a batch of training examples. # x_batch now holds a batch of images and # y_true_batch are the true labels for those images. x_batch, y_true_batch = training_batch(train_batch_size) x_batch = tf.reshape(x_batch,[-1, 218 * 178 * 3]) # Put the batch into a dict with the proper names # for placeholder variables in the TensorFlow graph. feed_dict_train = {x: x_batch, y_true: y_true_batch} # Run the optimizer using this batch of training data. # TensorFlow assigns the variables in feed_dict_train # to the placeholder variables and then runs the optimizer. session.run(optimizer, feed_dict=feed_dict_train) (...) ``` Upvotes: 0
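To make the shape mismatch in that traceback concrete: the placeholder `x` expects flattened vectors of length 218 * 178 * 3 = 116412, while the batch arrives as 4-D image data, so either the batch has to be flattened or the placeholder redefined with the image shape, as the accepted answer suggests. A minimal NumPy sketch of the flattening step, with a small hypothetical batch standing in for the 2034 images:

```python
import numpy as np

# Hypothetical batch shaped like the one in the traceback: (batch, height, width, channels).
x_batch = np.zeros((4, 218, 178, 3), dtype=np.float32)

# Flatten each image into one row so it matches a placeholder of shape (None, 116412).
x_flat = x_batch.reshape(-1, 218 * 178 * 3)
print(x_flat.shape)  # (4, 116412)
```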
2018/03/21
1,826
5,071
<issue_start>username_0: I am working on a set of biological sequences which involves the use of ncbi-blast. I need some help with processing the output file using python regex. The text result containing multiple outputs (sequence analysis results) looks something like this, > > **Query= lcl|TRINITY\_DN2888\_c0\_g2\_i1** > > > Length=1394 > Score E Sequences producing significant alignments: > > (Bits) Value > > > sp|Q9S775|PKL\_ARATH > > > CHD3-type chromatin-remodeling factor PICKLE... 1640 0.0 > > > sp|Q9S775|PKL\_ARATH CHD3-type chromatin-remodeling factor PICKLE > OS=Arabidopsis thaliana OX=3702 GN=PKL PE=1 SV=1 Length=1384 > > > Score = 1640 bits (4248), Expect = 0.0, Method: Compositional matrix > adjust. Identities = 830/1348 (62%), Positives = 1036/1348 (77%), > Gaps = 53/1348 (4%) > > > Query 1 > > MSSLVERLRVRSERRPLYTDDDSDDDLYAARGGSESKQEERPPERIVRDDAKNDTCKTCG 60 > MSSLVERLR+RS+R+P+Y DDSDDD + + +Q E IVR DAK + C+ CG Sbjct 1 > > MSSLVERLRIRSDRKPVYNLDDSDDDDFVPKKDRTFEQ----VEAIVRTDAKENACQACG 56 > > > Lambda K H a alpha > 0.317 0.134 0.389 0.792 4.96 > > > Gapped Lambda K H a alpha sigma > 0.267 0.0410 0.140 1.90 42.6 43.6 > > > Effective search space used: 160862965056 > > > **Query= lcl|TRINITY\_DN2855\_c0\_g1\_i1** > > > Length=145 ........................................ > ................................................... > ................................................... > > > I want to extract the information starting from "***Query= lcl|TRINITY\_DN2888\_c0\_g2\_i1***" to the next query "***Query=lcl|TRINITY\_DN2855\_c0\_g1\_i1***" and store it in a python list for further analysis (since the entire file contains few thousands of query results). Is there a python regex code that can do this action? Here is my code: ``` #!/user/bin/python3 file=open("path/file_name","r+") import re inter=file.read() lst=[] lst=re.findall(r'>(.*)>',inter,re.DOTALL) print(lst) for x in lst: print(x) ``` I get the wrong output since the code prints the entire information present in file (thousands) rather than picking up one result at a time. Thank you<issue_comment>username_1: To get the result you want, edit the line with the `re.findall()` method call to the following using `re.split()`: ``` lst=re.split(r'(>Query\=.*)?',inter,re.DOTALL) ``` See this for more info on `re.split()`: <https://docs.python.org/2/library/re.html> Also, you may want to consider using the now deprecated BLAST parser in `biopython`: <http://biopython.org/DIST/docs/tutorial/Tutorial.html#htoc96> > > The plain text BLAST parser is located in Bio.Blast.NCBIStandalone. > > > As with the XML parser, we need to have a handle object that we can > pass to the parser. The handle must implement the readline() method > and do this properly. The common ways to get such a handle are to > either use the provided blastall or blastpgp functions to run the > local blast, or to run a local blast via the command line, and then do > something like the following: > > > > > > > > > > > > > result\_handle = open("my\_file\_of\_blast\_output.txt") > > > > > > > > > > > > > > > > > > Well, now that we’ve got a handle (which we’ll call result\_handle), we > are ready to parse it. 
This can be done with the following code: > > > ``` >>> from Bio.Blast import NCBIStandalone >>> blast_parser = NCBIStandalone.BlastParser() >>> blast_record = blast_parser.parse(result_handle) ``` > > This will parse the BLAST report into a Blast Record class (either a > Blast or a PSIBlast record, depending on what you are parsing) so that > you can extract the information from it. In our case, let’s just print > out a quick summary of all of the alignments greater than some > threshold value. > > > ``` >>> E_VALUE_THRESH = 0.04 >>> for alignment in blast_record.alignments: ... for hsp in alignment.hsps: ... if hsp.expect < E_VALUE_THRESH: ... print('****Alignment****') ... print('sequence:', alignment.title) ... print('length:', alignment.length) ... print('e value:', hsp.expect) ... print(hsp.query[0:75] + '...') ... print(hsp.match[0:75] + '...') ... print(hsp.sbjct[0:75] + '...') ``` > > If you also read the section 7.3 on parsing BLAST XML output, you’ll > notice that the above code is identical to what is found in that > section. Once you parse something into a record class you can deal > with it independent of the format of the original BLAST info you were > parsing. Pretty snazzy! > > > Upvotes: 3 [selected_answer]<issue_comment>username_2: I finally found the solution to break the huge file into small chunks so that I can process individual query result using python regex... Here is my code... ``` #!/user/bin/python3 file=open("/path/file_name.txt","r+") import re inter=file.read() lst=re.findall('(?<=Query= lcl)(.*?)(?=Effective search space)', inter, flags=re.S) print(lst) ``` Thank you all for helping me out... Upvotes: 2
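One detail worth keeping in mind when adapting the snippets above: `re.findall()` takes `flags` as its third positional argument, but `re.split()` does not (its third positional argument is `maxsplit`), so flags such as `re.DOTALL` are safest passed by keyword, as the final `re.findall(..., flags=re.S)` call already does. A small sketch in that style, run on a made-up two-query miniature of the report:

```python
import re

# Tiny, made-up stand-in for the real BLAST report, with two query blocks.
report = """Query= lcl|TRINITY_DN2888_c0_g2_i1
Length=1394
Effective search space used: 160862965056

Query= lcl|TRINITY_DN2855_c0_g1_i1
Length=145
Effective search space used: 123456789
"""

# Capture each block from "Query=" through its "Effective search space" line,
# passing the DOTALL flag by keyword.
blocks = re.findall(r"Query= lcl\|.*?Effective search space used: \d+", report, flags=re.S)
for block in blocks:
    print(block.splitlines()[0])
```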
2018/03/21
1,261
3,964
<issue_start>username_0: I need to convert the following finite state diagram into Verilog code. [![enter image description here](https://i.stack.imgur.com/temVp.png)](https://i.stack.imgur.com/temVp.png) I've included the code I've written so far below. It looks to me like I've implemented all of the logic correctly and the code works for the first few input combinations. It eventually fails however and I can't seem to figure out why. ``` module FiniteStateMachine(output reg out_z, input in_x, in_y, clk, reset_b); parameter S0 = 2'b00, S1 = 2'b01, S2 = 2'b10, S3 = 2'b11; reg state; always @(posedge clk, negedge reset_b) begin // set state if (reset_b || !in_x) state <= S0; else case (state) S0: state <= (in_y == 1) ? S1 : S3; S1: state <= S2; S2: state <= S3; S3: state <= S3; endcase // set output out_z <= (state == S2 || state == S3) ? 1 : 0; end endmodule ```<issue_comment>username_1: To get the result you want, edit the line with the `re.findall()` method call to the following using `re.split()`: ``` lst=re.split(r'(>Query\=.*)?',inter,re.DOTALL) ``` See this for more info on `re.split()`: <https://docs.python.org/2/library/re.html> Also, you may want to consider using the now deprecated BLAST parser in `biopython`: <http://biopython.org/DIST/docs/tutorial/Tutorial.html#htoc96> > > The plain text BLAST parser is located in Bio.Blast.NCBIStandalone. > > > As with the XML parser, we need to have a handle object that we can > pass to the parser. The handle must implement the readline() method > and do this properly. The common ways to get such a handle are to > either use the provided blastall or blastpgp functions to run the > local blast, or to run a local blast via the command line, and then do > something like the following: > > > > > > > > > > > > > result\_handle = open("my\_file\_of\_blast\_output.txt") > > > > > > > > > > > > > > > > > > Well, now that we’ve got a handle (which we’ll call result\_handle), we > are ready to parse it. This can be done with the following code: > > > ``` >>> from Bio.Blast import NCBIStandalone >>> blast_parser = NCBIStandalone.BlastParser() >>> blast_record = blast_parser.parse(result_handle) ``` > > This will parse the BLAST report into a Blast Record class (either a > Blast or a PSIBlast record, depending on what you are parsing) so that > you can extract the information from it. In our case, let’s just print > out a quick summary of all of the alignments greater than some > threshold value. > > > ``` >>> E_VALUE_THRESH = 0.04 >>> for alignment in blast_record.alignments: ... for hsp in alignment.hsps: ... if hsp.expect < E_VALUE_THRESH: ... print('****Alignment****') ... print('sequence:', alignment.title) ... print('length:', alignment.length) ... print('e value:', hsp.expect) ... print(hsp.query[0:75] + '...') ... print(hsp.match[0:75] + '...') ... print(hsp.sbjct[0:75] + '...') ``` > > If you also read the section 7.3 on parsing BLAST XML output, you’ll > notice that the above code is identical to what is found in that > section. Once you parse something into a record class you can deal > with it independent of the format of the original BLAST info you were > parsing. Pretty snazzy! > > > Upvotes: 3 [selected_answer]<issue_comment>username_2: I finally found the solution to break the huge file into small chunks so that I can process individual query result using python regex... Here is my code... 
``` #!/user/bin/python3 file=open("/path/file_name.txt","r+") import re inter=file.read() lst=re.findall('(?<=Query= lcl)(.*?)(?=Effective search space)', inter, flags=re.S) print(lst) ``` Thank you all for helping me out... Upvotes: 2
2018/03/21
848
2,757
<issue_start>username_0: I have a configuration setup such as this: ``` #[derive(Debug, Deserialize, Serialize)] struct Config { defaults: Option, } #[derive(Debug, Deserialize, Serialize)] struct Default { duration: Option, } #[derive(Serialize, Deserialize, Debug)] struct Millis(u64); ``` Having a value of `let cfg: &mut Config`, how could I easily set the duration of this value? I tried this, which panics if the value is not there to begin with: ``` *cfg.default.as_mut().unwrap().duration.as_mut().unwrap() = Millis(1234) ``` I have not found a way around those `unwrap`s to create the values on demand other than this, which is even more verbose... ``` if cfg.defaults.is_none() { cfg.defaults = Some(Default { duration: None }); } if cfg.defaults.as_mut().unwrap().duration.is_none() { cfg.defaults.as_mut().unwrap().duration = Some(Millis(1234)); } ``` What's "The Way" to do this?<issue_comment>username_1: I think the best way would be to use pattern matching: ``` if let Some(Config { defaults: Some(Default { duration: Some(ref mut millis) }) }) = cfg { *millis = 1234; } ``` Or, alternatively, with nested `if let`s: ``` if let Some(Config { ref mut defaults }) = cfg { if let Some(Default { ref mut duration }) = *defaults { *duration = Some(Millis(1234)) } } ``` This won't create values on demand, however. There is no simple boilerplate-less way to do it, as far as I'm aware. One of the approaches would be defining your own accessor methods which would handle defaults instantiation upon access: ``` impl Config { fn defaults(&mut self) -> &mut Default { if let Some(ref mut defaults) = *self.defaults { defaults } else { self.defaults = Some(Defaults::new()); // assuming that Defaults::new() exists self.defaults.as_mut().unwrap() } } } ``` If you have such a method for every field of every structure, you will be able to do this: ``` *cfg.defaults().duration() = 1234; ``` Upvotes: 0 <issue_comment>username_2: This is what the [`get_or_insert`](https://doc.rust-lang.org/std/option/enum.Option.html#method.get_or_insert) method is for: ``` #[derive(Debug)] struct Config { defaults: Option, } #[derive(Debug)] struct Default { duration: Option, } #[derive(Debug)] struct Millis(u64); fn main() { let mut config = Config { defaults: None }; config .defaults .get\_or\_insert(Default { duration: None }) .duration .get\_or\_insert(Millis(0)) .0 = 42; // Config { defaults: Some(Default { duration: Some(Millis(42)) }) } println!("{:?}", config); } ``` [(link to playground)](http://play.integer32.com/?gist=0b59ae42db6a5011da196718598e4c8c&version=stable) Upvotes: 4 [selected_answer]
2018/03/21
547
1,956
<issue_start>username_0: I had a CLI program which will ask user to type ENTER to continue and OTHER keys to abort. ``` for { show() // list one page fmt.Printf("Press ENTER to show next page, press any other key then Enter to quit") var input string fmt.Scanln(&input) if strings.Trim(input, " ") == "" { continue } else { break } } ``` I want to improve user experience: instead of "ENTER or press something then ENTER", how can I make it **"Press SPACE to show next page, press q to quit"**, just like Linux command "more" and others. To make it clear: * The existing control to continue is "ENTER", I want to use "SPACE" (just SPACE, not SPACE+ENTER); * The existing control to quit is "any key + ENTER", I want to use "q" (just q, not q+ENTER)<issue_comment>username_1: There is a built in shell command `read -n1 -r -p "Press SPACE to show next page, press q to quit" key`. Possibly you could exec that. For a more full featured golang solution, see `github.com/nsf/termbox-go`. Good example: <https://www.socketloop.com/tutorials/golang-get-ascii-code-from-a-key-press-cross-platform-example> Upvotes: 2 [selected_answer]<issue_comment>username_2: A simple solution with `github.com/nsf/termbox-go` ``` package main import ( "fmt" tb "github.com/nsf/termbox-go" ) func main() { err := tb.Init() if err != nil { panic(err) } defer tb.Close() for { fmt.Println("Press any key") event := tb.PollEvent() switch { case event.Ch == 'a': fmt.Println("a") case event.Key == tb.KeyEsc: fmt.Println("Bye!") return case event.Key == tb.KeySpace: fmt.Println("ANY KEY! You pressed SPACE!") case event.Key == tb.KeyEnter: fmt.Println("ANY KEY! You pressed ENTER!") default: fmt.Println("Any key.") } } } ``` Upvotes: 2
2018/03/21
325
1,143
<issue_start>username_0: I am getting this error: [![enter image description here](https://i.stack.imgur.com/7m7kM.png)](https://i.stack.imgur.com/7m7kM.png) I installed versions [email protected] but then I reinstalled 2.3.0 (I thought it would help...). Thank you in advance for your answer ;)<issue_comment>username_1: I recently had the same problem. What you have to do is install the `webpack-cli` package with the -D flag in the terminal: ``` npm install webpack-cli -D ``` After you have installed the package, update the dependencies in the package.json: ``` npm update ``` Now you should be able to start the server. If this is not the case and you run into a configuration error, you may also have to change something in the package.json. For that, follow the link [here](https://stackoverflow.com/questions/49370849/configuration-module-has-an-unknown-property-loaders). Upvotes: 3 [selected_answer]<issue_comment>username_2: npm i -D webpack-cli and then npm update solved the issue for me. Upvotes: 0 <issue_comment>username_3: For those using webpack 5, replace the webpack-dev-server command with webpack serve. Upvotes: 2
2018/03/21
1,022
3,430
<issue_start>username_0: So i have a following structure in my html ``` - - -closing column col-9 - closing row - closing container ``` As you can see i have one row nested inside another row, and then in that inside row i have two more columns (trying to divide that col-9 into more cols) Problem is,for some reason the inside columns instead of being next to one another go underneath, and both act like they are col 12 in the row and take up the whole 100% of that col-9. (Im trying to have input and button on the same line) Thanks in advance UPDATE: No, haha, i didn't forget the class attributes and how they work. I just didn't include them while typing the code here for the question. Like i said, its just the structure. or my code. If needed ill copy and paste the code exactly, but its quite lengthy UPDATE CODE HERE: ``` * Home * Profile php if ($super\_admin == 1) echo '<li class="nav-item"Add Staff Member'?> * Settings * [Log Out](logout.php) php echo "<h1Welcome " . $\_SESSION['firstName'] . "!"?> Suggestions: function showHint(str){ var xhttp; if(str.length == 0){ document.getElementById("txtHint").innerHTML = ""; return; } xhttp = new XMLHttpRequest(); xhttp.onreadystatechange = function(){ if(xhttp.readyState == 4 && xhttp.status == 200){ document.getElementById("txtHint").innerHTML = xhttp.responseText; // document.getElementById("suggestions").style.border="1px solid #A5ACB2"; } }; xhttp.open("GET", "getparticipants.php?q=" + str, true); xhttp.send(); } php //table header echo ' <thread| ID | Name | Email | '; //Fetch and print all the records: while ($row = mysqli\_fetch\_array($r, MYSQLI\_ASSOC)) { echo '| ' . $row['id'] . ' | [' . $row['name'] . '](client_profile_overview.php) | ' . $row['email'] . ' | View Profile | '; } //close the table and free up the resources. echo ' '; ?> ### Profile ### Add Staff Member Enter Staff ID Number to be added to the System. Add ### Settings ``` [inside column](https://i.stack.imgur.com/or6XZ.jpg) [inside row](https://i.stack.imgur.com/UynQH.jpg) [Main column (col-md-9)](https://i.stack.imgur.com/Bv97G.jpg)<issue_comment>username_1: That problem is that you seem to have forgotten about your `class` attribute, and are instead trying to assign the classes as attributes themselves. As such, Bootstrap's class selectors won't apply, and your will default to the standard of taking up 100% of the available width (thus stacking on top of each other). To apply the classes, instead of (for example), you're looking for . And don't forget to include Bootstrap's files themselves! In terms of a full example, you're looking for: ```html 3 - Left 9 - Left 9 - Right ``` I'd also recommend taking a look at **[the official examples](https://getbootstrap.com/docs/4.0/layout/grid/#auto-layout-columns)** as a starting point. Upvotes: -1 <issue_comment>username_2: IF Obsidian is wrong and you just used this 'syntax' as a shortcut, it just looks fine. So post your real code or check it for typos. Upvotes: -1 <issue_comment>username_3: In your code I don't see `.row` class inside the content. > > `.row` and `.col` are like `tr` and `td` in tables. Make sure you wrap your `.col`s with `.row` > > > Code works fine check the imported bootstrap files. ```css .b { border: 2px solid blue; } ``` ```html 3 - Left 9 - Left 9 - Right ``` Upvotes: 0
2018/03/21
389
1,309
<issue_start>username_0: In VS MSBuild we move group of files from one folder to another: ``` ``` It works fine, except one file: `App.exe.config`, because it has double extension and it renamed to `NewApp.config` instead `NewApp.exe.config` (how it should be). How to fix it?<issue_comment>username_1: > > Move and rename file with double extension using MSBuild > > > It seems you want to move and rename file `App.*` to `NewApp.*` but keep extension using MSBuild. You can use the [MsBuild Community Tasks](https://github.com/loresoft/msbuildtasks) which have a [RegexReplace](https://github.com/loresoft/msbuildtasks/blob/master/Documentation/TaskDocs.md#RegexReplace) task. To accomplish this, add the [MSBuildTasks](https://www.nuget.org/packages/MSBuildTasks) nuget package to your project, then add following target to do that: ``` ``` With this target, all `App.*` file are moved and renamed to `NewApp.*`: [![enter image description here](https://i.stack.imgur.com/dA06y.png)](https://i.stack.imgur.com/dA06y.png) Upvotes: 2 <issue_comment>username_2: Starting with MSBuild 4.0 we can use [String Item Functions](https://learn.microsoft.com/en-us/visualstudio/msbuild/item-functions#string-item-functions), for example Replace: ``` ``` which works fine. Upvotes: 5 [selected_answer]
2018/03/21
563
1,916
<issue_start>username_0: I'm having issues with my Cordova installation: I installed Cordova, Android Studio, the Android SDK and Java, and defined `JAVA_HOME="/usr/lib/jdk1.8.0_161/bin/java" ANDROID_HOME="/home/myusername/Android/Sdk/platforms/android-24"` in `/etc/environment`. When I go to a Cordova project and type `cordova requirements android`, I get the error > > Android Studio project detected > > > Requirements check results for android: > > Java JDK: installed 1.8.0 > > Android SDK: installed true > > Android target: not installed > > android: Command failed with exit code ENOENT > > Gradle: installed /usr/share/gradle/bin/gradle > > > > When I change `ANDROID_HOME` to `/home/myusername/Android/Sdk/`, the error becomes > > avdmanager: Command failed with exit code 1 > > > I can't find the problem, and I didn't find a good answer in the other posts...<issue_comment>username_1: I think you should try to change ANDROID\_HOME to /home/myusername/Android/, and also add these to PATH: > > :$HOME/Android/tools:$HOME/Android/build-tools:$HOME/Android/platform-tools:$PATH > > > Besides that, I export GRADLE\_HOME too. Good luck! Upvotes: 0 <issue_comment>username_2: You have two errors: * the Android SDK is not found. I think this is because ANDROID\_HOME should be more like `/home/myusername/Android/Sdk/` (the root of the SDK, so that the cordova android platform will be able to choose the SDK version) * the android command is not found in the PATH. You need to add to the PATH the folder where the android command is located. This should be `/home/myusername/Android/Sdk/tools` Upvotes: 2 [selected_answer]<issue_comment>username_3: I finally found my error. I installed Java, but I hadn't added it to the alternatives. The solution was to add my freshly installed Java version as the main one with `sudo update-alternatives --config java` Upvotes: 0
2018/03/21
955
3,353
<issue_start>username_0: I have a search component that captures the value onClick of a button. I use Redux for state management and when I pass in data, I get an error. ``` import React, { Component } from 'react'; import PropTypes from 'prop-types'; import { connect } from 'react-redux'; import { Typeahead } from 'react-bootstrap-typeahead'; import { searchInput } from '../../actions/index'; class Search extends Component { constructor(props) { super(props); this.setRef = null; } onSearch = () => { const value = this.setRef.instanceRef.state.text; console.log(value); this.props.searchInput(value); }; render() { return ( Search: this.setRef = a } /> ); } } Search.propTypes = { options: PropTypes.array, placeholder: PropTypes.string, emptyLabel: PropTypes.node, }; Search.defaultProps = { options: ['red', 'green', 'blue', 'orange', 'yellow'], placeholder: 'Enter a placeholder', emptyLabel: null, }; export default connect(null, searchInput)(Search); ``` I get the value through the refs and pass that value through reducers so I can get that value to my parent calling this component as pass it as a param to the database to get a server side filtering. This is my action. ``` export const searchInput = event => ({ type: ACTION_TYPES.SEARCH_INPUT, data: event, }); ``` This is my reducer. ``` import * as ACTION_TYPES from '../consts/action_types'; const initialState = { searchTerm: '', }; export const searchReducer = (state = initialState, action) => { switch (action.type) { case ACTION_TYPES.SEARCH_INPUT: return { ...state, searchTerm: action.data, }; default: return state; } }; ``` This is the error I get. ``` Uncaught TypeError: n.props.searchInput is not a function at n.onSearch (Search.js:16) at Object.R (react-dom.production.min.js:26) at Object.invokeGuardedCallback (react-dom.production.min.js:25) at Object.invokeGuardedCallbackAndCatchFirstError (react-dom.production.min.js:25) at $ (react-dom.production.min.js:30) at ee (react-dom.production.min.js:32) at ne (react-dom.production.min.js:32) at Array.forEach () at Z (react-dom.production.min.js:31) at se (react-dom.production.min.js:34) ``` Could someone point out my mistake?<issue_comment>username_1: i think you should try to change ANDROID\_HOME to /home/myusername/Android/ Almost adding to PATH > > :$HOME/Android/tools:$HOME/Android/build-tools:$Home/Android/platform-tools:$PATH > > > Otherwise i have export the GRADLE\_HOME too. Good Luck! Upvotes: 0 <issue_comment>username_2: You have two errors: * the android sdk is not found I think this is because ANDROID\_HOME should be more like `/home/myusername/Android/Sdk/` (the root of the sdk so that the cordova android platform will be able to choose the sdk version) * android command is not found in the path you need to add to the PATH the folder where the android command is located. This should be `/home/myusername/Android/Sdk/tools` Upvotes: 2 [selected_answer]<issue_comment>username_3: I finally found my error. I installed JAVA, but I didn't added it to the alternatives. The solution was to add my freshly installed JAVA version as main JAVA with `sudo update-alternatives --config java` Upvotes: 0
2018/03/21
1,025
3,586
<issue_start>username_0: Let me start off by saying this is for a database view that I do not manage so the data is what it is. It has a column for job title which also includes the job level (e.g. Software Engineer 2, Designer C, Air Traffic Controller E-1, etc...) I need to get the distinct job titles. All the Software Engineers (level 1 through 6) should return a single value. Job titles can have one or more words in them. Levels can have 1 to 3 characters. I tried this to get the level ``` SELECT jobtitle, REVERSE(LEFT(REVERSE(jobtitle), CHARINDEX(' ', REVERSE(jobtitle)) - 1)) AS level ``` to get the level but I couldn't figure out how to strip that off the job title and then get the distinct value for those.<issue_comment>username_1: If level is always a word you can use following which strips the last word off so you get only job title: ``` SELECT SUBSTRING(jobtitle, 1, LEN(jobtitle) - CHARINDEX(' ', REVERSE(jobtitle))) ``` or if you prefer using LEFT instead of SUBSTRING: ``` SELECT LEFT(jobtitle,LEN(jobtitle)-CHARINDEX(' ', REVERSE(jobtitle),0)+1) ``` Upvotes: 2 [selected_answer]<issue_comment>username_2: If your level is always the format "E-1" or just "C" or "A4" you can split on the last space in your full jobtitle. Try this: ``` SELECT RIGHT(jobtitle, charindex(' ', reverse(jobtitle) + ' ') - 1) ``` The RIGHT will get you the level. Or: ``` SELECT LEFT(jobtitle, len(jobtitle) - charindex(' ', reverse(jobtitle) + ' ')) ``` The LEFT will get you the title. I came across these when looking for a sql equivalent for C# string.LastIndexOf() method. If the level can have a space, this becomes quite a bit more complicated. You will probably just want to pull these 2 values into 2 columns on a temp table, along with any other relevant pieces of info. Then do your aggregations on the temp table. This will give you more freedom for analysis, and probably better performance than trying to separate the pieces and then do a distinct on the substring all in one shot. Upvotes: 0 <issue_comment>username_3: ``` select reverse(stuff(reverse([JOBTITLE]), 1, 3, '')) , Substring(reverse(stuff(([JOBTITLE]), 1, 3, '')),0,CHARINDEX(' ', reverse(stuff(([JOBTITLE]), 1, 3, '')))) ``` Upvotes: 0 <issue_comment>username_4: Using an apply operator allows re-use of a generated position within a string for subsequent calculations. e.g. [SQL Fiddle](http://sqlfiddle.com/#!18/3d8f3/1) **MS SQL Server 2017 Schema Setup**: ``` CREATE TABLE Table1 ([jobtitle] varchar(50)) ; INSERT INTO Table1 ([jobtitle]) VALUES (NULL), (''), ('nospacehere'), ('Software Engineer 2'), ('Designer C'), ('Air Traffic Controller E-1') ; ``` **Query 1**: ``` select jobtitle , left(jobtitle,a.pos) , ltrim(substring(jobtitle,a.pos+1,50)) , a.pos from table1 outer apply ( select len(jobtitle) - charindex(' ',reverse(jobtitle)) ) a (pos) ``` **[Results](http://sqlfiddle.com/#!18/3d8f3/1/0)**: ``` | jobtitle | | | pos | |----------------------------|------------------------|--------|--------| | (null) | (null) | (null) | (null) | | | | | 0 | | nospacehere | nospacehere | | 11 | | Software Engineer 2 | Software Engineer | 2 | 17 | | Designer C | Designer | C | 8 | | Air Traffic Controller E-1 | Air Traffic Controller | E-1 | 22 | ``` Upvotes: 0
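As a cross-check of the string logic used by the answers above (everything before the last space is the title, everything after it is the level), here is a small Python sketch; the sample titles come from the question, and Python is used here only for illustration, not as a replacement for the SQL.

```python
titles = [
    "Software Engineer 2",
    "Designer C",
    "Air Traffic Controller E-1",
]

for jobtitle in titles:
    # Split on the last space: the left part is the title, the right part is the level.
    title, _, level = jobtitle.rpartition(" ")
    print(f"{title!r} -> level {level!r}")
```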
2018/03/21
258
816
<issue_start>username_0: ``` string raw_str = R"(R"(foo)")"; ``` If I have `R"()"` inside a raw string, that confuses the parser (i.e., it thinks the leftmost `)"` is the end of the raw string). How do I escape this?<issue_comment>username_1: The raw string will terminate after the first `)"` it sees. You can change the delimiter to `***`, for example: ``` string raw_str = R"***(R"(foo)")***"; ``` Upvotes: 4 <issue_comment>username_2: The format for the [raw-string literals](http://en.cppreference.com/w/cpp/language/string_literal)[[2](https://timsong-cpp.github.io/cppwp/lex.string#nt:raw-string)] is: `R"delimiter( raw_characters )delimiter"` so you can use a different *delimiter* that is not in the string, like: ``` string raw_str = R"~(R"(foo)")~"; ``` Upvotes: 6 [selected_answer]
2018/03/21
1,064
4,408
<issue_start>username_0: I implement a service which is using database. There are few tables in the database. Inside these tables store results of some calculations. Another services creates a new table for each calculation and now list of table looks like this: * calculation\_1\_0 * calculation\_2\_0 * calculation\_2\_1 * calculation\_2\_2 * calculation\_3\_0 * calculation\_X\_Y The table structure is same for all tables. Is it possible to get data from these tables using JPA without creation entity for each table?<issue_comment>username_1: Creating tables at runtime is a bad idea. Given that you know what the columns are, a better design would be to have a `calculation_type` lookup table and create a foreign key in your calculation table and index it. Then you can create your entities up front and have better relational integrity. To answer the question directly, new tables cannot be created dynamically then mapped using JPA. You *could* use plain JDBC but there's a reason JPA doesn't support it - it's bad design. Upvotes: 2 <issue_comment>username_2: > > @username_2 I absolutely agree with you, but I am not responsible for > changing design structure because I'm not a developer of it, I just > use it one for my goals. > > > If you must live with it, then I would dynamically drop/create a view or synonym to cheat JPA. The idea is to: * before loading EntityManager, create a view/synonym with a fixed name, which maps the selected table to that common name * in JPA map the entity to this common name of view/synonym using XML or annotations * load entity manager This approach has disadventages: * only one instance/thread of the application can use this view/synonym at the same time, this won't work in multiuser environment (unless all instances/threads are synchronized and will turn off / on entity managers simultaneously at the time when the view is updated/recreated - but I can not even imagine such a solution, it would be a huge project with many traps and errors) * only one table can be used at a time, in order to use another one, the entity manager must be closed, the view/synonym must be dropped, then created again, then the entity manager must be loaded again --- You can also create a view which union many tables, something like this: ``` CREATE VIEM my_common_name AS SELECT '1_0' as calc_name, t.* FROM calculation_1_0 t UNION ALL SELECT '2_0' as calc_name, t.* FROM calculation_2_0 t UNION ALL ..... ..... SELECT 'X_Y' as calc_name, t.* FROM calculation_X_Y t ``` but again - if the process creates a new table, you must close entity manager, recreate this view with a new table name, and load entity manager again. And this will be very poor from the performance perspective - you cannot create any index on this view, it cannot be partitioned nor tuned nor materialized, every query against this monster view will almost always do full table scans of all these tables. --- A bad database structure design was done in the first place, it's like designing phones with a different charging socket, you'll need to use a different plug each time you charge your phone. The best it can be done is to fix this original error **because such a fix is very easy to do**, just create one common table with an additional colum "calculation\_name" and fix a code which saves data to these tables (insert data into common table instead of creating new tables each time), everything else will be a bothersome attempt to get around this problem. 
--- If I were you, then I would go to my manager, and I would tell him that there is such a technical problem wit bad design. I would tell him we can fix it now, and it will cost a little ($). I would tell him that if we do not fix it now, then every future change in this functionality will be more and more troublesome and expensive ($$, $$$, $$$$$ - because of cost of man days, maintanance, performance/hardware etc), and eventually such a fix will be simply impossible and we will have to rewrite this system from scratch (and this will cost us $$$$$$$). The manager has a wider view, he should know business plans, he should know if this functionality will be developed in the future, or maybe it will be abandoned in half a year. It's the manager decision - invest $ now and save $$$$$$ in the future, or do nothing with it because it will not give us any profits in the future. Upvotes: 0
2018/03/21
427
1,769
<issue_start>username_0: Currently, my Spring Boot project consists of a few REST services: a resource service, an authentication service, and so on. The name of each service is self-descriptive, but here are the roles: * Resource service: responsible for managing all resources like User, Transaction, Supplier, and so on. * Authentication server: responsible for creating a JWT access token upon a 'password' grant\_type request The framework does JWT token verification for me with the public key. Currently, the resource server processes any request if the presented JWT token is valid. However, I'm not sure what the best practice is and how to verify that the requester owns the requested resource. For example, suppose user Jacob wants to retrieve, with a valid JWT, a transaction that Paul created. My application somehow needs to verify that the requester, Jacob, does not have ownership of Paul's transaction. Thanks in advance!<issue_comment>username_1: I think the idea behind a JWT token is that you cannot determine from the token itself whom it is from; you just check whether the token is valid, and you also check the timestamp. Upvotes: -1 <issue_comment>username_2: Best practice, I would say, is to keep your access tokens short-lived, keep them on the server side away from the user-agent, and always use SSL on the wire. JWT access tokens are bearer tokens. And as the name suggests, whoever's got them can use them. Think of an access token as a fiver (£5) and your Transaction Resource Server as a barman. Now, let's say you drop your £5 on the floor. It's my lucky day! I pick it up, go to the pub and buy a pint. Does the barman ask me to whom the £5 note belongs? No. Just like your server doesn't care if it's Jacob or Paul with the token. Upvotes: 2 [selected_answer]
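To make the accepted answer's point actionable: the resource service itself has to compare the authenticated principal from the verified token with the owner recorded on the resource. The sketch below is deliberately framework-agnostic (plain Python rather than Spring code) and assumes the JWT's `sub` claim carries the user id and that each transaction stores an `owner_id`; both of those are assumptions for illustration, not details from the question.

```python
class Forbidden(Exception):
    """Raised when the authenticated caller does not own the resource."""

def get_transaction(transaction_id, token_claims, transactions):
    # token_claims: already-verified JWT claims, e.g. {"sub": "jacob"} (assumed layout).
    # transactions: mapping of transaction id -> {"owner_id": ..., ...} (assumed layout).
    tx = transactions[transaction_id]
    if tx["owner_id"] != token_claims.get("sub"):
        raise Forbidden("requester does not own this transaction")
    return tx

# Tiny demo: Paul owns transaction 42, so Jacob's otherwise-valid token is rejected.
transactions = {42: {"owner_id": "paul", "amount": 10}}
print(get_transaction(42, {"sub": "paul"}, transactions))
try:
    get_transaction(42, {"sub": "jacob"}, transactions)
except Forbidden as err:
    print("denied:", err)
```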
2018/03/21
428
1,423
<issue_start>username_0: I'm new to Python and am having issues with lists, elements, etc. I have this code: ``` order_payment = Api.which_api('order_payments', 'None', '26', 'None') # now that we have info from order_payment, obtain the rest of the vars we need order_id = order_payment['order_id'] order_payment = Api.which_api('order_payments', 'None', '26', 'None') orders = Api.which_api('orders', 'None', order_id, 'items') # your loop for orders in orders['items']: oid = orders['order_id'] item = orders['item_name'] print(item) sub = 'Download' print (s for s in item if sub in s) ``` When I do print(item) I get this returned: Norton Antivirus 2014 - Download - 1 User / 1 PC - 1 Year Subscription I want to check this variable to see if it has Download in it (like this one does). I looked on here and found the code I'm trying (sub = 'Download'), but that is not working. Basically I'm trying to set up an if condition: if this element 'item\_name' has 'Download' in it, do this. How do I check the variable for this?<issue_comment>username_1: Use the `in` operator: `if "Download" in item: ...` Upvotes: 1 <issue_comment>username_2: My suggestion to you would be to use `try` and `except` in your Python code for better results. ``` try: item = orders['item_name'] print(item) sub = 'Download' print (s for s in item if sub in s) except: pass ``` Upvotes: 0
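For completeness, a runnable Python sketch of the membership test the question is after, using the item string from the post; the `orders` list here is made up to stand in for the API response.

```python
# Made-up stand-in for the API response rows used in the question.
orders = [
    {"order_id": 1, "item_name": "Norton Antivirus 2014 - Download - 1 User / 1 PC - 1 Year Subscription"},
    {"order_id": 2, "item_name": "USB Flash Drive 64GB"},
]

for order in orders:
    item = order["item_name"]
    if "Download" in item:  # plain substring membership test
        print(order["order_id"], "is a download item")
    else:
        print(order["order_id"], "is not a download item")
```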
2018/03/21
414
1,463
<issue_start>username_0: I need to find the offset of a specific timezone. How can I do that by using moment? Let's say my client timezone is MST and I want to find the EST timezone offset. I need to get the standard offset without considering daylight saving. With `moment.tz("America/Edmonton").format('Z')` I get `-6:00`, but this considers daylight saving. I want something that gives me `-7:00`, as that is the standard offset.<issue_comment>username_1: How about something like this? ```js function getStandardOffset(zone) { // start with now var m = moment.tz(zone); // advance until it is not DST while (m.isDST()) { m.add(1, 'month'); } // return the formatted offset return m.format('Z'); } getStandardOffset('America/Edmonton') // returns "-07:00" ``` Of course, this returns the *current* standard offset. If the time zone had used a different standard offset in the past, you'd need to start with a moment in that range, rather than "now". Upvotes: 3 [selected_answer]<issue_comment>username_2: In the case of a timezone that is permanently on DST, that would be an infinite loop, right? So what I ended up with in my code is: ``` function getStandardOffset(zone) { // start with now var m = moment.tz(zone); // advance until it is not DST var counter = 1; while (m.isDST()) { m.add(1, 'month'); counter++; if (counter > 12) { break; } } // return the formatted offset return m.format('Z'); } ``` Upvotes: 1
2018/03/21
560
1,970
<issue_start>username_0: I have a class ``` class MyClass(object): ClassTag = '!' + 'MyClass' ``` Instead of explicitly assigning `'MyClass'` I would like to use some construct to get the class name. If I were inside a class function, I would do something like ``` @classfunction def Foo(cls): tag = '!' + cls.__class__.__name__ ``` but here I am in class scope but not inside any function scope. What would be the correct way to address this? Thank you very much<issue_comment>username_1: > > Instead of explicitly assigning 'MyClass' I would like to use some construct to get the class name. > > > You can use a class decorator combined with the `__name__` attribute of class objects to accomplish this: ``` def add_tag(cls): cls.ClassTag = cls.__name__ return cls @add_tag class Foo(object): pass print(Foo.ClassTag) # Foo ``` In addition to the above, here are some side notes: * As can be seen from the above example, classes are defined using the `class` keyword, not the `def` keyword. The `def` keyword is for defining functions. I recommend walking through [the tutorial provided by Python](https://docs.python.org/2/tutorial/index.html), to get a grasp of Python basics. * If you're not working on legacy code, or code that requires a Python 2 library, I highly recommend [upgrading to Python 3](https://www.python.org/downloads/). Along with the fact that the Python Foundation will stop supporting Python in 2020, Python 3 also fixes many quirks that Python 2 had, as well as provides new, useful features. If you're looking for more info on how to transition from Python 2 to 3, a good place to start would be [here](https://docs.python.org/3.0/whatsnew/3.0.html). Upvotes: 4 [selected_answer]<issue_comment>username_2: A simple way is to write a decorator: ``` def add_tag(cls): cls.ClassTag = cls.__name__ return cls # test @add_tag class MyClass(object): pass print(MyClass.ClassTag) ``` Upvotes: 2
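If the code can target Python 3.6 or newer, `__init_subclass__` gives a similar effect to the decorators shown in the answers, without decorating each class; this is offered as an alternative technique, not something from the original answers.

```python
class Tagged:
    # Called once for every subclass at class-definition time.
    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        cls.ClassTag = '!' + cls.__name__

class MyClass(Tagged):
    pass

print(MyClass.ClassTag)  # prints "!MyClass"
```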
2018/03/21
680
2,241
<issue_start>username_0: I am trying to build Tensorflow 1.6 with MPI support. I am getting the following error: > > ERROR: /gpfshome01/u/amalik/Tensorflow/tensorflow/tensorflow/contrib/gdr/BUILD:52:1: C++ compilation of rule '//tensorflow/contrib/gdr:gdr\_memory\_manager' failed (Exit 1) > tensorflow/contrib/gdr/gdr\_memory\_manager.cc:28:27: fatal error: rdma/rdma\_cma.h: No such file or directory > #include > ^ > compilation terminated. > Target //tensorflow/tools/pip\_package:build\_pip\_package failed to build > Use --verbose\_failures to see the command lines of failed build steps. > INFO: Elapsed time: 556.299s, Critical Path: 183.28s > FAILED: Build did NOT complete successfully > > > --- Any suggestion and comments<issue_comment>username_1: > > Instead of explicitly assigning 'MyClass' I would like to use some construct to get the class name. > > > You can use a class decorator combined with the `__name__` attribute of class objects to accomplish this: ``` def add_tag(cls): cls.ClassTag = cls.__name__ return cls @add_tag class Foo(object): pass print(Foo.ClassTag) # Foo ``` In addition to the above, here are some side notes: * As can be seen from the above example, classes are defined using the `class` keyword, not the `def` keyword. The `def` keyword is for defining functions. I recommend walking through [the tutorial provided by Python](https://docs.python.org/2/tutorial/index.html), to get a grasp of Python basics. * If you're not working on legacy code, or code that requires a Python 2 library, I highly recommend [upgrading to Python 3](https://www.python.org/downloads/). Along with the fact that the Python Foundation will stop supporting Python in 2020, Python 3 also fixes many quirks that Python 2 had, as well as provides new, useful features. If you're looking for more info on how to transition from Python 2 to 3, a good place to start would be [here](https://docs.python.org/3.0/whatsnew/3.0.html). Upvotes: 4 [selected_answer]<issue_comment>username_2: A simple way is to write a decorator: ``` def add_tag(cls): cls.ClassTag = cls.__name__ return cls # test @add_tag class MyClass(object): pass print(MyClass.ClassTag) ``` Upvotes: 2
2018/03/21
744
2,729
<issue_start>username_0: If I have a query like this, will the OUTER APPLY only be applied on the TOP 5 rows of tblA? Or will it be applied on all rows returned from tblA, and only then will the TOP 5 be applicable? I have a lot of rows in tblA, and I dont want the OUTER APPLY to run on all the rows. ``` SELECT TOP 5 ACol1, ACol2, b.BCol2 FROM tblA OUTER APPLY (SELECT TOP 1 BCol2 FROM tblB WHERE BCol1 = tblA.ACol1 ORDER BY ins_dtim DESC) b ORDER BY tblA.ACol3 ```<issue_comment>username_1: First of all, any use of `TOP` without a matching `ORDER BY` is very likely a mistake. This applies to **both parts** of your existing query: the `TOP 1` *and* `TOP 5` statements. That out of the way, for this query, you may be better off like this: ``` SELECT TOP 5 ACol1, ACol2, MAX(b.BCol2) AS BCol2 FROM tblA a LEFT JOIN tblB b ON b.BCol1 = a.ACol1 GROUP BY a.ACol1, a.ACol2 ``` It should work as well as what you have as long as `ACol1` and `ACol2` are reasonably unique. What you have now might allow for duplicate `ACol1, ACol2` pairs if that data exists in your table. This will not. But since you don't care about ORDER, you have no grounds to complain if this returns different values, as long as they match real data in the table. --- But I understand all this is also likely a simplified version of the problem, so I will address the question directly in two ways. First, the right thing to do when you're unclear on behavior is to check the execution plan. You don't need to be an execution plan expert to determine what is going on. If you're not familiar with the meaning for each of the nodes, Google can help you understand it. From the other direction, don't quote me on this, but I would expect Sql Server to only use it 5 times in this specific case. I think that's what you were hoping to hear. **But don't count on that in the general case.** If you have an `ORDER BY` clause for the query, as you should do whenever you use `TOP`, then Sql Server can't know which results to return until it checks all the possibilities. Furthermore, if that `ORDER BY` is determined by any fields from the APPLY results, Sql Server will **definitely** have to run the sub query for every possible record. Otherwise, it would have no way to know which records it needs. Finally, in many cases the query optimizer will handle an `APPLY` as something more closely resembling a `JOIN` than a sub query. Even when it seems like it needs to compute the results for every record, it still might be faster than you expect. Upvotes: 2 <issue_comment>username_2: I found out that the OUTER APPLY will be done on all the rows first, and then the TOP 5 will be returned. Upvotes: 1 [selected_answer]
2018/03/21
903
3,404
<issue_start>username_0: I am trying to allow my app to save UIColors in settings, and when I tried to save the settings for the background color it worked. But when I add a second block of code that should allow me to save a second UIColor, it gives me the error - Fatal error: Unexpectedly found nil while unwrapping an Optional value. Can someone show me how to save the second UIColor without an error? ``` // First UIColor save - Works var dd = UIColor(hex: UserDefaults.standard.value(forKey: "TheMainUIColour") as! String ) UserDefaults.standard.set(dd.toHexString, forKey: "TheMainUIColour") let mainBackgroundColour = UserDefaults.standard.value(forKey: "TheMainUIColour") as! String let color = UIColor(hex: mainBackgroundColour) self.view.backgroundColor = dd // Second UIColor Save - Doesnt Work let dd2: UIColor = UIColor(hex: UserDefaults.standard.value(forKey: "TheMainUIColour2") as! String ) UserDefaults.standard.set(dd2, forKey: "TheMainUIColour2") let mainBackgroundColour2 = UserDefaults.standard.value(forKey: "TheMainUIColour2") as! String let color2 = UIColor(hex: mainBackgroundColour2) ```<issue_comment>username_1: First of all, any use of `TOP` without a matching `ORDER BY` is very likely a mistake. This applies to **both parts** of your existing query: the `TOP 1` *and* `TOP 5` statements. That out of the way, for this query, you may be better off like this: ``` SELECT TOP 5 ACol1, ACol2, MAX(b.BCol2) AS BCol2 FROM tblA a LEFT JOIN tblB b ON b.BCol1 = a.ACol1 GROUP BY a.ACol1, a.ACol2 ``` It should work as well as what you have as long as `ACol1` and `ACol2` are reasonably unique. What you have now might allow for duplicate `ACol1, ACol2` pairs if that data exists in your table. This will not. But since you don't care about ORDER, you have no grounds to complain if this returns different values, as long as they match real data in the table. --- But I understand all this is also likely a simplified version of the problem, so I will address the question directly in two ways. First, the right thing to do when you're unclear on behavior is to check the execution plan. You don't need to be an execution plan expert to determine what is going on. If you're not familiar with the meaning for each of the nodes, Google can help you understand it. From the other direction, don't quote me on this, but I would expect Sql Server to only use it 5 times in this specific case. I think that's what you were hoping to hear. **But don't count on that in the general case.** If you have an `ORDER BY` clause for the query, as you should do whenever you use `TOP`, then Sql Server can't know which results to return until it checks all the possibilities. Furthermore, if that `ORDER BY` is determined by any fields from the APPLY results, Sql Server will **definitely** have to run the sub query for every possible record. Otherwise, it would have no way to know which records it needs. Finally, in many cases the query optimizer will handle an `APPLY` as something more closely resembling a `JOIN` than a sub query. Even when it seems like it needs to compute the results for every record, it still might be faster than you expect. Upvotes: 2 <issue_comment>username_2: I found out that the OUTER APPLY will be done on all the rows first, and then the TOP 5 will be returned. Upvotes: 1 [selected_answer]
2018/03/21
631
2,325
<issue_start>username_0: I have a GridView that I created without a datasource control; it gets its data from a database table, and the GridView also has a select link in one column. The select link is pointed to ActivityID (maybe that is a problem?) ``` ``` [![enter image description here](https://i.stack.imgur.com/Ghnh5.jpg)](https://i.stack.imgur.com/Ghnh5.jpg) I have created an OnClick event. ``` protected void lnkSelect_Click(object sender, EventArgs e) { txtActivity.Text = GridView1.SelectedRow.Cells[2].Text; ddlChange_Requestor.selectedvalue = GridView1.SelectedRow.Cells[6].selectevvalue; } ``` Am I missing something? Should I maybe use "FindControl"? I'm a bit lost here. e.g. [![enter image description here](https://i.stack.imgur.com/3aDZ5.jpg)](https://i.stack.imgur.com/3aDZ5.jpg) Activity textbox (**txtActivity**) = Test2 (it should say that in the textbox). Change requestor dropdown (**ddlChange\_Requestor**) = ... (find the change requestor value and change the dropdownlist)<issue_comment>username_1: You should switch to the GridView `RowCommand`. ``` ``` And change the LinkButton to ``` Select ``` Now you can get all the data you need in the method. ``` protected void gwActivity_RowCommand(object sender, GridViewCommandEventArgs e) { GridViewRow row = (GridViewRow)(((LinkButton)e.CommandSource).NamingContainer); txtActivity.Text = row.Cells[2].Text; ddlChange_Requestor.SelectedValue = e.CommandArgument.ToString(); } ``` Upvotes: 2 [selected_answer]<issue_comment>username_2: If you're just copying data from your datagrid row to the input fields, then what you're trying to do incurs a heavy execution cost. Handling the LinkButton OnClick event on the server side requires that the page be posted back, all the page events to fire (e.g. events to bind data to the template, execute the LinkButton OnClick event, and render the entire page into HTML), and the resulting HTML to be sent back to the browser for display. If you can use javascript, you should consider using the [OnClientClick event handler](https://msdn.microsoft.com/en-us/library/system.web.ui.webcontrols.linkbutton.onclientclick(v=vs.110).aspx) instead as this keeps you on the web page without posting back to the web server. Upvotes: 0
2018/03/21
773
2,892
<issue_start>username_0: Trying to fetch the pokeapi and map through the array of data returned from the api. I set my state to an empty array and then I proceed to try and fetch the api and receive data from the response, and add it to my pokemon state ``` class App extends Component { constructor() { super() this.state = { pokemon: [], searchfield: '' } } componentDidMount() { fetch('https://pokeapi.co/api/v2/pokemon/') .then(response => response.json()) .then(pokemon => this.setState({ pokemon: pokemon })) .catch(err => console.log(err)); } onSearchChange = (e) => { this.setState({ searchfield: e.target.value }) } render() { const filteredPokemon = this.state.pokemon.filter(poki => { return poki.name.toLowerCase().includes (this.state.searchfield.toLowerCase()); }) if (!this.state.pokemon.length) { return Loading ======= } else { return ( Pokemon ======= ); } } ```<issue_comment>username_1: This api call results in `this.state.pokemon` **being set to an object not an array**: ``` fetch('https://pokeapi.co/api/v2/pokemon/') .then(response => response.json()) .then(pokemon => this.setState({ pokemon: pokemon })) .catch(err => console.log(err)); ``` I believe you are trying to filter on the `results` property which is an array? In this case, set `this.state.pokemon` to `pokemon.results`: ``` fetch('https://pokeapi.co/api/v2/pokemon/') .then(response => response.json()) .then(pokemon => this.setState({ pokemon: pokemon.results })) .catch(err => console.log(err)); ``` You could have debugged the fetch to see the object like this: ``` fetch('https://pokeapi.co/api/v2/pokemon/') .then(response => response.json()) .then(pokemon => console.log(pokemon)) .catch(err => console.log(err)); ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: From the actual data you receive from the api. ``` { "count":949, "previous":null, "results": [ { "url":"https:\/\/pokeapi.co\/api\/v2\/pokemon\/1\/", "name":"bulbasaur" }, { "url":"https:\/\/pokeapi.co\/api\/v2\/pokemon\/2\/", "name":"ivysaur" } ] } ``` Change This: ``` fetch('https://pokeapi.co/api/v2/pokemon/') .then(response => response.json()) .then(pokemon => this.setState({ pokemon: pokemon })) .catch(err => console.log(err)); ``` To ``` fetch('https://pokeapi.co/api/v2/pokemon/') .then(response => response.json()) .then(({results}) => this.setState({ pokemon: results })) .catch(err => console.log(err)); ``` Upvotes: 0
2018/03/21
774
2,663
<issue_start>username_0: Let's say I have a JavaFX application and want to change between scenes. Each scene contains Buttons, which, when pressed, lead to the following scenes. In order to give my buttons commands what to do if pressed, I try to do the following: ``` for (Button button : buttonsArray) //for each Button in ArrayList button.setOnAction( e -> handle(frameID, button.ID) ); //give it the data about frame and button ``` The problem is, the `handle` method contains huge amount of Switch-Case statements: ``` switch (frameID) //lokking for a certain frame { case 1: switch (ID) // and a certain button { case 1: // lead to the certain scene break; } break; } ``` Not even it complicates the code, it's also really easy to spoil everything by writing brackets or `break` wrong. It also feels a bit like garbage code. So, what's the better way to manage all these buttons? I'm relatively new to Java and OOP, though I'm open to learn new things.<issue_comment>username_1: This api call results in `this.state.pokemon` **being set to an object not an array**: ``` fetch('https://pokeapi.co/api/v2/pokemon/') .then(response => response.json()) .then(pokemon => this.setState({ pokemon: pokemon })) .catch(err => console.log(err)); ``` I believe you are trying to filter on the `results` property which is an array? In this case, set `this.state.pokemon` to `pokemon.results`: ``` fetch('https://pokeapi.co/api/v2/pokemon/') .then(response => response.json()) .then(pokemon => this.setState({ pokemon: pokemon.results })) .catch(err => console.log(err)); ``` You could have debugged the fetch to see the object like this: ``` fetch('https://pokeapi.co/api/v2/pokemon/') .then(response => response.json()) .then(pokemon => console.log(pokemon)) .catch(err => console.log(err)); ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: From the actual data you receive from the api. ``` { "count":949, "previous":null, "results": [ { "url":"https:\/\/pokeapi.co\/api\/v2\/pokemon\/1\/", "name":"bulbasaur" }, { "url":"https:\/\/pokeapi.co\/api\/v2\/pokemon\/2\/", "name":"ivysaur" } ] } ``` Change This: ``` fetch('https://pokeapi.co/api/v2/pokemon/') .then(response => response.json()) .then(pokemon => this.setState({ pokemon: pokemon })) .catch(err => console.log(err)); ``` To ``` fetch('https://pokeapi.co/api/v2/pokemon/') .then(response => response.json()) .then(({results}) => this.setState({ pokemon: results })) .catch(err => console.log(err)); ``` Upvotes: 0
2018/03/21
688
2,478
<issue_start>username_0: I am attempting to use the AWS SDK for JavaScript. This process requires creating a credential file as [explained here](https://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/loading-node-credentials-shared.html): However I am unable to create this file as descibed for Windows. On Windows systems I am supposed to save the files here: `C:\Users\USERNAME\.aws\credentials`. But I am unable to create a path such as `.aws`. Also ruining `npm install aws-sdk --save` as indicated on the resource page, there are no such folders created for me to add the file. How do I create this file?<issue_comment>username_1: ### Option 1: MANUAL > > Open CMD in the `C:\Users\USERNAME` directory (should be default) > > > Run the following command in CMD: `mkdir .aws` > > > Create the `credentials` file and add your credentials manually > > > ### Option 2: AUTOMATIC Install the [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/installing.html) This will give you access to running AWS commands from the command line. Then since you're using a windows machine, open up CMD and run the following command: `aws configure` It will ask you for some details such as `Access Key ID` and `Secret Access Key` as well as your preferred region and data type. When you enter these details, it will create the `.aws/credentials` file for you. Upvotes: 2 [selected_answer]<issue_comment>username_2: Adding to Toms Answer: 1. Manual way of creating the aws file. -> Covered by Tom 2. Automatic way of doing it by using the CLI. -> Covered by Tom. 3. Programmatic way of doing this: ``` var AWS = require('aws-sdk'); console.log("Configuring AWS SDK with %s region", argv.region); AWS.config.update({ accessKeyId: accessKey, secretAccessKey: secretKey, region: region }); ``` 4. Environment Variables - The SDK can pick up these Env Variables, AWS\_ACCESS\_KEY\_ID, AWS\_SECRET\_ACCESS\_KEY, AWS\_SESSION\_TOKEN 5. Loading from a JSON file, you can use `AWS.config.loadFromPath('/path/')` to load from a config file. 6. If your code is running in a AWS EC2, you can set an IAM role on the EC2 and the SDK can pick that up to configure itself automatically. For further reading use this: <https://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/setting-credentials-node.html> Personally, I would recommend using either options 3, 4, 5 or 6, this makes your code portable and deploy-able across environments . Upvotes: 2
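As a small follow-up to option 5 in the second answer (`AWS.config.loadFromPath`), here is a minimal sketch of what that looks like in practice. The file path and credential values are placeholders, not real keys; the shared `credentials` file itself is just a plain text file with a `[default]` section, so no special tooling is needed to create it.

```ts
// config.json (placeholder values):
// { "accessKeyId": "YOUR_KEY_ID", "secretAccessKey": "YOUR_SECRET_KEY", "region": "us-east-1" }
const AWS = require('aws-sdk');

// Load credentials and region from a JSON file instead of ~/.aws/credentials
AWS.config.loadFromPath('./config.json');

// Clients created afterwards pick up these settings automatically
const s3 = new AWS.S3();
s3.listBuckets((err, data) => {
  if (err) console.error(err);
  else console.log(data.Buckets);
});
```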
2018/03/21
352
1,175
<issue_start>username_0: We were told to use this to get a question answered. The following is a breakdown of our Menu; we apologize for the shape of the CSS/HTML/JS but were unaware as to how present it otherwise. We're attempting to have all column/rows on the same height. We're aware of row-eq-height but are apparently doing it wrong. The code is as below: Full Clip: <https://pastebin.com/Kc9KTkZU> Exerp: ``` [#### Sharable Appetizers](#faq-cat-1-sub-1) ```<issue_comment>username_1: to make sur that all the child element will have the same hight use `flex` in the parent container , in your case added this to your css ``` .row-eq-height{ display:flex; } ``` Upvotes: 0 <issue_comment>username_2: `.row-eq-height` is not a native class in Bootstrap. You'll need to add and define it yourself in the CSS: ``` .row-eq-height { display: -webkit-box; display: -webkit-flex; display: -ms-flexbox; display: flex; } ``` Please note: This class uses CSS3's flexbox layout mode, which is not supported in Internet Explorer 9 and below. For more info, see [Bootstrap's info](http://getbootstrap.com.vn/examples/equal-height-columns/). Upvotes: 1
2018/03/21
1,055
3,404
<issue_start>username_0: I have the following `matplotlib` snippet: ``` fig, ax = plt.subplots(figsize=(6,6)) values = np.random.normal(loc=0, scale=1, size=10) ax.plot(range(10), values, 'r^', markersize=15, alpha=0.4); ``` which produces [![enter image description here](https://i.stack.imgur.com/Vlk01.png)](https://i.stack.imgur.com/Vlk01.png) as planned. I'd like to make the line invisible where it overlaps with the points so that the points look more joined by the line rather than lying on top of the line. It is possible to do this by either making the line invisible where they overlap or to create a new line object that simply links the points rather than traces them? To be explicit, I do not want the *entire* line removed, just the sections that overlap with the points.<issue_comment>username_1: I think the best way to go about it is to overlay the triangles over the lines: ``` import matplotlib.pyplot as plt import numpy as np values = np.random.normal(loc=0, scale=1, size=10) plt.plot( range(10), values, marker='^', markerfacecolor='red', markersize=15, color='red', linewidth=2) plt.show() ``` The program outputs: [![output of program](https://i.stack.imgur.com/P07ZV.png)](https://i.stack.imgur.com/P07ZV.png) If you really want the see through aspect, I suggest you somehow calculate where the lines overlap with the markers and only draw the lines inbetween: ``` import numpy as np import matplotlib.pyplot as plt values = np.random.normal(loc= 0, scale=1, size=10) for i in range(9): start_coordinate, end_coordinate = some_function(values[i], values[i+1]) plt.plot([i, i+1], [start_coordinate, end_coordinate], *whatever_other_arguments) plt.scatter(range(10), values, *whatever_other_arguments) plt.show() ``` The hard part here is of course calculating these coordinates (if you want to zoom in this won't work), but honestly, given the difficulty of this question, I think you won't find anything much better... Upvotes: 0 <issue_comment>username_2: It is in general hard to let the lines stop at the edges of the markers. The reason is that lines are defined in data coordinates, while the markers are defined in points. A workaround would be to hide the lines where the markers are. We may think of a three layer system. The lowest layer (`zorder=1`) contains the lines, just as they are. The layer above contains markers of the same shape and size as those which are to be shown. Yet they would be colored in the same color as the background (usually white). The topmost layer contains the markers as desired. ``` import matplotlib.pyplot as plt import numpy as np; np.random.seed(42) fig, ax = plt.subplots(figsize=(6,5)) def plot_hidden_lines(x,y, ax = None, ms=15, color="r", marker="^", alpha=0.4,**kwargs): if not ax: ax=plt.gca() ax.scatter(x,y, c=color, s=ms**2, marker=marker, alpha=alpha, zorder=3) ax.scatter(x,y, c="w", s=ms**2, marker=marker, alpha=1, zorder=2) ax.plot(x,y, color=color, zorder=1,alpha=alpha,**kwargs) values1 = np.random.normal(loc=0, scale=1, size=10) values2 = np.random.normal(loc=0, scale=1, size=10) x = np.arange(len(values1)) plot_hidden_lines(x,values1) plot_hidden_lines(x,values2, color="indigo", ms=20, marker="s") plt.show() ``` [![enter image description here](https://i.stack.imgur.com/Jyn2z.png)](https://i.stack.imgur.com/Jyn2z.png) Upvotes: 2
2018/03/21
1,358
3,484
<issue_start>username_0: Working on a chat bubble: ```css .bubble { display: inline-block; position: relative; width: 35px; height: 20px; padding: 0px; background: rgb(219, 218, 218); -webkit-border-radius: 16px; -moz-border-radius: 16px; border-radius: 16px; border:rgb(107, 107, 107) solid 2px; } .bubble:after { content: ''; position: absolute; border-style: solid; border-width: 6px 5px 0; border-color: rgb(219, 218, 218) transparent; display: block; width: 0; z-index: 1; margin-left: -5px; bottom: -6px; left: 43%; } .bubble:before { content: ''; position: absolute; border-style: solid; border-width: 7px 6px 0; border-color: rgb(107, 107, 107) transparent; display: block; width: 0; z-index: 0; margin-left: -6px; bottom: -9px; left: 43%; } .bubble:hover { background-color: red; } ``` The onhover is currently not working for the caret part of the chat bubble. The desired result is obviously the whole chat bubble being filled up when there is an onhover event. I tried (not a CSS wiz): ``` .bubble:hover, .bubble:hover:before, .bubble:hover:after{ background-color: red; } ``` I also tried a lot of different stuff but it doesn't seem to work. How do I fix this?<issue_comment>username_1: You just have to add ``` .bubble:hover:after { border-color: red transparent; } ``` because the small triangle is actually the border of the `:after` element. ```css .bubble { display: inline-block; position: relative; width: 35px; height: 20px; padding: 0px; background: rgb(219, 218, 218); -webkit-border-radius: 16px; -moz-border-radius: 16px; border-radius: 16px; border: rgb(107, 107, 107) solid 2px; } .bubble:after { content: ''; position: absolute; border-style: solid; border-width: 6px 5px 0; border-color: rgb(219, 218, 218) transparent; display: block; width: 0; z-index: 1; margin-left: -5px; bottom: -6px; left: 43%; } .bubble:before { content: ''; position: absolute; border-style: solid; border-width: 7px 6px 0; border-color: rgb(107, 107, 107) transparent; display: block; width: 0; z-index: 0; margin-left: -6px; bottom: -9px; left: 43%; } .bubble:hover { background-color: red; } .bubble:hover:after { border-color: red transparent; } ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: Maybe simply this: ``` .bubble:hover::after { border-color: red transparent; } ``` ```css .bubble { display: inline-block; position: relative; width: 35px; height: 20px; padding: 0px; background: rgb(219, 218, 218); -webkit-border-radius: 16px; -moz-border-radius: 16px; border-radius: 16px; border:rgb(107, 107, 107) solid 2px; } .bubble:after { content: ''; position: absolute; border-style: solid; border-width: 6px 5px 0; border-color: rgb(219, 218, 218) transparent; display: block; width: 0; z-index: 1; margin-left: -5px; bottom: -6px; left: 43%; } .bubble:before { content: ''; position: absolute; border-style: solid; border-width: 7px 6px 0; border-color: rgb(107, 107, 107) transparent; display: block; width: 0; z-index: 0; margin-left: -6px; bottom: -9px; left: 43%; } .bubble:hover { background-color: red; } .bubble:hover::after { border-color: red transparent; } ``` Upvotes: 1
2018/03/21
798
3,135
<issue_start>username_0: I have a function that I want to automatically take a screenshot and then upload it to the users preferred email app. ``` Date now = new Date(); android.text.format.DateFormat.format("yyyy-MM-dd_hh:mm:ss", now); try { // image naming and path to include sd card appending name you choose for file String mPath = Environment.getExternalStorageDirectory().toString() + "/" + now + ".png"; // create bitmap screen capture View v1 = getWindow().getDecorView().getRootView(); v1.setDrawingCacheEnabled(true); Bitmap bitmap = Bitmap.createBitmap(v1.getDrawingCache()); v1.setDrawingCacheEnabled(false); File imageFile = new File(mPath); FileOutputStream outputStream = new FileOutputStream(imageFile); int quality = 100; bitmap.compress(Bitmap.CompressFormat.PNG, quality, outputStream); outputStream.flush(); outputStream.close(); File filelocation = new File(Environment.getExternalStorageDirectory().getAbsolutePath() + now + mPath); Uri path = Uri.fromFile(filelocation); Intent emailIntent = new Intent(Intent.ACTION_SEND); // set the type to 'email' emailIntent .setType("vnd.android.cursor.dir/email"); String to[] = {"Enter your email address"}; emailIntent .putExtra(Intent.EXTRA_EMAIL, to); // the attachment emailIntent.putExtra(Intent.EXTRA_STREAM, path); // the mail subject emailIntent .putExtra(Intent.EXTRA_SUBJECT, "Journey : "); startActivity(Intent.createChooser(emailIntent , "Select your preferred email app..")); } catch (Throwable e) { // Several error may come out with file handling or DOM e.printStackTrace(); } } ``` This is my function. It is taking the screen shot automatically and its store it on my local device. It is also prompting the user to select their email app. Then I select an app it says "unable to attach file" . I have read and write permissions in my manifest.<issue_comment>username_1: The other app may not have access to external storage. Plus, your code will crash on Android 7.0+, once you raise your `targetSdkVersion` to 24 or higher. Switch to [using `FileProvider`](https://developer.android.com/reference/android/support/v4/content/FileProvider.html) and its `getUriForFile()` method, instead of using `Uri.fromFile()`. And, eventually, move that disk I/O to a background thread. Upvotes: 1 <issue_comment>username_2: check this link - ``` https://www.javacodegeeks.com/2013/10/send-email-with-attachment-in-android.html ``` and [How to send an email with a file attachment in Android](https://stackoverflow.com/questions/9974987/how-to-send-an-email-with-a-file-attachment-in-android) Hope this help. Upvotes: 1 <issue_comment>username_3: **The problem was :** Uri path = Uri.fromFile(filelocation); **Instead I used :** File filelocation = new File(MediaStore.Images.Media.DATA + mPath); Uri myUri = Uri.parse("file://" + filelocation); Hopefully this helps anyone facing the same problem. Upvotes: 0
2018/03/21
789
3,046
<issue_start>username_0: I'm new to Selenium & new to Java as well. So maybe I'm missing something obvious, but I’m spinning on this for a while now, can't move forward & totally desperate. Please help! Here is my set-up: My custom Driver class implements WebDriver & sets property:

```
public class Driver implements WebDriver {
 private WebDriver driver;
 private String browserName;

 public Driver(String browserName) throws Exception {
 this.browserName = browserName;

 if(browserName.equalsIgnoreCase("chrome")) {
 System.setProperty("webdriver.chrome.driver", "src/test/resources/chromedriver");
 this.driver = new ChromeDriver();
 }
 else if(browserName.equalsIgnoreCase("firefox")) {
 System.setProperty("webdriver.gecko.driver", "src/test/resources/geckodriver");
 this.driver = new FirefoxDriver();
 }
 else {
 throw new Exception("Browser is not correct");
 }
 driver.manage().timeouts().implicitlyWait(10, TimeUnit.SECONDS);
 }
 <...>
}
```

BaseTest class gets property & creates new instance of the Driver inside @BeforeClass method (browser name passed with a maven command when running the test):

```
String browserName = getParamater("browser");
driver = new Driver(browserName);
```

In the Test class inside the @Test I create new Actions & pass there the driver from BaseTest:

```
Actions builder = new Actions(driver);

Action mouseOverHome = builder
 .moveToElement(pb.testGoodle)
 .build();

mouseOverHome.perform();
```

And this code doesn’t work -> no Actions performed (no mouse over or anything), no errors too. However if I create & define new WebDriver inside the @Test itself:

```
System.setProperty("webdriver.gecko.driver", "src/test/resources/geckodriver");
WebDriver driver = new FirefoxDriver();
```

Actions perfectly working. Any ideas or hints very appreciated!
2018/03/21
680
2,378
<issue_start>username_0: XCode 7, Swift 2. (don't even start on me about that, it's not my favorite set of constraints either) So I'm sending data using [node-apn](https://github.com/node-apn/node-apn) (BIG thanks to those folks!). I'm using the (pushkit option) `*.voip` topic and I got all that working. I can see the notification is received on my device (big shout out to [libimobiledevice](https://stackoverflow.com/a/19148654/1449799)). In composing the note on my server i'm doing ``` var note = new apn.Notification(); note.topic = 'mine.voip'; note.payload = { message: 'text', somethingElse: 'this other one ' payload: { k1: v1, k2: { k3: v2 } } }; ``` How am I supposed to get at my payload object? Following some (3rd party) pushkit examples (maybe it was [1](http://www.nikola-breznjak.com/blog/ios/create-native-ios-app-can-receive-voip-push-notifications/)? that showed like ``` func pushRegistry(registry: PKPushRegistry!, didReceiveIncomingPushWithPayload payload: PKPushPayload!, forType type: String!) { let payloadDict = payload.dictionaryPayload["aps"] as? Dictionary let message = payloadDict?["alert"] } ``` I tried to mimic it like ``` let myWeirdPayload = payload.dictionaryPayload["payload"] as? Dictionary ``` or even ``` let myWeirdPayload = payload.dictionaryPayload["payload"] as? Dictionary ``` but those don't work (I get `nil`). How am I supposed to do this?<issue_comment>username_1: Here's how I log it in Obj-C: ```m NSLog(@"pushRegistry:didReceiveIncomingPushWithPayload:forType:withCompletionHandler:%@", payload.dictionaryPayload); ``` Upvotes: 1 <issue_comment>username_2: Let me try to make this clear by explaining the relation between `PKPushPayload` and its `NSDictionary` property with the data you are looking for. For the `func pushRegistry(registry: PKPushRegistry!, didReceiveIncomingPushWithPayload payload: PKPushPayload!, forType type: String!)` you have the `payload` argument which is a `PKPushPayload`. The data you are looking for are stored in the `_dictionaryPayload` which is a `NSDictionary` and it is a property of the `payload` argument. So to access the data you can do: ``` let payloadDictionary = payload.dictionaryPayload as NSDictionary let myData = payloadDictionary.value(forKey: "myData_unique_key") ``` Upvotes: 0
2018/03/21
607
1,974
<issue_start>username_0: I have a string enum and need to get all the values. For instance, for the below enum, I'd like to return `["Red", "Yellow"]`: ``` export enum FruitColors { Apple = "Red", Banana = "Yellow", } ```<issue_comment>username_1: According to [this GitHub comment](https://github.com/Microsoft/TypeScript/issues/17198#issuecomment-315400819), this can be achieved the following way: ``` Object.keys(FruitColors).map(c => FruitColors[c]); ``` Upvotes: 2 <issue_comment>username_2: You're looking for [`Object.values()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/values) Upvotes: 5 [selected_answer]<issue_comment>username_3: You can inspect the `FruitColors` object. Note that if you do not assign names for the enum values, the generated code will be different and a simple key/value based mapping will lead to wrong results. e.g. ``` export enum FruitColors { "Red", "Yellow", } Object.values(FruitColors); // ["Red", "Yellow", 0, 1] ``` Because the generated code is along these lines: ``` var FruitColors; (function (FruitColors) { FruitColors[FruitColors["Red"] = 0] = "Red"; FruitColors[FruitColors["Yellow"] = 1] = "Yellow"; })(FruitColors = exports.FruitColors || (exports.FruitColors = {})); ``` You could then just filter the results by `typeof value == "string"`. Upvotes: 1 <issue_comment>username_4: Maybe you can try this function, it can return all values in string + number enum type, not only string Enum ```js const getAllValuesEnum = (enumType: any) => { const keysAndValues = Object.values(enumType); const values = []; keysAndValues.forEach((keyOrValue: any) => { if (isNaN(Number(keyOrValue))) { values.push(enumType[keyOrValue] || keyOrValue); } }); return values; }; ``` Example: ```js enum MyEnum { A = "a", B = 7, C = "c", D = 1, } ``` It will return [ 1, 7, 'a', 'c' ] Upvotes: -1
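To make the `Object.values()` suggestion concrete, a short sketch against the `FruitColors` enum from the question (both variants should print `["Red", "Yellow"]`; `Object.values` needs the `es2017.object` lib or newer in `tsconfig`):

```ts
enum FruitColors {
  Apple = "Red",
  Banana = "Yellow",
}

// Variant 1: keys + lookup, works on older lib targets
const viaKeys = Object.keys(FruitColors).map(
  k => FruitColors[k as keyof typeof FruitColors]
);

// Variant 2: direct, when Object.values is available
const viaValues = Object.values(FruitColors);

console.log(viaKeys);   // ["Red", "Yellow"]
console.log(viaValues); // ["Red", "Yellow"]
```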
2018/03/21
1,034
3,132
<issue_start>username_0: I need to execute a curl command like this: ``` #!/bin/bash shopid=9932781 itemid=231873991 curl -sSb /tmp/cookies 'https://website.com' -H 'cookie: csrftoken=<PASSWORD>' -H 'x-csrftoken: <PASSWORD>' -H 'content-type: multipart/form-data; boundary=----WebKitFormBoundary' -H 'referer: https://website.com' \ --data-binary $'------WebKitFormBoundary\r\nContent-Disposition: form-data; name="shopid"\r\n\r\n${shopid}\r\n------WebKitFormBoundary\r\nContent-Disposition: form-data; name="itemid"\r\n\r\n${itemid}\r\n------WebKitFormBoundary\r\nContent-Disposition: form-data; name="quantity"\r\n\r\n1\r\n------WebKitFormBoundary\r\nContent-Disposition: form-data; name="donot_add_quantity"\r\n\r\nfalse\r\n------WebKitFormBoundary\r\nContent-Disposition: form-data; name="update_checkout_only"\r\n\r\nfalse\r\n------WebKitFormBoundary\r\nContent-Disposition: form-data; name="source"\r\n\r\n\r\n------WebKitFormBoundary\r\nContent-Disposition: form-data; name="checkout"\r\n\r\ntrue\r\n------WebKitFormBoundary--\r\n' ``` The `$''` quotes are necessary or else (ie. in the double-quoted case) `\r\n` won't work -- but with this form, `$shopid` and `$item` aren't replaced with their values. How can I get both behaviors?<issue_comment>username_1: The variables are **not** expanded in 'single quotes'. It's important to learn how quoting works : > > "Double quote" every literal that contains spaces/metacharacters and *every* expansion: `"$var"`, `"$(command "$var")"`, `"${array[@]}"`, `"a & b"`. Use `'single quotes'` for code or literal `$'s: 'Costs $5 US'`, `ssh host 'echo "$HOSTNAME"'`. See > <http://mywiki.wooledge.org/Quotes> > > <http://mywiki.wooledge.org/Arguments> > > <http://wiki.bash-hackers.org/syntax/words> > > > --- So, try this : ``` data="------WebKitFormBoundary Content-Disposition: form-data; name='shopid' ${shopid} ------WebKitFormBoundary Content-Disposition: form-data; name='itemid' ${itemid} ------WebKitFormBoundary Content-Disposition: form-data; name='quantity' 1 ------WebKitFormBoundary Content-Disposition: form-data; name='donot_add_quantity' false ------WebKitFormBoundary Content-Disposition: form-data; name='update_checkout_only' false ------WebKitFormBoundary Content-Disposition: form-data; name='source' ------WebKitFormBoundary Content-Disposition: form-data; name='checkout' true ------WebKitFormBoundary--" data="$(sed 's/$/\r/' <<< "$data")" curl -sSb /tmp/cookies \ -H 'cookie: csrftoken=mytoken' \ -H 'x-csrftoken: mytoken' \ -H 'content-type: multipart/form-data; boundary=----WebKitFormBoundary' \ -H 'referer: https://website.com' \ --data-binary "$data" \ 'https://website.com' ``` Upvotes: 1 <issue_comment>username_2: You can use multiple quoting styles within a single string. Thus: ``` $'Constant\r\n\n\r\n'"$bar" ``` ...has the `\r\n`s parsed with `$'...'` rules, but the `$bar` expanded with double-quoting rules (such that the expansion also takes place). Upvotes: 1 <issue_comment>username_3: You need to make that code maintainable ``` binary_data=$( cat < ``` Upvotes: 3 [selected_answer]
2018/03/21
513
1,944
<issue_start>username_0: I have 2 table. Table A Payment Table B Phone I want combine result of below 2 query of difference schema into 1 table. ``` select Payment_DT from DW.Payment SELECT PHONE_NUMBER FROM STG_ANALYSIS.PHONE ``` This is the output I am looking for. ``` Payment_Dt Phone_Number 3/31/2018 123-456-7890 ```<issue_comment>username_1: not knowing the logic but to merge the two should be ``` select Payment_DT, (SELECT PHONE_NUMBER FROM STG_ANALYSIS.PHONE) phone_number from DW.Payment ``` if there is a logic to join the two tables then use inner/left join Upvotes: 1 <issue_comment>username_2: DB Links are pretty much the name of the game here. If you can't get one created on your own, then check if there are any public DB links that you could use. It's also possible that your DBAs will be willing to have one of their DB Links used to create a materialized view of S2.Table2 on the S1 instance. Another option might be web services, but my guess is you'd run into much more administrative issues there than you would with a simple DB link. Consider those only if there are good reasons for no links (example: two separate organizations that don't want to open firewall holes between their databases). Failing those, you're getting into really ugly territory but you might be able to make something work. For example: ``` Open up both from a tool that can read from multiple connections at once and do the join there. Access. Toad for Data Analysis, whatever. Use a tool like Toad to copy S2.Table2 to your own schema ("create in another schema" followed by "copy data to another schema") If you have, or can get, complementary directory objects defined on both servers, create a Materialized View of S2 as an external table in a directory which can be written from S2 and read from S1. ``` You really don't want to maintain any of these solutions over the long term, though. Upvotes: 0
2018/03/21
857
3,068
<issue_start>username_0: I'm trying to parallelize time series forecasting in python using dask. The format of the data is that each time series is a column and they have a common index of monthly dates. I have a custom forecasting function that returns a time series object with the fitted and forecasted values. I want to apply this function across all columns of a dataframe (all time series) and return a new dataframe with all these series to be uploaded to a DB. I've gotten the code to work by running: ``` data = pandas_df.copy() ddata = dd.from_pandas(data, npartitions=1) res = ddata.map_partitions(lambda df: df.apply(forecast_func, axis=0)).compute(get=dask.multiprocessing.get) ``` My question is, is there a way in Dask to partition by column instead of row, since in this use case I need to keep the ordered time index as is for the forecasting function to work correctly. If not, how would I re-format the data to allow efficient large-scale forecasting to be possible, and still return the data in the format I need to then push to a DB? [example of data format](https://i.stack.imgur.com/sJDbR.png)<issue_comment>username_1: Dask dataframe only partitions data by rows. See the [Dask dataframe documentation](http://dask.pydata.org/en/latest/dataframe.html) [Dask array](http://dask.pydata.org/en/latest/array.html) however can partition along any dimension. You have you use Numpy semantics though rather than Pandas semantics. You can do anything you want to with [dask delayed](http://dask.pydata.org/en/latest/delayed.html) or [futures](http://dask.pydata.org/en/latest/futures.html). This [parallel computing example](https://github.com/pydata/parallel-tutorial/blob/master/notebooks/02-submit.ipynb) given in a more generic tutorial might give you some ideas. Upvotes: 1 <issue_comment>username_2: Thanks for the help, i really appreciate it. I've used the dask.delayed solution and it's working really well, it takes about 1/3 of the time just using a local cluster. For anyone interested the solution I've implemented: ``` from dask.distributed import Client, LocalCluster import pandas as pd import dask cluster = LocalCluster(n_workers=3,ncores=3) client = Client(cluster) #get list of time series back output = [] for i in small_df: forecasted_series = dask.delayed(custom_forecast_func)(small_df[i]) output.append(forecasted_series) total = dask.delayed(output).compute() #combine list of series into 1 dataframe full_df = pd.concat(total,ignore_index=False,keys=small_df.columns,names=['time_series_names','Date']) final_df = full_df.to_frame().reset_index() final_df.columns = ['time_series_names','Date','value_variable'] final_df.head() ``` This gives you the melted dataframe structure so if you want the series to be the columns you can transform it with ``` pivoted_df = final_df.pivot(index='Date', columns='time_series_names', values='value_variable') ``` [small\_df is in this format in pandas dataframe with Date being the index](https://i.stack.imgur.com/cKcwE.png) Upvotes: 3 [selected_answer]
2018/03/21
959
2,324
<issue_start>username_0: the struct: ``` struct info{ int id; int time; int x; int y; }; ``` The array of structs will always follow this conditions: * The time variable will always be given in a sorted away to it's corresponding id * It's considered duplicated if the variables: `time`, `x`, `y` are equal and `id` is different * The search it's done by looking for two different id values **Example 1:Find the duplicate for the pair - 001 002** ``` struct info *arr = {{002, 10, 30, 40}, {001, 10, 30, 40}, {001, 15, 45, 50}, {001, 20, 23, 37}} ``` **Output:** the duplicate pair would be in position 0 & 1 **Example 2:Find the duplicate for the pair - 001 002** ``` struct info *arr = {{002, 15, 45, 50}, {002, 16, 21, 13}, {001, 10, 30, 40}, {001, 15, 45, 50},} ``` **Output:** the duplicate pair would be in position 0 & 3 **Example 3:Find the duplicate for the pair - 003 004** ``` struct info *arr = {{004, 6, 47, 52}, {003, 6, 47, 52}, {001, 10, 30, 40}, {002, 15, 45, 50},} ``` **Output:** the duplicate pair would be in position 0 & 1 Is it possible to solve this in less then `O(n^2)` time?<issue_comment>username_1: Probably a simple solution: Sorting the array is `O(n*log(n))`, and finding duplicate entries then is a single loop of complexity `O(n)`. So all together, a complexity of `O(n*log(n))`, which is less than the `O(n^2)` you wanted to beat. Hope it helps. Upvotes: 3 [selected_answer]<issue_comment>username_2: Adding N elements to a hash table can be done in O(N), so it can be done in O(N). A working demonstration in Perl: ``` #!/usr/bin/perl use strict; use warnings qw( all ); my @infos = ( { id => '002', time => 10, x => 30, y => 40 }, { id => '001', time => 10, x => 30, y => 40 }, { id => '001', time => 15, x => 45, y => 50 }, { id => '001', time => 20, x => 23, y => 37 }, ); my %seen; for my $i (0..$#infos) { my $info = $infos[$i]; my $key = join(':', $info->{time}, $info->{x}, $info->{y}); push @{ $seen{$key} }, $i; } for my $matches (values(%seen)) { next if @$matches == 1; print("Duplicates:\n"); for my $i (@$matches) { my $info = $infos[$i]; printf(" %d %s %d %d %d\n", $i, @$info{qw( id time x y )}); } } ``` Output: ``` Duplicates: 0: 002 10 30 40 1: 001 10 30 40 ``` Upvotes: 2
2018/03/21
704
2,205
<issue_start>username_0: I use an Add-in to export all vba code to a repository whenever a workbook is opened. It works great for versioning, but I must remember to deactivate it before I leave the office every evening. If I don't, the automated Excel processes that run overnight generate the prompt and can't proceed until the dialog is cleared (i.e. export code...Yes/No?). When I log in, I'm staring at several Excel dialogs waiting for a response. Note that I don't always want to export the code...sometimes I just want to open the file and check something, so the Yes/No dialog is required. How would one automatically deactivate the Add-in every night at a set time before the automated processes start running? Note that manually reactivating it in the morning is not a problem. If automated Add-in deactivation is not possible, automatically answering the dialog with a "No" during the overnight hours might be an option. Is there a way to automatically click the No button of a MsgBox? Thanks in advance!
2018/03/21
390
1,423
<issue_start>username_0: is there any possibility to detect a click on an ImageView with a Handler (eg looking at it for 100 ms)? I mean is there a method of ImageView/View, which give me a boolean when someone has the finger on the ImageView? Thank you! EDIT: For all people in interest for an deep understanding: <https://www.youtube.com/watch?v=SYoN-OvdZ3M&t=139s> That are 4 videos about the thematic. After this all my questions was answered.<issue_comment>username_1: Not sure what you mean. You can use an onTouchListener if you want to know when a view is touched at all. Upvotes: 1 <issue_comment>username_2: Yes its called onClickListerner have you ever tried it? its amazing. > > imageview.setOnClickListener(new View.OnClickListener() { > @Override > public void onClick(View v) { > } > }); > > > or `setOnTouchListener()` Upvotes: 0 <issue_comment>username_3: If you want to detect whether the user has his finger on the imageview, you can use `setOnTouchListener`, ``` imageview.setOnTouchListener(new View.OnTouchListener() { @Override public boolean onTouch(View view, MotionEvent e) { if (e.getActionMasked() == MotionEvent.ACTION_DOWN) { // This is a touch action. } } ``` Other actions include ACTION\_UP, ACTION\_MOVE etc. Depending on what you want, you can trigger the click action by choosing the right touch event. Upvotes: 2 [selected_answer]
2018/03/21
493
1,614
<issue_start>username_0: In Postgres, a query like this: `SELECT a.id, b.id FROM table1 a INNER JOIN table2 b on a.id=b.id;` Will return 'deconflicted' column names like: > > a.id, b.id > > > However, in Oracle, the `id` columns just keep their original names: > > id, id > > > My problem here is that my follow-on tools are confused by the duplicate column names. In this example, I've only listed 2 columns, but in reality there are 100's, and I'd prefer to not have to list them all out. Is there a way to do this without having to specify all of the column names (there are 100's).<issue_comment>username_1: Try this: ``` SELECT a.id as 'table1 a', b.id as 'table2 b' FROM table1 a INNER JOIN table2 b on a.id=b.id group by a.id, b.id; ``` You can name your results with `as` followed by the name. If you want to use `JOIN`, you have to use `GROUP BY` followed by your selected items if you don't want duplicate values. Upvotes: 0 <issue_comment>username_2: You can solve the particular problem with `id` by fixing the `join`: ``` SELECT * FROM table1 a INNER JOIN table2 b USING (id); ``` If `id` is the only duplicate column, then this may solve your problem. Upvotes: 0 <issue_comment>username_3: I suspect there won't be because table names (or aliases) aren't a requirement. For example the following is valid SQL for Oracle, but there's no viable prefix to include in column naming. ``` SELECT * FROM (select 1 a, 2 b from dual) join (select 1 a, 3 b from dual) using (a) ``` But I think Postgres requires those FROM subqueries to be aliased Upvotes: 1
2018/03/21
442
1,694
<issue_start>username_0: Is it possible to share config variables / env variables across subfolders of a monorepo that is set up using yarn workspaces? We've got a monorepo for a project, and several of the subfolders are projects that are built using create-react-app. In those individual folders, we can have .env files to specify config values, and they get used fine when we use the build/start scripts in our package.jsons at the individual level. However, we've also got other subfolders that are just libraries that are imported into the CRA apps. We'd like to specify config/env variables in those libraries, but so far haven't found a way to get the values to propagate when we build or start a project that imports the library. Have tried .env files in the libraries themselves as well as in the CRA app root folders, but nothing seems to work...<issue_comment>username_1: Consider the implications of reading from `.env` as this may adverse affect third-party libraries and dependencies into `process.env`. You can use libraries like <https://github.com/motdotla/dotenv> to do that: 1. Setup a `.env.file` file in your lib: ``` - src - index.js - .env.file ``` 2. in the lib index.js file: ``` import dotenv from 'dotenv' import path from 'path' dotenv.config({ path: path.join(__dirname,'..','.env.file'), }) // the rest of the file... ``` Upvotes: 2 <issue_comment>username_2: You can use `find-yarn-workspace-root` to find the root directory of your repository. ``` import workspacesRoot from "find-yarn-workspace-root"; import { config as dotenv } from "dotenv"; const rootDirectory = workspacesRoot(); dotenv({ path: `${rootDirectory}/.env` }); ``` Upvotes: 1
2018/03/21
704
3,095
<issue_start>username_0: One of the *Units of Functionality* that POSIX states an OS needs to provide to be POSIX compliant is POSIX\_C\_LANG\_SUPPORT. Basically this is the whole C Standard Library with some more things. My question is simple: developers of POSIX compliant OSes usually just download an open source version of C Standard Library (e.g. glib or uClibc) and adapt it to fit POSIX or they implement everything from scratch? Is there any advantage in rewriting the C Library instead of just picking one of the very known implementations and adjust it to my needs?<issue_comment>username_1: Well, *we are all dwarves standing on the shoulders of giants*. Writing a new OS is a *huge* undertaking, so the wise one will re-use whatever (design, libraries, compilers, other software) they can. It's still in all likelyhood far too much work, so why make it even harder by rewriting everything from scratch? Upvotes: 0 <issue_comment>username_2: Really, it is done in the inverse way. We have different Unix versions: two main families: SystemV and BSD, different manufacturers, and so there was a need to standardize. US government wanted also standardized programs, so POSIX (version 1) was created, by standardizing OS interfaces (a step further than just C standard). Windows NT is also POSIX (version 1) compatible, just because government wanted standardized tools. So POSIX was designed very very broad. Then with time, there were need to standardize some more Unix (and similar) systems. Not as just one system, one API, but as common API, and so programs (e.g. GUI libraries or databases) could eventually use extension, but also make sure that program that follow the standard works on compatible system. This was SUS (Single Unix Specification). This required a UNIX like system (unlike POSIX 1). Then POSIX became not so important: application that in theory could work on all POSIX systems didn't really work on POSIX Windows. So the new version of POSIX merged old POSIX plus SUS plus new useful function missing in SUS. Now Linux is important, so Linux implementations (e.g. glibc) is taken into account when updating POSIX. You will see in the mailing list, that POSIX is defined by "vendors" of different Unix and similar systems. So, it is not that operating systems extend POSIX, it is just that POSIX takes the most useful and standard options from different OS. It creates new interfaces just when existing interfaces are so incompatible, that by standardizing, it will break existing programs. For the "second" question: when you develop a new operating system, you choose what way to go. Usually it is just derivation and fork (and distributions): again from the two Unix families, of just deriving Linuxes from RedHat or Debian). Sometime system is build from scratch, because of the design. Kernel provides most of system calls, so e.g. glibc needs a lot of systemcall (given by kernel) implemented in a similar way as POSIX. Glibc is not complete. Note: early Linux distributions used other libraries. GLibc was also written from scratch. Upvotes: 1
2018/03/21
1,299
4,593
<issue_start>username_0: I'm certain there's probably a bunch of things going on here that I don't understand well enough, so forgive me if this is a stupid question or if there's obvious details missing. I have a Visual Studio 2015 solution that I've upgraded from .NET 4.5.1 to .NET 4.7.1. The solution consists of a website (not web app) project, and several libraries. The libraries don't really have any dependencies (except eachother) and while they are targeting .NET 4.7.1, they don't use, need, or reference .NETStandard.Library. When I compile one of the libraries in particular, it keeps copying a bunch of .NET 4.7.1 facade dlls into the website bin folder. Unfortunately, the website is a Kentico 11 application, and it keeps trying to load the System.IO.Compression.ZipFile facade, and chokes on it because it's a reference assembly, not a real assembly. If I delete the .dll, everything runs fine... but I don't want to delete it every time or add a post-build event to delete it. That's just silly. Can anyone help me understand what's going on here, and how to clean it up?<issue_comment>username_1: References to assemblies have their own properties. You can specify there if you want to copy the assembly to the build output directory. Maybe somewhere it is set to `true`. To check that go to Solution Explorer in Visual Studio and right click on the referenced assembly. Then click Properties and look for property named "Copy Local". Upvotes: 1 <issue_comment>username_2: Kentico 11 can only target up to .NET 4.7 so in an attempt to fully support your .NET 4.7.1 libraries I believe it is copying in those additional facade DLLs. This is based on the .NET 4.7.1 release announcement, specifically this section: > > BCL – .NET Standard 2.0 Support > > > .NET Framework 4.7.1 has built-in support for .NET Standard 2.0. .NET Framework 4.7.1 adds about 200 missing APIs that were part of .NET Standard 2.0 but not actually implemented by .NET Framework 4.6.1, 4.6.2 or 4.7. You can refer to details on .NET Standard on .NET Standard Microsoft docs. > > > Applications that target .NET Framework 4.6.1 through 4.7 must deploy additional .NET Standard 2.0 support files in order to consume .NET Standard 2.0 libraries. This situation occurred because the .NET Standard 2.0 spec was finalized after .NET Framework 4.6.1 was released. .NET Framework 4.7.1 is the first .NET Framework release after .NET Standard 2.0, enabling us to provide comprehensive .NET Standard 2.0 support. > > > <https://blogs.msdn.microsoft.com/dotnet/2017/10/17/announcing-the-net-framework-4-7-1/> Reference that led me to this conclusion: <https://github.com/Particular/NServiceBus/issues/5047#issuecomment-339096350> **Update:** I was unable to reproduce your issue in Visual Studio 2017 Version 15.6.2. I installed a Kentico 11 website project targeting .NET 4.7. I then created a library project that targeted .NET 4.7.1. I added some dummy code to the project to make use of Sysetem.IO.Compression and System.Net.Http namespaces. I added a reference to the project from Kentico and ran a build. No facade DLLs where copied to the bin folder. This post indicates the issue was fixed in Visual Studio version 15.6 <https://github.com/dotnet/sdk/issues/1647#issuecomment-364999962> Upvotes: 3 [selected_answer]<issue_comment>username_3: The additional files that get deployed to your bin folder are needed to support referencing and running .NET Standard 1.x and .NET Standard 2.0 libraries in your .NET Framework application. 
We have documented this [as a known issues with .NET Framework 4.7.1](https://github.com/Microsoft/dotnet/blob/master/releases/net471/KnownIssues/514195-Targeting%20.NET%20Framework%204.7.1%20copies%20extra%20files%20to%20your%20bin%20directory.md). The presence of those additional files is not sufficient however. You also need to have binding redirects generated in order to ensure types correctly unify across libraries. Visual Studio 15.6.3 (and later) have a change that will automatically generate those binding redirects for your application. .NET Framework 4.7.2 addresses the issues that require those additional files to be deployed with your application. When targeting or running on .NET Framework 4.7.2 you won't have any additional files copied to your bin folder and no binding redirects will be automatically generated. You can try .NET Framework 4.7.2 and see what's new by following the instructions [here](https://blogs.msdn.microsoft.com/dotnet/2018/02/05/announcing-net-framework-4-7-2-early-access-build-3052/). Upvotes: 2
2018/03/21
1,440
4,908
<issue_start>username_0: I'm trying to determine the format of a text file by looping through the first 10 lines, perform some regex matching and then compare the results at the end. I can easily loop through the entire file, but I only want the first N lines (in this case 10) I'm familiar with other languages, but the idiosyncrasies of this batch file is throwing me for a loop so to say. Here is what I have so far:

```
@echo off
setlocal enableDelayedExpansion
set /A REGEXCOUNTER=0
set /A COUNTER=0

for /F %A in (%submitfile%) do (
 set /A COUNTER=COUNTER+1
 rem echo %A
 setlocal enableDelayedExpansion
 echo(%A|findstr /r /c:"[0-9].*" >nul && (
 set /A REGEXCOUNTER=REGEXCOUNTER+1
 echo %COUNTER% - %REGEXCOUNTER% - FOUND - %A
 rem any commands can go here
 ) || (
 echo NOT FOUND
 rem any commands can go here
 )
 rem LOOP END
 if %COUNTER% GEQ 10 do (goto loop_over)
 )
)

:loop_over
echo "END HERE!"
```

I've got counters set up that incrementally tick up to count my matches and how many times it's looped. However here is some sample output of variable values:

```
110 - 0 - FOUND - 003
220 - 0 - FOUND - 2
330 - 0 - FOUND - 1
440 - 0 - FOUND - 029
```

The loop counter variable is increasing by ten for each loop and the regex match counter is not going up at all. I'm pretty sure this has something to do with variable scope but I'm not sure where to begin.
2018/03/21
1,262
3,931
<issue_start>username_0: I am working on a client for a RESTful service, using .NET Core 2.0. The remote service returns challenges like this: ``` WwwAuthenticate: Bearer realm="https://somesite/auth",service="some site",scope="some scope" ``` Which need to get turned into token requests like: ``` GET https://somesite/auth?service=some%20site&scope=some%20scope ``` Parsing the header to get a scheme and parameter is easy with `AuthenticationHeaderValue`, but that just gets me the `realm="https://somesite/auth",service="some site",scope="some scope"` string. How can I easily and reliably parse this to the individual `realm`, `service`, and `scope` components? It's not quite JSON, so deserializing it with NewtonSoft `JsonConvert` won't work. I could regex it into something that looks like XML or JSON, but that seems incredibly hacky (not to mention unreliable). Surely there's a better way?<issue_comment>username_1: Since I don't see a non-hacky way. Maybe this *hacky* way may help ``` string input = @"WwwAuthenticate: Bearer realm=""https://somesite/auth"",service=""some site"",scope=""some, scope"""; var dict = Regex.Matches(input, @"[\W]+(\w+)=""(.+?)""").Cast() .ToDictionary(x => x.Groups[1].Value, x => x.Groups[2].Value); var url = dict["realm"] + "?" + string.Join("&", dict.Where(x => x.Key != "realm").Select(x => x.Key + "=" + WebUtility.UrlEncode(x.Value))); ``` **OUTPUT** `url => https://somesite/auth?service=some+site&scope=some%2C+scope` *BTW: I added a `,` in "scope"* Upvotes: 2 [selected_answer]<issue_comment>username_2: > > Possible duplicate of [How to parse values from Www-Authenticate](https://stackoverflow.com/questions/48452636/how-to-parse-values-from-www-authenticate/57263137) > > > Using the schema defined in [RFC6750](https://www.rfc-editor.org/rfc/rfc6750#section-3) and [RFC2616](https://www.w3.org/Protocols/rfc2616/rfc2616-sec2.html), a slightly more precise parser implementation is included below. This parser takes into account the possibility that strings might contain `=`, `,`, and/or escaped `"`. 
```cs internal class AuthParamParser { private string _buffer; private int _i; private AuthParamParser(string param) { _buffer = param; _i = 0; } public static Dictionary Parse(string param) { var state = new AuthParamParser(param); var result = new Dictionary(); var token = state.ReadToken(); while (!string.IsNullOrEmpty(token)) { if (!state.ReadDelim('=')) return result; result.Add(token, state.ReadString()); if (!state.ReadDelim(',')) return result; token = state.ReadToken(); } return result; } private string ReadToken() { var start = \_i; while (\_i < \_buffer.Length && ValidTokenChar(\_buffer[\_i])) \_i++; return \_buffer.Substring(start, \_i - start); } private bool ReadDelim(char ch) { while (\_i < \_buffer.Length && char.IsWhiteSpace(\_buffer[\_i])) \_i++; if (\_i >= \_buffer.Length || \_buffer[\_i] != ch) return false; \_i++; while (\_i < \_buffer.Length && char.IsWhiteSpace(\_buffer[\_i])) \_i++; return true; } private string ReadString() { if (\_i < \_buffer.Length && \_buffer[\_i] == '"') { var buffer = new StringBuilder(); \_i++; while (\_i < \_buffer.Length) { if (\_buffer[\_i] == '\\' && (\_i + 1) < \_buffer.Length) { \_i++; buffer.Append(\_buffer[\_i]); \_i++; } else if (\_buffer[\_i] == '"') { \_i++; return buffer.ToString(); } else { buffer.Append(\_buffer[\_i]); \_i++; } } return buffer.ToString(); } else { return ReadToken(); } } private bool ValidTokenChar(char ch) { if (ch < 32) return false; if (ch == '(' || ch == ')' || ch == '<' || ch == '>' || ch == '@' || ch == ',' || ch == ';' || ch == ':' || ch == '\\' || ch == '"' || ch == '/' || ch == '[' || ch == ']' || ch == '?' || ch == '=' || ch == '{' || ch == '}' || ch == 127 || ch == ' ' || ch == '\t') return false; return true; } } ``` Upvotes: 0
2018/03/21
1,833
6,041
<issue_start>username_0: I am new to React and RN. I have looked into every single solution here but I did not find a solution for my case. I am trying to pull google calendar events from calendar v3 api. I have tried two ways, so far. I don't know which one is correct but I did not get a correct result for any of them. Firstly, I have tried to send a request to the <https://www.googleapis.com/calendar/v3/calendars/>${CALENDAR\_ID}/events?key=${API\_KEY}( I don't know if the key parameter is needed. I think we should delete key parameter in front of the api key.I did it like that because otherwise it was giving an error as global not found). This is calendar.js

```
const CALENDAR_ID = 'public@qeqw'
const API_KEY = 'key'
let url = `https://www.googleapis.com/calendar/v3/calendars/${CALENDAR_ID}/events?key=${API_KEY}`

export function getEvents (callback) {
 request
 .get(url)
 .end((err, resp) => {
 if (!err) {
 const events = []
 JSON.parse(resp.text).items.map((event) => {
 events.push({
 start: event.start.date || event.start.dateTime,
 end: event.end.date || event.end.dateTime,
 title: event.summary,
 })
 })
 callback(events)
 }
 })
}
```

This is app.js

```
import React from 'react'
import { render } from 'react-dom'
import { getEvents } from './gcal'
import { View, Text, StatusBar,Image,AppRegistry,ScrollView,StyleSheet,Platform,FlatList} from 'react-native'

class App extends React.Component {
 constructor () {
 super()
 this.state = {
 events: []
 }
 }

 componentDidMount () {
 getEvents((events) => {
 this.setState({events})
 })
 }

 render () {
 return (
 // React Components in JSX look like HTML tags

 {this.state.events}

 )
 }
}
```

However, I got an error in the below. I don't know what I am doing wrong but it should be possible to send a request like that. My only concern is that if I need to get token by giving my client information by using OAuth2 authentication. Do I need to sign up and and get token to reach the API? If I need to do it, I have implemented to do it in node js by reading the sample here.<https://developers.google.com/calendar/quickstart/nodejs> but there are some node modules which I cannot use them in my React native application like fs, googleAuth, readline etc... Some of them can be done by using nodeify but others throw an error. So, I don't know what to do from now on. If someone can guide me how I would use google calendar api in react, I'd be appreciated. Thanks to the everyone who contributes here.

```
{
 "error": {
 "errors": [
 {
 "domain": "usageLimits",
 "reason": "dailyLimitExceededUnreg",
 "message": "Daily Limit for Unauthenticated Use Exceeded. Continued use requires signup.",
 "extendedHelp": "https://code.google.com/apis/console"
 }
 ],
 "code": 403,
 "message": "Daily Limit for Unauthenticated Use Exceeded. Continued use requires signup."
 }
}
```
2018/03/21
404
1,691
<issue_start>username_0: In a lot of the compiled Javascript modules, somewhere in the preamble there is an invocation to `Function('return this')()` to get the global object. I'm working in an interpreter environment where the use of `Function` constructor (along with `eval`) is forbidden for security reasons. I replaced the above code with `(function(){return this})()` and everything seems to be working. Is this a safe substitution to do? Are there cases where it would fail? Why do most compiled JS modules prefer the constructor version anyway?<issue_comment>username_1: In [strict mode](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Strict_mode#Securing_JavaScript) you don't get the global object; you get `undefined` instead: ```js console.log( window === function(){ return (function(){return this})(); }() ); // true console.log( window === function(){ "use strict"; return (function(){return this})(); }() ); // false ``` The Function constructor escapes strict mode, so that you get the same result regardless of whether you're already in strict mode: ```js console.log( window === function(){ return Function('return this')(); }() ); // true console.log( window === function(){ "use strict"; return Function('return this')(); }() ); // true ``` Upvotes: 4 [selected_answer]<issue_comment>username_2: In your first case `Function('return this')()` is declared in a global scope and in a non-strict mode if that function is called without setting the this value `this` will be linked to the global object. In your second case `(function(){return this})()` you are creating Immediately Invoked Function Expressions. Upvotes: 0
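For completeness, a minimal sketch of a fallback that does not rely on a sloppy-mode `this` at all, and therefore also works under strict mode; it assumes a reasonably modern runtime and the host-specific names are probed only as a backstop:

```js
// Illustrative only: prefer the standardized globalThis where available,
// then fall back to the usual host-specific aliases.
function getGlobal() {
  if (typeof globalThis !== 'undefined') return globalThis; // ES2020+
  if (typeof self !== 'undefined') return self;             // browsers / workers
  if (typeof window !== 'undefined') return window;         // browsers
  if (typeof global !== 'undefined') return global;         // Node.js
  throw new Error('Unable to locate the global object');
}
```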
2018/03/21
196
644
<issue_start>username_0: I have the HTML code below. How do I move the span element with the image to the right of the div tag? Below is the code ``` " + rank + " ![](test.jpg) " + o.Title + " ![](images/Information.png) ``` [expected output](https://i.stack.imgur.com/2beAz.png)<issue_comment>username_1: ```html " + rank + " ![](test.jpg) " + o.Title + " {IMG}![](images/Information.png) ``` Upvotes: -1 [selected_answer]<issue_comment>username_2: float and position:absolute don't go together. Also, span is an inline element and needs to be made a block element before it will behave the way you intend. ``` ![](images/Information.png) ``` Upvotes: 0
2018/03/21
177
683
<issue_start>username_0: I am looking for the event that gets fired when the tableview loads the rows to fit the screen. I know tableview loads only the number of rows that fit the screen. I want to execute a set of code when the rows that fit the screen are loaded. Any pointers on how to determine this?
2018/03/21
426
1,351
<issue_start>username_0: I have a number of divs with the same `season-list` class and each has a `data-episode-count` data attribute. I need to be able to grab the attribute on click and use that value to hide `js-show-more-trigger` if the value of the attribute is greater than 6. I'm currently looping through all `season-list` classes, but not sure how to grab the data attribute from the div: **HTML** ``` [svg-play](/play/3099013) Episode 1 ========= 21min [svg-play](/play/3099014) Episode 2 ========= 21min [svg-play](/play/3099015) Episode 3 ========= 21min ``` **JavaScript** ``` let trigger = document.getElementsByClassName('js-show-more-trigger'); let seasonList = document.getElementsByClassName("season-list") for(let i = 0; i < seasonList.length; i++) { if(seasonList[i].getAttribute('data-episode-count') < 6){ trigger.style.display = "none"; } } ``` Codepen: [Link](https://codepen.io/testermytesty/pen/xWdgzX?editors=1010)
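For illustration, one possible way to read the attribute on click and hide the trigger. Class names follow the question, and it is assumed here that each `.js-show-more-trigger` sits inside its `.season-list`; note the question's prose says "greater than 6" while its snippet tests `< 6`, and the sketch follows the stated goal:

```js
document.querySelectorAll('.season-list').forEach(function (season) {
  season.addEventListener('click', function () {
    // data attributes come back as strings, so convert before comparing
    var count = parseInt(season.getAttribute('data-episode-count'), 10);
    var trigger = season.querySelector('.js-show-more-trigger');
    if (trigger && count > 6) {
      trigger.style.display = 'none'; // hide when there are more than 6 episodes
    }
  });
});
```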
2018/03/21
1,107
4,003
<issue_start>username_0: I have two classes: ``` class Bar extends Foo { // Foo isn't relevant constructor(value) { if (!(value instanceof Foo)) throw "InvalidArgumentException: (...)"; super(); this.value = value; } } class Baz extends Bar { constructor(value) { super(value); } } ``` The `Bar` `constructor` checks if `value` is an instance of Foo, it throws an error if it isn't. At least, that's what I wanted it to do. If you pass a `Bar` or a `Baz` as value, the if-statement returns `true` as well. The goal is to only let `Foo`s through. I found [this answer](https://stackoverflow.com/questions/1249531/how-to-get-a-javascript-objects-class) already but that didn't really answer my question.<issue_comment>username_1: If you know all of your classes you can use ``` if(!(value instanceof Foo && !(value instanceof Bar) && !(value instanceof Baz))) ``` Upvotes: 0 <issue_comment>username_2: Check the constructor: ``` if (!value || value.constructor !== Foo) throw 'InvalidArgumentException: (...)'; ``` or the prototype of the object (this is more similar to what `instanceof` does): ``` if (!value || Object.getPrototypeOf(value) !== Foo.prototype) throw 'InvalidArgumentException: (...)'; ``` Upvotes: 5 [selected_answer]<issue_comment>username_3: The problem is that all of your classes you reference are descendants of `Foo`. Such that `new Baz() instanceOf Bar && new Bar() instanceOf Foo === true`. So when you ask is Bar instanceOf Foo, it will be true through inheritance. Due to there being no Java `getClass()` equivalent in JS, you should use something like: ``` if (value.constructor.name !== Foo.name) ``` Upvotes: -1 <issue_comment>username_4: You can use a comparison between `Object.getPrototypeOf(yourObj)` and `Foo.prototype` to see if `yourObj` is exactly an instance of `Foo`. And you can move up the chain by just continuing to call `Object.getPrototypeOf` for each level. Example: ```js class Foo {} class Bar extends Foo {} class Baz extends Bar {} const foo = new Foo(); const bar = new Bar(); const baz = new Baz(); // For this function: // - level 0 is self // - level 1 is parent // - level 2 is grandparent // and so on. function getPrototypeAt(level, obj) { let proto = Object.getPrototypeOf(obj); while (level--) proto = Object.getPrototypeOf(proto); return proto; } console.log("bar is a foo:", bar instanceof Foo); console.log("baz is a foo:", baz instanceof Foo); console.log("foo is exactly a foo:", getPrototypeAt(0, foo) === Foo.prototype); console.log("bar is exactly a foo:", getPrototypeAt(0, bar) === Foo.prototype); console.log("bar is direct child of foo:", getPrototypeAt(1, bar) === Foo.prototype); console.log("baz is direct child of foo:", getPrototypeAt(1, baz) === Foo.prototype); console.log("baz is direct child of bar:", getPrototypeAt(1, baz) === Bar.prototype); console.log("baz is grandchild of foo:", getPrototypeAt(2, baz) === Foo.prototype); ``` Upvotes: 2 <issue_comment>username_5: You should test if `value`'s internal `[[Prototype]]` is exactly `Foo.prototype`. 
You can get the internal `[[Prototype]]` with [Object.getPrototypeOf](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/getPrototypeOf) : ``` if ( Object.getPrototypeOf( value ) !== Foo.prototype ) throw "InvalidArgumentException: (...)"; ``` Upvotes: 1 <issue_comment>username_6: I coined this function for checking the relationship between DOM classes and elements ```js const getCreator = instance => Object.getPrototypeOf(instance).constructor.name; // usage getCreator(document.documentElement) === "HTMLHtmlElement"; ``` Upvotes: -1 <issue_comment>username_7: Here is the helper function: ```js function isDirectInstanceOf(object, constructor) { return Object.getPrototypeOf(object) === constructor.prototype; } console.log( isDirectInstanceOf([], Array), // true isDirectInstanceOf([], Object), // false ); ``` Upvotes: 0
2018/03/21
433
1,545
<issue_start>username_0: Is it possible to use drawable resources bundled in your app instead of hosted images for adding stickers to Gboard? Google provides the following code snippet [here](https://android-developers.googleblog.com/2017/09/create-stickers-for-gboard-on-google.html) to show how to add a sticker to Gboard and it looks like the only way is to reference a hosted image: ``` new Indexable.Builder("Sticker") .setName("Bye") // add url for sticker asset .setImage("http://www.snoopysticker.com?id=1234") // see: Support links to your app content section .setUrl("http://sticker/canonical/image/bye") // Set the accessibility label for the sticker. .setDescription("A sticker for Bye") // Add search keywords. .put("keywords", "bye", "snoopy", "see ya", "good bye") .put("isPartOf", new Indexable.Builder("StickerPack") .setName("Snoopy Pack") .build()) .build())}; ``` All help is greatly appreciated!<issue_comment>username_1: Google's AppIndexing [sample project](https://github.com/firebase/quickstart-android/blob/master/app-indexing/app/src/main/java/com/google/samples/quickstart/app_indexing/AppIndexingUtil.java) has an example of the files being generated at run time. This isn't exactly a `Drawable` resource but the same strategy can likely be used. Upvotes: 0 <issue_comment>username_2: ``` String drawableResourceUri = Uri.parse("android.resource://your.package.name/drawable/yourfilenamewithoutextension").toString()); ``` Add this string to your setUrl() method. Upvotes: 2 [selected_answer]
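Putting the two snippets together, a hypothetical combination of the question's builder with the answer's resource URI might look like the fragment below; the package name, drawable name and sticker fields are placeholders, and imports/surrounding class are omitted as in the question's own snippet:

```java
// Placeholder names throughout; the builder calls mirror the snippet in the question.
String stickerUri = Uri.parse(
        "android.resource://your.package.name/drawable/bye_sticker").toString();

Indexable sticker = new Indexable.Builder("Sticker")
        .setName("Bye")
        .setImage(stickerUri)                      // bundled drawable instead of a hosted image
        .setUrl("http://sticker/canonical/image/bye")
        .setDescription("A sticker for Bye")
        .build();
```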
2018/03/21
271
1,080
<issue_start>username_0: I'm writing a Worpress plugin that defines a shortcode. What happens at shortcode registration if there's another shortcode already registered with that same name ? What is the practice ? Is there a possibility to find the conflicting plugin so as to warn the user that he should deactivate it to be able to activate yours ?? Thanks. ``` if ( !shortcode_exists( 'myshortcode' ) ) { add_shortcode('myshortcode','mycallback'); } else ????? ```
2018/03/21
367
1,343
<issue_start>username_0: I have two tables, a NextOfKin table and a Course table, the NextOfKin table has the following attributes: ``` StudentID ContactTelNo ``` And the course has this one (only showing relevant attributes): ``` CourseNo ``` I am trying to get an output where I show the studentID and contactTelNo for the next of kin of all students on course with courseNo equal to 1001. This is the code I'm attempting to run ``` SELECT studentID, contactTelNo FROM NextOfKin WHERE courseNo = (SELECT courseNo FROM Course WHERE courseNo = '1001') ``` I'm currently getting an error message that says "Unkown coloumn 'courseNo' in 'where clause' where am I going wrong? p.s I can only used a nested query and not a join
2018/03/21
415
1,570
<issue_start>username_0: I am having an issue with my web host changing the permission of one of my configuration files for my website. No matter how many times I change the permissions, they always revert to writable after a day or so. The web host has been unable to resolve the issue, so I thought I'd try to use a script to ssh into my account and change the permissions daily. My only problem so far is that it prompts me for my ssh key password in the terminal when I execute the script. How can I get this to work automatically so that I can set it to run daily from my computer without my intervention? ``` #!/bin/sh ssh mydomain 'bash -s' << EOF cd public_html chmod 400 configuration.php EOF ``` Thanks for any advice!<issue_comment>username_1: Add your public key to `~/.ssh/authorized_keys` on the remote host. The key you are using should not have a password if you want to use it in this way. Nowadays, this is simply done with the command ``` ssh-copy-id user@remote_server ``` Upvotes: 2 <issue_comment>username_2: I was able to answer my own question after coming across a script on another user's question. I just had to think of a different way of getting the task done. Instead of logging in to my web host via ssh, I just created a script on my web host account and put it in the crontab. ``` #!/bin/bash file=configuration.php if [ -w "$file" ] then chmod 400 "$file" && echo "The file permissions have been set to 400." >> log.txt elif [ ! -w "$file" ] then echo "The file is not writable." >> log.txt fi ``` Upvotes: 0
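To round out the self-answer, a hypothetical crontab entry for running that script once a day might look like this (the path, time and log file are placeholders; edit the table with `crontab -e`):

```sh
# Run the permission-fix script every day at 03:00, appending output to a log.
0 3 * * * /home/youruser/fix_permissions.sh >> /home/youruser/fix_permissions.log 2>&1
```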
2018/03/21
632
2,017
<issue_start>username_0: I am passing a string variable (std::string) and iterating through the string character by character. Whenever I run into a decimal, I want to combine the previous position on the string (i.e 2) and the next position in the string (i.e 5) into a double. So how would I go about making the char 2, char . , char 5 into one whole value (2.5)? ``` std::double x; std::string varibleName = "4 5 7 2.5"; for (int i = 0; i < variableName.length(); i++) // iterates through variableName { if (variableName[i] == '.'){ // if the current position of the iteration is a decimal, I want to grab the char before the decimal and the char after the decimal so I can combine all three positions of the string making it 2.5 and not 25. } } ```<issue_comment>username_1: Note that giving an example, code snips, and error logs make troubleshooting a lot easier :) It sound like you have some input like `"2.1"` and need to convert it to a double like `2.1`? If that is the case you can use the [`atof`](http://www.cplusplus.com/reference/cstdlib/atof/) function. Example: ``` /* atof example: sine calculator */ #include /\* printf, fgets \*/ #include /\* atof \*/ #include /\* sin \*/ int main () { double n; char buffer[256]; printf ("Enter degrees: "); fgets (buffer,256,stdin); n = atof (buffer); printf ("You entered %s which is a double like %f\n" , buffer, n); return 0; } ``` Upvotes: 0 <issue_comment>username_2: Well, you are wildly overthinking it. The C++ library provides [std::stof, std::stod, std::stold](http://en.cppreference.com/w/cpp/string/basic_string/stof) that does exactly what you want. Convert a string like `"2.5"` to a `float`, `double` or `long double`, e.g. ``` #include int main (void) { std::string s = "2.5"; double d = std::stod(s); std::cout << d << "\n"; } ``` **Example Use/Output** ``` $ ./bin/stodex 2.5 ``` Look things over and let me know if you have further overthinking questions. Upvotes: 1
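Since the original string holds several whitespace-separated numbers ("4 5 7 2.5"), one possible sketch parses them all with stream extraction instead of walking characters; this is only one option among several (std::stod with manual tokenizing would also work):

```cpp
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

int main() {
    std::string variableName = "4 5 7 2.5";   // sample input from the question
    std::istringstream iss(variableName);
    std::vector<double> values;
    double v;
    while (iss >> v)          // ">>" consumes "2.5" as one double, so no manual '.' handling
        values.push_back(v);
    for (double d : values)
        std::cout << d << '\n';               // prints 4, 5, 7, 2.5
}
```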
2018/03/21
518
1,507
<issue_start>username_0: ``` def max(list): max_element = list[0] for i in range(1, len(list)): if list[i] > max_element: max_element = list[i] print(max_element) print(max([1, 2, 8, 4])) #NameError: name 'max_element' is not defined ``` how to fix it?
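A minimal corrected sketch of the same loop: the function is renamed so the built-in `max` is not shadowed, and the result is returned rather than printed inside the function (the NameError comes from `print(max_element)` sitting outside the function's scope):

```python
def my_max(values):                 # renamed to avoid shadowing the built-in max()
    max_element = values[0]
    for item in values[1:]:
        if item > max_element:
            max_element = item
    return max_element              # return the value; let the caller print it

print(my_max([1, 2, 8, 4]))         # 8
```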
2018/03/21
387
1,419
<issue_start>username_0: I've noticed through programming in PHP that string interpolation (`"blah blah ${foo}"`) only works in double-quoted strings (`"..."`). For instance, this line will work: ``` $bar = "foo"; echo "I like ${bar}"; >> I like foo ``` But this one won't: ``` $bar = "foo"; echo 'I like ${bar}'; >> I like ${bar} ``` I understand that the [PHP Manual](https://secure.php.net/manual/en/language.types.string.php) talks about the fact that interpolation is only acted upon in such strings, but it doesn't explain why it was chosen to work in this way. So that's my question --- *why* is it that string interpolation only works in double-quoted strings in PHP?<issue_comment>username_1: It works that way by definition. That's the way it was made to work. There is no 'why' other than that's the way the developers of PHP chose to implement the feature. Upvotes: -1 <issue_comment>username_2: The obvious answer is that there are times when you might not want variables to be interpreted in your strings. Take, for example, the following line of code: ``` $currency = "$USD"; ``` This produces an "undefined variable" notice and `$currency` is an empty string. Definitely not what you want. You *could* escape it (`"\$USD"`), but hey, that's a faff. So PHP, as a design decision, chose to have double-quoted strings interpolated and single-quoted strings not. Upvotes: 4 [selected_answer]
2018/03/21
439
1,648
<issue_start>username_0: Suppose I have a module `foo` like this: ``` export const f = x => x + 1; export const g = x => x * 2; ``` I can use this module like this: ``` import { f, g } from 'foo'; console.log(f(g(2))); ``` Or like this: ``` import * as foo from 'foo'; console.log(foo.f(foo.g(2))); ``` I prefer the second way because it prevents name collisions between modules. However, is `import *` less efficient? Does it prevent bundlers (such as Rollup and Webpack) from spotting unused imports and removing them?<issue_comment>username_1: import \* is less efficient in that you are using more memory to pull in the entire library, as opposed to just the specific methods that you actually need. Upvotes: 1 <issue_comment>username_2: When you specify imports as `import { f, g } from 'foo';` you guarantee better performance in terms of compilation speed and bundle size, as you will be getting only the dependencies you need. Notes: as username_3 pointed out, some recent compilers/bundlers are able to reference only what is actually being used; this IMO is an additional step which could cost some compilation time (although I did not have time to benchmark this assumption). Upvotes: 3 [selected_answer]<issue_comment>username_3: Webpack at least (not sure about Rollup) is perfectly able to see that `foo.f` is a reference to the `f` exported name, so your two examples will behave the same. Upvotes: 0 <issue_comment>username_4: For most bundlers this does not matter, since everything has to be included anyway because an export can carry side effects: ``` export const g = (() => { console.log("Side effects!"); return x => x * 2; })(); ``` Upvotes: 0
2018/03/21
269
1,041
<issue_start>username_0: I'm trying to make a system that uses the Payment Request Api and for the most part it's successful, but TypeScript doesn't seem to have a definition for `canMakePayment` under the `PaymentRequest` object. Am I missing something or do I need to cast it to `any` so my TypeScript builds? Thanks<issue_comment>username_1: I would recommend writing a small `.d.ts` file and referencing that while the library type definitions do not include the complete interface yet, i.e. containing something like: ``` declare interface PaymentRequest { canMakePayment(): boolean; // Or whatever the correct type signature is. } ``` If you place it in an `@types` directory it might be picked up automatically. Otherwise use a triple-slash reference comment, e.g. `/// <reference path="..." />`. Upvotes: 1 [selected_answer]<issue_comment>username_2: I found a `canMakePayment()` entry in TypeScript's code. <https://github.com/Microsoft/TypeScript/blob/master/src/lib/dom.generated.d.ts#L9737> I'm not a TypeScript expert, but I assume you can import this? Upvotes: 0
2018/03/21
674
2,606
<issue_start>username_0: This is a newbie Git question, but I haven't found an answer on Stack Overflow that could make it clear to me. Basically, I was working on a branch on my computer: ``` $ git checkout -b my_branch ``` Then, I made a few changes in my files. Afterwards, inside `my_branch`, I added, committed, and pushed the files I edited. But then, I decided that I wanted to put those changes (the ones on `my_branch`) on `master`. And here comes the problem. I did: ``` $ git checkout master $ git merge my_branch ``` Everything was ok on my local repository. But when I went to the website where I can see my remote repository, my merge was not there. So, I decided to do a simple ``` $ git push ``` without any changes to my files. It worked. So, is executing an "empty" push the only way to see my local merge on the remote repository?<issue_comment>username_1: You can see if you have changes to do with "git status". When you did the merge of my\_branch with master you didn't made any change in the code... just the merge. Also if you want to be sure you have made all the changes to your online repository you can make git push --all and of course, as i said before, check them with git status. Upvotes: -1 <issue_comment>username_2: Your Git repository is local. The remote repository is simply another copy of it. When you add a commit, either by changing something and running `git commit`, or by merging something using `git merge`, this happens locally. Nothing is sent over the network. The only time data is sent over the network is when you explicitly run a command like `git push`, `git fetch`, `git pull`. A merge commit is similar to any other commit. It is simply the result of combining the state of one line of history with another. So even though you did not edit any files, the merge commit will typically have caused changes to files. If there is a commit, it can be pushed or fetched. The kind of commit is irrelevant. Upvotes: 2 <issue_comment>username_3: Its simple that when you create branches on your local computer, those branches are on local computer only, not on your remote repo(i.e. Github profile) So what you need to do is, push it in your Github repo. for example, **1.you created branch:** ``` git checkout ``` **2. you merged it using(to accomplish merge, you must be on your master or where you want to merge that branch),** ``` git merge ``` **3. Now you need to push it to the remote Github repo using following,** ``` git push ``` Hopefully, now its clear. credit: <https://www.atlassian.com/git/tutorials> Upvotes: -1
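For reference, the whole local-merge-then-publish flow from the question condensed into one short sketch; the remote name `origin` is assumed, the branch name is the question's own:

```sh
git checkout -b my_branch      # create and switch to the feature branch
# ...edit, git add, git commit, git push as before...
git checkout master
git merge my_branch            # creates the merge commit locally only
git push origin master         # nothing reaches the remote until this push
```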
2018/03/21
563
2,260
<issue_start>username_0: we have an school asignment to make a specific function of string split the header and arguments of function is given cannot change them . this function should split original by delimiter into array of string that store each part of string delimiter not included and in additon for each delimiter increase size ``` void stringSplit(const char *original, char result[50][256], int* size, char delim){ size_t begin = 0; if(!original){ return; } for(size_t i=0; i ```
2018/03/21
1,227
3,496
<issue_start>username_0: ``` helpful '[2, 4]' '[0, 0]' '[0, 1]' '[7, 13]' '[4, 6]' ``` Column name helpful has a list inside the string. I want to split 2 and 4 into separate columns. ``` [int(each) for each in df['helpful'][0].strip('[]').split(',')] ``` This works the first row but if I do ``` [int(each) for each in df['helpful'].strip('[]').split(',')] ``` gives me attribute error ``` AttributeError: 'Series' object has no attribute 'strip' ``` How can I print out like this in my dataframe?? ``` helpful not_helpful 2 4 0 0 0 1 7 13 4 6 ```<issue_comment>username_1: Assuming what you've described here accurately mimics your real-world case, how about a regex with [`.str.extract()`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.extract.html)? ``` >>> regex = r'\[(?P\d+),\s\*(?P\d+)\]' >>> df helpful 0 [2, 4] 1 [0, 0] 2 [0, 1] >>> df['helpful'].str.extract(regex, expand=True).astype(np.int64) helpful not\_helpful 0 2 4 1 0 0 2 0 1 ``` Each pattern `(?P...)` is a named capturing group. Here, there are two: helpful/not helpful. This assumes the pattern can be described by: opening bracket, 1 or more digits, comma, 0 or more spaces, 1 or more digits, and closing bracket. The Pandas method (`.extract()`), as its name implies, "extracts" the result of `match.group(i)` for each `i`: ``` >>> import re >>> regex = r'\[(?P\d+),\s\*(?P\d+)\]' >>> re.search(regex, '[2, 4]').group('helpful') '2' >>> re.search(regex, '[2, 4]').group('not\_helpful') '4' ``` Upvotes: 1 <issue_comment>username_2: As suggested by @abarnert, the first port of call is find out *why* your data is coming across as strings and try and rectify that problem. However, if this is beyond your control, you can use `ast.literal_eval` as below. ``` import pandas as pd from ast import literal_eval df = pd.DataFrame({'helpful': ['[2, 4]', '[0, 0]', '[0, 1]', '[7, 13]', '[4, 6]']}) res = pd.DataFrame(df['helpful'].map(literal_eval).tolist(), columns=['helpful', 'not_helpful']) # helpful not_helpful # 0 2 4 # 1 0 0 # 2 0 1 # 3 7 13 # 4 4 6 ``` **Explanation** From the [documentation](https://docs.python.org/3/library/ast.html#ast.literal_eval), `ast.literal_eval` performs the following function: > > Safely evaluate an expression node or a string containing a Python > literal or container display. The string or node provided may only > consist of the following Python literal structures: strings, bytes, > numbers, tuples, lists, dicts, sets, booleans, and None. > > > Upvotes: 3 [selected_answer]<issue_comment>username_3: Just for fun without module. ``` s = """ helpful '[2, 4]' '[0, 0]' '[0, 1]' '[7, 13]' '[4, 6]' """ lst = s.strip().splitlines() d = {'helpful':[], 'not_helpful':[]} el = [tuple(int(x) for x in e.strip("'[]").split(', ')) for e in lst[1:]] d['helpful'].extend(x[0] for x in el) d['not_helpful'].extend(x[1] for x in el) NUM_WIDTH = 4 COLUMN_WIDTH = max(len(k) for k in d) print('{:^{num_width}}{:^{column_width}}{:^{column_width}}'.format( ' ', *sorted(d), num_width=NUM_WIDTH, column_width=COLUMN_WIDTH ) ) for (i, v) in enumerate(zip(d['helpful'], d['not_helpful']), 1): print('{:^{num_width}}{:^{column_width}}{:^{column_width}}'.format( i, *v, num_width=NUM_WIDTH, column_width=COLUMN_WIDTH ) ) ``` Upvotes: 0
2018/03/21
1,247
3,097
<issue_start>username_0: Suppose I pass a 1D array: ``` >>> np.arange(0,20) array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19]) >>> np.arange(0,20).shape (20,) ``` into argwhere: ``` >>> np.argwhere(np.arange(0,20)<10) array([[0], [1], [2], [3], [4], [5], [6], [7], [8], [9]]) >>> np.argwhere(np.arange(0,20)<10).shape (10, 1) ``` why has the result changed into a 2D array? What's the benefit of this?<issue_comment>username_1: `argwhere` returns the coordinates of where condition is True. In general, coordinates are tuples, therefore the output should be 2D. ``` >>> np.argwhere(np.arange(0,20).reshape(2,2,5)<10) array([[0, 0, 0], [0, 0, 1], [0, 0, 2], [0, 0, 3], [0, 0, 4], [0, 1, 0], [0, 1, 1], [0, 1, 2], [0, 1, 3], [0, 1, 4]]) ``` For consistency, this also applies to the case of 1D input. Upvotes: 4 [selected_answer]<issue_comment>username_2: `numpy.argwhere` finds **indices** of elements that fulfill the condition. it happened that some of your elements are the outputted elements themselves (the index is the same as value). Particularly, in your example the input is one dimensional, the output is one dimension (index) by two (the second is to iterate over values). I hope this is clear, if not, take this example of two dimensional input array presented in the documentation of numpy: ``` >>> x = np.arange(6).reshape(2,3) >>> x array([[0, 1, 2], [3, 4, 5]]) >>> np.argwhere(x>1) array([[0, 2], [1, 0], [1, 1], [1, 2]]) ``` Upvotes: 0 <issue_comment>username_3: `argwhere` is simply the transpose of `where` (actually `np.nonzero`): ``` In [17]: np.where(np.arange(0,20)<10) Out[17]: (array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]),) In [18]: np.transpose(_) Out[18]: array([[0], [1], [2], [3], [4], [5], [6], [7], [8], [9]]) ``` `where` produces a tuple of arrays, one array per dimension (here a 1 element tuple). `transpose` turns that tuple into an array (e.g. `(1,10)` shape), and then transposes it. So it's number of columns is the `ndim` of the input condition, and the number of rows the number of `finds`. `argwhere` can be useful in visualizing the finds, but is not as useful in programs as the `where` itself. The `where` tuple can be used to index the condition array directly. The `argwhere` array is usually used iteratively. For example: ``` In [19]: x = np.arange(10).reshape(2,5) In [20]: x %2 Out[20]: array([[0, 1, 0, 1, 0], [1, 0, 1, 0, 1]]) In [21]: np.where(x%2) Out[21]: (array([0, 0, 1, 1, 1]), array([1, 3, 0, 2, 4])) In [22]: np.argwhere(x%2) Out[22]: array([[0, 1], [0, 3], [1, 0], [1, 2], [1, 4]]) In [23]: x[np.where(x%2)] Out[23]: array([1, 3, 5, 7, 9]) In [24]: for i in np.argwhere(x%2): ...: print(x[tuple(i)]) ...: 1 3 5 7 9 In [25]: [x[tuple(i)] for i in np.argwhere(x%2)] Out[25]: [1, 3, 5, 7, 9] ``` Upvotes: 0
2018/03/21
1,472
5,075
<issue_start>username_0: In CSS, `position: sticky` enables an element to display with a `position: static` behaviour (ie. it adopts its default position within the document flow) until it reaches a certain scroll position, after which it adopts `position: fixed` behaviour. So... does that mean we cannot use `position: sticky` on an element which requires a normal behaviour of `position: absolute`? --- **Context:** I have an out-of-flow element which occupies a position towards the top-left corner of the viewport. After an inch or two of scrolling, the element hits the top of the viewport and, ideally, I'd like it not to carry on disappearing at that point.<issue_comment>username_1: The point of `position:sticky` is that it is only `fixed` while the parent element is not in view. A `position:absolute` element isn't attached to it's parent. It could be interesting if such a `position` would exist and the rule would be that the element would be `absolute`, while the element it is absolute positioned to is in view, but currently there exists nothing like this nativley, but you could try to recreate it using JS. Upvotes: 1 <issue_comment>username_2: As GibboK says, the default positioning scheme isn't absolute positioning, it's the static position. Elements are laid out in normal flow by default — if out-of-flow were the default, then the default HTML page would be impossible to read. Besides, absolutely positioned elements do scroll with the page most of the time — the *only* time you can make an absolutely positioned behave like a fixed positioned element with respect to page scrolling is [through some semi-complicated CSS](https://stackoverflow.com/questions/14718319/why-does-overflow-x-hidden-make-my-absolutely-positioned-element-become-fixed/14740580#14740580). If you're asking whether it's possible for * a stickily positioned element to be out-of-flow when stuck and unstuck, or * for the containing block of a stickily positioned element to be determined the same way as for an absolutely positioned element, then unfortunately neither of these is supported by sticky positioning. Upvotes: 2 <issue_comment>username_3: You actually can leverage `display: grid` and have a sticky element that doesn't pushes its siblings: ```css header { display: flex; align-items: center; justify-content: center; height: 50vh; border: 1px dashed #f00; } main { display: grid; } div { display: flex; align-items: center; justify-content: center; } .section { grid-column: 1; height: 100vh; border: 1px dashed #0f0; } .first.section { grid-row: 1; } .sticky { grid-row: 1; grid-column: 1; position: sticky; top: 0; height: 30vh; border: 1px dashed #0ff; } footer { display: flex; align-items: center; justify-content: center; height: 100vh; border: 1px dashed #f00; } ``` ```html I'm the header I'm sticky Just some sections I'm the footer ``` The trick here is to place the sticky section **and** its first sibling on the first row and first column of their parent (because grids allow us to place many elements in the same cell). The sticky element remains sticky in its parent so it will stay on scroll beyond its cell. Upvotes: 3 <issue_comment>username_4: A way to make a sticky element *look* like it's absolutely positioned --------------------------------------------------------------------- I came up with this hack that achieves the goal, but I haven't figured out how to fix its one flaw: There's a blank area at the bottom of the scrollable content equal to the height of the sticky element + its initial vertical offset. 
See the comments in the code for an explanation of how it works. ```css #body { width: 100%; position: relative; background: Linen; font-family: sans-serif; font-size: 40px; } /* to position your sticky element vertically, use the height of this empty/invisible block element */ #sticky-y-offset { z-index: 0; height: 100px; } /* to position your sticky element horizontally, use the width of this empty/invisible inline-block element */ #sticky-x-offset { z-index: 0; width: 100px; display: inline-block; } /* this element is sticky so must have a static position, but we can fake an absolute position relative to the upper left of its container by resizing the invisible blocks above and to the left of it. */ #sticky-item { width: 150px; height: 100px; border-radius: 10px; background-color: rgba(0, 0, 255, 0.3); display: inline-block; position: sticky; top: -80px; bottom: -80px; } /* this div will contain the non-sticky main content of the container. We translate it vertically upward by sticky-y-offset's height + sticky-item's height */ #not-sticky { width: 100%; background-color: rgba(0, 0, 255, 0.1); transform: translateY(-200px); } .in-flow { width: 90%; height: 150px; border-radius: 10px; margin: 10px auto; padding: 10px 10px; background: green; opacity: 30%; } ``` ```html absolute & sticky in flow in flow in flow in flow ``` Upvotes: 1
2018/03/21
1,067
2,135
<issue_start>username_0: I would like to make a loop which calculates mean values by row moving by three values and ignoring missing (NA) values. Here is my example, where mean of a, b and c values and mean of x, y and z should be calculated: ``` df <- data.frame(label=paste0("lab", 1:15), a=1:5, b=6:2, c=25:11, x=5:1, y=2:6, z=11:25, zz=NA) df[,2]<-NA df[1,]<-NA df ``` And my far-from-complete solution: ``` res <- tapply(df[,2:4], df[,5:7], mean, na.rm=F) ``` Expected outcome: ``` head(res,3) label a b c x y z zz mean_abc mean_xyz 1 NA NA NA NA NA NA NA NA NA 2 lab2 NA 5 24 4 3 12 NA 9.7 6.3 ```<issue_comment>username_1: ``` > df$mean_abc <- rowMeans(df[ , c('a', 'b', 'c')], na.rm = TRUE) > df label a b c x y z zz mean_abc 1 NA NA NA NA NA NA NA NaN 2 lab2 NA 5 24 4 3 12 NA 14.5 3 lab3 NA 4 23 3 4 13 NA 13.5 4 lab4 NA 3 22 2 5 14 NA 12.5 5 lab5 NA 2 21 1 6 15 NA 11.5 6 lab6 NA 6 20 5 2 16 NA 13.0 7 lab7 NA 5 19 4 3 17 NA 12.0 8 lab8 NA 4 18 3 4 18 NA 11.0 9 lab9 NA 3 17 2 5 19 NA 10.0 10 lab10 NA 2 16 1 6 20 NA 9.0 11 lab11 NA 6 15 5 2 21 NA 10.5 12 lab12 NA 5 14 4 3 22 NA 9.5 13 lab13 NA 4 13 3 4 23 NA 8.5 14 lab14 NA 3 12 2 5 24 NA 7.5 15 lab15 NA 2 11 1 6 25 NA 6.5 ``` Upvotes: 2 <issue_comment>username_2: I admit, by far not as elegant and efficient as @username_1's answer, but here a possible tidyverse solution (I have named your data frame 'my\_dat') ``` require(dplyr) require(tidyr) my_dat %>% gather(group1, value1, a:c) %>% gather(group2, value2, x:z) %>% group_by(label) %>% summarise_at(vars(value1, value2), funs(mean), na.rm = TRUE) # A tibble: 15 x 3 label value1 value2 1 lab10 9.00 9.00 2 lab11 10.5 9.33 3 lab12 9.50 9.67 4 lab13 8.50 10.0 5 lab14 7.50 10.3 6 lab15 6.50 10.7 7 lab2 14.5 6.33 8 lab3 13.5 6.67 9 lab4 12.5 7.00 10 lab5 11.5 7.33 11 lab6 13.0 7.67 12 lab7 12.0 8.00 13 lab8 11.0 8.33 14 lab9 10.0 8.67 15 NaN NaN ``` I don't like the double gather step and there is certainly some improvement possible. But it gives the mean of the rows. Upvotes: 1
2018/03/21
1,533
5,209
<issue_start>username_0: Trying to get this program to translate letters into numbers so a telephone number with words can be input and will output the number version. (1800GOTJUNK = 18004685865) Not sure where Im going wrong but every output just gives whatever the last letter is and repeats its number for all numbers (1800adgjmptw = 18009999999). Any help would be greatly appreciated, thanks. ``` def transNum(string): number = 1 for ch in string: if ch.lower() in "abc": number = 2 elif ch.lower() in "def": number = 3 elif ch.lower() in "ghi": number = 4 elif ch.lower() in "jkl": number = 5 elif ch.lower() in "mno": number = 6 elif ch.lower() in "pqrs": number = 7 elif ch.lower() in "tuv": number = 8 elif ch.lower() in "wxyz": number = 9 return number def translate(phone): newNum = "" for ch in phone: if ch in ["a","b","c","d","e","f","g","h","i","j","k","l","m","n","o","p","q","r","s","t","u","v","w","x","y","z"]: newNum = newNum + str(transNum(phone)) else: newNum = newNum + ch return newNum def main(): phone = input("enter a phone number") noLetters = translate(phone) print("The number you entered: ", phone) print("Translates to: ", noLetters) main() ```<issue_comment>username_1: I can't help you with the entire thing, but at least to make it a bit easier for you to reason about it. Use a dictionary to map the keys to values rather than killing some unicorns with all these ifs. So you can do something like that ``` ch_num_map = {'a': 2, 'b': 2, 'c': 2, 'w': 9, 'z': 9} # you get the idea ``` then you can simply do: ``` ch_num_map.get('a') # output: 2 ``` Upvotes: 1 <issue_comment>username_2: `str(transNum(phone))` should be `str(transNum(ch))` And transNum doesn't need to iterate over its input, since it will only keep the last number (it is designed to have one single letter as input). Upvotes: 3 <issue_comment>username_3: The problem here is that you're looping over the entire string in your `transNum` function. What you want is to pass a single character and get its number representation. Try this: ``` def transNum(ch): number = 1 if ch.lower() in "abc": number = 2 elif ch.lower() in "def": number = 3 elif ch.lower() in "ghi": number = 4 elif ch.lower() in "jkl": number = 5 elif ch.lower() in "mno": number = 6 elif ch.lower() in "pqrs": number = 7 elif ch.lower() in "tuv": number = 8 elif ch.lower() in "wxyz": number = 9 return number def translate(phone): newNum = "" for ch in phone: if ch in "abcdefghijklmnopqrstuvwxyz" newNum = newNum + str(transNum(ch)) else: newNum = newNum + ch return newNum ``` I hope this helps. Upvotes: 0 <issue_comment>username_4: Let's take a look at this function: ``` def transNum(string): number = 1 for ch in string: if ch.lower() in "abc": number = 2 elif ch.lower() in "def": number = 3 elif ch.lower() in "ghi": number = 4 elif ch.lower() in "jkl": number = 5 elif ch.lower() in "mno": number = 6 elif ch.lower() in "pqrs": number = 7 elif ch.lower() in "tuv": number = 8 elif ch.lower() in "wxyz": number = 9 return number ``` What this function does is take a string, loop over its characters, each time assigning the corresponding number to the variable `number`. At the end of the loop, it returns the variable `number`. So what this function is doing is essentially a bunch of useless work and then returning **only** what the **last** character in the string should correspond to as a number. What you want is to pass only a single character to this function and get rid of the for loop. 
Alternatively, you can create the translated string inside this function and return the full string rather than returning the number. Upvotes: 0 <issue_comment>username_5: I think should exist a more pythonic way, but at the least this should work for your case ``` def transNum(string): number = 1 numberElements={ "a":2,"b":2,"c":2, "d":3,"e":3,"f":3, "g":4,"h":4,"i":4, "j":5,"k":5,"l":5, "m":6,"n":6,"o":6, "p":7,"q":7,"r":7,"s":7, "t":8,"u":8,"v":8, "w":9,"x":9,"y":9,"z":9, } for ch in string: number = numberElements[ch.lower()] return number def translate(phone): newNum = "" for ch in phone: if ch.lower() in ["a","b","c","d","e","f","g","h","i","j","k","l","m","n","o","p","q","r","s","t","u","v","w","x","y","z"]: newNum = newNum + str(transNum(ch)) else: newNum = newNum + ch return newNum def main(): phone = input("enter a phone number") noLetters = translate(phone) print("The number you entered: ", phone) print("Translates to: ", noLetters) ``` Upvotes: 0
2018/03/21
1,280
4,394
<issue_start>username_0: I am trying to write a SQL query that would "calculate" status column of the row based on `JOIN` with statuses table. My record is basically: `id | name | statusId`, which is foreign key to statuses table. That table has: `id | statusName` I collect `count()` for each `DISTINCT` statusId. Now, I need return Id of any status based on the following idea - `if count(status0) > 0`, I need to return `status0`, else I need to check `status1` then `status2` etc. Could I write a SQL query to return status for each row status with `JOIN`, `WHERE`, `HAVING` etc without if/else logic?
2018/03/21
1,486
5,061
<issue_start>username_0: I am trying to call a method from JavaScript from an HTML file. Specifically, call the method "speak" from Dog and Cat (shown below the HTML). I think I should be using `window.onload = function()` or something similar, with `onload`, but I do not know how to call the methods. This is the HTML content: ``` window.onload = function() { } ``` And this is my JavaScript code where the functions I want to call are: ``` function Animal(name, eyeColor) { this.name = name; this.eyeColor = eyeColor; } Animal.prototype.getName=function() { return this.name; }; Animal.prototype.getEyeColor=function() { return this.eyeColor; }; Animal.prototype.toString=function() { return this.name + " " + this.eyeColor; }; function Dog(name, eyeColor) { Animal.call(this, name, eyeColor); } Dog.prototype = new Animal(); Dog.prototype.toString=function() { return Animal.prototype.toString.call(this); }; Dog.prototype.speak=function() { return "woof"; }; function Cat(name, eyeColor) { Animal.call(this, name, eyeColor); } Cat.prototype = new Animal(); Cat.prototype.toString=function() { return Animal.prototype.toString.call(this); }; Cat.prototype.speak=function() { return "meow"; }; ```
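For illustration, a minimal sketch of wiring the question's own classes into `window.onload`; the instance names and constructor arguments are made up:

```js
window.onload = function () {
  var dog = new Dog("Rex", "brown");        // hypothetical name / eye colour
  var cat = new Cat("Tom", "green");
  console.log(dog.toString(), dog.speak()); // "Rex brown" "woof"
  console.log(cat.toString(), cat.speak()); // "Tom green" "meow"
};
```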
2018/03/21
1,789
6,472
<issue_start>username_0: I have a python program which sends height data from my client to my server program. My server program will not be always running so if I don't recieve a response I would like it to try again. So far what I have is that if a response (from the server) is not given in 20 Seconds it causes an exception and recalls my input. It works fine until I try it a second time. Here is my code: ``` import zmq from time import sleep global radius context = zmq.Context() print("Remote Deployment Application") print("Lightweight ZMQ Communication") print("Connecting to Desk Ctrl Service") socket = context.socket(zmq.REQ) socket.connect("tcp://192.168.1.9:5555") socket.setsockopt(zmq.RCVTIMEO, 30000) socket.setsockopt(zmq.LINGER, 0) # Do 10 requests,waiting each time for a response def __init__(self, height): self.height = height def start(): global height height = input("Enter in request: ") SendHeightVal() def SendHeightVal(): global userinput global height print("Sent Request. Awaiting Reply.") so_bytes = height.encode() socket.send(so_bytes) so_bytes = 0 try: message = socket.recv() except: print("Something went wrong. No response from server!") message = None start() print(message) start() ``` Here is the error: ``` During handling of the above exception, another exception occurred: Traceback (most recent call last): File "client.py", line 44, in start() File "client.py", line 22, in start SendHeightVal() File "client.py", line 37, in SendHeightVal start() File "client.py", line 22, in start SendHeightVal() File "client.py", line 30, in SendHeightVal socket.send(so\_bytes) File "/usr/local/lib/python3.6/site-packages/zmq/sugar/socket.py", line 391, in send return super(Socket, self).send(data, flags=flags, copy=copy, track=track) File "zmq/backend/cython/socket.pyx", line 727, in zmq.backend.cython.socket.Socket.send File "zmq/backend/cython/socket.pyx", line 774, in zmq.backend.cython.socket.Socket.send File "zmq/backend/cython/socket.pyx", line 249, in zmq.backend.cython.socket.\_send\_copy File "zmq/backend/cython/socket.pyx", line 244, in zmq.backend.cython.socket.\_send\_copy File "zmq/backend/cython/checkrc.pxd", line 25, in zmq.backend.cython.checkrc.\_check\_rc zmq.error.ZMQError: Operation cannot be accomplished in current state ```<issue_comment>username_1: Check [this](http://learning-0mq-with-pyzmq.readthedocs.io/en/latest/pyzmq/patterns/client_server.html), you will find this: > > socket zmq.REQ will block on send unless it has successfully received a reply back. > > > Which means that you cannot send another request until receiving an answer. ZMQ has different messaging patterns the one you are using Client / Server `zmq.REQ` / `zmq.REP`. > > Any attempt to send another message to the socket (zmq.REQ/zmq.REP), without having received a reply/request will result in an error: > > > ``` .... socket.send ("Hello") socket.send ("Hello1") .... Error: zmq.core.error.ZMQError: Operation cannot be accomplished in current state ``` That's why your are getting the exception: `zmq.error.ZMQError: Operation cannot be accomplished in current state` Upvotes: 0 <issue_comment>username_2: Lets start with a simple inventory of mandatory features: --------------------------------------------------------- > > " *[agent-A]* **sends height data** from my client to my server program. My server program *[agent-B]* **will not be always running** so if I don't receive a response I would like it *[agent-A]* to **try again**." 
> > > For this, the **`REQ/REP`** hard-wired two-step of *[**A**].send()-[B].recv()-[B].send()-[**A**].recv()-[**A**].send()-[B].recv()-...* is not the most promising choice, much less a safe one. 1 ) always set **`zmq.LINGER = 0`** to avoid zombies & hangups ( +1 for doing that ) 2 ) if you need not keep all *[A]*-side data delivered, you may enjoy **`zmq.IMMEDIATE`** to deliver only to a live *[B]* and **`zmq.CONFLATE`** to deliver only the most "fresh" values the *[A]*-side did `.send()` 3 ) if you do need to keep all *[A]*-side data delivered, you will have to either re-factor the *[A]*-side sending strategy so it becomes robust to a missing *[B]*-side ( so as to avoid blind, merely optimistic data-pumping into the `Context()`-instance-controlled, yet rather expensive and quite limited, resources ), or carefully plan and pre-allocate due capacities, so as to remain able to internally store all of 'em till the *[B]*-side comes up ( or not ) at some unsure point in the future; otherwise *[A]*-side dropping, blocking or exceptions will be unavoidable. 4 ) never expect the built-in trivial ( primitive ) archetypes to match your production needs; they come as a sort of LEGO building blocks for some smarter, problem-specific distributed signalling and messaging infrastructure, not as a magic wand that solves all properties ( principally unknown and undefined at the time those primitive tools were implemented ), so more engineering effort will come as one moves outside of school-book examples into designing robust distributed systems. Here, a composition of `{ PUSH/PULL + PULL/PUSH | PAIR/PAIR | XREQ/XREP }` plus a heart-beating watchdog and re-discovery of the remote agent(s) could be a way. Other archetypes may get added later for N+1 fault resilience or performance boosting with workload balancers, a remote console, or latency-motivated off-site remote logging - if needed, all depending on many details that are beyond the initial post or a few SLOCs. 5 ) always handle exceptions, even if "masters" tell you one need not do that. Why? In a [distributed-system](/questions/tagged/distributed-system "show questions tagged 'distributed-system'") any problem ceases to be a just-local issue, so an unhandled exception that lets some agent die silently will have global impact: the ( many ) others that rely on the missing agent may easily get blocked without any locally visible reason. 6 ) in the production domain, more effort will be needed to protect any smart infrastructure from remote failures and also from DoS-alike events that impact the local `Context()` SPOF, so indeed [distributed-system](/questions/tagged/distributed-system "show questions tagged 'distributed-system'") architecture design is both a very interesting and quite demanding domain. Upvotes: 2 [selected_answer]
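For a concrete starting point, here is a minimal Python sketch (endpoint, timeout and retry count are illustrative, not prescriptive) of one way to keep `REQ/REP` yet avoid the "Operation cannot be accomplished in current state" error: close the timed-out socket and open a fresh one before retrying.

```python
import zmq

ENDPOINT = "tcp://192.168.1.9:5555"   # same address as in the question
TIMEOUT_MS = 30000

def request(payload: bytes, retries: int = 3):
    ctx = zmq.Context.instance()
    for attempt in range(1, retries + 1):
        sock = ctx.socket(zmq.REQ)
        sock.setsockopt(zmq.LINGER, 0)            # never leave a hanging socket behind
        sock.setsockopt(zmq.RCVTIMEO, TIMEOUT_MS)
        sock.connect(ENDPOINT)
        sock.send(payload)
        try:
            return sock.recv()                    # server answered in time
        except zmq.error.Again:
            print("No reply, attempt %d of %d" % (attempt, retries))
        finally:
            sock.close()                          # a timed-out REQ socket must not be reused
    return None
```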
2018/03/21
451
1,493
<issue_start>username_0: I am using [AGM](https://angular-maps.com) There is [MarkerManager](https://angular-maps.com/api-docs/agm-core/injectables/MarkerManager.html) which I want to use in my @Component ``` import { Component } from '@angular/core'; import { MarkerManager } from '@agm/core'; @Component({ selector: 'ngx-gmaps', providers: [MarkerManager], styleUrls: ['./gmaps.component.scss'], template: ` Google Maps [markerDraggable]="true" \*ngIf="locationChosen"> **JHU/APL** `, }) export class GmapsComponent { lat = 39.163100; lng = -76.899428; locationChosen=true; constructor(private markerManager: MarkerManager){ } onMapReady(event){ console.log('Map is ready'); } onChosenLocation(event) { this.lat = event.coords.lat; this.lng = event.coords.lng; this.locationChosen = true; } } ``` I get the following error: ``` StaticInjectorError(AppModule)[InfoWindowManager -> GoogleMapsAPIWrapper]: StaticInjectorError(Platform: core)[InfoWindowManager -> GoogleMapsAPIWrapper]: NullInjectorError: No provider for GoogleMapsAPIWrapper! ``` How do I import the MarkerManager?<issue_comment>username_1: I figured it out: ``` import { MarkerManager } from '@agm/core'; import {GoogleMapsAPIWrapper} from '@agm/core'; ``` Upvotes: -1 <issue_comment>username_2: In your main module, add this ``` @NgModule({ providers: [ ... GoogleMapsAPIWrapper, ] }) ``` Hope this helps Upvotes: 1
2018/03/21
723
2,369
<issue_start>username_0: I am trying to build libgit2 in release mode on Windows 10 x64. In my libgit2 directory, I run the following: ``` mkdir build cd build cmake .. -DCMAKE_BUILD_TYPE=Release cmake --build . ``` It still builds in debug mode, as far as I can tell. At least when I link it into a Visual Studio project it fails on debug assertions. I have also tried ``` cmake -G "Visual Studio 12 Win64" -DCMAKE_BUILD_TYPE=Release .. ``` to no avail. I even tried to get it working by including all of the `.vxcproj` files in my solution and building them all in release mode manually. No luck yet. Is it supposed to be as simple as passing in the `-DCMAKE_BUILD_TYPE=Release` flag or am I missing something? All I am really trying to do is keep the `git2.dll` from exploding on debug asserts when I run it. At least I think that is what is happening. Here is the code I am trying to run from a Visual Studio 2013 project, where I linked in the `git2.lib` and referenced the `git2.dll` as mentioned [here](https://libgit2.github.com/docs/guides/build-and-link/) ``` #include "stdafx.h" int _tmain(int argc, _TCHAR* argv[]) { const char *path = "C:/Dev/testing/brokenmerge"; git_repository *repo = NULL; git_repository_open(&repo, path); printf("Opened repo: '%s'\n", path); git_repository_free(repo); return 0; } ``` where `stdafx.h` includes I get the following assertion error in `git_repository_open`: ``` Assertion failed! Program: C:\Dev\testing\libgit2tests\Debug\git2.dll File: C:\Dev\libgit2\src\global.c Line: 202 Expression: git_atomic_get(&git__n_inits) > 0 For information on how your program can cause an assertion failure, see the Visual C++ documentation on asserts ``` So unless I linked to `git2` incorrectly or I am doing something else wrong I have no idea how to proceed. Thanks!<issue_comment>username_1: You need to call [`git_libgit2_init`](https://github.com/libgit2/libgit2#initialization) before calling any other libgit2 functions. This sets up the global state that the library needs. Upvotes: 2 [selected_answer]<issue_comment>username_2: I had this pain too. The answer is ``` cmake --build . --config Release ``` see [CMAKE\_BUILD\_TYPE not being used in CMakeLists.txt](https://stackoverflow.com/questions/24460486/cmake-build-type-not-being-used-in-cmakelists-txt) Upvotes: 2
2018/03/21
512
1,886
<issue_start>username_0: I have a simple form ``` class MyForm(forms.ModelForm): class Meta: model = Track fields = ['website', 'keyword'] labels = {'website': 'Website URL', 'keyword': 'Keyword'} ``` and a view ``` def track(request): if request.method != 'POST': form = MyForm() else: form = MyForm(request.POST) if form.is_valid(): form.save() return HttpResponseRedirect(reverse('tb_main:index')) context = {'form': form} return render(request, 'tb_main/track.html', context) ``` and an HTML form template (track.html) ``` {% csrf_token %} {% bootstrap_form form %} Submit ``` I would like to be able to dynamically load the form when a user clicks on a button on the homepage without changing the URL. I am assuming this could be accomplished using AJAX but I am not sure how to approach this. If I just do an AJAX call to track.html the view functionality won't come with it. How can I dynamically load the view & HTML template into the home page after a button click? I tried `load form` but that doesn't work. Would be happy to hear from some of the more experienced developers on how to approach this. Thank you<issue_comment>username_1: Another way you might achieve this is with JavaScript ...you could have a function in JavaScript called **include_form** which will be used to include the form inside a div, and it might look like this ``` function include_form(){ document.getElementById("form_div").innerHTML = "{% include 'folder/form.html'%}"; this.location.reload() } ``` Upvotes: 0 <issue_comment>username_2: The thing you're missing here is the JS script to attach to that button click. Something like (assuming jQuery): ``` $('#mybutton').click(function () { $.get( "{% url 'tb_main:track' %}", function(data) { $('#myform').html(data); } ); }); ``` and the HTML: ``` load form ``` Upvotes: 3 [selected_answer]
2018/03/21
876
3,180
<issue_start>username_0: Hoping you could help... I'm a little stumped... Basically, I have a google map, with autocomplete working. You search an origin and destination and my correct set of filtered results/markers appears on the map. However, when I view my map on mobile, via chrome/safari, the map does not move and you can not interact with it at all. I've googled and understand I need gestureHandling: 'cooperative'. However, as my app is built in Angular JS and my google maps code sits in a directive, I'm not sure where to place this... Here's a relevant snippet of my directive: ``` googleMap.$inject = []; function googleMap() { return { restrict: 'E', template: '', replace: true, scope: { center: '=', zoom: '=', origin: '=', destination: '=', travelMode: '=', foodType: '=' // gestureHandling: '=' }, link($scope, $element) { const map = new google.maps.Map($element[0], { zoom: $scope.zoom, center: $scope.center, // gestureHandling: 'cooperative', }); map.setOptions({gestureHandling: 'cooperative'}); const directionsService = new google.maps.DirectionsService(); const directionsDisplay = new google.maps.DirectionsRenderer(); const placesService = new google.maps.places.PlacesService(map); const directionsShow = document.getElementById('bottom-panel'); const image = { url: '/assets/images/marker.gif', // url scaledSize: new google.maps.Size(60, 60), // scaled size origin: new google.maps.Point(0,0) // origin }; directionsDisplay.setMap(map); ``` However, when I drop {gestureHandling: 'cooperative'} into "return", my map still isn't working on mobile. I've even tried dropping in the below code in my directive. Doesn't work. map.setOptions({gestureHandling: 'cooperative'}); Here's how the google map is looking in my views folder. I even tried dropping it here... ``` ``` Kinda stumped I can't get this working! Thanks, Reena<issue_comment>username_1: Try ``` googleMap.$inject = []; function googleMap() { return { restrict: 'E', template: '', replace: true, scope: { center: '=', zoom: '=', origin: '=', destination: '=', travelMode: '=', foodType: '=', gestureHandling: '=' }, link($scope, $element) { const map = new google.maps.Map($element[0], { zoom: $scope.zoom, center: $scope.center, gestureHandling: $scope.gestureHandling }); //map.setOptions({gestureHandling: 'cooperative'}); const directionsService = new google.maps.DirectionsService(); const directionsDisplay = new google.maps.DirectionsRenderer(); const placesService = new google.maps.places.PlacesService(map); const directionsShow = document.getElementById('bottom-panel'); const image = { url: '/assets/images/marker.gif', // url scaledSize: new google.maps.Size(60, 60), // scaled size origin: new google.maps.Point(0,0) // origin }; directionsDisplay.setMap(map); ``` HTML ``` ``` Upvotes: 0 <issue_comment>username_2: Thanks for coming back to me. It turns out the issue was being caused by CSS. There was a z-index set to -1 on the map, and it basically stopped the map from working... Thanks for your response though! Upvotes: 1
2018/03/21
826
2,977
<issue_start>username_0: I only just started learning how to work with Angular 2 and Ionic 3.20.0 but now facing a challenge. I'm trying to display product images in my home page but the app is throwing this error > > ERROR Error: Uncaught (in promise): Error: StaticInjectorError(AppModule)[HomePage -> ProductProvider] > > > I have imported the ProductProvider service and added it to providers in my app.module.ts file. ``` import { ProductProvider } from '../providers/product/product'; @NgModule({ ... providers: [ StatusBar, SplashScreen, {provide: ErrorHandler, useClass: IonicErrorHandler}, ProductProvider ] ``` Now this is the products.ts and home.ts files respectively product.ts ``` import { HttpClient } from '@angular/common/http'; import { Injectable } from '@angular/core'; import 'rxjs/add/operator/map'; @Injectable() export class ProductProvider { constructor(public http: HttpClient) { console.log('Hello ProductProvider Provider'); } getProducts(){ return this.http.get('/assets/data.json') .map(response => response); } } ``` home.ts ``` import { Component } from '@angular/core'; import { NavController } from 'ionic-angular'; import { Http } from '@angular/http'; import "rxjs/add/operator/map"; import { ProductProvider } from '../../providers/product/product'; @Component({ selector: 'page-home', templateUrl: 'home.html' }) export class HomePage { public allProducts = []; constructor(private productProvide: ProductProvider, private http: Http, public navCtrl: NavController) { } ionViewDidLoad(){ this.productProvide.getProducts() .subscribe((response) => { this.allProducts = response; }); } } ``` Why is my code throwing this error?
2018/03/21
492
1,758
<issue_start>username_0: I have a hidden div that I want to show on a button press however because the ACF repeater is repeating the id it's opening all the hidden divs at once. ``` //This is inside a repeater field causing the #buy to repeat ![](images/right-arrow.png) php the_sub_field('eventbrite_widget'); ? // JQuery $('button').click(function () { $( "#buy" ).slideToggle("slow"); }); ``` I think I need to find #buy using .find() or .next() however I haven't had much luck using those.
2018/03/21
1,199
4,533
<issue_start>username_0: I have this simple example. I need to pass an argument to a callback function. In this code, when I do not pass any arguments, the timer works fine and it executes the callback function after the specified time interval elapsed. But if I passed `x`, the function gets executed without waiting for 5 seconds (or whatever time specified. Can you explain how to pass an argument to the callback function? ``` var x=1; console.log("starting the script"); setTimeout(myFunction(x),5000); function myFunction(x) { console.log("This should appear after 5 seconds"); if (x==1) console.log("option 1"); else console.log("option 2"); } ```<issue_comment>username_1: Because you're immediately executing that function. An alternative is wrapping it with a function declaration ```js var x = 1; console.log("starting the script"); setTimeout(function() { myFunction(x); }, 1000); function myFunction(x) { console.log("This should appear after 1 second"); if (x == 1) console.log("option 1"); else console.log("option 2"); } ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: this is because you are actually calling the function, thats why it behaves like that. ``` myFunction(x) ``` that will trigger the function, giving you the returned value of it and [setTimeout](https://developer.mozilla.org/en-US/docs/Web/API/WindowOrWorkerGlobalScope/setTimeout "setTimeout") expects a FUNCTION instead of the RESULT OF A FUNCTION, thus you need to wrap your function with another function, so just do the following: using ES6 ``` setTimeout(()=> myFunction(x),5000); ``` or using regular JS: ``` setTimeout(function(){ myFunction(x) }, 5000); ``` Upvotes: 1 <issue_comment>username_3: try using `.bind()` should work. ``` setTimeout(myFunction.bind(x),5000); ``` With your approach you are executing the function immediately so that is why it does not wait for specified amount of time. `.bind()` will add the x to the context of that function which might not be the best solution if you are considering modularity in your code otherwise this should fair for your provided code. Upvotes: 0 <issue_comment>username_4: Just to add that what you are passing to the timeout really is the result of myFunction(x), which is undefined. Therefore, the timeOut will have nothing to execute. If your myFunction(x) did return a function object, then you'd see the results of the execution of myFunction(x) immediately, and 5 seconds later that other function would execute. There is another way to fix that. All you need is to pass a function object to timeout. If you want to avoid creating the closure, you can do this: ``` setTimeout(myFunction.bind(this,x), 5000); ``` This creates a new function object *without any free argument* that will receive *this* as context, and call myFunction(x). The context is probably unnecessary here, but is something you really should be aware of when using setTimeout: it always takes the window as default context, so if you are calling it from inside a scope where *this* has a useful meaning and you want to conserve it, you should bind it the function to it. Because you just bound the function, you don't actually invoke it: setTimeout thus receives a function object that it will trigger after 5 seconds. It will not pass new arguments to it, but that's ok, because you have bound the argument already. Upvotes: 0 <issue_comment>username_5: @username_1 answer above is perfectly correct. There is a slight nuance though. 
If `var x = 1` has changed its value while `myFunction` execution was delayed with `setTimeout`, you will see "option 2" on the console, not "option 1". This may be the desired behavior, but it also may not be. It all depends on your design goals. If you need `myFunction` execution to depend on the value x had at the moment the timeout was set, not at the moment `myFunction` is actually executing, you will have to create a closure around `myFunction` and pass it to `setTimeout`. Try to run the following code, based on @username_1's example, to see the difference between the two approaches. ```js var x = 1; console.log("starting the script"); setTimeout(function() { myFunction(x); }, 1000); setTimeout( closureBuilder(x) , 2000); x = 2; function myFunction(x) { console.log("Delayed output"); if (x == 1) console.log("option 1"); else console.log("option 2"); } function closureBuilder(x) { return function(){ myFunction(x); } } ``` Upvotes: 0
2018/03/21
718
2,560
<issue_start>username_0: I'm trying to write a reusable linq expression that I'm passing multiple parameters into: ``` private System.Linq.Expressions.Expression> IsPrevFeeDetail { get { return (feeLevelDetail, currentFeeLevelDetail) => feeLevelDetail.ToDt < currentFeeLevelDetail.FromDt); } } ``` But every example of an expression I've seen only takes one parameter (the Entity type that you are querying against). Is there any way to do this out of the box?<issue_comment>username_1: Your question at the time of my answer simply asks if Expressions with multiple parameters is possible. They can be used, and are. `.Join` and `.GroupJoin` both have, as a last parameter, an expression parameter that takes types deriving from each side of the join and return a single entity and return a single type. ``` tableA.Join(tableB, e => e.Id, e => e.TableAId, (a, b) => new { IsPrevFee = a.ToDt < b.FromDt, AEntry = a, BEntry = b }).Where(e => e.IsPrevFee); ``` However, judging from the example in your question you seem to want to use it in a `Where`. That doesn't work because any of the Linq query functions give a collection of a single type as the return. You've seen above I've essentially converted the joining of the 2 entities into a single type of output. I've used an anonymous type but it may be helpful to note that this can also be a concrete type which you can then put in a reusable expression for the join, and then you've got a single-input expression you can pass to a `Where` clause. **EDIT for comments:** You should be able to create an `Expression>` by closing around another argument. I'm doing this on the fly so ignore any trivial syntax errors but an example would be something like this: ``` Expression> GetMyWhereClause(int statusId) { return myEntity => myEntity.StatusId == statusId; } ``` Usage: ``` var MyWhereClause = GetMyWhereClause(5); var result = db.MyEntities.Where(MyWhereClause); ``` I'm not certain but I don't believe this will work for non-primitives as LINQ-to-SQL may not be able to resolve the resulting expression. Upvotes: 3 [selected_answer]<issue_comment>username_2: ``` Expression> filter = q => q.PartitionKey.Equals("1998") && q.RowKey.Equals("Movie-775"); ``` (OR) ``` Expression> filter = q => q.PartitionKey.Equals("1998"); filter = filter.And(q => q.RowKey.Equals("Movie-775")); ``` (OR) ``` IEnumerable>>? filters; // Collection of expressions. var filter = filters.First(); filters.Skip(1).ToList().ForEach(x => { filter = filter.And(x); }); ``` Upvotes: 0
2018/03/21
612
1,855
<issue_start>username_0: Created the program to randomize the movement of the turtle but cannot get it to bounce off the window/canvas limits. Tried a few solutions posted with similar questions but still no luck. ``` from turtle import Turtle, Screen import random def createTurtle(color, width): tempName = Turtle("arrow") tempName.speed("fastest") tempName.color(color) tempName.width(width) return tempName def inScreen(screen, turt): x = screen.window_height() / 2 y = screen.window_height() / 2 min_x, max_x = -x, x min_y, max_y = -y, y turtleX, turtleY = turt.pos() while (min_x <= turtleX <= max_x) and (min_y <= turtleY <= max_y): turt.left(random.randrange(360)) turt.fd(random.randrange(50)) turtleX, turtleY = turt.pos() print(turtleX, ",", turtleY) wn = Screen() alpha = createTurtle("red", 3) inScreen(wn, alpha) wn.exitonclick() ```<issue_comment>username_1: Something like this: ``` while true: if (min_x <= turtleX <= max_x) and (min_y <= turtleY <= max_y): turt.left(random.randrange(360)) turt.fd(random.randrange(50)) turtleX, turtleY = turt.pos() print(turtleX, ",", turtleY) else: # Put code here to move the turtle to where it intersected the edge # and then bounce off ``` I guess you have to figure out where the intersection point is. Upvotes: 0 <issue_comment>username_2: Something like ``` old_position = turtle.position() # Assume we're good here. turtle.move_somehow() # Turtle computes its new position. turtle_x, turtle_y = turtle.position() # Maybe we're off the canvas now. if not (min_x <= turtle_x <= max_x) or not (min_y <= turtle_y <= max_y): turtle.goto(*old_position) # Back to safely. turtle.setheading(180 - turtle.heading()) # Reflect. ``` Upvotes: 2 [selected_answer]
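Tying the accepted idea back into the question's random-walk loop could look like the sketch below. The helper names, the 200-step count, and the decision to reflect differently off the side walls than off the top/bottom are illustrative assumptions, not part of either answer; note it also uses `window_width()` for the x limit, where the question used `window_height()` for both axes.

```python
from turtle import Turtle, Screen
import random


def create_turtle(color, width):
    t = Turtle("arrow")
    t.speed("fastest")
    t.color(color)
    t.width(width)
    return t


def wander(screen, turt, steps=200):
    max_x = screen.window_width() / 2
    max_y = screen.window_height() / 2
    for _ in range(steps):
        old_position = turt.position()
        turt.left(random.randrange(360))
        turt.forward(random.randrange(50))
        x, y = turt.position()
        if not -max_x <= x <= max_x:
            turt.goto(old_position)                # step back inside the canvas
            turt.setheading(180 - turt.heading())  # bounce off a side wall
        elif not -max_y <= y <= max_y:
            turt.goto(old_position)
            turt.setheading(-turt.heading())       # bounce off top or bottom


wn = Screen()
alpha = create_turtle("red", 3)
wander(wn, alpha)
wn.exitonclick()
```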
2018/03/21
469
1,487
<issue_start>username_0: How do I unit test my script for incorrect command line arguments? For example, > > my_script.py -t > > > should give an error since -t flag is not present, as shown in the code below: ``` parser = OptionParser() parser.add_option("-d", action="callback", callback=get_bios_difference, help="Check difference between two files" ) (options, args) = parser.parse_args() if len(sys.argv) == 1: # if only 1 argument, it's the script name parser.print_help() exit() ```
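One way to exercise this from a test, sketched under two assumptions that are not in the question: the snippet above lives in a module named `my_script`, and its parsing logic is wrapped in a `main()` function (as written it runs at import time, which makes it hard to test). `optparse` reports an unknown flag such as `-t` by printing an error and calling `sys.exit(2)`, so the test asserts that `SystemExit` is raised with code 2.

```python
import sys
import unittest
from unittest import mock

import my_script  # assumption: the question's parser code wrapped in my_script.main()


class BadArgumentsTest(unittest.TestCase):
    def test_unknown_flag_exits_with_status_2(self):
        # optparse's OptionParser.error() prints usage and calls sys.exit(2)
        # whenever it meets an option it does not know about, e.g. -t.
        with mock.patch.object(sys, "argv", ["my_script.py", "-t"]):
            with self.assertRaises(SystemExit) as ctx:
                my_script.main()
        self.assertEqual(ctx.exception.code, 2)


if __name__ == "__main__":
    unittest.main()
```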
2018/03/21
456
1,862
<issue_start>username_0: I'm using MS Bot Framework, C#, with LUIS.ai. I've found that LUIS.ai adds a space both before and after dashes or underscores in the entities it finds in an utterance. For example, if the user types "search for transition-document files in my project" and "transition-document files" is determined to be an entity, the actual LUIS entity object changes it to "transition - document files". Other than simply replacing all " - " and " _ " with "-" and "_" respectively, can I just stop LUIS from doing this?<issue_comment>username_1: > > LUIS.ai adds a space both before and after dashes or underscores in the entities it finds in an utterance. > > > As you said, the whitespaces will be added when an utterance contains some special characters. For example, when I work with something like a URL that contains `/` and `.` etc., the same issue appears. And as far as I know, LUIS currently does not enable us to stop this through any configuration or settings. To solve this problem, you can try to handle it in your code and remove the whitespace from matched entities such as `"transition - document"` by using a regular expression. Upvotes: 2 <issue_comment>username_2: You can override this on a LuisDialog and replace any entities where you need the exact text: ``` protected override Task DispatchToIntentHandler(IDialogContext context, IAwaitable item, IntentRecommendation bestIntent, LuisResult result) { // remove spaces LUIS adds before and after dashes and underscores EntityRecommendation resourceNameEntity = result.Entities.Where(e => e.Type == Luis_EntityTypes.ResourceName).FirstOrDefault(); if (resourceNameEntity != null) { resourceNameEntity.Entity = resourceNameEntity.Entity.Replace(" - ", "-").Replace(" _ ", "_"); } return base.DispatchToIntentHandler(context, item, bestIntent, result); } ``` Upvotes: 1
2018/03/21
484
1,707
<issue_start>username_0: When I use `EarlyStopping` callback does Keras save best model in terms of `val_loss` or it save model on save\_epoch = [best epoch in terms of `val_loss`] + YEARLY\_STOPPING\_PATIENCE\_EPOCHS ? If it's second option, how to just save best model? Here is code snippet: ``` early_stopping = EarlyStopping(monitor='val_loss', patience=YEARLY_STOPPING_PATIENCE_EPOCHS) history = model.fit_generator( train_generator, steps_per_epoch=100, # 1 epoch = BATCH_SIZE * steps_per_epoch samples epochs=N_EPOCHS, validation_data=test_generator, validation_steps=20, callbacks=[early_stopping]) #Save train log to .csv pd.DataFrame(history.history).to_csv('vgg16_binary_crossentropy_train_log.csv', index=False) model.save('vgg16_binary_crossentropy.h5') ```<issue_comment>username_1: From my experience using the 'earlystopping' callback, the model will not be saved automatically...it will just stop training and when you save it manually, it will be the second option you present. To have your model save each time val\_loss decreases, see the following documentation page: <https://keras.io/callbacks/> and look at the "Example: model checkpoints" section which will tell you exactly what to do. note that if you wish to re-use your saved model, I have had better luck using 'save\_weights' in combo with saving the architecture in json. YMMV. Upvotes: 2 <issue_comment>username_2: In v2.2.4+ of Keras, [EarlyStopping](https://keras.io/callbacks/#earlystopping) has a `restore_best_weights` parameter which, when set to `True`, will set the model to the state of best CV performance. For example: ``` EarlyStopping(restore_best_weights=True) ``` Upvotes: 2
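To make the two answers concrete, here is one way the question's training call could be wired up so the best-epoch weights are kept. It is only a sketch: it assumes the `model`, the generators, and the constants from the question are already defined, the checkpoint filename is an arbitrary choice, and `restore_best_weights` requires Keras 2.2.4 or newer as noted above.

```python
from keras.callbacks import EarlyStopping, ModelCheckpoint

# Two complementary ways of keeping the best (lowest val_loss) epoch:
#  - ModelCheckpoint writes the model to disk every time val_loss improves,
#  - EarlyStopping(restore_best_weights=True) puts those weights back into
#    the in-memory model when training stops early.
callbacks = [
    ModelCheckpoint("vgg16_binary_crossentropy_best.h5", monitor="val_loss",
                    save_best_only=True),
    EarlyStopping(monitor="val_loss",
                  patience=YEARLY_STOPPING_PATIENCE_EPOCHS,  # constant from the question
                  restore_best_weights=True),
]

history = model.fit_generator(
    train_generator,
    steps_per_epoch=100,
    epochs=N_EPOCHS,
    validation_data=test_generator,
    validation_steps=20,
    callbacks=callbacks)
```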
2018/03/21
2,492
7,620
<issue_start>username_0: Given n sorted lists A1, A2, ..., An of integers in decreasing order, is there an algorithm to efficiently generate all elements of their cartesian product in decreasing tuple sum order ? For example, n=3 `A1 = [9, 8, 0]` `A2 = [4, 2]` `A3 = [5, 1]` The expected output would be the cartesian product of A1xA2xA3 in the following order: `combination sum` `9, 4, 5 18` `8, 4, 5 17` `9, 2, 5 16` `8, 2, 5 15` `9, 4, 1 14` `8, 4, 1 13` `9, 2, 1 12` `8, 2, 1 11` `0, 4, 5 9` `0, 2, 5 7` `0, 4, 1 5` `0, 2, 1 3`<issue_comment>username_1: Here is some Python for this. (Not very efficient - it may be better to just generate the whole list then sort it.) ``` #! /usr/bin/env python import heapq def decreasing_tuple_order(*lists): # Each priority queue element will be: # (-sum, indices, incrementing_index, sliced) # The top element will have the largest sum. if 0 < min((len(l) for l in lists)): indices = [0 for l in lists] sliced = [lists[i][indices[i]] for i in range(len(indices))] queue = [(-sum(sliced), indices, 0, sliced)] while 0 < len(queue): #print(queue) (_, indices, indexable, sliced) = heapq.heappop(queue) yield sliced # Can we increment this index? if indices[indexable] + 1 < len(lists[indexable]): new_indices = indices[:] new_indices[indexable] = indices[indexable] + 1 sliced = [lists[i][new_indices[i]] for i in range(len(indices))] heapq.heappush(queue, (-sum(sliced), new_indices, indexable, sliced)) # Start indexing the next index? while indexable + 1 < len(lists): indexable = indexable + 1 if 1 < len(lists[indexable]): # Start incrementing here. indices[indexable] = 1 sliced = [lists[i][indices[i]] for i in range(len(indices))] heapq.heappush(queue, (-sum(sliced), indices, indexable, sliced)) a1 = [9, 8, 0] a2 = [4, 2] a3 = [5, 1] for x in decreasing_tuple_order(a1, a2, a3): print((x,sum(x))) ``` Upvotes: 0 <issue_comment>username_2: If the problem instance has N sets to cross, then you can think of the tuples in the product as an N-dimensional "rectangular" grid, where each tuple corresponds to a grid element. You'll start by emitting the max-sum tuple [9,4,5], which is at one corner of the grid. You'll keep track of a "candidate set" of un-emitted tuples that are one smaller on each dimension with respect to at least one already emitted. If it helps, you can visualize the already-emitted tuples as a "solid" in the grid. The candidate set is all tuples that touch the solid's surface. You'll repeatedly choose the next tuple to emit from the candidate set, then update the set with the neighbors of the newly emitted tuple. When the set is empty, you're done. After emitting [9,4,5], the candidate set is ``` [8,4,5] (one smaller on first dimension) [9,2,5] (one smaller on second dimension) [9,4,1] (one smaller on third dimension) ``` Next emit the one of these with the largest sum. That's [8,4,5]. Adjacent to that are ``` [0,4,5], [8,2,5], [8,4,1] ``` Add those to the candidate set, so we now have ``` [9,2,5], [9,4,1], [0,4,5], [8,2,5], [8,4,1] ``` Again pick the highest sum. That's [9,2,5]. Adjacent are ``` [8,2,5], [9,2,1]. ``` So the new candidate set is ``` [9,4,1], [0,4,5], [8,2,5], [8,4,1], [9,2,1] ``` Note [8,2,5] came up again. Don't duplicate it. This time the highest sum is [8,2,5]. Adjacent are ``` [0,2,5], [8,2,1] ``` At this point you should have the idea. Use a max heap for the candidate set. Then finding the tuple with max sum requires O(log |C|) where C is the candidate set. How big can the set get? Interesting question. I'll let you think about it. 
For 3 input sets as in your example, it's ``` |C| = O(|A1||A2| + |A2||A3| + |A1||A3|) ``` So the cost of emitting each tuple is ``` O(log(|A1||A2| + |A2||A3| + |A1||A3|)) ``` If the sets have size at most N, then this is O(log 3 N^2) = O(log 3 + 2 log N) = O(log N). There are |A1||A2||A3| tuples to emit, which is O(N^3). The simpler algorithm of generating all tuples and sorting is O(log N^3) = O(3 log N) = O(log N). It's very roughly only 50% slower, which is asymptotically the same. The main advantage of the more complex algorithm is that it saves O(N) space. The heap/priority queue size is only O(N^2). Here is a quick Java implementation meant to keep the code size small. ``` import java.util.Arrays; import java.util.HashSet; import java.util.PriorityQueue; import java.util.Set; public class SortedProduct { final SortedTuple [] tuples; final NoDupHeap candidates = new NoDupHeap(); SortedProduct(SortedTuple [] tuple) { this.tuples = Arrays.copyOf(tuple, tuple.length); reset(); } static class SortedTuple { final int [] elts; SortedTuple(int... elts) { this.elts = Arrays.copyOf(elts, elts.length); Arrays.sort(this.elts); } @Override public String toString() { return Arrays.toString(elts); } } class RefTuple { final int [] refs; final int sum; RefTuple(int [] index, int sum) { this.refs = index; this.sum = sum; } RefTuple getSuccessor(int i) { if (refs[i] == 0) return null; int [] newRefs = Arrays.copyOf(this.refs, this.refs.length); int j = newRefs[i]--; return new RefTuple(newRefs, sum - tuples[i].elts[j] + tuples[i].elts[j - 1]); } int [] getTuple() { int [] val = new int[refs.length]; for (int i = 0; i < refs.length; ++i) val[i] = tuples[i].elts[refs[i]]; return val; } @Override public int hashCode() { return Arrays.hashCode(refs); } @Override public boolean equals(Object o) { if (o instanceof RefTuple) { RefTuple t = (RefTuple) o; return Arrays.equals(refs, t.refs); } return false; } } RefTuple getInitialCandidate() { int [] index = new int[tuples.length]; int sum = 0; for (int j = 0; j < index.length; ++j) sum += tuples[j].elts[index[j] = tuples[j].elts.length - 1]; return new RefTuple(index, sum); } final void reset() { candidates.clear(); candidates.add(getInitialCandidate()); } int [] getNext() { if (candidates.isEmpty()) return null; RefTuple next = candidates.poll(); for (int i = 0; i < tuples.length; ++i) { RefTuple successor = next.getSuccessor(i); if (successor != null) candidates.add(successor); } return next.getTuple(); } /** A max heap of indirect ref tuples that ignores addition of duplicates. */ static class NoDupHeap { final PriorityQueue heap = new PriorityQueue<>((a, b) -> Integer.compare(b.sum, a.sum)); final Set set = new HashSet<>(); void add(RefTuple t) { if (set.contains(t)) return; heap.add(t); set.add(t); } RefTuple poll() { RefTuple t = heap.poll(); set.remove(t); return t; } boolean isEmpty() { return heap.isEmpty(); } void clear() { heap.clear(); set.clear(); } } public static void main(String [] args) { SortedTuple [] tuples = { new SortedTuple(9, 8, 0), new SortedTuple(4, 2), new SortedTuple(5, 1), }; SortedProduct product = new SortedProduct(tuples); for (;;) { int[] next = product.getNext(); if (next == null) break; System.out.println(Arrays.toString(next)); } } } ``` Upvotes: 4 [selected_answer]