<issue_start>username_0: ``` $e = $ErrorActionPreference $ErrorActionPreference="stop" $E_Subnet_1 = '10.0.1' $E_Subnet_2 = '10.0.2' $O_Subnet_1 = '10.11.1' $O_Subnet_2 = '10.11.2' $D_Subnet_1 = '10.12.1' $D_Subnet_2 = '10.12.2' $Ethernet0 = 'Ethernet0' $All_Subnets = @("$E_Subnet_1", "$E_Subnet_2", "$O_Subnet_1", "$O_Subnet_2", "$D_Subnet_1", "$D_Subnet_2") $result = (Get-NetAdapter | ? status -eq 'up' | Get-NetIPAddress -ErrorAction 0 | ? PrefixOrigin -eq 'Manual' | ? IPAddress -match $All_Subnets | foreach { $Ethernet0 -eq $_.InterfaceAlias}) Write-Host "interface_alias=$result" ``` If you will please consider the PowerShell snippet above which queries the network interfaces and based on the matching subnet it then checks if the interface name equals "Ethernet0" producing a boolean value. The IPAddress of the server I am currently working matches the first three octets of `$D_Subnet_1` and produces a value of `interface_alias=True` if I target `$D_Subnet_1` like this: ``` $result = (Get-NetAdapter | ? status -eq 'up' | Get-NetIPAddress -ErrorAction 0 | ? PrefixOrigin -eq 'Manual' | ? IPAddress -match $D_Subnet_1 | foreach { $Ethernet0 -eq $_.InterfaceAlias}) Write-Host "interface_alias=$result" ``` But if I try to run the command using the `$All_Subnets` array: ``` $result = (Get-NetAdapter | ? status -eq 'up' | Get-NetIPAddress -ErrorAction 0 | ? PrefixOrigin -eq 'Manual' | ? IPAddress -match $All_Subnets | foreach { $Ethernet0 -eq $_.InterfaceAlias}) Write-Host "interface_alias=$result" ``` It just produces `interface_alias=` with no value at all. I have tried swapping `-match` for `-contain` and `like` with no luck. How can I fix this?<issue_comment>username_1: You're using the wrong comparisons here: ``` Where-Object -Property 'IPAddress' -Match @('10.30.2','10.40.2') ``` You need to grab the first 3 octets to do this comparison properly: ``` Where-Object -FilterScript { $_.IPAddress.Substring(0, $_.IPAddress.LastIndexOf('.')) -in $All_Subnets } ``` And if you're not on version 3+, just swap the comparison's sides and change the operator to `-contains` Upvotes: 3 [selected_answer]<issue_comment>username_2: PowerShell has two operators which work with collections: `-contains` and `-in`. The other operators, I think all of them, can't work with collections but PowerShell pretends they can. They can be used for two types of matching - "*compare these two single things with each other and output a result*", which is `$a -eq $z` or `$a -match $z` and the output is a boolean true/false - did it work? And it can do "*compare this collection of things against this single thing and output filter the collection to only the things that worked*". This is `($a, $b, $c) -eq $z` or `($a, $b, $c) -match $z` and the output is the things for which the test worked: `($b, $c)`. Specifically, collection on the left, single thing on the right. What is tripping you up is that `-match` does a regex test, and regular expressions can match substrings. So your direct subnet test is doing `"10.0.1.2" -match "10.0.1"` and it works. But your second attempt to use the array is going all wrong because putting the array on the right changes what happens: ``` IPAddress -match $All_Subnets ``` turns into "-match works with a regular expression on the right, the array gets cast to a string, it's joined together with a space between each item, and becomes: `10.0.1 10.0.2 10.11.1 10.11.2 10.12.1 10.12.2` And now you have `"10.0.1.2" -match "10.0.1 10.0.2 10.11.1 10.11.2 10.12.1 10.12.2"` which it doesn't. 
This also explains why `-in` and `-contains` don't work - because they are *not* doing regular expression tests, and cannot match substrings. They look for whether the IPAddress is in the list of subnets exactly - which it isn't, because the subnets have no last octet. @TheIncorrigible1's answer works around this by converting the IP address to just the first three octets, then looking for that in the array, which is a test that can work. Another approach would be to loop over the contents of the array and test each one. e.g. ``` ? { $ip = $_.IPAddress; $All_Subnets | where { $ip -match $_ } } | ``` "Where (there is any output from testing each subnet against the IP)". This assumes that all your subnets are /24, which may or may not be good. You could instead change your array to include network addresses and subnet masks and do a "proper" subnet check, e.g. something like <http://get-powershell.com/post/2010/01/29/Determining-if-IP-addresses-are-on-the-same-subnet.aspx> Upvotes: 2
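For reference, here is the accepted filter dropped back into the original pipeline as one untested sketch (`$All_Subnets` and `$Ethernet0` are the question's own variables):

```powershell
# Sketch: keep only manually assigned addresses whose first three octets
# appear in $All_Subnets, then test the interface alias
$result = Get-NetAdapter | Where-Object Status -eq 'Up' |
    Get-NetIPAddress -ErrorAction SilentlyContinue |
    Where-Object PrefixOrigin -eq 'Manual' |
    Where-Object { $_.IPAddress.Substring(0, $_.IPAddress.LastIndexOf('.')) -in $All_Subnets } |
    ForEach-Object { $Ethernet0 -eq $_.InterfaceAlias }
Write-Host "interface_alias=$result"
```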
<issue_start>username_0: Completely new to VBA coding but working on a project. Need help! I have a set of dates in Column J. I have to manually enter dates in cells B3, C3, D3 .... so on till K3. If the dates in the cells B3 to K3 (only if values are present in these cells) match the date in column J then i have to autofill Column H with value "Create". (Not necessary that all cells from B3 to K3 will be filled). I tried doing this coding, but throwing error. Can someone help me in fixing my Code? Thanks. ``` Sub NDate_Input() ' 'Autofill for Create Date & Update Date ' ' Worksheets("ORD_CS").Activate Dim sht As Worksheet Dim LR As Long Dim i As Long Set sht = ActiveWorkbook.Worksheets("ORD_CS") LR = sht.UsedRange.Rows.Count With sht For i = 8 To LR If Range("B3:K3").Value = Range("J" & i).Value Then Range("H" & i).Value = "Create" End If Next i End With End Sub ```<issue_comment>username_1: Place this code inside the sheet module where the data is and it will fire each time you change a date in `B3:K3`. ``` Option Explicit Private Sub Worksheet_Change(ByVal Target As Range) If Not Intersect(Target, Range("B3:K3")) Is Nothing Then If Target.Value <> vbNullString Then Dim findMe as Range Set findMe = Range("J1:J100000").Find(Target.Value, lookat:=xlWhole) If Not findMe Is Nothing Then Range("H" & findMe.Row).Value = "Create" End If End If End If End Sub ``` Upvotes: 1 <issue_comment>username_2: you could try this ``` Option Explicit Sub NDate_Input() Dim i As Long, nVals As Long Dim str As String With Worksheets("ORD_CS") With .Range("B3:K3") str = WorksheetFunction.Trim(Join(Application.Transpose(Application.Transpose(.Value)), " ")) nVals = WorksheetFunction.Count(.Cells) End With For i = 8 To .UsedRange.Rows.Count If WorksheetFunction.Trim(WorksheetFunction.Rept(.Range("J" & i).Value & " ", nVals)) = str Then .Range("H" & i).Value = "Create" Next End With End Sub ``` Upvotes: 0
<issue_start>username_0: I have a flat array like this, I'm supposed to build a flat array for it. The object will be an children property of it's parent object if the pid is not null. How should I do this? ``` var a = [ {id: 1, pid: null}, {id: 2, pid: 1}, {id: 3, pid: 1}, {id: 4, pid: 3}, {id: 5, pid: 3} ]; ``` Expect output: ``` var result = [{id: 1, children: [ {id: 2, children: []}, {id: 3, children: [{id: 4}, {id: 5}]} ]}] ```<issue_comment>username_1: You could use `reduce()` method and create recursive function. ```js var a = [{id: 1, pid: null}, {id: 2, pid: 1}, {id: 3, pid: 1}, {id: 4, pid: 3}, {id: 5, pid: 3}]; function tree(data, parent) { return data.reduce((r, {id,pid}) => { if (parent == pid) { const obj = {id} const children = tree(data, id); if (children.length) obj.children = children; r.push(obj) } return r; }, []) } const result = tree(a, null); console.log(result); ``` Upvotes: 1 <issue_comment>username_2: You could use a single loop approach which works for unsorted arrays as well. ```js var a = [{ id: 1, pid: null }, { id: 2, pid: 1 }, { id: 3, pid: 1 }, { id: 4, pid: 3 }, { id: 5, pid: 3 }], tree = function (data, root) { return data.reduce(function (o, { id, pid }) { o[id] = o[id] || { id }; o[pid] = o[pid] || { id: pid }; o[pid].children = o[pid].children || []; o[pid].children.push(o[id]); return o; }, {})[root].children; }(a, null); console.log(tree); ``` ```css .as-console-wrapper { max-height: 100% !important; top: 0; } ``` Upvotes: 2 <issue_comment>username_3: Both answers using reduce here are great, one slight issue is that there both multi-pass, IOW: if the tree was very big there is going to be a lot of linear searching. One solution to this is to first build a map, and then flatten the map.. mmm, actually flatten is probably the wrong word here, maybe expand.. :) But you get the idea.. Below is an example. ```js const a = [ {id: 1, pid: null}, {id: 2, pid: 1}, {id: 3, pid: 1}, {id: 4, pid: 3}, {id: 5, pid: 3} ]; function flatern(map, parent) { const g = map.get(parent); const ret = []; if (g) { for (const id of g) { const k = {id}; ret.push(k); const sub = flatern(map, id); if (sub) k.children = sub; } return ret; } return null; } function tree(a) { const m = new Map(); a.forEach((i) => { const g = m.get(i.pid); if (!g) { m.set(i.pid, [i.id]); } else { g.push(i.id); } }); return flatern(m, null); } console.log(tree(a)); ``` ```css .as-console-wrapper { max-height: 100% !important; top: 0; } ``` Upvotes: 1 [selected_answer]<issue_comment>username_4: ``` var a = [ {id: 1, pid: null}, {id: 2, pid: 1}, {id: 3, pid: 1}, {id: 4, pid: 3}, {id: 5, pid: 3} ]; function processToTree(data) { const map = {}; data.forEach(item => { map[item.id] = item; item.children = []; }); const roots = []; data.forEach(item => { const parent = map[item.pid]; if (parent) { parent.children.push(item); } else { roots.push(item); } }); return roots; } ``` Learnt this method today, I think this is the best Upvotes: 0
<issue_start>username_0: I currently have this setup and it works fine as I get the `first_name` in the URL as desired: ``` resources :pilots, param: first_name, constraints: { first_name: /.*/ } def to_param first_name end ``` How can I do this with two parameters - so a `first_name` and a `last_name` in the URL?<issue_comment>username_1: You could use `reduce()` method and create recursive function. ```js var a = [{id: 1, pid: null}, {id: 2, pid: 1}, {id: 3, pid: 1}, {id: 4, pid: 3}, {id: 5, pid: 3}]; function tree(data, parent) { return data.reduce((r, {id,pid}) => { if (parent == pid) { const obj = {id} const children = tree(data, id); if (children.length) obj.children = children; r.push(obj) } return r; }, []) } const result = tree(a, null); console.log(result); ``` Upvotes: 1 <issue_comment>username_2: You could use a single loop approach which works for unsorted arrays as well. ```js var a = [{ id: 1, pid: null }, { id: 2, pid: 1 }, { id: 3, pid: 1 }, { id: 4, pid: 3 }, { id: 5, pid: 3 }], tree = function (data, root) { return data.reduce(function (o, { id, pid }) { o[id] = o[id] || { id }; o[pid] = o[pid] || { id: pid }; o[pid].children = o[pid].children || []; o[pid].children.push(o[id]); return o; }, {})[root].children; }(a, null); console.log(tree); ``` ```css .as-console-wrapper { max-height: 100% !important; top: 0; } ``` Upvotes: 2 <issue_comment>username_3: Both answers using reduce here are great, one slight issue is that there both multi-pass, IOW: if the tree was very big there is going to be a lot of linear searching. One solution to this is to first build a map, and then flatten the map.. mmm, actually flatten is probably the wrong word here, maybe expand.. :) But you get the idea.. Below is an example. ```js const a = [ {id: 1, pid: null}, {id: 2, pid: 1}, {id: 3, pid: 1}, {id: 4, pid: 3}, {id: 5, pid: 3} ]; function flatern(map, parent) { const g = map.get(parent); const ret = []; if (g) { for (const id of g) { const k = {id}; ret.push(k); const sub = flatern(map, id); if (sub) k.children = sub; } return ret; } return null; } function tree(a) { const m = new Map(); a.forEach((i) => { const g = m.get(i.pid); if (!g) { m.set(i.pid, [i.id]); } else { g.push(i.id); } }); return flatern(m, null); } console.log(tree(a)); ``` ```css .as-console-wrapper { max-height: 100% !important; top: 0; } ``` Upvotes: 1 [selected_answer]<issue_comment>username_4: ``` var a = [ {id: 1, pid: null}, {id: 2, pid: 1}, {id: 3, pid: 1}, {id: 4, pid: 3}, {id: 5, pid: 3} ]; function processToTree(data) { const map = {}; data.forEach(item => { map[item.id] = item; item.children = []; }); const roots = []; data.forEach(item => { const parent = map[item.pid]; if (parent) { parent.children.push(item); } else { roots.push(item); } }); return roots; } ``` Learnt this method today, I think this is the best Upvotes: 0
<issue_start>username_0: I am trying to figure this out. My goal is to have the two `buttons`... the `yes` and `no` buttons to show **below** the text. I can change the html ofcourse, but I want/need to do it with **CSS only**. Possible? ```css .box { background-color: yellow; } .yes { background-color: green; padding: 10px; } .no { background-color: red; padding: 10px; } ``` ```html yes no Here you can vote ```<issue_comment>username_1: Wrap your buttons inside a ```css .box { background-color: yellow; } .yes { background-color: green; padding: 10px; } .no { background-color: red; padding: 10px; } ``` ```html Here you can vote yes no ``` CSS Way which you are expecting ------------------------------- ```css .box { background-color: yellow; height: 30px; line-height: 30px; } .yes { background-color: green; padding: 10px; top: 40px; position: absolute; } .no { background-color: red; padding: 10px; top: 40px; left: 50px; position: absolute; } ``` ```html yes no Here you can vote ``` Upvotes: 1 <issue_comment>username_2: I am not a fan of using `position: absolute;` on anything unless it is absolutely necessary or you know exactly what you're doing. Instead you can wrap your buttons in another container and then make your `.box` a flex-container with direction `column`. like this: ```css .box { background-color: yellow; display: flex; flex-direction: column; } .yes { background-color: green; padding: 10px; } .no { background-color: red; padding: 10px; } ``` ```html yes no Here you can vote ``` fiddle: <https://jsfiddle.net/1yz4gfdv/> Upvotes: 2 <issue_comment>username_3: Without editing your HTML at all you can indeed use CSS to move the buttons below (although editing your HTML would be easiest). Something like this would work: ``` .box { background-color: yellow; overflow: auto; height: 100px; position: relative; } .yes { background-color: green; padding: 10px; } .no { background-color: red; padding: 10px; left: 50px; } button { position: absolute; top: 30px; } ``` Upvotes: 1 <issue_comment>username_4: The first thing, you can use float: right attribute to float 2 buttons "Yes" and "No" to the right. The second thing, you can use margin-top to move down 2 buttons "Yes" and "No" to the bottom of the text. The Third thing, you can use left position to set the width of these two buttons with the left side. The final thing, you can use the position absolute attribute to fix the position of these two buttons. Hope it help, regard! ```css .box { background-color: yellow; } .yes { background-color: green; padding: 10px; float: right; margin-top: 52px; left: 0; position: absolute; } .no { background-color: red; padding: 10px; float: right; margin-top: 52px; left: 40px; position: absolute; } ``` ```html yes no Here you can vote ``` Upvotes: 1
<issue_start>username_0: I am using Pythonnet to call a C# function which returns a clr Object ( an **n** x **m** matrix). In python the type is **System.Object[,]**. How can I convert this variable to a Pandas DataFrame or something more manageable? Thank you.<issue_comment>username_1: At the end the only solution I could come up is to crawl it until getting an IndexError like this: ``` import pandas as pd def ObjectToDataFrame_nx2(obj) ts=pd.DataFrame(columns=['Dim1','Dim2']) i=0 while True: try: dim1=obj[i,0] dim2=obj[i,1] except IndexError: break ts=ts.append({'Dim1': dim1, 'Dim2': dim2},ignore_index=True) i+=1 return(ts) ``` Edit: this is the n x m version ``` def ObjectToDataFrame_nxm(obj): i=0 vvec=[] while True: j=0 vec=[] try: while True: try: vec.append(obj[i,j]) except IndexError: break j+=1 dummy = obj[i,0] vvec.append(vec) except IndexError: break i+=1 return(pd.DataFrame(vvec)) ``` Upvotes: 1 <issue_comment>username_2: ``` pd.DataFrame([[obj[j, i] for j in range(obj.GetLength(1))] for i in range(obj.GetLength(0))]) ``` Upvotes: 3 [selected_answer]
<issue_start>username_0: ``` Ep 12 ``` I want to add another div like "ep" div in span with class "dub" with javascript or Jquery<issue_comment>username_1: ``` var newDiv = document.createElement('div'); newDiv.classList.add('class-name'); document.getElementsByClassName('bar')[0].append(newDiv); ``` Upvotes: -1 [selected_answer]<issue_comment>username_2: I think you can easily find the answer by search it on internet . There are many ways to do this. But I do it with the easiest way. Give your span a id because changes we done to the class will affect all it’s members. So I give ep\_bar as id. ``` Ep 12 ``` Then import jquery. After that, ``` Var div1 = 'Ep 13' $(“#ep_bar”).append(div1); ``` Or else find another way/s [here](https://stackoverflow.com/questions/395525/how-do-i-add-a-dom-element-with-jquery) or [here](https://stackoverflow.com/questions/11749314/using-jquery-to-dynamically-create-div) Upvotes: 2 <issue_comment>username_3: ``` `var $div = $(""); $div.addClass("ep").html("Ep 12"); var $span = $(""); $span.addClass("dub"); $div.appendTo($span);` ``` And then append your span wherever you need Upvotes: 0
<issue_start>username_0: I have a program which executes things asynchronously using a ThreadPoolExecutor. I use CompletableFutures in Java 8 to schedule these tasks and then have them executed by the threads available in the thread pool. My code looks like this: ``` public class ThreadTest { public void print(String m) { System.out.println(m); } public class One implements Callable { public Integer call() throws Exception { print("One..."); Thread.sleep(6000); print("One!!"); return 100; } } public class Two implements Callable { public String call() throws Exception { print("Two..."); Thread.sleep(1000); print("Two!!"); return "Done"; } } @Test public void poolRun() throws InterruptedException, ExecutionException { int n = 3; // Build a fixed number of thread pool ExecutorService pool = Executors.newFixedThreadPool(n); CompletableFuture futureOne = CompletableFuture.runAsync(() -> new One()); // Wait until One finishes it's task. CompletableFuture futureTwo = CompletableFuture.runAsync(() -> new One()); // Wait until Two finishes it's task. CompletableFuture futureTwo = CompletableFuture.runAsync(() -> new Two()); CompletableFuture.allOf(new CompletableFuture[]{futureOne, futureTwo, futureThree}).get(); pool.shutdown(); } } ``` I need to set a timeout on each individual thread, for example to timeout at 10 minutes. I looked into the .get(TimeUnit timeUnit) method for CompletableFuture, but I wasn't sure if that sets a timeout on the thread pool or on the individual thread itself. <https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html#get-long-java.util.concurrent.TimeUnit-> Or should I be changing the way I use the executor service to set timeouts on individual threads? Thanks!<issue_comment>username_1: The `get` method you're referring to only refers to how long you wait for that individual future. By default, even if it times out, the task will continue executing. If you want to halt the thread executing if it times out (so that this thread can then do something else), you will need to `cancel` the task if a `TimeoutException` is caught. For example: ``` try { result = future.get(10, TimeUnit.MINUTES); } catch (TimeoutException e) { future.cancel(true); } ``` Note: This assumes that you have a task that regularly checks for interruptions. IO-bound tasks usually do this, but CPU-bound tasks will regularly need to explicitly check for an interrupted thread in order for the cancellation to take effect. Upvotes: 1 <issue_comment>username_2: CompletableFuture.get does not stop the thread running your task. The calling thread waits as long as you specify for the result, and if it times out it will throw an exception. But the thread running the task will continue until it is done. Here's the underlying reality: Java will not allow you to arbitrarily a terminate a task at any time. There was a time when this was part of the API, there were Thread.suspend/resume/stop methods on the Thread class. These were deprecated because there is no way to know if a suspended or stopped thread was holding locks that could block execution of other threads. So it's inherently unsafe to stop a thread at arbitrary times and places. You end up with deadlock in your program. See here: <https://docs.oracle.com/javase/8/docs/technotes/guides/concurrency/threadPrimitiveDeprecation.html> The same argument applies to any of the pools and executors and other classes you find in the concurrent package. You cannot arbitrarily stop a thread, or a task. 
You must put the logic for stopping and completion into the task itself, or you simply must wait until it is done. In this case you have one that runs for a second and one for 6 seconds. You can use things like mutexes and semaphores, you can use many of the things in the concurrent package and the concurrent.locks package, all of which are useful to coordinate threads and to pass information about where they are. But you will not find a method anywhere that allows you to kill an arbitrary thread at any point in time, except for the ones that were previously deprecated as listed above, and those methods you are encouraged to stay away from. Future.cancel will stop a task from starting, and it will try to interrupt a task that is running, but all it does is stop a thread (by causing it to throw InterruptedException) that is currently blocked on an interruptible method call like Thread.sleep(), Object.wait(), or Condition.await(). If your task is doing anything else it will not stop until it completes or until it calls an interruptible method. This will work on the code above since you are calling Thread.sleep. But once you have your task doing real work, it will behave as I described. Upvotes: 3 [selected_answer]
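One practical wrinkle worth adding: `CompletableFuture.cancel(true)` does not interrupt the worker thread (its Javadoc notes that interrupts are not used to control processing), so for a hard per-task timeout on Java 8 the plain `ExecutorService` future is the simpler tool. A minimal self-contained sketch, with the task body standing in for the question's `One`:

```java
import java.util.concurrent.*;

class TimeoutSketch {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(3);
        // submit() returns a Future whose cancel(true) really interrupts the worker
        Future<Integer> one = pool.submit(() -> { Thread.sleep(6000); return 100; });
        try {
            System.out.println("result=" + one.get(10, TimeUnit.MINUTES));
        } catch (TimeoutException e) {
            one.cancel(true); // Thread.sleep in the task throws InterruptedException
        }
        pool.shutdown();
    }
}
```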
<issue_start>username_0: I've written a function to simulate genetic drift and i'm wanting to loop it over various values of t (number of generations), however whenever I do this I get the following error: ``` locifreq<-runif(49, .4, 0.8) gen<-1:100 for (i in 1:length(gen)){ pop[i]<-lapply(locifreq,wright.fisher,3000,200,gen[i]) } Error in `[<-`(`*tmp*`, i, j, value = rbinom(1, 2 * N, prob = k[i - 1, : subscript out of bounds ``` I think it's because my function isn't able to generate the appropriate matrices or it's can't access the matrix within the list (though I may be completely wrong!), however I'm not sure how to fix this, as each attempt results in an incorrect number of subscripts. The code for my function is below ``` wright.fisher<-function(p,Ne,nsim,t){ N <-Ne/2 NA1 <- 2*N*p NA2 <- 2*N*(1-p) k <- matrix(0, nrow = t, ncol = nsim) k[1,] <- rep(NA1, nsim) for (j in 1:nsim) { for (i in 2:t) { k[i, j] <- rbinom(1, 2*N, prob = k[i-1, j] / (2*N)) } } k <- as.matrix(k/(2*N)) t(k) } ``` Does anyone know how to fix this?<issue_comment>username_1: Your problem is reproduced with this portion of your code: ``` wright.fisher(locifreq[1], 3000, 200, 1) ``` The particular step that causes the subscript out of bounds error is the assignment of `k[i,j]` in line 9 on the first step through the loop. At that point, `i` is 2, `j` is 1. You can typically extend an object by assigning it to an index beyond its range, but you can't do that with an array in a manner that would require changing the number of dimensions. (which is what `k[2,1]` does at that moment, since it's a matrix of 1 row and 200 columns). ``` k <- matrix(1:200, 1) k[2,1] <- 5 # Error in `[<-`(`*tmp*`, 2, 1, value = 5) : subscript out of bounds ``` You can solve this by having your function initialize the matrix to have the correct number of dimensions. Make your changes to line 5 of your function ``` k <- matrix(0, nrow = max(2, t), ncol = nsim) ``` I chose `max(2, t)` because that is what the requirements of your nested for loop impose: `for (i in 2:t)`. It completes, addressing the question in your post, but please confirm that it produces the behavior you want. Additionally, there are a number of not particularly efficient idioms in your code, but that's another question. ### How to Debug in R If you want to know how to debug something like this... I'm thinking of writing a tutorial on how to use the browser (it comes up frequently here). Add a line at the top of your function body, `browser()`. Then when you try to run the browser, it will pull it up in the browser, which will allow you to step through the function one statement at a time. Enter `help` to see how to navigate it. In RStudio, keep on eye on the Environment tab to see what values the variable holds. Notice that you can evaluate any R expression while you're in the browser. This will help you check on what you think is causing problems. Upvotes: 2 [selected_answer]<issue_comment>username_2: Your Wright-Fisher simulation looks a bit complicated, but I don't know what you are simulating, so maybe that is correct. I will use a simpler version and just sample the gene frequency in the next generation given the frequency in this and the effective population size. ``` wf <- function(f, Ne) { rbinom(1, 2*Ne, prob = f) / (2*Ne) } ``` The real issue is the table and how to index into it. 
If `wf` computes one frequency given `f` and `Ne`, and you want `no_sims` simulations over `no_gens` generations, you can build a `no_gens x no_sims` table and fill it up, such that rows correspond to generations and columns correspond to independent simulations. It can look like this: ``` no_sims <- 5 no_gens <- 4 Ne <- 10000 locifreq <- runif(5, .4, 0.8) sims <- matrix(NA, ncol = no_sims, nrow = no_gens) sims[1,] <- locifreq # first generation for (gen in 2:no_gens) { sims[gen,] <- sapply(sims[gen-1,], FUN = wf, Ne = Ne) } ``` The result with these (small) parameters looks like this: ``` > sims [,1] [,2] [,3] [,4] [,5] [1,] 0.5948922 0.7185469 0.6290239 0.4303951 0.5701607 [2,] 0.5987000 0.7108500 0.6254000 0.4270500 0.5754000 [3,] 0.6020000 0.7103500 0.6260000 0.4320500 0.5723500 [4,] 0.5982500 0.7110500 0.6276000 0.4332500 0.5751000 ``` Upvotes: 0
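Since `rbinom` is vectorized over `prob`, the `sapply` can also be dropped entirely. A small untested variant of the snippet above, reusing its `no_gens`, `no_sims`, `locifreq` and `Ne`:

```r
# Vectorized step: draw all no_sims next-generation frequencies at once
wf_step <- function(freqs, Ne) {
  rbinom(length(freqs), 2 * Ne, prob = freqs) / (2 * Ne)
}

sims <- matrix(NA_real_, nrow = no_gens, ncol = no_sims)
sims[1, ] <- locifreq
for (gen in 2:no_gens) {
  sims[gen, ] <- wf_step(sims[gen - 1, ], Ne)
}
```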
<issue_start>username_0: We have an Oracle server in sister-company that is a bottleneck for our SQL Server-based business process. When we use SSIS to query all tables we need, it completes in 5 hours (millions of records, but very few changes). Business wants data refresh slightly faster. How to replicate the data from Oracle to SQL Server in easy way with no third party tools? No admin rights in Oracle for us (only reading data and listing tables/views), but we've got sysadmin in SQL Server (2014 Enterprise).<issue_comment>username_1: Set up a linked server. pull the tables using a proc. Not guaranteed to be faster. Also, have you tried tuning the SSIS package? Set accurate values for commit sizes, maxed out the buffer settings. Tried bulk-data-distributor, etc? Upvotes: 1 <issue_comment>username_2: From Oracle to SQL Server => I would consider using sqlplus to offload data into CSV or delimited format and bcp'ing it into SQL\*Server. Upvotes: 0 <issue_comment>username_3: Try changing the batch size property. I always try 10000 rows but you should experiment. Many users have seen 10-50% improvement. You can also buy an Attunity Connector which can also improve performance considerably. Upvotes: 1 <issue_comment>username_4: We have used [SQL Server Migration Assistant](https://learn.microsoft.com/en-us/sql/ssma/sql-server-migration-assistant?view=sql-server-ver15) which was quite efficient. For the long term, you could consider SSIS to incrementally load your staging database in SQL Server. Upvotes: 0
<issue_start>username_0: I am trying to update the emails of all customers who are located in the 'United States' by referencing the Country table however I keep getting returned an error: Error Code: 1054. Unknown column 'country' in 'IN/ALL/ANY subquery' ``` Update Customer Set email = concat(substr(first_name,1,1), last_name,'@<EMAIL>') where country in(Select country From customer join address using(address_id) join city using(city_id) join country using(country_id) Group by country Having country = 'United States'); ``` Also tried... ``` Update Customer Set email = concat(substr(first_name,1,1), last_name, '@sakilac<EMAIL>er.com.us') where country in(Select country From country where country = 'United States'); ``` I am providing an image of the EER Diagram that can be used as a reference [![EER Diagram](https://i.stack.imgur.com/ma0vv.png)](https://i.stack.imgur.com/ma0vv.png)<issue_comment>username_1: This seems so complicated. How about this? ``` Update Customer c join address a using (address_id) join city ci using (city_id) join country co using (country_id) set c.email = concat(left(c.first_name, 1), clast_name, '@sakilacustomer.com.us') where co.country = 'United States'; ``` Somehow, I think the issue with your query is that `customer` doesn't have `country`, so the error is the `country in` rather than inside the subquery. The error message itself is rather confusing, though. Upvotes: 1 <issue_comment>username_2: Try something like this: ``` Update a set a.email = concat(substr(first_name,1,1), last_name,'@<EMAIL>') from customer a join (select * from customer a join address b on a.address_id=b.address_id join city c on a.city_id=c.city_id join country d on a.country_id=d.country_id where country='united states' ) b on a.customer_id=b.customer_id ``` Upvotes: 0 <issue_comment>username_3: ``` update Customer set email = concat(substr(first_name,1,1), last_name, '@<EMAIL>') where customer_id in ( select customer_id from customer join address using(address_id) join city using(city_id) join country using(country_id) where country = 'United States' ); ``` Upvotes: 0
<issue_start>username_0: I have to do a file comparison but I want you to exclude the comments for now it gives me a result like this. At the moment I am using: `diff -b -B [patch] [patch]` which gives me: ``` < # < # WeblogicHost bgri.wls.ri < # WeblogicPort 20015 < # SetHandler weblogic-handler < # < 45c39,43 ```<issue_comment>username_1: This seems so complicated. How about this? ``` Update Customer c join address a using (address_id) join city ci using (city_id) join country co using (country_id) set c.email = concat(left(c.first_name, 1), clast_name, '@sakilacustomer.com.us') where co.country = 'United States'; ``` Somehow, I think the issue with your query is that `customer` doesn't have `country`, so the error is the `country in` rather than inside the subquery. The error message itself is rather confusing, though. Upvotes: 1 <issue_comment>username_2: Try something like this: ``` Update a set a.email = concat(substr(first_name,1,1), last_name,'@<EMAIL>') from customer a join (select * from customer a join address b on a.address_id=b.address_id join city c on a.city_id=c.city_id join country d on a.country_id=d.country_id where country='united states' ) b on a.customer_id=b.customer_id ``` Upvotes: 0 <issue_comment>username_3: ``` update Customer set email = concat(substr(first_name,1,1), last_name, '@sakilac<EMAIL>er.com.us') where customer_id in ( select customer_id from customer join address using(address_id) join city using(city_id) join country using(country_id) where country = 'United States' ); ``` Upvotes: 0
<issue_start>username_0: I need to get day of year (day1 is 1rst of january), week of year, and month of year from a dart DateTime object. I did not find any available library for this. Any idea ?<issue_comment>username_1: Day of year ``` final date = someDate; final diff = now.difference(new DateTime(date.year, 1, 1, 0, 0)); final diffInDays = diff.inDays; ``` Week of year ``` final date = someDate; final startOfYear = new DateTime(date.year, 1, 1, 0, 0); final firstMonday = startOfYear.weekday; final daysInFirstWeek = 8 - firstMonday; final diff = date.difference(startOfYear); var weeks = ((diff.inDays - daysInFirstWeek) / 7).ceil(); // It might differ how you want to treat the first week if(daysInFirstWeek > 3) { weeks += 1; } ``` Month of year ``` final monthOfYear = new DateTime.now().month; ``` Caution: That's not battle-tested code. Upvotes: 4 <issue_comment>username_2: This is my implementation of ISO 8601 Week of Year in Dart: ``` int getWeekOfYear(DateTime date) { final weekYearStartDate = getWeekYearStartDateForDate(date); final dayDiff = date.difference(weekYearStartDate).inDays; return ((dayDiff + 1) / 7).ceil(); } DateTime getWeekYearStartDateForDate(DateTime date) { int weekYear = getWeekYear(date); return getWeekYearStartDate(weekYear); } int getWeekYear(DateTime date) { assert(date.isUtc); final weekYearStartDate = getWeekYearStartDate(date.year); // in previous week year? if(weekYearStartDate.isAfter(date)) { return date.year - 1; } // in next week year? final nextWeekYearStartDate = getWeekYearStartDate(date.year + 1); if(isBeforeOrEqual(nextWeekYearStartDate, date)) { return date.year + 1; } return date.year; } DateTime getWeekYearStartDate(int year) { final firstDayOfYear = DateTime.utc(year, 1, 1); final dayOfWeek = firstDayOfYear.weekday; if(dayOfWeek <= DateTime.thursday) { return addDays(firstDayOfYear, 1 - dayOfWeek); } else { return addDays(firstDayOfYear, 8 - dayOfWeek); } } ``` Note that the "week year" is not always the calendar year, it could also be the one before or after: ``` void printWeekOfYear(DateTime date) { print('week ${getWeekOfYear(date)} in year ${getWeekYear(date)}'); } printWeekOfYear(DateTime.utc(2017, 1, 1)); // --> week 52 in year 2016 printWeekOfYear(DateTime.utc(2019, 12, 31)); // --> week 1 in year 2020 ``` Upvotes: 3 <issue_comment>username_3: [ORIGINAL ANSWER - Please scroll below to the updated answer, which has an updated calculation] Week of year: ``` /// Calculates week number from a date as per https://en.wikipedia.org/wiki/ISO_week_date#Calculation int weekNumber(DateTime date) { int dayOfYear = int.parse(DateFormat("D").format(date)); return ((dayOfYear - date.weekday + 10) / 7).floor(); } ``` The rest is available through [DateFormat](https://pub.dartlang.org/documentation/intl/latest/intl/DateFormat-class.html) (part of the [intl package](https://pub.dartlang.org/packages/intl)). [UPDATED ANSWER] As pointed out by <NAME> in a comment, the original answer did not include the necessary correction for certain dates. Here is a full implementation of the ISO week date calculation. 
``` /// Calculates number of weeks for a given year as per https://en.wikipedia.org/wiki/ISO_week_date#Weeks_per_year int numOfWeeks(int year) { DateTime dec28 = DateTime(year, 12, 28); int dayOfDec28 = int.parse(DateFormat("D").format(dec28)); return ((dayOfDec28 - dec28.weekday + 10) / 7).floor(); } /// Calculates week number from a date as per https://en.wikipedia.org/wiki/ISO_week_date#Calculation int weekNumber(DateTime date) { int dayOfYear = int.parse(DateFormat("D").format(date)); int woy = ((dayOfYear - date.weekday + 10) / 7).floor(); if (woy < 1) { woy = numOfWeeks(date.year - 1); } else if (woy > numOfWeeks(date.year)) { woy = 1; } return woy; } ``` Upvotes: 5 <issue_comment>username_4: I wrote another solution based on your answers, it seem to work fine, but please feel free to give me feedback if you see a problem: ``` class DateUtils { static int currentWeek() { return weekOfYear(DateTime.now()); } static int weekOfYear(DateTime date) { DateTime monday = weekStart(date); DateTime first = weekYearStartDate(monday.year); int week = 1 + (monday.difference(first).inDays / 7).floor(); if (week == 53 && DateTime(monday.year, 12, 31).weekday < 4) week = 1; return week; } static DateTime weekStart(DateTime date) { // This is ugly, but to avoid problems with daylight saving DateTime monday = DateTime.utc(date.year, date.month, date.day); monday = monday.subtract(Duration(days: monday.weekday - 1)); return monday; } static DateTime weekEnd(DateTime date) { // This is ugly, but to avoid problems with daylight saving // Set the last microsecond to really be the end of the week DateTime sunday = DateTime.utc(date.year, date.month, date.day, 23, 59, 59, 999, 999999); sunday = sunday.add(Duration(days: 7 - sunday.weekday)); return sunday; } static DateTime weekYearStartDate(int year) { final firstDayOfYear = DateTime.utc(year, 1, 1); final dayOfWeek = firstDayOfYear.weekday; return firstDayOfYear.add(Duration(days: (dayOfWeek <= DateTime.thursday ? 1 : 8) - dayOfWeek)); } } ``` Upvotes: 2 <issue_comment>username_5: Try this really simple dart package, [Jiffy](https://pub.dev/packages/jiffy). The code below will help To get date day of year ``` // This will return the day of year from now Jiffy.now().dayOfYear; // 295 // You can also pass in a dateTime object Jiffy.parseFromDateTime(DateTime(2019, 1, 3)).dayOfYear; // 3 ``` To get week of year ``` Jiffy.now().week; // 43 // You can also pass in an Array or Map Jiffy.parseFromList([2019, 1, 3]).week; // 1 ``` To get month of year ``` Jiffy.now().month; // 10 Jiffy.parseFromMap({ "year": 2019, "month": 1, "day": 3 }).month; // 1 ``` Hope this answer helps Upvotes: 4 <issue_comment>username_6: Number Week according to ISO 8601 ================================= ``` int isoWeekNumber(DateTime date) { int daysToAdd = DateTime.thursday - date.weekday; DateTime thursdayDate = daysToAdd > 0 ? 
date.add(Duration(days: daysToAdd)) : date.subtract(Duration(days: daysToAdd.abs())); int dayOfYearThursday = dayOfYear(thursdayDate); return 1 + ((dayOfYearThursday - 1) / 7).floor(); } int dayOfYear(DateTime date) { return date.difference(DateTime(date.year, 1, 1)).inDays; } ``` Upvotes: 3 <issue_comment>username_7: Dart SDK 2.8.4 and later: **day of the year**, with no packages: ``` void main(){ final now = new DateTime.now(); final todayInDays = now.difference(new DateTime(now.year,1,1,0,0)).inDays; //return 157 } ``` reference (official)> [inDays, from Dart Official documentation](https://api.dart.dev/stable/2.8.4/dart-core/Duration/inDays.html) Upvotes: 3 <issue_comment>username_8: ``` getWeekOfYear(){ DateTime _kita=DateTime.now(); int d=DateTime.parse("${_kita.year}-01-01").millisecondsSinceEpoch; int t= _kita.millisecondsSinceEpoch; double daydiff= (t- d)/(1000 * (3600 * 24)); double week= daydiff/7; return(week.ceil()); } ``` Tested and working; you do not need any package. Upvotes: 0 <issue_comment>username_9: This calculation works for me. ``` int dayOfWeek({DateTime date}) { if (date == null) date = DateTime.now(); int w = ((dayOfYear(date) - date.weekday + 10) / 7).floor(); if (w == 0) { w = getYearsWeekCount(date.year-1); } else if (w == 53) { DateTime lastDay = DateTime(date.year, DateTime.december, 31); if (lastDay.weekday < DateTime.thursday) { w = 1; } } return w; } int getYearsWeekCount(int year) { DateTime lastDay = DateTime(year, DateTime.december, 31); int count = dayOfWeek(date: lastDay); if (count == 1) count = dayOfWeek(date: lastDay.subtract(Duration(days: 7))); return count; } int dayOfYear(DateTime date) { int total = 0; for (int i = 1; i < date.month; i++) { total += getDayOfMonth(date.year, i); } total+=date.day; return total; } int getDayOfMonth(int year, int month) { final List days = [0, 31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]; if (year % 4 == 0) days[DateTime.february]++; return days[month]; } ``` Upvotes: 0 <issue_comment>username_10: The previously most-voted solution does not work when the year changes: December 2020, for example, has a week 53, but moving into January 2021 that solution computed 0 instead of 53. So I wrote a DateTime extension to cover the year change.
``` int get weekNumber { if (weekday > DateTime.thursday) { int toSubstract = weekday - DateTime.thursday; DateTime thursday = subtract(Duration(days: toSubstract)); if (thursday.year != year) { return thursday.weekNumber; } } int dayOfYear = int.parse(format('D')); return ((dayOfYear - weekday + 10) / 7).floor(); } ``` Upvotes: 0 <issue_comment>username_11: @username_3's correct answer, but as a DateTime extension: ``` extension DateTimeExt on DateTime { /// Calculates week number from a date as per https://en.wikipedia.org/wiki/ISO_week_date#Calculation int get weekNumber { int dayOfYear = int.parse(DateFormat("D").format(this)); int woy = ((dayOfYear - weekday + 10) / 7).floor(); if (woy < 1) { woy = _numOfWeeks(year - 1); } else if (woy > _numOfWeeks(year)) { woy = 1; } return woy; } /// Calculates number of weeks for a given year as per https://en.wikipedia.org/wiki/ISO_week_date#Weeks_per_year int _numOfWeeks(int year) { DateTime dec28 = DateTime(year, 12, 28); int dayOfDec28 = int.parse(DateFormat("D").format(dec28)); return ((dayOfDec28 - dec28.weekday + 10) / 7).floor(); } } ``` Upvotes: 0 <issue_comment>username_12: ``` static int getWeekNumber(DateTime datetime) { var day1 = DateTime(datetime.year); DateTime firstMonday; switch (day1.weekday) { case 1: // mon firstMonday = day1; break; case 2: // tue firstMonday = day1.add(const Duration(days: 6)); break; case 3: // wed firstMonday = day1.add(const Duration(days: 5)); break; case 4: // thu firstMonday = day1.add(const Duration(days: 4)); break; case 5: // fri firstMonday = day1.add(const Duration(days: 3)); break; case 6: // sat firstMonday = day1.add(const Duration(days: 2)); break; case 7: // sun firstMonday = day1.add(const Duration(days: 1)); break; default: firstMonday = day1; } Duration sinceStartOfYear = datetime.difference(firstMonday); double weekNo = (sinceStartOfYear.inDays / 7); var no = weekNo.floor(); return no + 1; } ``` My fully tested method. Upvotes: 0
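A quick sanity check of the ISO edge cases discussed in this thread, using username_3's updated `weekNumber` (assumes `package:intl` is imported for `DateFormat`):

```dart
void main() {
  print(weekNumber(DateTime(2017, 1, 1)));   // 52 -> belongs to 2016's last week
  print(weekNumber(DateTime(2019, 12, 31))); // 1  -> belongs to 2020's week 1
  print(weekNumber(DateTime(2018, 3, 20)));  // 12
}
```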
<issue_start>username_0: ``` void Bst_DeleteStudent(struct BstStudent** root, char student_name[]){ struct BstStudent* current = *root; struct BstStudent* parent = NULL; int flag = 0; int i; while(current != NULL){ if(strcmp(current->name, student_name) > 0){ parent = current; current = current->left; } else if(strcmp(current->name, student_name) < 0){ parent = current; current = current->right; } else{ flag = 1; //If node has no children if(current->left == NULL && current->right == NULL){ if(parent->left == current){ parent->left = NULL; } else{ parent->right = NULL; } free(current); return; } //If current has one child else if((current->left == NULL && current->right != NULL) || (current->left != NULL && current->right == NULL)){ //If node has a right child if(current->right != NULL && current->left != NULL){ if(parent->right == current){ parent->right = current->right; } else if(parent->left == current){ parent->left = current->right; } } //If node has a left child else if(current->left != NULL && current->right == NULL){ if(parent->right == current){ parent->right = current->left; } else if(parent->left == current){ parent->left = current->left; } } free(current); return; } //If current has two children else{ struct BstStudent* swap_this = current->right; struct BstStudent* swap_this_prev = current; while(swap_this->left != NULL){ swap_this_prev = swap_this; swap_this = swap_this->left; } strcpy(current->name, swap_this->name); current->id = swap_this->id; for(i=0; i<5; i++){ current->marks[i] = swap_this->marks[i]; } if(swap_this_prev->left == swap_this){ swap_this_prev->left = swap_this->right; } else if(swap_this_prev->right == swap_this){ swap_this_prev->right = swap_this->right; } free(swap_this); return; } } } if(flag == 1){ printf("\nStudent named '%s' removed\n", student_name); } else{ printf("\nNo student named '%s' is found in the list!\n", student_name); } } ``` Hi guys, I'm currently want to make a delete function for a binary search tree implementation which sorts the nodes based on names, alphabetically. My code works perfectly fine can delete most of the time. The code only gives a segmentation fault in a specific case when I want to delete the root node and the root node has only one child or no children. Every other deletion works. Can you guys please help me?<issue_comment>username_1: You are trying to access the left/right node of NULL(parent), in the case of deletion of root node. Add a condition before accessing parent whether parent is NULL or not if parent is not NULL then only assign the value to its node pointer. For example ``` if(parent != NULL) { if(parent->left == current){ parent->left = NULL; } else{ parent->right = NULL; } } ``` Add the same condition in other parts of code also. 
Upvotes: 0 <issue_comment>username_2: ``` #include #include #include #include struct BstStudent{ char name[50]; int id; float marks[5]; struct BstStudent\* left; struct BstStudent\* right; }; void Bst\_IntroduceStudent(struct BstStudent\*\* root, char student\_name[], int student\_id){ struct BstStudent\* new\_student = (struct BstStudent\*)malloc(sizeof(struct BstStudent)); struct BstStudent\* current = \*root; struct BstStudent\* previous = NULL; int i; strcpy(new\_student->name, student\_name); new\_student->id = student\_id; new\_student->left = NULL; new\_student->right = NULL; for(i=0; i<5; i++){ new\_student->marks[i] = 0; } //Check if the tree is empty if(\*root == NULL){ \*root = new\_student; } else{ //If not empty, go through the tree until we find the right spot for the student while(current != NULL){ if(strcmp(current->name, new\_student->name) > 0){ previous = current; current = current->left; } else if(strcmp(current->name, new\_student->name) < 0){ previous = current; current = current->right; } else if(strcmp(current->name, new\_student->name) == 0){ printf("\n\*\* A student with that name already exists! \*\*\n"); free(new\_student); return; } } //If we found the right node after which we want to place the student, decide if place right or left if(strcmp(previous->name, new\_student->name) > 0){ previous->left = new\_student; } else{ previous->right = new\_student; } } } void Bst\_DeleteStudent(struct BstStudent\*\* root, char student\_name[]){ struct BstStudent\* current = \*root; struct BstStudent\* parent = NULL; int flag = 0; int i; while(current != NULL){ if(strcmp(current->name, student\_name) > 0){ parent = current; current = current->left; } else if(strcmp(current->name, student\_name) < 0){ parent = current; current = current->right; } else{ flag = 1; //If node has no children if(current->left == NULL && current->right == NULL){ if(parent->left == current){ parent->left = NULL; } else{ parent->right = NULL; } free(current); return; } //If current has one child else if((current->left == NULL && current->right != NULL) || (current->left != NULL && current->right == NULL)){ //If node has a right child if(current->right != NULL && current->left != NULL){ if(parent->right == current){ parent->right = current->right; } else if(parent->left == current){ parent->left = current->right; } } //If node has a left child else if(current->left != NULL && current->right == NULL){ if(parent->right == current){ parent->right = current->left; } else if(parent->left == current){ parent->left = current->left; } } free(current); return; } //If current has two children else{ struct BstStudent\* swap\_this = current->right; struct BstStudent\* swap\_this\_prev = current; while(swap\_this->left != NULL){ swap\_this\_prev = swap\_this; swap\_this = swap\_this->left; } strcpy(current->name, swap\_this->name); current->id = swap\_this->id; for(i=0; i<5; i++){ current->marks[i] = swap\_this->marks[i]; } if(swap\_this\_prev->left == swap\_this){ swap\_this\_prev->left = swap\_this->right; } else if(swap\_this\_prev->right == swap\_this){ swap\_this\_prev->right = swap\_this->right; } free(swap\_this); return; } } } if(flag == 1){ printf("\nStudent named '%s' removed\n", student\_name); } else{ printf("\nNo student named '%s' is found in the list!\n", student\_name); } } void Bst\_Marks(struct BstStudent \*student){ printf("Insert the student marks!\n"); //Declaring variables for looping and inserting marks int i; float mark; //Loop through each module (element) in the marks array and 
inserting a mark for( i=0; i<5; i++){ printf("Insert the mark for the %d module!\n",i+1); scanf("%f",&mark); student->marks[i] = mark; } } void Bst\_IntroMarks(struct BstStudent\* root, char student\_name[]){ struct BstStudent\* current = root; int flag = 0; while(current != NULL){ if(strcmp(current->name, student\_name) > 0){ current = current->left; } else if(strcmp(current->name, student\_name) < 0){ current = current->right; } else{ Bst\_Marks(current); flag = 1; break; } } if(flag == 0){ printf("\nThere is no student named: %s\n", student\_name); } } void Bst\_SearchPrint(struct BstStudent\* root, char student\_name[]){ struct BstStudent\* current = root; int i, flag = 0; while(current != NULL){ if(strcmp(current->name, student\_name) > 0){ current = current->left; } else if(strcmp(current->name, student\_name) < 0){ current = current->right; } else{ printf("\n----------------\n"); printf("Name: %s\n", current->name); printf("Student ID: %d\n", current->id); for(i=0; i<5; i++){ printf("Module %d: %f\n", i+1, current->marks[i]); } flag = 1; break; } } if(flag == 0){ printf("\nThere is no student named: %s\n", student\_name); } } void Bst\_PrintAll(struct BstStudent\*\* root){ struct BstStudent\* temp = \*root; int i; if(temp == NULL){ return; } else{ Bst\_PrintAll(&temp->left); printf("\n----------------\n"); printf("Name: %s\n", temp->name); printf("Student ID: %d\n", temp->id); for(i=0; i<5; i++){ printf("Module %d: %f\n", i+1, temp->marks[i]); } Bst\_PrintAll(&temp->right); } } void leftRotateBinary(struct BstStudent\*\* current){ struct BstStudent\* temp; struct BstStudent\* original; struct BstStudent\* right; if(\*current == NULL || (\*current)->right == NULL){ return; } original = \*current; right = original->right; temp = (struct BstStudent\*)malloc(sizeof(struct BstStudent)); int i; strcpy(temp->name, original->name); temp->id = original->id; for(i=0; i<5; i++){ temp->marks[i] = original->marks[i]; } strcpy(original->name,right->name); original->id = right->id; for(i=0; i<5; i++){ original->marks[i] = right->marks[i]; } temp->right = right->left; temp->left = original->left; original->right = right->right; original->left = temp; free(right); } void rightRotateBinary(struct BstStudent\*\* current){ struct BstStudent\* temp; struct BstStudent\* original; struct BstStudent \*left; if(\*current == NULL || (\*current)->left == NULL){ return; } original = \*current; left = original->left; temp = (struct BstStudent\*)malloc(sizeof(struct BstStudent)); int i; strcpy(temp->name, original->name); temp->id = original->id; for(i=0; i<5; i++){ temp->marks[i] = original->marks[i]; } strcpy(original->name, left->name); original->id = left->id; for(i=0; i<5; i++){ original->marks[i] = left->marks[i]; } temp->left = left->right; temp->right = original->right; original->left = left->left; original->right = temp; free(left); } void balanceBinary(struct BstStudent \*\*root){ struct BstStudent\* current = \*root; int expected, i, odd\_node; int num\_nodes = 0; while(current != NULL){ while(current->left != NULL){ rightRotateBinary(&current); } current = current->right; num\_nodes++; } expected = num\_nodes - (pow(2,(floor(log2(num\_nodes+1)))) - 1); current = \*root; for(i=0; iright; } current = \*root; num\_nodes = num\_nodes - expected; odd\_node = (num\_nodes+1)/2; while(odd\_node > 1){ leftRotateBinary(&(\*root)); for(i=0; i<(odd\_node-1); i++){ leftRotateBinary(&(current->right)); current = current->right; } odd\_node = (odd\_node+1)/2; } } int main(){ //Pointer to root node initially points 
to empty tree struct BstStudent\* rootPtr = NULL; int user\_choice; char new\_name[20], new\_name2[20], marks\_name[20], report\_name[20], delete\_name[20]; int new\_ID, new\_ID2; //Keep displaying the menu until the user decides to quit the program do{ //Main menu printf("\nManage data for students: (Type an option and press ENTER)\n"); printf("1) Introduce new student:\n"); printf("2) Remove student:\n"); printf("3) Introduce marks for a student:\n"); printf("4) Print report for a student:\n"); printf("5) Print report for all students:\n"); printf("6) Save to a file:\n"); printf("7) Retrieve data from a file:\n"); printf("8) Quit\n\n"); //Ask the user to choose from the menu options above scanf("%d", &user\_choice); switch(user\_choice){ case 1: //Ask the user for the name and ID of student he wants to introduce printf("Insert the name of new student: \n"); scanf("%s", new\_name); printf("Insert the id of new student: \n"); scanf("%d", &new\_ID); Bst\_IntroduceStudent(&rootPtr, new\_name, new\_ID); balanceBinary(&rootPtr); break; case 2: printf("Insert the name of student you want to remove: \n"); scanf("%s", delete\_name); Bst\_DeleteStudent(&rootPtr, delete\_name); balanceBinary(&rootPtr); break; case 3: printf("Insert the ID of the student you want to introduce marks for!\n"); scanf("%s", marks\_name); //Insert the marks Bst\_IntroMarks(rootPtr, marks\_name); break; break; case 4: //Ask the user which student's report want to be printed printf("Insert the ID of the student you want to print a report!\n"); scanf("%s", report\_name); //Print the report for that student Bst\_SearchPrint(rootPtr, report\_name); break; case 5: printf("Print report of all students:\n\n"); Bst\_PrintAll(&rootPtr); break; case 6: break; case 7: break; case 8: //Quit the program printf("\nProgram ended!\n"); return 0; default: break; } } while(user\_choice!= 8); return 0; } ``` Here is my entire code, I didnt want to put it bc its really long, but if this can give you guys better context for helping me out, I would be really really glad Upvotes: 1
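The crash itself comes from dereferencing `parent` while it is still `NULL`, i.e. when the matched node is the root with zero or one child (with two children the code copies the successor's data instead of relinking, so `parent` is never touched). A hedged sketch of the missing guard, to be placed right after the `flag = 1;` line in `Bst_DeleteStudent`:

```c
/* Root with zero or one child: update *root instead of touching parent */
if (parent == NULL && (current->left == NULL || current->right == NULL)) {
    /* becomes NULL automatically when the root is a leaf */
    *root = (current->left != NULL) ? current->left : current->right;
    free(current);
    return;
}
```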
<issue_start>username_0: I am trying to create a package in Oracle, but i am getting the following errors: > > > ``` > PLS-00323 > PLS-00371 > > ``` > > The package code is: ``` CREATE OR REPLACE PACKAGE mahalanobis_distance_package AS max_d NUMBER; TYPE tbnumber IS TABLE OF NUMBER INDEX BY PLS_INTEGER; FUNCTION rel_mahalanobis_distance_aux_3 ( a NUMBER, b1 NUMBER, b2 NUMBER, b3 NUMBER ) RETURN tbnumber; END mahalanobis_distance_package; / CREATE OR REPLACE PACKAGE BODY mahalanobis_distance_package AS max_d NUMBER; TYPE tbnumber IS TABLE OF NUMBER INDEX BY PLS_INTEGER; FUNCTION rel_mahalanobis_distance_aux_3 ( a NUMBER, b1 NUMBER, b2 NUMBER, b3 NUMBER ) RETURN tbnumber IS tbnumber_obj tbnumber; BEGIN max_d := 0; FOR j IN 1..3 LOOP tbnumber_obj(j) := 0; END LOOP; tbnumber_obj(1) := abs(a - b1); IF tbnumber_obj(1) > max_d THEN max_d := tbnumber_obj(1); END IF; tbnumber_obj(2) := abs(a - b2); IF tbnumber_obj(2) > max_d THEN max_d := tbnumber_obj(2); END IF; tbnumber_obj(3) := abs(a - b3); IF tbnumber_obj(3) > max_d THEN max_d := tbnumber_obj(3); END IF; RETURN tbnumber_obj; END rel_mahalanobis_distance_aux_3; END mahalanobis_distance_package; / ``` Why am I getting these errors? Thanks in advance!<issue_comment>username_1: There is no need for package variable/type declaration inside package body: ``` CREATE OR REPLACE PACKAGE BODY mahalanobis_distance_package AS --max_d NUMBER; --TYPE tbNumber IS TABLE OF NUMBER INDEX BY PLS_INTEGER; function rel_mahalanobis_distance_aux_3( A NUMBER, B1 NUMBER,B2 NUMBER,B3 NUMBER) RETURN tbNumber IS tbNumber_obj tbNumber; BEGIN max_d := 0; FOR j IN 1..3 LOOP tbNumber_obj(j) := 0; END LOOP; tbNumber_obj(1) := ABS(A - B1); IF tbNumber_obj(1) > max_d THEN max_d := tbNumber_obj(1); END IF; tbNumber_obj(2) := ABS(A - B2); IF tbNumber_obj(2) > max_d THEN max_d := tbNumber_obj(2); END IF; tbNumber_obj(3) := ABS(A - B3); IF tbNumber_obj(3) > max_d THEN max_d := tbNumber_obj(3); END IF; return tbNumber_obj; END rel_mahalanobis_distance_aux_3; END mahalanobis_distance_package; / ``` **[DBFiddle Demo](http://dbfiddle.uk/?rdbms=oracle_11.2&fiddle=31eff91a4a60aa20d7eafb771aae40a3)** Upvotes: 2 [selected_answer]<issue_comment>username_2: You already declared the package variable and the TYPE tbnumber in the package spec. Remove those declarations from the body, and it will compile just fine. Although you'll get compiler warnings (if enabled) ``` Warning(2,11): PLW-06026: package specification exposes global variable ``` Here's your code ``` create or replace PACKAGE mahalanobis_distance_package AS max_d NUMBER; TYPE tbnumber IS TABLE OF NUMBER INDEX BY PLS_INTEGER; FUNCTION rel_mahalanobis_distance_aux_3 ( a NUMBER, b1 NUMBER, b2 NUMBER, b3 NUMBER ) RETURN tbnumber; END mahalanobis_distance_package; / create or replace PACKAGE BODY mahalanobis_distance_package AS FUNCTION rel_mahalanobis_distance_aux_3 ( a NUMBER, b1 NUMBER, b2 NUMBER, b3 NUMBER ) RETURN tbnumber IS tbnumber_obj tbnumber; BEGIN max_d := 0; FOR j IN 1..3 LOOP tbnumber_obj(j) := 0; END LOOP; tbnumber_obj(1) := abs(a - b1); IF tbnumber_obj(1) > max_d THEN max_d := tbnumber_obj(1); END IF; tbnumber_obj(2) := abs(a - b2); IF tbnumber_obj(2) > max_d THEN max_d := tbnumber_obj(2); END IF; tbnumber_obj(3) := abs(a - b3); IF tbnumber_obj(3) > max_d THEN max_d := tbnumber_obj(3); END IF; RETURN tbnumber_obj; END rel_mahalanobis_distance_aux_3; END mahalanobis_distance_package; / ``` Upvotes: 0
2018/03/20
3,180
10,052
<issue_start>username_0: I have been working on a hangman game in the console with JavaScript and I can't seem to figure out how to get it to randomly select past the first 2 categories. I know it must be an easy solution but I can't seem to figure it out. This is my start game function where the problem is: ``` function start () { player.guessedLetters = [] player.strikes = 0 player.maxStrikes = 3 player.display = [] player.status = true displayIn = [] const game = Math.floor(Math.random() * (categories.length)) if (game === 0) { console.log('The category is Easy Words') const selectEasyWords = easyWords[Math.floor(Math.random() * (6))] player.display = selectEasyWords for (let i = 0; i < selectEasyWords.length; i++) { if (selectEasyWords[i] === ' ') { displayIn.push(' ') } else { displayIn.push('_') } } return `${displayIn.join(' ')}` } else if (game === 1) { console.log('The category is Medium Words') const selectMediumWords = mediumWords[Math.floor(Math.random() * (6))] player.display = selectMediumWords for (let i = 0; i < selectMediumWords.length; i++) { if (selectMediumWords[i] === ' ') { displayIn.push(' ') } else { displayIn.push('_') } } return `${displayIn.join(' ')}` } else if (game === 1) { console.log('The category is Hard Words') const selectHardWords = hardWords[Math.floor(Math.random() * (6))] player.display = selectHardWords for (let i = 0; i < selectHardWords.length; i++) { if (selectHardWords[i] === ' ') { displayIn.push(' ') } else { displayIn.push('_') } } return `${displayIn.join(' ')}` } else if (game === 1) { console.log('The category is Extreme Words') const selectExtremeWords = extremeWords[Math.floor(Math.random() * (2))] player.display = selectExtremeWords for (let i = 0; i < selectExtremeWords.length; i++) { if (selectExtremeWords[i] === ' ') { displayIn.push(' ') } else { displayIn.push('_') } } return `${displayIn.join(' ')}` } else { return `Where did you go wrong` } } ``` Here is the rest of my code for context. ``` console.log('HANGMAN\nTry to solve the puzzle by guessing letters using guess(letter).\nIf you miss a letter you get a strike.\nGet 3 strikes and you lose the game.\nTo select difficulty, type difficulty(difficulty).\nDifficulties:\nEasy\nMedium\nHard\nExtreme\nTo start game type start().') const player = { guessedLetters: [], strikes: 0, maxStrikes: 3, display: [], status: false } const easyWords = [ 'DOG', 'CAT', 'HELLO', 'FISH', 'RED', 'FOOD' ] const mediumWords = [ 'I LIKE THE COLOR PINK', 'MY FISHES NAME IS BEN', 'THE GREATEST SHOWMAN IS THE BEST MOVIE', 'OK GOOGLE HOW TO PASS IMD', 'I WORK AT LANDMARK CINEMAS', 'LEGO BATMAN IS THE ONLY GOOD BATMAN MOVIE' ] const hardWords = [ 'THIS IS AN EXAMPLE OF A HARDER PHRASE THIS PROJECT IS SO HARD', 'IVE BEEN STARING AT THIS PROJECT FOR 4 HOURS TODAY I DONT KNOW IF I CAN DO THIS ANYMORE', 'I REALLY MISS MY DOG HER NAME IS CASSY AND SHES A SHIH TZU AND BARKS A LOT', 'MY FAVOURITE SONG IS CALLED MASTERPIECE THEATRE PART 3 BY <NAME>', 'I BOUGHT THE HEDLEY TICKETS 5 MONTHS BEFORE THE ALLEGATIONS CAME OUT', 'CAN SOMEONE PLEASE HELP ME WITH THIS PROJECT OH MY GOD' ] const extremeWords = [ 'LOREM IPSUM DOLOR SIT AMET, CONSECTETUR ADIPISCING ELIT, SED DO EIUSMOD TEMPOR INCIDIDUNT UT LABORE ET DOLORE MAGNA ALIQUA. UT ENIM AD MINIM VENIAM, QUIS NOSTRUD EXERCITATION ULLAMCO LABORIS NISI UT ALIQUIP EX EA COMMODO CONSEQUAT. DUIS AUTE IRURE DOLOR IN REPREHENDERIT IN VOLUPTATE VELIT ESSE CILLUM DOLORE EU FUGIAT NULLA PARIATUR. 
EXCEPTEUR SINT OCCAECAT CUPIDATAT NON PROIDENT, SUNT IN CULPA QUI OFFICIA DESERUNT MOLLIT ANIM ID EST LABORUM', 'According to all known laws of aviation, there is no way a bee should be able to fly. Its wings are too small to get its fat little body off the ground. The bee, of course, flies anyway because bees don`t care what humans think is impossible. Yellow, black. Yellow, black. Yellow, black. Yellow, black.' ] const categories = ['Easy Words', 'Medium Words', 'Hard Words', 'Extreme Words'] let displayIn = [] function start () { player.guessedLetters = [] player.strikes = 0 player.maxStrikes = 3 player.display = [] player.status = true displayIn = [] const game = Math.floor(Math.random() * (categories.length)) if (game === 0) { console.log('The category is Easy Words') const selectEasyWords = easyWords[Math.floor(Math.random() * (6))] player.display = selectEasyWords for (let i = 0; i < selectEasyWords.length; i++) { if (selectEasyWords[i] === ' ') { displayIn.push(' ') } else { displayIn.push('_') } } return `${displayIn.join(' ')}` } else if (game === 1) { console.log('The category is Medium Words') const selectMediumWords = mediumWords[Math.floor(Math.random() * (6))] player.display = selectMediumWords for (let i = 0; i < selectMediumWords.length; i++) { if (selectMediumWords[i] === ' ') { displayIn.push(' ') } else { displayIn.push('_') } } return `${displayIn.join(' ')}` } else if (game === 1) { console.log('The category is Hard Words') const selectHardWords = hardWords[Math.floor(Math.random() * (6))] player.display = selectHardWords for (let i = 0; i < selectHardWords.length; i++) { if (selectHardWords[i] === ' ') { displayIn.push(' ') } else { displayIn.push('_') } } return `${displayIn.join(' ')}` } else if (game === 1) { console.log('The category is Extreme Words') const selectExtremeWords = extremeWords[Math.floor(Math.random() * (2))] player.display = selectExtremeWords for (let i = 0; i < selectExtremeWords.length; i++) { if (selectExtremeWords[i] === ' ') { displayIn.push(' ') } else { displayIn.push('_') } } return `${displayIn.join(' ')}` } else { return `Where did you go wrong` } } function guess (letter) { if (player.status) { if (displayIn.indexOf('_') !== -1) { if (player.strikes < player.maxStrikes) { const value = letter.toUpperCase() player.guessedLetters.push(value) const arrayPuzzle = player.display if (arrayPuzzle.indexOf(value) !== -1) { for (let d = arrayPuzzle.indexOf(value); d < player.display.length; d++) { if (arrayPuzzle.indexOf(value) !== -1 && arrayPuzzle[d] === value) { displayIn[d] = value } } if (displayIn.indexOf('_') !== -1) { console.log(`There are ${value}s in the phrase.`) return `${displayIn.join(' ')}` } else { player.status = false console.log(`${displayIn.join(' ')}`) return `Congrats. You won!!!!` } } else { player.strikes++ if (player.strikes === player.maxStrikes) { player.status = false return `You ran out of strikes. G A M E O V E R` } else { return `This letter is non existant.` } } } else { player.status = false return `You ran out of strikes. G A M E O V E R.` } } else { player.status = false console.log(`${displayIn.join(' ')}`) return `Congrats U WON!!!` } } else { return `Please start the game.` 
} } ```
2018/03/20
670
2,213
<issue_start>username_0: I have a class with the property `this.eyes`. Code below. As you can see, in the first sample `data.num` ends up holding a copied value, while in the second it holds a reference. In my project I have no need for an array, so I want the first sample to keep a live reference the way the second one does. Any idea how? ```js // Warning! Pseudo Code // Sample One class Human { constructor() { this.eyes = null; } addEye() { this.eyes = 1; } } const william = new Human(); const data = { num: william.eyes } william.addEye(); // data = { num: null } // Warning! Pseudo Code // Sample Two class Human { constructor() { this.eyes = []; } addEye() { this.eyes.push(1); } } const william = new Human(); const data = { num: william.eyes } william.addEye(); // data = { num: [1] } ```<issue_comment>username_1: If I got what you are trying to do here correctly, the following should work for you: ``` class Human { constructor() { this.eyes = 0; } addEye() { this.eyes += 1; } } ``` Upvotes: -1 <issue_comment>username_2: JavaScript has a small set of built-in types, one of those is `number` which is a "value type" (and as far as I can tell, your `human.eyes` value is always `null` or a `number` value). JavaScript does not support references to `number` values, so after this `data = { num: william.eyes }` the `data.num` value will be a copy of `william.eyes` and not a reference, as you have correctly surmised. However, JavaScript *does* support non-trivial properties with custom getter/setter logic (`Object.defineProperty`). You could use this, in conjunction with a reference to the `william` object, to have the behaviour you want: ``` const william = new Human(); const data = {}; // new empty object Object.defineProperty( data, 'num', { enumerable: true, configurable: true, get: function() { return this.human.eyes; }, set: function(newValue) { this.human.eyes = newValue; } } ); // Need to give `data` a reference to the `Human`: data.human = william; william.addEye(); console.log( data.num ); // outputs "1" data.num++; console.log( william.eyes ); // outputs "2" ``` Upvotes: 3 [selected_answer]
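If the accessor can be defined inline, the same live-lookup behaviour can be written more concisely with an object-literal getter; a small sketch using the `Human` class from the question:

```js
class Human {
  constructor() { this.eyes = null; }
  addEye() { this.eyes = 1; }
}

const william = new Human();

// `num` is recomputed on every access, so it always reflects william's current state.
const data = {
  get num() { return william.eyes; }
};

console.log(data.num); // null
william.addEye();
console.log(data.num); // 1
```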
2018/03/20
603
2,459
<issue_start>username_0: I am stuck at accessing the installation page. I can't seem to get it to work; it always gives me a 500 Internal Server Error. I have looked into some of the solutions to similar questions, but they didn't work.<issue_comment>username_1: The error you gave in your post's comment should lead you to the solution after a little searching. The problem is that Magento's .htaccess file contains `<IfVersion>` directives. In order to use this feature the **mod\_version** module must be **enabled** on your Apache web server; otherwise it will give a 500 error. In Wamp it is **disabled by default**. To enable it in Wamp you can follow these steps: 1. **Find your Apache config file (httpd.conf):** It is usually under /wamp/bin/apache/apache(version)/conf/httpd.conf 2. **Open the configuration file with your favourite text editor:** You can simply use Notepad, but I suggest any other one since Notepad sometimes doesn't show line breaks. It will be more readable if you do (e.g. Notepad++) 3. **Search for the following text:** "mod\_version" 4. **You will see the following line:** `#LoadModule version_module modules/mod_version.so` 5. **Uncomment it, so it becomes enabled:** Simply delete the **#** sign to uncomment it 6. **Left-click on Wamp and press Restart All Services:** Since you have made changes in the configuration file, you should restart at least the Apache server to let it know about the changes. Now you can check if Magento is working. If it isn't, check the error log again; there might be another issue. Following these 6 steps should easily solve the error that you provided in the comment. Also, please edit your post and add more details, e.g. the error log. Upvotes: 6 [selected_answer]<issue_comment>username_2: If this is a first-time issue after installation, you probably must enable the Apache mod\_rewrite and mod\_version modules: simply uncomment the lines in httpd.conf (`LoadModule version_module modules/mod_version.so` and `LoadModule rewrite_module modules/mod_rewrite.so`), save, and restart the server. <https://devdocs.magento.com/guides/v2.3/install-gde/system-requirements-tech.html> Apache 2.2 or 2.4. In addition, you must enable the Apache mod\_rewrite and mod\_version modules. The mod\_rewrite module enables the server to perform URL rewriting. The mod\_version module provides flexible version checking for different httpd versions. For more information, see our Apache documentation. Upvotes: 0
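For context, the directive that needs mod\_version is the `<IfVersion>` block in Magento's `.htaccess`. The excerpt below is illustrative of what such a block looks like, not a copy of the exact file:

```
<IfVersion < 2.4>
    order allow,deny
    deny from all
</IfVersion>
<IfVersion >= 2.4>
    Require all denied
</IfVersion>
```

When mod\_version is disabled, Apache does not recognize `<IfVersion>` and refuses to serve the request, which surfaces as the 500 error.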
2018/03/20
622
2,517
<issue_start>username_0: I installed tensorflow on my Mac and now I can't seem to open anaconda-navigator. When I launch the app, it appears in the dock but disappears quickly. When I launch anaconda-navigator from the terminal I get the following error(s). KeyError: 'pip.\_vendor.urllib3.contrib'
2018/03/20
619
1,941
<issue_start>username_0: When I'm running `sails -v` or `sails lift`, or even installing with npm, I'm getting this error. Please help - ``` Error: Cannot find module 'commander' at Function.Module._resolveFilename (module.js:536:15) at Function.Module._load (module.js:466:25) at Module.require (module.js:579:17) at require (internal/module.js:11:18) at Object. (/usr/local/lib/node\_modules/sails/bin/\_commander.js:6:15) at Module.\_compile (module.js:635:30) at Object.Module.\_extensions..js (module.js:646:10) at Module.load (module.js:554:32) at tryModuleLoad (module.js:497:12) at Function.Module.\_load (module.js:489:3) ```<issue_comment>username_1: Most likely `commander` is missing from `package.json`. Install it and add it to `package.json` by doing: ``` npm install commander --save ``` Upvotes: 4 <issue_comment>username_2: Uninstalling sails and reinstalling with `sudo` resolved it. ``` npm uninstall sails -g sudo npm install sails -g ``` Upvotes: 3 [selected_answer]<issue_comment>username_3: In my case it was because of a bad version of Node. I tried Node 8.15.1 and it works. Upvotes: 0 <issue_comment>username_4: Install `commander` globally. `sudo npm install commander -g` * This will install the commander module under `/usr/lib/node_modules/`. Upvotes: 3 <issue_comment>username_5: When I tried to set up a new app with the help of express-generator, I got the error: Cannot find module 'commander'. ``` Node version: 12.16.1 switched to Node version: 10.16.3 ``` After switching the Node version it works fine. Upvotes: 1 <issue_comment>username_6: On my end, after doing `npm install` and attempting to run `npm run watch`, I encountered this same `Error: Cannot find module 'commander'` issue. I did the below: ```js // delete node_modules (can be done manually) or with the command below // rm -rf node_modules npm install ``` And I was able to do `npm run watch` again. This may be because of corrupted packages. Upvotes: 0
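Whichever of the above you try, two quick standard checks can confirm the result (paths and versions will differ on your machine):

```
# Is commander present in the global dependency tree?
npm ls -g commander

# Can Node resolve it from the current directory?
node -e "console.log(require.resolve('commander'))"
```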
2018/03/20
807
2,826
<issue_start>username_0: What would be the vanilla JS equivalent of :has in this jQuery selector? ``` $('.main-container').children('.analytics:has(a)').not('#promo') ``` Within `.main-container`, I'm trying to select all `.analytics` elements without an id of "promo" that contain `a` tags. **What I've tried:** ``` document.querySelectorAll('.main-container .analytics:not(#promo)') ``` This will give me close to what I want, but I still have to filter out those `.analytics` parents that do **NOT** have `a` tags. What would be the best way to approach this using vanilla JS?<issue_comment>username_1: You could select the `a` elements and then get their parentNodes: ```js var a = document.querySelectorAll('.main-container .analytics:not(#promo) a'); var yourElements = []; for (var i = 0; i < a.length; i++) { yourElements.push(a[i].parentNode); } yourElements.forEach(e => e.style.background = "red"); ``` ```html <div class="main-container"> <div class="analytics"><a href="#">Should be red</a></div> <div class="analytics">Should not be red</div> <div class="analytics" id="promo"><a href="#">Should not be red</a></div> </div> ``` **EDIT:** just noticed this only works if the `a` is a direct child of your wanted element. Upvotes: 2 <issue_comment>username_2: There is no equivalent selector for `:has`, so you'll have to make an initial selection and then filter it: ``` var el = document.querySelectorAll('.main-container > .analytics:not(#promo)'); var res = []; for (let x = 0; x < el.length; x++){ if (el[x].querySelector('a')) res.push(el[x]); } //res now holds the array of elements needed. ``` Upvotes: 2 <issue_comment>username_3: 1. Query the document using your desired selector, in this case: `.analytics:not(#promo)` 2. Convert the NodeList to an Array 3. Filter the array using the predicate: `element => element.querySelector('your-selector')` > > ### `element.querySelector('your-selector')` will evaluate to `null` (which is falsey) if no child element is found > > > Generally as a function ======================= ```js function has(nodeList, selector) { return Array.from(nodeList).filter(e => e.querySelector(selector)) } const nodeList = document.querySelectorAll('.main-container > .analytics:not(#promo)') has(nodeList, 'a').forEach(e => e.style.background = "red") ``` ```html <div class="main-container"> <div class="analytics"><a href="#">Should be red</a></div> <div class="analytics">Should not be red</div> <div class="analytics" id="promo"><a href="#">Should not be red</a></div> </div> ``` As a NodeList.prototype ======================= ```js NodeList.prototype.has = function(selector) { return Array.from(this).filter(e => e.querySelector(selector)) } document .querySelectorAll('.main-container > .analytics:not(#promo)') .has('a') .forEach(e => e.style.background = 'red') ``` ```html <div class="main-container"> <div class="analytics"><a href="#">Should be red</a></div> <div class="analytics">Should not be red</div> <div class="analytics" id="promo"><a href="#">Should not be red</a></div> </div> ``` Upvotes: 4 [selected_answer]<issue_comment>username_4: [<NAME>'s polyfill-css-has](https://github.com/jplhomer/polyfill-css-has) provides a `querySelectorAllWithHas` method based on vanilla JavaScript. Upvotes: 0
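As a side note for readers on current browsers: `querySelectorAll` now accepts the CSS `:has()` pseudo-class natively (Chrome 105+, Safari 15.4+, Firefox 121+), so where legacy support is not a concern the whole thing reduces to a single selector:

```js
// Requires native CSS :has() support in the browser.
document
  .querySelectorAll('.main-container > .analytics:not(#promo):has(a)')
  .forEach(e => e.style.background = 'red');
```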
2018/03/20
266
1,234
<issue_start>username_0: I know that a good way to store data like db passwords, etc. is via environment variables, but setting environment variables manually for every server instance created is time consuming. I'm planning to deploy my project to the cloud (using AWS EBS or Heroku). Where should I store my db password? I think the .ebextensions file isn't a good option because it's tracked in VCS.<issue_comment>username_1: My recommendation is to create a properties file which can be stored in the resources folder of your application, so the code can read it from the resources; no environment variables are needed. One properties file can contain all the databases' user IDs and passwords, and the deploy job can pick the right entries based on the URL mappings in the properties file. For example, look at a Spring Hibernate example project which uses a properties file, or look at Ant deploy scripts. Hope it helps. Upvotes: 0 <issue_comment>username_2: Don't ever store secrets in source control. A common practice is to either put them in a secure file or in something like <https://www.vaultproject.io/>, then inject them (programmatically via a script or some other deployment/configuration tool) into the environment when you bring up your VM (or container or whatever). Upvotes: 2
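Whatever injection mechanism you choose, the application side stays the same: read the secret from the environment at startup and fail fast if it is missing. A minimal sketch (Python shown for illustration; the variable name is hypothetical):

```
import os

# Raises KeyError immediately if the secret was never injected.
db_password = os.environ["DB_PASSWORD"]
```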
2018/03/20
456
1,651
<issue_start>username_0: I have a problem with FOSUserBundle and HWIOauthBundle. I use a custom `UserChecker` to check whether a user is banned (a custom table in the database, as I need a ban history). When I use Facebook to log in, my custom `UserChecker` is not used. It does work when I'm using login and password authentication. Issue on GitHub: <https://github.com/hwi/HWIOAuthBundle/issues/1358> Does anyone have an idea how to fix that?<issue_comment>username_1: From the [docs](https://symfony.com/doc/current/bundles/override.html): > > Services & Configuration > > > If you want to modify service definitions of another bundle, you can > use a compiler pass to change the class of the service or to modify > method calls. In the following example, the implementing class for the > `original-service-id` is changed to `App\YourService`: > > > ``` // src/Kernel.php namespace App; // ... + use App\Service\YourService; + use Symfony\Component\DependencyInjection\ContainerBuilder; + use Symfony\Component\DependencyInjection\Compiler\CompilerPassInterface; class Kernel extends BaseKernel implements CompilerPassInterface { + public function process(ContainerBuilder $container) + { + $definition = $container->findDefinition('original-service-id'); + $definition->setClass(YourService::class); + } } ``` Upvotes: 1 <issue_comment>username_2: I confirm that the solution from @Domagoj works for Symfony 3.4. ``` # app/config/services.yml services: ... security.user_checker: class: AppBundle\Service\YourUserChecker ``` With this code, the hwi user checker is overridden by your `UserChecker`. Thanks. Upvotes: 0
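For reference, a bare-bones checker that the service override above could point at might look like this (a sketch against Symfony's `UserCheckerInterface`; the `isBanned()` method is a hypothetical accessor standing in for your ban-history lookup):

```
// AppBundle/Service/YourUserChecker.php (illustrative)
use Symfony\Component\Security\Core\Exception\LockedException;
use Symfony\Component\Security\Core\User\UserCheckerInterface;
use Symfony\Component\Security\Core\User\UserInterface;

class YourUserChecker implements UserCheckerInterface
{
    public function checkPreAuth(UserInterface $user)
    {
        // Hypothetical: consult your ban-history table via the user entity.
        if (method_exists($user, 'isBanned') && $user->isBanned()) {
            throw new LockedException('This account has been banned.');
        }
    }

    public function checkPostAuth(UserInterface $user)
    {
        // No post-authentication checks needed for the ban case.
    }
}
```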
2018/03/20
384
1,496
<issue_start>username_0: When I try to drag a MySQL table to a DataSet after successfully adding a connection to a database in Server Explorer, I receive two error messages, one after the other: > > "Some updating commands could not be generated automatically. The database returned the following error: Unexpected error." > > > "Mouse drag operation failed. Could not retrieve schema information for table or view." > > > Based on answers to similar problems, I reinstalled `'MySqlConnector/Net'` and `'MySql for Visual Studio'`, but the same problem persists. I would like to know what is going wrong.<issue_comment>username_1: I found the solution. After hours and hours of tinkering I found out that the current versions of MySql-for-visual-studio and MySqlConnector/Net have a bug, which can only be fixed by uninstalling those and installing `MySQL-for-visual-studio-1.2.6.msi` and `MySqlConnector-6.9.8.msi`, exactly in this order. Upvotes: 3 [selected_answer]<issue_comment>username_2: This worked, except I had to install 1.2.7 before I could get MySQL to show up as an option as a data provider. Upvotes: 2 <issue_comment>username_3: In my case the error statement was "you have a usable connection already" rather than "Unexpected Error" when adding a table as a data source. The above solution with MySql for VS 1.2.7 and the MySql .NET connector 6.9.8 worked for me as well; these need to be installed in the exact order mentioned above. I'm using VS2017 15.9.11. Upvotes: 0
2018/03/20
354
1,193
<issue_start>username_0: I want to trigger a function when an element has a specific class, in this case `Clicked`, so that I can show the alert `(unliked)`. **HTML** ``` <div class="social"> <button class="LikePost default">Like</button> </div> ``` **JQUERY** ``` $(".social").find(".LikePost").trigger("click"); /*when it has class "Clicked"*/ alert("unliked") ``` **TRIGGERED FUNCTION** ``` $('.LikePost').on('click', function() { if ($(this).hasClass("default")) { $(this).addClass("Clicked"); $(this).removeClass("default"); alert("post liked"); } else { $(this).addClass("default"); $(this).removeClass("Clicked"); alert("post unliked"); } }); ```<issue_comment>username_1: Alright x-D ``` $(".Clicked").trigger("click") ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: You can rewrite your trigger like this: ``` $('.LikePost').on('click', function(){ // just invert the class state $(this).toggleClass("default"); $(this).toggleClass("Clicked"); // if it now has "Clicked", it was just liked; otherwise it was unliked if($(this).hasClass("Clicked")) { alert("post liked"); } else { alert("post unliked"); } }); ``` And if you want to manually trigger all liked posts, just do: ``` $('.Clicked').trigger('click'); ``` Upvotes: 1
2018/03/20
503
1,802
<issue_start>username_0: The purpose of the app is to check whether a student is absent or present, for testing purposes. I have this code in my app. HTML ---- ``` Students Present Absent Present Absent ``` TS -- ``` attendance: any[] = [{ 'studentName': [], 'status': [] }] ``` Currently, I am using two-way binding "[(ngModel)]" for the input and select tags, but when I type and select an option, both of them are being edited. When I use one-way binding "[ngModel]" instead, the problem of both inputs and select options being edited at the same time is gone, but I get nothing in console.log.<issue_comment>username_1: You are targeting the same object in each `ion-item`. You have one class property named `attendance`. Whenever Angular assigns a value to `attendance.studentName`, it overwrites whatever value was previously set, or the default, initial value (an empty array). Each `ion-input` and `ion-select` must target a distinct object in your component class. The typical way would be to use an array of objects, and then loop through the array in your HTML template (with \*ngFor). The array could be like: ``` public attendance: any[] = [ { studentName: 'Alice', status: 'whatever' }, { studentName: 'Bob', status: 'whatever' } ] ``` Upvotes: 0 <issue_comment>username_2: **First of all, edit the `.ts` file:** ``` public attendance: any[] = [ { studentName: 'Alice', status: 'whatever' }, { studentName: 'Bob', status: 'whatever' } ]; ``` **Then change the `.html` to iterate with `*ngFor`:** ``` <ion-item *ngFor="let student of attendance"> <ion-input [(ngModel)]="student.studentName" placeholder="Students"></ion-input> <ion-select [(ngModel)]="student.status"> <ion-option value="present">Present</ion-option> <ion-option value="absent">Absent</ion-option> </ion-select> </ion-item> ``` **Here is the output image:** [![output image](https://i.stack.imgur.com/kalNr.png)](https://i.stack.imgur.com/kalNr.png) Upvotes: 2 [selected_answer]
2018/03/20
815
2,706
<issue_start>username_0: The hashmap key holds the registration number and the hashmap value holds the owner's name. Here is my code: ``` public static void main(String[] args) { HashMap data = new HashMap<>(); Scanner reader = new Scanner(System.in); data.put("AAA-111", "Jack"); data.put("BBB-222", "Matt"); data.put("CCC-333", "Jack"); for (HashMap.Entry entry: data.entrySet()) { System.out.println(entry.getValue()); //data.values().remove(entry.getValue()); } } ``` And the current output: ``` Jack Matt Jack ``` The problem is that I don't want to print out the same owner two times. My goal output is: ``` Jack Matt ``` How can I print the same values only one time?<issue_comment>username_1: The easiest way to do that would be with Streams: ``` data.values().stream().distinct().forEach(System.out::println); ``` Upvotes: 2 <issue_comment>username_2: You can create a `Set` like this: ``` Set names = new HashSet<>(data.values()); ``` --- **Outputs** ``` [Matt, Jack] ``` Upvotes: 3 <issue_comment>username_3: **1**. With `Streams` to obtain a `List` or a `Set` : ``` // 1 list List list = data.values().stream() .distinct().collect(Collectors.toList()); //[Jack, Matt] // 2 set Set set = data.values().stream() .distinct().collect(Collectors.toSet()); //[Matt, Jack] ``` **2**. Without `Streams` to obtain a basic or sorted `Set` : ``` // 1 simple Set Set set = new HashSet<>(data.values()); //[Matt, Jack] // 2 sorted Set Set set = new TreeSet<>(data.values()); //[Jack, Matt] ``` Upvotes: 2 <issue_comment>username_4: It can also be done like this, using `Map::values` and `Stream::distinct`: ``` data.values().stream().distinct().forEach(System.out::println); ``` Upvotes: 1 <issue_comment>username_5: You could also put the owner's name as the key and the registration number as the value; each later entry for the same owner would then overwrite the earlier one, leaving every owner only once. Upvotes: -1 <issue_comment>username_6: HashMap maps *unique* keys to values; values may be duplicated. The `put()` method accepts two parameters, the first being the key and the second being the value. ``` data.put("AAA-111", "Jack"); data.put("BBB-222", "Matt"); data.put("CCC-333", "Jack"); ``` Now `data.put("BBB-222", "Jill");` would replace "Matt", and "BBB-222" would be mapped to "Jill". To have unique elements you can use a `Set` and add elements to it; it will not add duplicate elements. If you are given a HashMap, you can either use the Stream API as mentioned in other answers to extract distinct values, or you can iterate over the HashMap's `values()` and keep adding each value to a `Set` (you can choose `HashSet` or `TreeSet`). Duplicate values will be discarded and each value added only once. Then you can iterate over the set to read the distinct elements. Upvotes: 1
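One detail worth knowing: `HashMap` itself guarantees no iteration order, so if the printing order matters, back the data with a `LinkedHashMap` and collect the values into a `LinkedHashSet`:

```
Map<String, String> data = new LinkedHashMap<>();
data.put("AAA-111", "Jack");
data.put("BBB-222", "Matt");
data.put("CCC-333", "Jack");

// LinkedHashSet keeps the first-seen order: Jack, Matt
Set<String> names = new LinkedHashSet<>(data.values());
names.forEach(System.out::println);
```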
2018/03/20
469
1,582
<issue_start>username_0: Is there a way to use the new "async" javascript keyword to replace async.map from the async module? Basically I want to avoid using the async module if possible. For example, read many files at once, then do something when all the files have been read.<issue_comment>username_1: Yes, generally you can do this with `Promise.all`. ``` let urls = [...]; let promises = urls.map(function(url) { return fetch(url).then(result => result.json()); // or whatever }); Promise.all(promises).then(function(results) { // deal with the results as you wish }); ``` Or, to do it in a one-liner: ``` Promise.all(urls.map(url => fetch(url).then(res => res.json()))).then(function(results) { // deal with the results as you wish }); ``` Though that ain't easy to read, I fear... It's not quite as sleek as `async.map`, though of course writing an appropriate wrapper would not be hard. Upvotes: 3 [selected_answer]<issue_comment>username_2: Helper function: ``` async function asyncMap(array, callback) { let results = []; for (let index = 0; index < array.length; index++) { const result = await callback(array[index], index, array); results.push(result); } return results; } ``` Sample usage: ``` const titles = await asyncMap([1, 2, 3], async number => { const response = await fetch( `https://jsonplaceholder.typicode.com/todos/${number}` ); const json = await response.json(); return json.title; }); ``` Inspired by this [async forEach](https://gist.github.com/Atinux/fd2bcce63e44a7d3addddc166ce93fb2) Upvotes: 0
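Applied to the file-reading use case from the question (read many files concurrently, continue once all are done), a sketch using Node's promise-based `fs` API (`fs.promises`, available since Node 10) could look like:

```
const fs = require('fs').promises;

async function readAll(paths) {
  // All reads start immediately; await resolves when every one has finished.
  const contents = await Promise.all(paths.map(p => fs.readFile(p, 'utf8')));
  return contents; // same order as `paths`
}

readAll(['a.txt', 'b.txt', 'c.txt'])
  .then(contents => console.log(`read ${contents.length} files`))
  .catch(err => console.error(err));
```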
2018/03/20
1,948
7,175
<issue_start>username_0: I have a terraform file that creates an EC2 instance along with a couple of volumes: ``` resource "aws_instance" "generic" { count = "${lookup(var.INSTANCE_COUNT, var.SERVICE)}" ami = "${var.AMI}" instance_type = "${lookup(var.INSTANCE_TYPE, var.BLD_ENV)}" subnet_id = "${element(var.SUBNET,count.index)}" vpc_security_group_ids = ["${var.SECURITY_GROUP}"] key_name = "${var.AWS_KEY_NAME}" availability_zone = "${element(var.AWS_AVAILABILITY_ZONE,count.index)}" iam_instance_profile = "${var.IAM_ROLE}" root_block_device { volume_type = "gp2" delete_on_termination = "${var.DELETE_ROOT_ON_TERMINATION}" } ebs_block_device { device_name = "${lookup(var.DEVICE_NAME,"datalake")}" volume_type = "${lookup(var.DATALAKE_VOLUME_TYPE, var.SERVICE)}" volume_size = "${var.NONDATADIR_VOLUME_SIZE}" delete_on_termination = "${var.DELETE_ROOT_ON_TERMINATION}" encrypted = true } ebs_block_device { device_name = "${lookup(var.DEVICE_NAME,"datalake_logdir")}" delete_on_termination = "${var.DELETE_ROOT_ON_TERMINATION}" volume_type = "${lookup(var.LOGDIR_VOLUME_TYPE, var.SERVICE)}" volume_size = "${var.NONDATADIR_VOLUME_SIZE}" encrypted = true } volume_tags { Name = "${lookup(var.TAGS, "Name")}-${count.index}" } } ``` If the EC2 instance terminates, how can I attach the existing volumes to the new EC2 instance created when I rerun terraform? I was hoping that terraform could somehow tell from the state file that the instance is gone but the volumes aren't, and that therefore they should be attached to the newly created EC2. Thanks in advance!<issue_comment>username_1: 1. Create the EBS volume using a separate resource: [aws\_ebs\_volume](https://www.terraform.io/docs/providers/aws/r/ebs_volume.html). 2. Configure the Instance to attach the volume during boot. For example, you could have a [User Data](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html#user-data-api-cli) script that uses the [attach-volume](https://docs.aws.amazon.com/cli/latest/reference/ec2/attach-volume.html) command of the AWS CLI. 3. If the Instance crashes, or if you want to replace it to deploy new code, you run `terraform apply`, and the replacement Instance will boot up and reattach the same EBS Volume. If you want the Instance to be able to recover itself automatically, it gets trickier. * You can configure your Instance with [Auto Recovery](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-recover.html), but that only detects if the actual VM dies; it won't detect if the app running on that VM dies (e.g., crashes, runs out of memory). * A better approach is to use an [Auto Scaling Group (ASG)](https://aws.amazon.com/autoscaling/) with a [Load Balancer](https://aws.amazon.com/elasticloadbalancing/). If any of the Instances in the ASG fail the Load Balancer health checks, they will be replaced automatically. The catch is that an Instance can only attach an EBS Volume in the same Availability Zone (AZ), but an ASG can launch Instances in *any* AZ, so an Instance might launch in an AZ without any EBS Volume! Solving this, especially in a way that supports zero-downtime deployment, typically requires going outside of Terraform. For example, the Auto Scaling Group module in the [Gruntwork IaC Library](https://gruntwork.io/infrastructure-as-code-library/) implements this using multiple ASGs and a Python script activated via an [external data source](https://www.terraform.io/docs/providers/external/data_source.html). 
Upvotes: 1 <issue_comment>username_2: First, separate your instances, volumes and volume attachments like so: ``` resource "aws_instance" "generic" { ami = "${var.ami_id}" instance_type = "${var.instance_type}" count = "${var.node_count}" subnet_id = "${var.subnet_id}" key_name = "${var.key_pair}" root_block_device { volume_type = "gp2" volume_size = 20 delete_on_termination = false } vpc_security_group_ids = ["${var.security_group_ids}"] } resource "aws_ebs_volume" "vol_generic_data" { size = 120 count = "${var.node_count}" type = "gp2" } resource "aws_volume_attachment" "generic_data_vol_att" { device_name = "/dev/xvdf" volume_id = "${element(aws_ebs_volume.vol_generic_data.*.id, count.index)}" instance_id = "${element(aws_instance.generic.*.id, count.index)}" count = "${var.node_count}" } ``` Then, if your instance gets manually terminated, TF should detect that the instance is gone but still referenced in TF state and should try to recreate it and attach the existing volume. I have not tried this. However, I have tried importing an existing instance and its volume into TF state, so the same logic should apply for just importing the volume alone and attaching it to an existing TF-managed instance. You should be able to simply import the existing volume like so: ``` terraform import module.generic.aws_ebs_volume.vol_generic_data vol-0123456789abcdef0 ``` Then TF will attach the volume or update the state if already attached. Upvotes: 2 <issue_comment>username_3: After a lot of searching and reading documentation, I came to a solution for this problem. Here, I will illustrate with a simple example how to preserve your `ebs` volumes using terraform, i.e. you can create and destroy instances and they will be attached to the same `ebs` volume each time: 1. I have created a new Terraform folder in which I have written a script to create an `ebs` volume with a specific tag. 2. In my main script I have added a `data source` to search for `ebs` volumes with specific attributes: ``` data "aws_ebs_volume" "test" { filter { name = "volume-type" values = ["gp2"] } most_recent = true } locals { # save the volume id value in this local ebs_vol_id = "${data.aws_ebs_volume.test.id}" } output "volume_id" { # print the volume id value value = "${local.ebs_vol_id}" } ``` 3. I have used this local (which now holds the volume id) in my `aws_volume_attachment` resource. ``` # attach the instance to a volume resource "aws_volume_attachment" "ebs_att" { device_name = "/dev/sdh" volume_id = "${local.ebs_vol_id}" instance_id = aws_instance.ec2_instance.id skip_destroy = true # (if true) Don't detach the volume from the instance to which it is attached at destroy time, and instead just remove the attachment from Terraform state. } ``` 4. Holaaa, now every time you run `terraform apply` or `terraform destroy`, your EC2 instance will connect to the same `ebs` volume. --- **Discussion:** 1. This is a workaround to achieve the intended behavior. 2. You can achieve the same by using `terraform import`, but I think this way is easier. 3. The main drawback of this solution is that now we have two Terraform states, which is not a recommended setup. Upvotes: 0
2018/03/20
1,803
6,575
<issue_start>username_0: I am hosting a mongo database on [mlab](https://mlab.com), and I have an extremely simple program to insert a document into an existing collection. Insertion works for me in JS but not in Python for some reason. This is what it looks like in Python: ``` from pymongo import MongoClient client = MongoClient('mongodb://:@/') db = client.congresspersons posts = db.posts post\_data = { 'title': 'Python and MongoDB', 'content': 'PyMongo is fun, you guys', 'author': 'Scott' } result = posts.insert\_one(post\_data) ``` This code mostly came from [here](https://realpython.com/introduction-to-mongodb-and-python/#mongodb). However, I keep getting this error: ``` pymongo.errors.OperationFailure: not authorized on config to execute command { insert: "congresspersons.posts", ordered: true, documents: [ { content: "PyMongo is fun, you guys", _id: ObjectId('5ab16ae3626b6217f7c2a079'), author: "Scott", title: "Python and MongoDB" } ] } ``` These are the permissions for the user: ``` { ... "roles": [ { "role": "dbOwner", ... } ] } ``` I don't understand why such simple insertion is not working in Python. How can I get this to work?
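A common culprit when mlab returns "not authorized" is that the database name is missing from the connection URI, so the credentials are authenticated against the wrong database. This is only a guess at the cause here, but the usual connection pattern looks like this (host, port, and credentials are placeholders):

```
from pymongo import MongoClient

# The user must be defined on the database named in the URI path.
client = MongoClient('mongodb://USER:PASSWORD@HOST:PORT/congresspersons')
db = client.get_default_database()  # the database from the URI path
result = db.posts.insert_one({'title': 'Python and MongoDB'})
```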
2018/03/20
619
1,753
<issue_start>username_0: In the Linnworks API documentation, a Start and End Date is required for one of the API requests. The format for this is as follows: ``` 2018-02-19T16:57:07.0049771+00:00 ``` I am unsure about this formatting. Is this a default format of some sort, or would I need to construct it? If I need to construct it, I get the obvious portions. Date: ``` 2018-02-19 ``` Time: ``` T16:57:07 ``` But what this portion is I do not know: ``` 0049771+00:00 ``` Is it the Unix timestamp and a + for the time zone?<issue_comment>username_1: The end part is the fractional seconds and the timezone offset. If you use `date("c")` or `$yourDateObject->format("c")` it should give you a complete string in this format (ISO 8601). Upvotes: 1 <issue_comment>username_2: This is [ISO8601](https://en.wikipedia.org/wiki/ISO_8601) format, which states: > > Decimal fractions may be added to any of the three time elements. > However, a fraction may only be added to the lowest order time element > in the representation. > > > Considering the use of the word "may" here, I would expect that the API should allow you to specify the timestamp without any such decimal portion, assuming that is acceptable for your application. (Disclaimer -- this is a guess.) If so, this can be had simply via: ``` echo date('c'); ``` Which yields: ``` 2018-03-20T16:24:37-04:00 ``` Upvotes: 1 [selected_answer]<issue_comment>username_3: The format character `u` (microseconds) returns a 6-character string (on PHP < 7.1 it will be `000000`). If you need a 7-character fraction, just append a `0` to the end, like this: ```php $dt = new DateTime(); echo $dt->format('Y-m-d\TH:i:s.u0P'); ``` It will output something like this: `2018-03-21T08:47:01.0263140+01:00` Upvotes: 0
2018/03/20
870
3,052
<issue_start>username_0: While reviewing a PR today I saw this in a class: ``` public bool ?Selected { get; set; } //question mark in front of property name ``` I suspected that the developer meant this ``` public bool? Selected { get; set; } //question mark at end of type ``` I was surprised that this even compiled, so I tried it myself and found that this is legal code and it ends up doing the same thing. My question is: Is there any subtle (or not so subtle) difference between these two?<issue_comment>username_1: It might make a little more sense when you realize that the following is also valid: ``` public bool?Selected { get; set; } ``` `?` in that context is the nullable type identifier and as such it can't be followed by any character that would make sense in the current token, so the lexer simply processes the nullable type identifier and starts lexing the following expected token. What's really redundant is the whitespace; it's more a formatting helper to make the code more readable than a syntax requirement. Whitespace is in many contexts meaningless trivia the compiler could do without. It's similar, in a sense, to why the following are all valid: ``` static bool Huh() { return false; } static bool Huh () { return false; } static bool Huh () { return false; } etc. ``` Upvotes: 1 <issue_comment>username_2: First of all, my Visual Studio 2017 immediately corrects this ``` public bool ?Selected { get; set; } ``` To this ``` public bool? Selected { get; set; } ``` Then, [IL DASM](https://msdn.microsoft.com/en-us/library/aa309387(v=vs.71).aspx) to the rescue! You can see that **the resulting IL is the same**, just like @mjwills said in the comments: [![enter image description here](https://i.stack.imgur.com/CxYZp.png)](https://i.stack.imgur.com/CxYZp.png) [![enter image description here](https://i.stack.imgur.com/CimTq.png)](https://i.stack.imgur.com/CimTq.png) In the end, you can always hit `Ctrl+K`, `Ctrl+D` to have Visual Studio reformat your code and properly manage blanks and indentation. As per your question about changing `bool?` to `bool`, it depends: if something can have an [indefinite value or state](https://stackoverflow.com/questions/2204408/what-is-the-use-of-nullablebool-type), just like [checkboxes](http://csharphelper.com/blog/2015/10/use-a-tristate-checkbox-in-c/), you should use `bool?`; otherwise `bool` is fine. Upvotes: 3 [selected_answer]<issue_comment>username_3: As others have suggested, there is no difference in how the code functions based on the location of ?. The compiler doesn't much care for whitespace, assuming it can make out the operators and operands. It simply comes down to readability and personal/team preference. For example, given: ``` var app = new App(); ``` These are also the same: ``` app. SomeMethod(); ``` And ``` app .SomeMethod(); ``` And ``` app . SomeMethod(); ``` Or placing the dot operator anywhere else in between, including on a new line. It all compiles down to the same IL. Upvotes: 0
2018/03/20
908
3,288
<issue_start>username_0: Consider the following: ``` use namespace; class name impl... { use Trait; } ``` How would I go about extracting either the `use` statements from before the class definition or the ones inside it? In the above example it would be simple enough, but it should also work on an actual code file with multiple `use` statements in both places, maybe not even grouped together, with other things in between, and with all newline characters removed. It's easy enough to get them all, but I want the matching to either stop when it reaches the class or begin from the class. I just can't seem to get anything to work correctly. Newlines, comments and string literals are already stripped, so these need not be taken into consideration.
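Rather than regular expressions, PHP's tokenizer makes the "stop at the class" requirement straightforward, since you can walk the tokens until the first `T_CLASS`. A rough sketch (it deliberately ignores edge cases such as `::class` and anonymous classes; the file name is a placeholder):

```php
<?php
$tokens = token_get_all(file_get_contents('Example.php'));

$imports = [];
$buffer  = null;

foreach ($tokens as $token) {
    if (is_array($token) && $token[0] === T_CLASS) {
        break; // stop before the class body: only top-level "use" is collected
    }
    if (is_array($token) && $token[0] === T_USE) {
        $buffer = ''; // start collecting a use statement
        continue;
    }
    if ($buffer !== null) {
        $text = is_array($token) ? $token[1] : $token;
        if ($text === ';') {
            $imports[] = trim($buffer);
            $buffer = null;
        } else {
            $buffer .= $text;
        }
    }
}

print_r($imports);
```

Collecting the trait `use` statements instead is the mirror image: skip tokens until `T_CLASS` has been seen, then start collecting.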
2018/03/20
788
1,717
<issue_start>username_0: I have a dataset of 4 attributes like: ``` taxi id date time longitude latitude 0 1 2/2/2008 15:36 116.51 39.92 1 1 2/2/2008 15:46 116.51 39.93 2 1 2/2/2008 15:56 116.51 39.91 3 1 2/2/2008 16:06 116.47 39.91 4 1 2/2/2008 16:16 116.47 39.92 ``` The datatype of each attribute is as follows: ``` taxi id dtype('int64') date time dtype('O') longitude dtype('float64') latitude dtype('float64') ``` I want to calculate the mean and standard deviation (std) for each attribute. For the mean I have tried this code: ``` np.mean('longitude') ``` but it gives me an error like: ``` TypeError: cannot perform reduce with flexible type ```<issue_comment>username_1: You can use `pandas` `describe`: ``` df.describe() Out[878]: taxi id longitude latitude count 5.000000 5.0 5.000000 5.000000 mean 2.000000 1.0 116.494000 39.918000 std 1.581139 0.0 0.021909 0.008367 min 0.000000 1.0 116.470000 39.910000 25% 1.000000 1.0 116.470000 39.910000 50% 2.000000 1.0 116.510000 39.920000 75% 3.000000 1.0 116.510000 39.920000 max 4.000000 1.0 116.510000 39.930000 ``` Upvotes: 2 <issue_comment>username_2: You have to specify that you're looking for the mean of your dataframe. As it is, you're not referencing your dataframe at all when you call `numpy.mean()`. If your dataframe is called `df`, using `pandas.Series.mean` should work, like this: ``` df['longitude'].mean() df['longitude'].std() ``` As it is, you're calling `numpy.mean()` on a string, which doesn't mean much. If you really wanted to use `numpy.mean()`, you could use `np.mean(df['longitude'])` Upvotes: 2 [selected_answer]
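If only the mean and standard deviation are wanted (rather than the full `describe()` table), `agg` computes both for every selected column in one call:

```
df[['longitude', 'latitude']].agg(['mean', 'std'])
```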
2018/03/20
1,562
6,241
<issue_start>username_0: The [docs](https://developers.facebook.com/docs/instagram-api/getting-started#instagram-api) say (step 4): > > In the App Review for Instagram section, click Add to Submission for each permission your App will need from its Users. instagram\_basic is required. instagram\_manage\_comments, instagram\_manage\_insights, and instagram\_content\_publish are optional. > > > Scroll to the bottom of the page, and in the Current Submission sub-section, for each submission, click its View Notes link. > > > In the View Notes dialogue **for each permission submission, provide a description for how you will use the data returned by any endpoints that require the permission, and a screencast showing how your app will use the data.** > > > Once you've completed your notes for all of your permission submissions, click the Submit For Review button at the bottom of the page. Note that this is **separate from your App Review**, which you will do after testing. > > > After you've added both Facebook Login and Instagram API products to your app configuration, add Facebook Login to your app and record a screencast showing how you will use data returned by the Instagram API. This is similar to the Facebook Login screencast but with Instagram data, so the same guidelines apply. > > > Step 5: > > **Once you've been notified that your Instagram API product submission has been approved, you can use the Graph API Explorer to test your app.** > > > With Facebook Login I can create an app that works in sandbox mode. Which lets me create a screencast. To be able to use Instagram Graph API (on a site) I need (from what I can gather): * Facebook Page linked to Instagram Business Account * an app icon * a privacy policy page * a site with working Facebook Login * description of how each permission is going to be used * screencast of... what exactly? of a site that is basically ready, but instead of fetching data from Instagram, it has them hardcoded in the code? Or there is a sandbox mode after all? **UPD** I've highlighted the important parts in the quotes above. Then: > > 5. Test Your App > > > **Once you've been notified that your Instagram API product submission has been approved, you can use the Graph API Explorer to test your app.** > > > Go to the Graph API Explorer... > > > The first call you will make is to the Graph API's /user/accounts edge... > > > Locate the Page that you connected to the Instagram Business Account and click its ID... > > > Next, [make /page?fields=instagram\_business\_account request]. > > > This is where I get empty response in Graph API Explorer (only id field is returned), or: > > 200:- OAuthException:(#200) Access to this data is temporarily disabled for non-active apps or apps that have not recently accessed this data due to changes we are making to the Facebook Platform. <https://developers.facebook.com/status/issues/205942813488872/> > > > GET /...?fields=instagram\_business\_account HTTP/1.1 > > > on my site. Which requests to Instagram Graph API can I make before passing reviews (Instagram App Review, and Facebook App Review)?<issue_comment>username_1: You can get full functionality of Graph API if you login with the developer account, use that to create a video screencast of app functionality and submit Upvotes: -1 <issue_comment>username_2: Go to your app dashboard from your developer account: <https://developers.facebook.com/apps/APP_ID/dashboad> Go to: Roles -> Test Users. 
Edit one of the test users to change its password to something you choose, so that you know it. Once done, log out from your personal account and connect with the "test user" account on <https://facebook.com> Go to your app and click the Facebook login with the scope "instagram\_basic". With the access\_token that you get, you are able to ask the Graph API about the user's Instagram business account. Et voilà. PS: You will not be able to use the Open Graph API tool from your test user. My advice is to test your app from Incognito mode so it will not interfere with your personal account. **UPDATE** Currently, accessing the Instagram data this way is disabled, but this is the right way to access the data. The FB error message says: (#200) Access to this data is temporarily disabled for non-active apps or apps that have not recently accessed this data due to changes we are making to the Facebook Platform. <https://developers.facebook.com/status/issues/205942813488872/> Upvotes: 0 <issue_comment>username_3: You will have to create a test application out of your real application; these applications are in sandbox mode by default (developer mode) <https://developers.facebook.com/docs/apps/test-apps/> You will be able to access any FB permission / feature with it, and develop a product with it. Once you are ready for review, link your test app in the submission of your real application. Upvotes: -1 <issue_comment>username_4: It looks like the documentation that Facebook provides is somewhat misleading. You can actually use the Instagram Graph API via Graph API Explorer (or by sending GET requests from your code) in a very limited way, accessing only your Instagram business account, BEFORE you get the initial approval described in Step 4. Steps: 1. Open Graph API Explorer. 2. First, you need to add extra permissions in the "Access Token" section. There is an "Add a Permission" drop-down at the bottom of that section. Open that drop-down and select the instagram\_basic, manage\_pages, and business\_management permissions. Depending on what you are trying to do, you may need other permissions as well (e.g. instagram\_manage\_comments, ads\_management, manage\_pages). 3. Once you have selected the extra permissions, click on the "Get Access Token" button. 4. Now you have an access token with the correct permissions and you should be able to execute the instagram\_business\_account request. This will return the Instagram ID of your business account, which you can then use for other requests. After going through these steps, I'm able to get media for my business account via the Instagram Graph API, although I'm still trying to figure out why the tags request returns an empty list. Upvotes: 4 [selected_answer]
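To make the step-5 lookup concrete, here is a hedged sketch of the same call made outside the Explorer, in Python with `requests` (the page ID, token, and API version below are placeholders, not values from this thread):

```py
import requests

PAGE_ID = "1234567890"    # placeholder: your Facebook Page ID
ACCESS_TOKEN = "EAAB..."  # placeholder: token with instagram_basic etc.

resp = requests.get(
    "https://graph.facebook.com/v2.12/{}".format(PAGE_ID),
    params={"fields": "instagram_business_account",
            "access_token": ACCESS_TOKEN},
)
print(resp.json())
# Expected shape: {"instagram_business_account": {"id": "..."}, "id": "..."}
```

If the app has not passed review (or the platform lockdown mentioned above is in effect), this is exactly the call that comes back with only the `id` field or the #200 error.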
2018/03/20
930
3,777
<issue_start>username_0: How do I update a parent object if its children are changed? I have an entity City that has children of type Street. I would like to save the total kilometers of its streets in the City entity. ``` City id | name | total 1 | example | 1000 Street id | city_id | name | kilometers 1 | 1 | first | 800 2 | 1 | second | 200 ``` I tried lifecycle callbacks (PreUpdate and PostUpdate) and Doctrine listeners (preUpdate and postUpdate). Both were registered for City, but neither fires if I don't edit the City itself. If I set the listeners on Street instead, it works, but they are executed once for every edited street. I could do this in the controller, but I would like to use events.
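A hedged sketch of one common approach to this scenario: an `onFlush` listener, which sees the whole unit of work and therefore fires even when only streets were edited, while running once per flush rather than once per street. `Street::getCity()`, `City::setTotal()`, and `sumKilometers()` below are hypothetical accessors/helpers, not code from the question:

```php
<?php
use Doctrine\ORM\Event\OnFlushEventArgs;

class CityTotalListener
{
    public function onFlush(OnFlushEventArgs $args)
    {
        $em  = $args->getEntityManager();
        $uow = $em->getUnitOfWork();

        // Every Street touched in this flush, regardless of operation.
        $changed = array_merge(
            $uow->getScheduledEntityInsertions(),
            $uow->getScheduledEntityUpdates(),
            $uow->getScheduledEntityDeletions()
        );

        foreach ($changed as $entity) {
            if (!$entity instanceof Street) {
                continue;
            }
            $city = $entity->getCity();                   // hypothetical accessor
            $city->setTotal($this->sumKilometers($city)); // hypothetical helper
            // Tell the unit of work the City changed too.
            $uow->recomputeSingleEntityChangeSet(
                $em->getClassMetadata(City::class),
                $city
            );
        }
    }
}
```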
2018/03/20
989
3,574
<issue_start>username_0: How do I tell jQuery Tabledit that the rows have changed? The buttons are only generated for existing rows; when I add a new row (for example using jQuery), the Tabledit buttons don't appear in the new row. I saw in the Tabledit code that there is a possibility to switch between view and edit mode (maybe this would help me), but I don't know how to access these methods after Tabledit is created and rows have been changed. A little snippet from my code: ``` $(document).ready(function(){ $('#btn').click(function(){ // adding a row; I need to update Tabledit here }); $('#table').Tabledit(/* ...parameters... */); }); ``` [tabledit](http://markcell.github.io/jquery-tabledit)<issue_comment>username_1: Here is the best solution I could come up with for your situation. I created an "Add" button. **NOTE** the `for-table` attribute so I can figure out what table to add to later. ``` <button id="add" for-table="#table">Add Row</button> ``` Then I created a click handler for the "Add" button. ``` $("#add").click(function(e){ var table = $(this).attr('for-table'); //get the target table selector var $tr = $(table + ">tbody>tr:last-child").clone(true, true); //clone the last row var nextID = parseInt($tr.find("input.tabledit-identifier").val()) + 1; //get the ID and add one. $tr.find("input.tabledit-identifier").val(nextID); //set the row identifier $tr.find("span.tabledit-identifier").text(nextID); //set the row identifier $(table + ">tbody").append($tr); //add the row to the table $tr.find(".tabledit-edit-button").click(); //pretend to click the edit button $tr.find("input:not([type=hidden]), select").val(""); //wipe out the inputs. }); ``` Essentially: 1. [Deep Clone](http://api.jquery.com/clone/ "Deep Clone") the last row of the table. (copies the data and attached events) 2. Determine and set the `row identifier`. 3. Append the new row. 4. Automatically click the `Edit` button. 5. Clear all inputs and selects. In my limited testing this technique appears to work. Upvotes: 3 <issue_comment>username_2: jQuery Tabledit should be executed every time a table is reloaded. See the answer given here: [refreshing Tabledit after pagination](https://github.com/markcell/jquery-tabledit/issues/1#issuecomment-75464451) This means that every time you reload the table (e.g. navigating to a new page, refreshing, etc.), you must initialize Tabledit on the page of the table where it wasn't initialized. The problem is that there is no way to know whether Tabledit has been initialized on the table already, hence if you re-initialize it, duplicate buttons (edit, delete, ...) will be added to the rows of the table. You also cannot destroy a non-existent Tabledit, hence calling '**destroy**' always beforehand will not help. I hence created my own function to tell me if Tabledit is initialized on a certain page of a table or not: ``` function hasTabledit($table) { return $('tbody tr:first td:last > div', $table).hasClass("tabledit-toolbar"); } ``` and using it as follows: ``` if( !hasTabledit($('#table')) ) { $('#table').Tabledit({ url: 'example.php', columns: { identifier: [0, 'id'], editable: [[1, 'points'], [2, 'notes']] }, editButton: true, deleteButton: false }); } ``` The `hasTabledit(..)` function checks whether the last cell of the first row of the table has a div which has the `tabledit-toolbar` class, since this is the div that holds the Tabledit buttons. You may improve it as you like. This is not the perfect solution but it is the best I could do. Upvotes: 2
2018/03/20
430
1,521
<issue_start>username_0: I am new to the Semantic-UI-React framework, and recently ran across a problem that I can't seem to fix. I have a Log in & Sign up Modal on my home page. When the LogIn And Sign Up button is triggered, the Modal pops up. However, I cannot get it to appear in the center of the page. It appears at the top of the page, partially cut off. How do I go about fixing this? Thank you in advance for your help!<issue_comment>username_1: There are currently issues with modals in SUI; check out this issue: <https://github.com/Semantic-Org/Semantic-UI/issues/6185> Upvotes: 1 <issue_comment>username_2: The outer element containing the modal dialog has `display: block;` instead of `flex`, which causes the misalignment. To be more specific, the definition `display: flex` of `.dimmed.dimmable > .ui.visible.dimmer` is for some reason overridden by `display: block !important` imposed by `.visible.transition`. You might want to add the following to your CSS to fix this: ``` .dimmed.dimmable > .ui.modals.dimmer.visible { display: flex !important; } ``` Or, if you are using CSS modules: ``` :global(.dimmed.dimmable > .ui.modals.dimmer.visible) { display: flex !important; } ``` Upvotes: 3 <issue_comment>username_3: I came across this issue when learning React just months ago, and this is how I sorted it. In my .css file under the App.js folder, just add: ``` .modal { height: auto; top: auto; left: auto; bottom: auto; right: auto; } ``` Hope it helps. Cheers! Upvotes: 3
2018/03/20
1,023
4,157
<issue_start>username_0: In my Flink code I am using a custom input format, which throws an exception. It seems I need an instance of `RuntimeContext`, but how can I get one? My format class looks like this: ```scala class MyInputFormat extends org.apache.flink.api.common.io.DelimitedInputFormat[T] { @transient var lineCounter: IntCounter = _ override def open(split: FileInputSplit): Unit = { super.open(split) lineCounter = new IntCounter() getRuntimeContext.addAccumulator("rowsInFile", lineCounter) // this line throws IllegalStateException ``` My main program looks like this: ```scala val env = ExecutionEnvironment.getExecutionEnvironment val format = new MyInputFormat env.readFile(format, inputFile.getAbsolutePath) // throws exception ``` The exception that gets thrown: ``` java.lang.IllegalStateException: The runtime context has not been initialized yet. Try accessing it in one of the other life cycle methods. at org.apache.flink.api.common.io.RichInputFormat.getRuntimeContext(RichInputFormat.java:51) ``` My class needs a `RuntimeContext` because it extends `DelimitedInputFormat` which extends... `RichInputFormat` ``` public abstract class DelimitedInputFormat extends FileInputFormat public abstract class FileInputFormat extends RichInputFormat public abstract class RichInputFormat implements InputFormat private transient RuntimeContext runtimeContext; public void setRuntimeContext(RuntimeContext t) public RuntimeContext getRuntimeContext() ``` So any instance of `RichInputFormat` expects us to `setRuntimeContext(RuntimeContext t)` after it's created. I expect I should be doing the following: ```scala val env = ExecutionEnvironment.getExecutionEnvironment val runtimeContext: RuntimeContext = ??? // How do I get this? val format = new MyInputFormat format.setRuntimeContext(runtimeContext) env.readFile(format, inputFile.getAbsolutePath) // no longer throws exception ``` But how do I get an instance of RuntimeContext? The exception gets thrown because my custom input format does not have a `RuntimeContext`. I would set one, but I don't know where to get it.<issue_comment>username_1: You should register the accumulator in a lifecycle method such as `openInputFormat`: ```scala class MyInputFormat extends org.apache.flink.api.common.io.DelimitedInputFormat[T] { override def openInputFormat(): Unit = { super.openInputFormat() getRuntimeContext.addAccumulator("rowsInFile", lineCounter) } } ``` Upvotes: 1 <issue_comment>username_2: I don't yet understand why, but it seems that `MyInputFormat` is being instantiated several times, including before the `RuntimeContext` is available. However, despite all this, the job works and computes what it needs to do. I have worked around this problem by enclosing all calls to `addAccumulator(,)` in a `try`, like so: ``` private def addAccumulator(accName: String, acc: SimpleAccumulator[_]): Unit = { try { val rc = getRuntimeContext.getAccumulator(accName) // throws if RuntimeContext not yet set if (rc == null) getRuntimeContext.addAccumulator(accName, acc) } catch { case NonFatal(_) => } } ``` I need to do this despite the fact that I am calling `addAccumulator(,)` inside `open()`, which seems like the right lifecycle method. Also: because of parallelism, several sub-jobs were trying to add the same accumulator, which is wrong. This is why I am attempting to get the accumulator first. If no context yet, no problem: I'll get one later. If the accumulator already exists, no problem: nothing to do. This is just a workaround, not a solution - but that's what I have for now.
Upvotes: 0 <issue_comment>username_3: I ran into this same issue in Flink. It looks like setRuntimeContext is called automatically under the hood by Flink, not during the `open` call, and I could not find any obvious documentation explaining this. But you can do something like ``` lazy val acc: IntCounter = { val counter = new IntCounter() getRuntimeContext.addAccumulator(accName, counter) counter } ``` in your class definition (note that `addAccumulator` returns `Unit`, so the counter has to be created first and returned explicitly), and then call ``` acc.add(v) ``` at some other point in your code where this is guaranteed to be initialized, e.g. in one of the overridden methods of the Flink class. Upvotes: 1
2018/03/20
2,443
8,078
<issue_start>username_0: After many years, I'm finally cleaning up my .vimrc and digging into how the settings really, really work. I spent a lot of the day trimming cruft and reading help files. Now I'm down to a pretty minimal .vimrc that I'm pretty comfortable with -- and I believe I understand what each thing in it does (lots of commenting). I'll include my .vimrc at the bottom of this question. My problem is, I am working in Python (for the first time) and somehow vim has a bunch of settings (I don't like) for it--seemingly built-in. How can I either: 1. Figure out what is causing those settings and turn it off 2. Get it to give me a nice, blank slate for python files so I can select what settings I want? Notes: I deleted my .viminfo file (after backing it up) to get everything clean. When I open a python file and type `:set`, here is what I get: ``` :set --- Options --- autoindent helplang=en modified scroll=38 smartindent textwidth=76 comments=b:#,fb:- hlsearch number scrolloff=5 softtabstop=4 ttyfast expandtab incsearch relativenumber shiftwidth=4 suffixesadd=.py ttymouse=sgr filetype=python keywordprg=pydoc ruler showmatch syntax=python wildignore=*.pyc backspace=indent,eol,start cinkeys=0{,0},0),:,!^F,o,O,e commentstring=# %s fileencoding=utf-8 fileencodings=ucs-bom,utf-8,default,latin1 include=^\s*\(from\|import\) includeexpr=substitute(v:fname,'\.','/','g') indentexpr=GetPythonIndent(v:lnum) indentkeys=0{,0},:,!^F,o,O,e,<:>,=elif,=except omnifunc=pythoncomplete#Complete printoptions=paper:letter runtimepath=~/.vim,~/.vim/bundle/vundle,~/.vim/bundle/tabular,~/.vim/bundle/vim- es6,/var/lib/vim/addons,/usr/share/vim/vimfiles,/usr/ share/vim/vim74,/usr/share/vim/vimfiles/after,/var/lib/vim/addons/after,~/.vim/after,~/.vim/bundle/Vundle.vim/,~/.vim/bundle/vundle/after,~/.vim/bundle/tabular/after,~/.vim/bundle/vim-es6/after ``` Notice that `shiftwidth=4` and `softtabstop=4`, for example. If I open a *blank* file, here's the `:set` I get: ``` :set --- Options --- autoindent number shiftwidth=2 ttyfast expandtab relativenumber showmatch ttymouse=sgr helplang=en ruler smartindent hlsearch scroll=38 tabstop=2 incsearch scrolloff=5 textwidth=76 backspace=indent,eol,start fileencodings=ucs-bom,utf-8,default,latin1 printoptions=paper:letter runtimepath=~/.vim,~/.vim/bundle/vundle,~/.vim/bundle/tabular,~/.vim/bundle/vim-es6,/var/lib/vim/addons,/usr/share/vim/vimfiles,/usr/share/vim/vim74,/usr/share/vim/vimfiles/after,/var/lib/vim/addons/after,~/.vim/after,~/.vim/bundle/Vundle.vim/,~/.vim/bundle/vundle/after,~/.vim/bundle/tabular/after,~/.vim/bundle/vim-es6/after suffixes=.bak,~,.swp,.o,.info,.aux,.log,.dvi,.bbl,.blg,.brf,.cb,.ind,.idx,.ilg,.inx,.out,.toc ``` That is what I would expect to get (notice `shiftwidth=2`, for example. Here's my (rather short) .vimrc: ``` $ cat .vimrc " Turn on line numbering set number " Turn on relative numbers set relativenumber " Show the leader key set showcmd " Set the to space. let mapleader = " " " Always use a numberwidth minimum of 6 set numberwidth=6 """"""""""""""""""""""""""""""""""""""""""""""""""""""""""" """ Tabbing & Indenting """"""""""""""""""""""""""""""""""""""""""""""""""""""""""" " Set the width of a single 'shift'. This is what occurs when using the > " or < keys. set shiftwidth=2 " Set the size of a tab. set tabstop=2 " Expand tabs to spaces. set expandtab " Copy the current line's indent to the next line when starting a new line. set autoindent " Use 'smart' indenting for C-style languages. When this is on, autoindent " should also be on. 
set smartindent " Lets us use gg to re-indent the file and return to our current " location. map gg gg=Gu """"""""""""""""""""""""""""""""""""""""""""""""""""""""""" """ Informatics """"""""""""""""""""""""""""""""""""""""""""""""""""""""""" " Show an indicator of where you are in the file. set ruler """"""""""""""""""""""""""""""""""""""""""""""""""""""""""" """ Navigation """"""""""""""""""""""""""""""""""""""""""""""""""""""""""" " Make it more convenient to jump up or down a page. map j map k " Make it more convenient to go to the beginning or end of the file. map jj :$ map kk :1 """"""""""""""""""""""""""""""""""""""""""""""""""""""""""" """ Aesthetics """"""""""""""""""""""""""""""""""""""""""""""""""""""""""" " Syntax highlighting syntax on " By default, use the "distinguished" color scheme. colorscheme distinguished set scrolloff=5 """""""""""""""""""""""""""" """ Specific highlight rules highlight Todo none """"""""""""""""""""""""""""""""""""""""""""""""""""""""""" """ Search """"""""""""""""""""""""""""""""""""""""""""""""""""""""""" " Highlight matches as-you-search set showmatch " Turn on incremental search set incsearch """"""""""""""""" """ Highlighting " Show all search results highlighted set hlsearch " Clears search highlighting by just hitting a return. " The clears the command line. " (From <NAME> on the vim list.) " I added the final to restore the standard behaviour of " to go to the next line nnoremap :nohlsearch/ """ Highlighting """"""""""""""""" """"""""""""""""""""""""""""""""""""""""""""""""""""""""""" """ Line Structure """"""""""""""""""""""""""""""""""""""""""""""""""""""""""" " Set the line length for breaking text set textwidth=76 " Set this expression to control the way in which lines break. " set formatexpr """"""""""""""""""""""""""""""""""""""""""""""""""""""""""" """ File-level adjustments """"""""""""""""""""""""""""""""""""""""""""""""""""""""""" " Turn off swap files. noswapfile """"""""""""""""""""""""""""""""""""""""""""""""""""""""""" """ Plugins & Tools """"""""""""""""""""""""""""""""""""""""""""""""""""""""""" " Turn on plugins filetype plugin on " Stuff for the Tabular.vim plugin map ,t :Tab /= "command -nargs=1 T1 Tabular /^[^]\*\zs[]/l1r0 """ " Vundle config set nocompatible " be iMproved filetype off " required! set rtp+=~/.vim/bundle/Vundle.vim/ call vundle#begin() " let Vundle manage Vundle " required! Plugin 'gmarik/vundle' " Tabular Plugin 'godlygeek/tabular' " JavaScript ES6 plugin Plugin 'isRuslan/vim-es6' call vundle#end() filetype plugin indent on " required! " The JSONTidy command to format JSON. command JSONTidy %!python -m json.tool ``` So ... where are the custom settings for python files coming from? By the way, I checked my .vimrc directory and there are no `indent`, `syntax`, or `color` files for python. Why is `shiftwidth` 4 for python files? And where did my `tabstop=2` go (for example)?<issue_comment>username_1: Vim ships with several filetype plugins. I think the one that's causing your issue is this: <https://github.com/vim/vim/blob/master/runtime/ftplugin/python.vim> You could try removing the file. Or just find the options that you don't like, and `:let g:OPTION_NAME = 0`. There's also some discussion on overriding ftplugins here: <https://vi.stackexchange.com/questions/6658/how-to-override-a-ftplugin#6659>. Upvotes: 0 <issue_comment>username_2: 1. `:verbose set optionname?` will tell you the current value of `optionname` and where it was set. 2. 
The default Python settings are defined in: ``` $VIMRUNTIME/ftplugin/python.vim ``` Note that they follow Python's "standard" so changing any of that may or may not cause problems down the line (with linters, for example). 3. To override those default settings, create your own Python ftplugin at: ``` ~/.vim/after/ftplugin/python.vim ``` and add the desired options: ``` setlocal shiftwidth=2 setlocal tabstop=2 ``` As the name implies, that script is sourced *after* the built-in one, which makes it the right place for overriding built-in filetype-specific settings and adding your own. Upvotes: 3 [selected_answer]
2018/03/20
1,254
3,879
<issue_start>username_0: OK, so I'm trying to write a script to save the config on my network devices. I'm going wrong somewhere and I'm not sure where. It won't send the final confirmation "y" character, but I think it's due to what I'm telling it to expect. Current code: ``` import pexpect import sys import os import time import getpass hostname = "switch1" username = raw_input('Enter Your username: ') password = getpass.getpass('Password:') fout = file('mylog.txt','w') child = pexpect.spawn('ssh %s@%s' % (username, hostname)) child.logfile_read = fout child.expect('myusername@%s\'s password:' % hostname) child.sendline(password) child.expect('>') child.sendline('enable') child.expect('Password:') child.sendline(password) child.expect('#') child.sendline('wr mem') child.expect("Are you sure you want to save? (y/n) ") child.sendline('y') child.expect('#') child.sendline('logout') ``` I've also tried this line, which produced the same result: ``` child.expect("\r\n\r\nThis operation may take a few minutes.\r\nManagement interfaces will not be available during this time.\r\n\r\nAre you sure you want to save? (y/n) ") ``` Here is what it looks like on my switch: ``` (switch1) #write mem This operation may take a few minutes. Management interfaces will not be available during this time. Are you sure you want to save? (y/n) y Config file 'startup-config' created successfully . Configuration Saved! (switch1) # ``` Here is the error I get when I run the script: ``` python wrmem.py Enter Your username: myusername Password: Traceback (most recent call last): File "wrmem.py", line 35, in child.expect("Are you sure you want to save? (y/n) ") File "/usr/local/lib/python2.7/dist-packages/pexpect/spawnbase.py", line 327, in expect timeout, searchwindowsize, async\_) File "/usr/local/lib/python2.7/dist-packages/pexpect/spawnbase.py", line 355, in expect\_list return exp.expect\_loop(timeout) File "/usr/local/lib/python2.7/dist-packages/pexpect/expect.py", line 104, in expect\_loop return self.timeout(e) File "/usr/local/lib/python2.7/dist-packages/pexpect/expect.py", line 68, in timeout raise TIMEOUT(msg) pexpect.exceptions.TIMEOUT: Timeout exceeded. command: /usr/bin/ssh args: ['/usr/bin/ssh', 'myusername@switch1'] buffer (last 100 chars): 'nagement interfaces will not be available during this time.\r\n\r\nAre you sure you want to save? (y/n) ' before (last 100 chars): 'nagement interfaces will not be available during this time.\r\n\r\nAre you sure you want to save? (y/n) ' after: match: None match\_index: None exitstatus: None flag\_eof: False pid: 22641 child\_fd: 6 closed: False timeout: 30 delimiter: logfile: None logfile\_read: logfile\_send: None maxread: 2000 ignorecase: False searchwindowsize: None delaybeforesend: 0.05 delayafterclose: 0.1 delayafterterminate: 0.1 searcher: searcher\_re: 0: re.compile("Are you sure you want to save? (y/n) ") ``` Again, I think it has to do with the expect "are you sure you want to save" line, but I'm not sure. I can confirm that logging into the switch and other commands work, as I have other scripts; it's just this one I can't figure out. Any help is appreciated.<issue_comment>username_1: Check the log: `searcher: searcher_re: 0: re.compile("Are you sure you want to save? (y/n) ")` Compiling `"Are you sure you want to save? (y/n) "` as a regex treats `?`, `(` and `)` as special characters, so try to escape them like this: `"Are you sure you want to save\? \(y\/n\) "` (escaping the `/` as well is harmless, though not required), which will match the text `"Are you sure you want to save? `
(y/n) "`. Check [this](https://regex101.com/r/mK3TFG/1) out. So change the line to: `child.expect("Are you sure you want to save\? \(y\/n\) ")` Upvotes: 2 [selected_answer]<issue_comment>username_2: Pexpect also supports partial matches, so you can also use: ``` child.expect('Are you sure you want to save') ``` (Dropping the trailing `?` avoids it being read as a regex quantifier.) Upvotes: 0
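A third option (a sketch, not from the original answers) is to let the standard library do the escaping, so the literal prompt can be kept as-is:

```py
import re

prompt = "Are you sure you want to save? (y/n) "
child.expect(re.escape(prompt))  # escapes ?, ( and ) automatically
child.sendline('y')
```

`pexpect` compiles the argument as a regular expression, which is exactly why the unescaped metacharacters caused the original timeout: `save?` made the `e` optional and `(y/n)` became a group matching the literal `y/n`, so the pattern never matched the actual prompt.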
2018/03/20
363
1,391
<issue_start>username_0: Imagine I had the following model ```py class Post(models.Model): name = models.CharField(max_length=20) body = models.TextField() class Meta: permissions = permission_generator() ``` and what I want is for `permission_generator()` to generate the tuple ``` ( ('create_pet', 'create post'), ('read_pet', 'read post'), ('update_pet', 'update post'), ('delete_post', 'delete post'), ) ``` in a way that I can reuse in all my models without repeating myself each time I define a new model. How can something like this be implemented?<issue_comment>username_1: I've just found out that Django adds these by default: * add\_modelname * change\_modelname * delete\_modelname <https://docs.djangoproject.com/en/dev/topics/auth/default/#default-permissions> Upvotes: 1 <issue_comment>username_2: You can create an abstract model and inherit from it. For example: ``` class BaseModel(models.Model): class Meta: abstract=True permissions = ( ('create', 'Create'), ('read', 'Read'), ('update', 'Update'), ('delete', 'Delete') ) class Cat(BaseModel): name = models.CharField() ``` When you inherit that model and run the migrations, these permissions will be added to the `django_permissions` table and will look like: `app | cat | Permission name` Upvotes: 0
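For reference, a minimal sketch of such a generator (hypothetical, not from the answers; it normalizes the question's mixed `pet`/`post` strings to a single model name passed in by the caller):

```py
def permission_generator(model_name='post'):
    actions = ('create', 'read', 'update', 'delete')
    return tuple(
        ('{}_{}'.format(action, model_name), '{} {}'.format(action, model_name))
        for action in actions
    )

class Post(models.Model):
    name = models.CharField(max_length=20)
    body = models.TextField()

    class Meta:
        permissions = permission_generator('post')
```

As username_1 notes, `add_*`, `change_*`, and `delete_*` permissions already exist by default, so in practice only the extra names (e.g. `read_*`) may be worth generating.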
2018/03/20
1,216
4,449
<issue_start>username_0: My boss recently purchased a code signing certificate from Comodo. I now have the task of making it work in VS2013 using Strong Name Key signing and perhaps as a post-build event, too. He did the whole purchasing process on the same machine (Windows 8.1 64-bit laptop) and using the same browser (Firefox 59.0.1 64-bit). I have gone to the Menu > Options > Privacy & Security > View Certificates > Your Certificates screen in Firefox, and selected the cert that was just purchased. I then clicked the "Backup" button, named it something generic, like companyCert.p12 (I don't get any other choice or settings but .p12), clicked the "Save" button, and entered a 16-character alphanumeric (caps and lowercase) password, and clicked the "OK" button. I got the popup saying that the export was successful. When I try to import that .p12, or the extension-renamed .pfx, file into the local user's Personal or Trusted Publishers certificate store through mmc (even tried certmgr.msc just for kicks), I get: > > The password you entered is incorrect. > > > When I try to build my assemblies in VS using the .p12 file to Strong Name Key sign them, I get: > > The key file 'absolute\path\to\cert\companyCert.p12' does not contain a public/private key pair. > > > When I try to do the same thing but renaming the extension to .pfx, I am prompted for the password, so I input that, and I get: > > An error occurred during encode or decode operation. > > > So I then tried to import the .p12/.pfx using the command prompt (running as administrator) and CertUtil using the following line: ``` certutil -importPFX -user "absolute\path\to\cert\companyCert.p12" AT_SIGNATURE ``` and I was prompted for the password, which I input, and got: > > CertUtil: -importPFX command FAILED: 0x80092002 (-2146885630 CRYPT\_E\_BAD\_ENCODE) > > > CertUtil: An error occurred during encode or decode operation. > > > for both .p12 and .pfx. I've tried all of these several times just in case I did something wrong with the password or something. I've tried going back into the Firefox certificates and instead of clicking on the "Backup" button in the Your Certificates screen, clicking on the code signing certificate, and clicking "View..." That takes me to the Certificate Viewer window, where I click on the Details tab and click the "Export" button. There, I am given the choice of X.509 Certificates: > > PEM (.crt/.pem, both with or without chain) > > > DER (.der without chain) > > > and PKCS#7 (.p7c with or without chain) > > > I did all but .der when trying to do the code signing, but to no avail. I was able to import some of them into the local user's Personal and Trusted Publisher stores, but I was unable to get any of them to work in VS for Strong Name Key signing. As I understand it, I need to be able to get the private/public key certificate in the local store on the OS level, not just Firefox's browser level, and then I should be able to export how I need to. Is that correct? If it's not, please, someone tell me what I can do to get this to work.<issue_comment>username_1: I downloaded - **DigiCertUtil** - and manually installed the certificate with this tool, and it worked perfectly! <https://www.digicert.com/util/> I managed to re-export the file and use the certUtil.exe with -importPFX. Initially i used the digicert utility to install the certificate on one machine and add it to the store. Then i exported it from the google chrome browser to a new .pfx file. 
(Settings > Manage certificates > Export). And voilà, all of a sudden the file was no longer corrupted and could be used with certUtil.exe. I saw a post where the Comodo Support Team blamed Firefox for the issue, which made me wonder if Chrome could export after the use of DigiCertUtil. Upvotes: 1 <issue_comment>username_2: In my case I could successfully import the .pfx file on my desktop running Windows 10, but when I tried to import the certificate on the server running Windows Server 2016 I received the error **"The password you entered is incorrect"**. To solve the problem: 1. Remove the certificate from my Windows (using Management Console / Certificates) 2. Import the original .pfx certificate on my desktop computer 3. Export the certificate (using Management Console / Certificates) **The trick is:** when exporting the certificate, select the encryption "TripleDES-SHA1". Upvotes: 2
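username_2's TripleDES-SHA1 observation points at the likely underlying cause: Firefox exports PKCS#12 files with newer encryption than some Windows tooling can parse. If OpenSSL is available, a hedged command-line equivalent of the re-export (file names are placeholders) would be:

```
# Unpack the Firefox-exported file to plain PEM (prompts for the p12 password)
openssl pkcs12 -in companyCert.p12 -out companyCert.pem -nodes

# Repack it with the legacy 3DES/SHA1 algorithms Windows expects
openssl pkcs12 -export -in companyCert.pem -out companyCert-fixed.pfx \
    -keypbe PBE-SHA1-3DES -certpbe PBE-SHA1-3DES -macalg sha1
```

The intermediate .pem file contains the unencrypted private key, so delete it immediately after the repack.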
2018/03/20
1,079
3,264
<issue_start>username_0: I'm learning about regex. If I want to find all the 5 letter words in a string, I could use: ``` import re text = 'The quick brown fox jumps over the lazy dog.' print(re.findall(r"\b[a-zA-z]{5}\b", text)) ``` But I want to write a simple function whose argument includes the string and the length of the word being found. I tried this: ``` import re def findwords(text, n): return re.findall(r"\b[a-zA-z]{n}\b", text) print(findwords('The quick brown fox jumps over the lazy dog.', 5)) ``` But this returns an empty list. The `n` is not being recognized. How can I specify an argument with the number of repetitions (or in this case, the length of the word)?<issue_comment>username_1: It's simpler than you may realize. *There is nothing special about a "regex string"*: it is a simple, basic, everyday text string. About the only thing remotely remarkable is that it is *usually* defined with the `r` prefix, because the backslash means something in (unprefixed) Python strings as well, and you don't want to double up these, and ... it is fed as-is into Python's internal regex module. So where the string comes from, doesn't really matter! Construct it any way you like, then feed the result into `re.findall`: ``` def findwords(text, n): return re.findall(r"\b[a-zA-z]{" +str(n) + r"}\b", text) >>> findwords(text, 3) ['The', 'fox', 'the', 'dog'] >>> findwords(text, 4) ['over', 'lazy'] ``` Note the repeated use of `r`, because it is not a regex peculiarity but a Python one, and you need to prefix *all* separate strings with it to prevent backslashes running rampant and messing up your carefully constructed expression. (The same goes for the input to this function. This will also work, unless you test the argument and reject non-numbers: ``` >>> findwords(text, '5} {1') ['quick ', 'brown ', 'jumps '] ``` ... which I did not.) Upvotes: 2 <issue_comment>username_2: Python does not magically fill the value of `n` into the string. For this you either need to use `format`: ``` r"\b[a-zA-z]{{{}}}\b".format(n) ``` or, if you are running Python >= 3.6, use the new [f-strings](https://docs.python.org/3/whatsnew/3.6.html#whatsnew36-pep498) (which can be combined with the `r` prefix denoting a raw string): ``` fr"\b[a-zA-z]{{{n}}}\b" ``` In both cases you need the outer two `{{}}` to create a literal `{}` and the inner is a format placeholder. If you want to avoid having to escape the literal `{}`, you can use the older `%`-formatting to achieve the same thing. For this `n` needs to always be an integer (which it is here): ``` r"\b[a-zA-z]{%i}\b" % n ``` Upvotes: 3 <issue_comment>username_3: This can be done very easily without generating a regex pattern. Just simply extract all words and then use list comprehension to gather all words of length `n`. [See code in use here](https://tio.run/##JcwxEoIwFEXRPqt4QwOMQkNN4R6sZCjQBImG/PAJBth8xLG@c67b/EC2ilGPjtiDlRBerR410uugMC368cadKVj0tOK1jG4GfRTDH9l0@wZJzzIVgVjOB2NV9trKzpiMk6Yr9ktxa0/JGb9tLoRjbX3WhGPHCNAWf6l7GGWzkKOuUbV5jF8) ``` import re text = 'The quick brown fox jumps over the lazy dog.' words = re.findall(r"[a-zA-Z]+", text) print([w for w in words if len(w) == 3]) ``` Result: `['The', 'fox', 'the', 'dog']` Upvotes: 2
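Putting the answers together, a hedged sketch with the injection guard username_1 demonstrates (it also uses `[a-zA-Z]` rather than the question's `[a-zA-z]`, since the latter range accidentally matches the characters between `Z` and `a`, such as `_` and `[`):

```py
import re

def findwords(text, n):
    # Reject anything that could smuggle extra regex syntax into the pattern.
    if not isinstance(n, int) or n < 1:
        raise ValueError("n must be a positive integer")
    return re.findall(r"\b[a-zA-Z]{%d}\b" % n, text)

print(findwords('The quick brown fox jumps over the lazy dog.', 5))
# ['quick', 'brown', 'jumps']
```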
2018/03/20
1,015
3,991
<issue_start>username_0: The context ----------- I'm building my own library of code for dealing with ElasticSearch, using the Tirexs package. This is my first time heading deep into macros, dependencies, use, import and several other of the most advanced features Elixir offers. For that I have defined a `Document` struct, which looks roughly like this (`/lib/data/document.ex`): ``` defmodule Document do use Document.ORM import Document.Builder, only: [build_document: 2] import Document.Builder.Validations, only: [is_document_valid?: 1, collect_errors: 1] defstruct [valid?: false, errors: nil, fields: %{}] def something do # uses imported functions from Document.Builder and Document.Builder.Validations end end ``` The `Document` module then uses several other functions from the `Document.ORM` module which don't seem to be the cause of the error. The problem ----------- My error is the following ``` Compilation failed because of a deadlock between files. dataobj_1 | The following files depended on the following modules: dataobj_1 | dataobj_1 | web/controllers/document_controller.ex => Document dataobj_1 | lib/data/document/builder.ex => Document.Builder.AuxiliarBuilder dataobj_1 | lib/data/document.ex => Document.Builder dataobj_1 | lib/data/document/orm/bulk/utils.ex => Document dataobj_1 | lib/data/document/builder/auxiliar.ex => Document ``` There's a deadlock which I don't know how to approach; I'm sure I'm doing something wrong. The first dependency, the `document_controller` one, occurs (*I think*) because it refers to both the module `Document` and the `%Document` struct in different places: ``` defmodule Data.DocumentController do use Data.Web, :controller def create(conn, %{"document" => document_params}) do {:ok, doc} = document_params |> Document.new case Document.save(doc) do {:ok, record} -> conn |> put_status(201) |> render(Data.DocumentView, "document.json", payload: Document.find(record._id)) {:error, map_of_errors} -> conn |> put_status(422) |> render(Data.ErrorView, "422.json", errors: map_of_errors) end end def update(conn, %{"id" => id, "document" => document}) do case Document.update(id, document) do %Document{valid?: false, errors: errors} -> conn |> put_status(422) |> render(Data.ErrorView, "422.json", errors: errors) docset -> conn |> put_status(200) |> render(Data.DocumentView, "update.json", payload: docset) end end ``` The other dependencies also refer to both the module and the struct, so I'm thinking the deadlock has to do with this. But I'm lost about what to do. If necessary I can share more code, but I think this is enough to start with. Thanks in advance!<issue_comment>username_1: You have circular dependencies. ``` Document -> Document.Builder -> Document.AuxiliarBuilder -> Document ``` Generally speaking, you should avoid having child modules (`Document.Builder`, `Document.AuxiliarBuilder`) call their parent (`Document`). It sounds like you need to extract those functions out of `Document` and put them elsewhere. **Note**: Elixir does not actually have a concept of parent and child modules. I am just using the words here because that is how I think about it. Upvotes: 4 [selected_answer]<issue_comment>username_2: Apparently the circular dependency was happening because of the definition of the struct, namely `%Document`, and confusion with the module name.
Moving the struct definition into its own separate file (first couple of lines [here](https://elixirschool.com/en/lessons/basics/modules/#structs) were the hint), and changing the module name from `Document` to `Data.Document` (`%Data.Document`) was enough to solve all circular dependencies. Upvotes: 1
2018/03/20
436
1,798
<issue_start>username_0: I'm currently learning about JS objects and trying to perform basic subtraction on a property that is then output via a method. However, the object method returns a value of NaN even though `typeof` tells me `currentAge` and `currentYear` are numbers. What am I doing wrong, and how can I get the method below to output the correct number? ```js var today = new Date(); var year = today.getFullYear(); var Sam = { age: 27, //Number birthMonth: "March", //March currentYear: year, //number birthYear: this.currentYear - this.age, // Method to say birth year sayBirthYear: function(){ console.log("I was born in the year " + this.birthYear ); // returns NaN } } Sam.sayBirthYear(); // Outputs Birth year ```
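For context: at the moment `birthYear: this.currentYear - this.age` is evaluated, the object literal does not exist yet, so `this` refers to the enclosing context (e.g. `window`), both operands are `undefined`, and the subtraction yields `NaN`. One common fix (a sketch, not from the original thread) is to make `birthYear` a getter, so it is computed on access:

```js
var today = new Date();
var year = today.getFullYear();

var Sam = {
  age: 27,
  birthMonth: "March",
  currentYear: year,
  // Evaluated on each access, when `this` really is Sam.
  get birthYear() {
    return this.currentYear - this.age;
  },
  sayBirthYear: function () {
    console.log("I was born in the year " + this.birthYear);
  }
};

Sam.sayBirthYear(); // "I was born in the year 1991" (when run in 2018)
```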
2018/03/20
583
2,388
<issue_start>username_0: I've made an array of geometrical figures and a for loop that prints the names of the figures that have a right angle, but I would like to print one random name from the figures that match the condition and, if possible, create another array that contains only the matching figures. I was trying to use some method from `java.util.Random` but I couldn't figure out how. I'd be thankful for your help: ``` import java.util.Random; public class rectangularFigures { private String name; private boolean rightAngle; public String getName() { return name; } public rectangularFigures(String name, boolean rightAngle) { this.name = name; this.rightAngle = rightAngle; } public static void main(String[] args) { rectangularFigures[] lOFigures = new rectangularFigures[4]; lOFigures[0] = new rectangularFigures("whell", false); lOFigures[1] = new rectangularFigures("square", true); lOFigures[2] = new rectangularFigures("rhombus", false); lOFigures[3] = new rectangularFigures("rectangle", true); for (int i = 0; i < lOFigures.length; i++) { if (lOFigures[i].rightAngle) { System.out.println(lOFigures[i].name); } } } } ```
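One way to do both things at once (a sketch meant to sit inside `main`, where the private `rightAngle` field is accessible; `Random.nextInt(bound)` returns a uniform index in `[0, bound)`):

```java
// Collect the matching figures into a second array.
java.util.List<rectangularFigures> matches = new java.util.ArrayList<>();
for (rectangularFigures figure : lOFigures) {
    if (figure.rightAngle) {
        matches.add(figure);
    }
}
rectangularFigures[] rightAngled = matches.toArray(new rectangularFigures[0]);

// Print one random matching name.
java.util.Random random = new java.util.Random();
System.out.println(rightAngled[random.nextInt(rightAngled.length)].getName());
```

If no figure matches, `rightAngled.length` is 0 and `nextInt` would throw, so a length check is worth adding in real code.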
2018/03/20
738
2,991
<issue_start>username_0: Given a site with AJAX components on the page, I need to wait until the whole page is fully loaded. Here is my wait method using JavascriptExecutor, checking document.readyState: ``` public void waitForLoading2() { WebDriverWait wait = new WebDriverWait(driver, timeOut); if(!driver.findElements(By.xpath("//*[@id='wait'][contains(@style, 'display: block')]")).isEmpty()) { wait.until(ExpectedConditions.presenceOfElementLocated(By.xpath("//*[@id='wait'][contains(@style, 'display: none')]"))); } ExpectedCondition<Boolean> expectation = new ExpectedCondition<Boolean>() { public Boolean apply(WebDriver driver) { return ((JavascriptExecutor) driver).executeScript("return document.readyState").toString().equalsIgnoreCase("complete"); } }; wait.until(expectation); } ``` Sometimes it fails with the following error message: > > org.openqa.selenium.JavascriptException: JavaScript error (WARNING: > The server did not provide any stacktrace information) > > > What did I miss here? My assumption is that document.readyState is common and can always be checked. Thanks<issue_comment>username_1: Checking `document.readyState` will not help you with AJAX calls. Your best bet is to find an element in the area being loaded by AJAX and wait until it is visible. Then you'll know that the page is loaded. If there are multiple/separate areas loaded by AJAX calls, then you will want to pick an element from each area. Upvotes: 0 <issue_comment>username_2: If you use jQuery to send AJAX requests, you can check the value of `jQuery.active`: all AJAX requests are complete when `jQuery.active == 0`. Use `executeScript("return jQuery.active==0")`. For details please read this [article](https://www.swtestacademy.com/selenium-wait-javascript-angular-ajax/) Upvotes: 0 <issue_comment>username_3: There are more [complex options](http://hrabosch.com/2016/05/12/35/), like this one ``` public static void waitForAjax(WebDriver driver, String action) { driver.manage().timeouts().setScriptTimeout(5, TimeUnit.SECONDS); ((JavascriptExecutor) driver).executeAsyncScript( "var callback = arguments[arguments.length - 1];" + "var xhr = new XMLHttpRequest();" + "xhr.open('POST', '/" + action + "', true);" + "xhr.onreadystatechange = function() {" + " if (xhr.readyState == 4) {" + " callback(xhr.responseText);" + " }" + "};" + "xhr.send();"); } ``` in order > > to wait till the whole page is fully loaded > > > But the following did the trick for me - I check if there are ongoing AJAX calls and wait till those are done: ``` JavascriptExecutor js = (JavascriptExecutor) driver; Boolean ajaxDone = (Boolean) js.executeScript("return (window.jQuery != null) && (jQuery.active === 0);"); ``` Upvotes: 1
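Combining the two ideas into one explicit wait (a hedged sketch for Selenium 3, where `WebDriverWait(driver, seconds)` takes a timeout in seconds; the `window.jQuery == null` guard keeps it from failing on pages without jQuery):

```java
import org.openqa.selenium.JavascriptExecutor;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.support.ui.WebDriverWait;

public static void waitForPageAndAjax(WebDriver driver) {
    new WebDriverWait(driver, 30).until(d -> (Boolean)
            ((JavascriptExecutor) d).executeScript(
                    "return document.readyState === 'complete'"
                            + " && (window.jQuery == null || jQuery.active === 0);"));
}
```

This still cannot see work scheduled after the wait returns (e.g. chained AJAX calls), which is why username_1's advice to also wait for a concrete element in the AJAX-loaded area remains the most robust check.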
2018/03/20
952
3,460
<issue_start>username_0: I have my form defined as follows in the PHP file: ``` First name: Last name: Submit Top Test ``` The following function is called when the button is clicked. ``` function submit(){ var formdata = $("#testForm").serializeArray(); var sendJson = JSON.stringify(formdata); var request = $.ajax({ url: 'php//processPost.php', type:'POST', data:{myData:sendJson}, success: function(msg) { alert(msg); }, error: function(e){ alert("Error in Ajax "+e.message); } }); } ``` In the `processPost.php` file, I have defined the following: ``` $json_obj = json_encode($_POST["myData"]); var_dump($json_obj); ``` To define the above PHP stuff, I used the [answer posted](https://stackoverflow.com/questions/10955017/sending-json-to-php-using-ajax?utm_medium=organic&utm_source=google_rich_qa&utm_campaign=google_rich_qa) by `<NAME>` with just a minor change (instead of `json_decode`, I am using `json_encode`). I can see the following getting printed in the `alert(msg)` window. ``` string(93) ""[{\"name\":\"FirstName\",\"value\":\"Mickey\"},{\"name\":\"LastName\",\"value\":\"Mouse\"}]"" ``` I plan on making a cURL request to my Java web service and I want to send JSON something like this: ``` [{ "name": "FirstName", "value": "Mickey" }, { "name": "LastName", "value": "Mouse" }] ``` How can I get rid of `string(93)` and the escaping backslashes `\` from the above output to make it valid JSON first? Once I have valid JSON, I am planning on sending it as an object using cURL.
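The output shown is double-encoded JSON: `$_POST["myData"]` already holds a JSON *string*, so `json_encode()` encodes it a second time (hence the wrapping quotes and escaped `\"` sequences), and `var_dump()` adds the `string(93)` length prefix. A hedged sketch of decoding once and forwarding with cURL (the endpoint URL is a placeholder):

```php
<?php
// Decode the incoming JSON string once; true => associative arrays.
$data = json_decode($_POST['myData'], true);

// ... validate/transform $data as needed ...

// Re-encode exactly once when handing it to the Java web service.
$payload = json_encode($data);

$ch = curl_init('https://example.com/api/persons'); // placeholder endpoint
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, $payload);
curl_setopt($ch, CURLOPT_HTTPHEADER, ['Content-Type: application/json']);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$response = curl_exec($ch);
curl_close($ch);

echo $payload; // [{"name":"FirstName","value":"Mickey"},...]
```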
2018/03/20
719
2,876
<issue_start>username_0: In a Laravel controller I currently have these two variables, which pull from their respective models. ``` $guests = Person::where('id', $id) ->with('languages') ->get(); $languages = Language::pluck('name')->toArray(); return view('layouts.visitor', ['guests' => $guests, 'languages' => $languages]); ``` In the database table I have a few different fields: ``` id | name ---------- 1 | spanish 2 | english 3 | french ``` When selecting a language, I'm able to select the language assigned to a user, and I am also able to select all the languages in the table. What I would like to do, and am not able to figure out, is to show the language currently assigned to a user first when the page loads; then, if the user would like to change it, they should see all the OTHER languages available in the drop-down. If the default language for this user were Spanish, it should be like this: ``` Drop down: Spanish English French ```
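One way to get that ordering (a hedged sketch: it assumes `Person` has a `languages` relation whose rows expose a `name` attribute, matching the `with('languages')` call above, and that the guest's current language is the first related row):

```php
<?php
$guest = Person::with('languages')->findOrFail($id);

// Assumption: one current language per guest.
$current = optional($guest->languages->first())->name;

$languages = Language::pluck('name')->toArray();

// Move the current language to the front, keep the rest in table order.
$ordered = array_merge(
    in_array($current, $languages, true) ? [$current] : [],
    array_values(array_diff($languages, [$current]))
);

return view('layouts.visitor', [
    'guest'     => $guest,
    'languages' => $ordered,
]);
```

The Blade template can then simply loop over `$languages` in order, so the first option rendered is the guest's current language.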
2018/03/20
311
1,108
<issue_start>username_0: I have 2 different divs inside an id (screenshot below). I want to select each `itslocked` div and append it to the end, so the normal divs appear on top and all the `itslocked` divs appear after them. [![enter image description here](https://i.stack.imgur.com/hvaBw.png)](https://i.stack.imgur.com/hvaBw.png) ``` $("#rewardCount").find(".itslocked").each(function() { $lock_content=$(this); $(".itslocked").remove(); console.log($lock_content[0].outerHTML); $("#rewardCount").append($lock_content[0].outerHTML); }); ``` The above code only appends a single `itslocked` div and all the others get removed. How do I fix it so that all the `itslocked` divs line up at the bottom? Thanks.<issue_comment>username_1: Try replacing `$(".itslocked").remove();` with `$(this).remove();`. Since you have a loop, you don't want to remove all elements on the first iteration. Upvotes: 1 <issue_comment>username_2: You are removing all divs in your loop. Try this: ``` $(".itslocked", $("#rewardCount")).appendTo("#rewardCount"); ``` Upvotes: 3 [selected_answer]
2018/03/20
741
2,180
<issue_start>username_0: I have to sum two numbers (integers) in LaTeX. I also have to "print" the process of the sum, so it would look like 5+2=7 in text. Any ideas? My code so far: ``` \newcommand{\Sum} {\newcounter{cnt} \setcountter{cnt}{1+1} } ```<issue_comment>username_1: In LaTeX, first you have to define a counter with: ``` \newcounter{countername} ``` Then you can put a value in this counter with: ``` \setcounter{countername}{value} ``` where `value` is an integer. Or you can add one to this counter with: ``` \stepcounter{countername} ``` or you can add some arbitrary value to this counter with: ``` \addtocounter{countername}{value} ``` --- Then, to access this counter you use: ``` \value{countername} ``` so you can, for example, make calculations with this counter: ``` \setcounter{anothercounter}{% \numexpr\value{countername}+10\relax% } ``` Then, when you need to print the value of this counter to the pdf file, you can use the mighty `\the`: ``` \the\value{countername} ``` or you can use one of these: ``` \arabic{countername} \roman{countername} \Roman{countername} \alph{countername} \Alph{countername} ``` Upvotes: 4 <issue_comment>username_2: I've managed to solve the problem. ``` \newcommand{\Sum} [2] {#1 + #2 = \the\numexpr #1 + #2 \relax \\} ``` And then I use my command as: ``` \Sum {7} {3} ``` Upvotes: 4 [selected_answer]<issue_comment>username_3: Perhaps a different syntax can be in order; instead of supplying each argument to calculate a sum, you can supply the operator with the operands. This allows you to be a little more flexible in terms of the input and also provide more functionality: [![printed output](https://i.stack.imgur.com/1T5DC.png)](https://i.stack.imgur.com/1T5DC.png) ``` \documentclass{article} \usepackage{xfp} \NewDocumentCommand{\showcalculation}{o m}{$ \IfValueTF{#1} {#1}{#2} = \fpeval{#2} $} \begin{document} \showcalculation{7+3} \showcalculation{1+2-3+4} \showcalculation[22 \div 7]{22 / 7} \showcalculation[10 \times 3^{7 - 7}]{10 * 3 ^ (7 - 7)} \end{document} ``` The optional argument for `\showcalculation` uses LaTeX formatting for the printable calculation. Upvotes: 3
2018/03/20
965
3,296
<issue_start>username_0: I wrote a REST controller to return an image associated with a primary key. Now I want to load this image in the browser and I am running into issues: (1) If I type a GET URL to the image, the browsers (Firefox and Chrome) don't display the image, even though they see all the headers properly. Additionally, Firefox says "The image cannot be displayed because it contains errors". (2) If I use XMLHttpRequest to GET the image via the URL, I get the image but it displays only partially (the bottom half is cut off and is set to transparent). ``` @GetMapping("/{featureId}/loadImage") public ResponseEntity loadImageForId(@PathVariable long featureId, HttpServletResponse response) throws IOException { log.info("Getting image for feature id " + featureId); Feature feature = featureService.getFeatureById(featureId); File file = featureService.loadImageForFeature(feature); byte [] imageData = new byte[(int) file.length()]; FileInputStream inputStream = new FileInputStream(file); inputStream.read(imageData); inputStream.close(); HttpHeaders headers = new HttpHeaders(); headers.setContentType(MediaType.parseMediaType(...)); headers.setContentLength(file.length()); response.setHeader("Content-Disposition", "inline; filename=" + file.getName()); return new ResponseEntity(imageData, headers, HttpStatus.OK); } ```
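One detail in this code can by itself produce a truncated, half-transparent image: `InputStream.read(byte[])` is not guaranteed to fill the buffer in a single call, so `imageData` may end with zero bytes that corrupt the image payload. A hedged fix is to read the file exhaustively, for example:

```java
import java.nio.file.Files;

// Reads the entire file, however many underlying read() calls it takes.
byte[] imageData = Files.readAllBytes(file.toPath());
```

Pairing that with an exact `Content-Type` (e.g. `MediaType.IMAGE_PNG`) and the `Content-Length` already being set should let the browser render the full image.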
2018/03/20
644
2,009
<issue_start>username_0: I have a struct such as ``` typedef struct bignum { long number_of_digits; char *digit; } bignum; ``` and I want to declare an array of type bignum. The array size is going to change dynamically, so I used `malloc()` and `realloc()`. Can I shrink the array using `realloc()` without memory leaks? Sample of the shrinking code: ``` if(free_slots == 50) { big_num_Arr =(bignum *) realloc (big_num_Arr,(capacity-40)*sizeof(bignum)); free_slots = 10; capacity -= 40; } ```<issue_comment>username_1: Assuming the `digit` pointers point to dynamically-allocated arrays, you need to free them before `realloc()` deallocates those elements of `big_num_Arr`. The elements being cut off are the last 40, i.e. indices `capacity - 40` through `capacity - 1`: ``` if(free_slots == 50) { for (long i = capacity - 40; i < capacity; i++) { free(big_num_Arr[i].digit); /* free(NULL) is a safe no-op for never-filled slots */ } big_num_Arr = realloc(big_num_Arr,(capacity-40)*sizeof(bignum)); free_slots = 10; capacity -= 40; } ``` Also, see [Do I cast the result of malloc?](https://stackoverflow.com/questions/605845/do-i-cast-the-result-of-malloc) Upvotes: 2 [selected_answer]<issue_comment>username_2: > can I shrink the array using realloc() without memory leaks (?) > > Yes, but not robustly with OP's code. Any reallocation, even a shrinking one, may return `NULL` and that should be checked. ``` if (free_slots == 50) { size_t new_capacity = capacity-40; ... if (new_capacity > 0) { void *new_ptr = realloc(big_num_Arr, sizeof *big_num_Arr * new_capacity); if (new_ptr) { big_num_Arr = new_ptr; capacity = new_capacity; } else if (new_capacity <= capacity) { // perhaps leave values "as is" // big_num_Arr = big_num_Arr; // capacity = capacity; } else { // allocation failure // perhaps leave values "as is", yet return an error // big_num_Arr = big_num_Arr; // capacity = capacity; return failure; } } else { free(big_num_Arr); big_num_Arr = NULL; capacity = 0; } } ``` Upvotes: 0
2018/03/20
1,565
4,563
<issue_start>username_0: Let's say I have a unsorted set of items: ``` input = set([45, 235, 3, 77, 55, 80, 154]) ``` I need to get random values from this input but in a specific range. E.g. when I have ``` ran = [50, 100] ``` I want it to return either 77 or 55 or 80. What's the fastest way to get this for large sets in python?<issue_comment>username_1: Using a `set` for this isn't the right way because elements aren't sorted. This would lead to a `O(N)` solution to test each element against the boundaries. I'd suggest to turn the data into a sorted list, then you can use `bisect` to find start & end indexes for your boundary values, then apply `random.choice` on the sliced list: ``` import bisect,random data = sorted([45, 235, 3, 77, 55, 80, 154]) def rand(start,stop): start_index = bisect.bisect_left(data,start) end_index = bisect.bisect_right(data,stop) return data[random.randrange(start_index,end_index)] print(rand(30,100)) ``` `bisect` has `O(log(N))` complexity on sorted lists. Then pick an index with `random.randrange`. `bisect` uses compiled code on mainstream platforms, so it's very efficient besides its low complexity. Boundaries are validated by performing a limit test: ``` print(rand(235,235)) ``` which prints `235` as expected (always difficult to make sure that the arrays aren't out of bounds when using random) (if you want to update your data while running, you can also use `bisect` to insert elements, it's slower than with `set` because of the `O(log N)` complexity + insertion in `list`, of course but you cannot have everything) Upvotes: 3 <issue_comment>username_2: ``` from random import randint input = set([45, 235, 3, 77, 55, 80, 154]) ran = [50, 100] valid_values = [] for i in input: if ran[0] <= i <= ran[1]: valid_values.append(i) random_index = randint(0, len(valid_values)-1) print(valid_values[random_index]) ``` Upvotes: 1 <issue_comment>username_3: Here is my suggestion that I find readable, easy to understand and quite short: ``` import random inputSet = set([45, 235, 3, 77, 55, 80, 154]) ran = [50,100] # Get list of elements inside the range a = [x for x in inputSet if x in range(ran[0],ran[1])] # Print a random element print(random.choice(a)) # randomly 55, 77 or 80 ``` Note that I have not used the name `input` for the defined set because it is a reserved built-in symbol. Upvotes: 0 <issue_comment>username_4: You didn't clarify whether you could or could not use `numpy` but also asked for "the fastest" so I'll include the `numpy` method for completeness. In this case, the "`python_method`" approach is the [answer given by username_1 here](https://stackoverflow.com/a/49393866/4799172) ``` import numpy as np import bisect,random data = np.random.randint(0, 60, 10000) high = 25 low = 20 def python_method(data, low, high): data = sorted(data) start_index = bisect.bisect_left(data,low) end_index = bisect.bisect_right(data,high) return data[random.randrange(start_index,end_index)] def numpy_method(data, low, high): return np.random.choice(data[(data >=low) & (data <= high)]) ``` Timings: ``` %timeit python_method(data, low, high) 2.34 ms ± 11.3 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) %timeit numpy_method(data, low, high) 33.2 µs ± 72.4 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each) ``` Obviously, though, you'd only `sort` the list once if you were using that function several times, so that will cut down the Python runtime quite to the same level. ``` %timeit new_data = sorted(data) 2.33 ms ± 39.4 µs per loop (mean ± std. dev. 
of 7 runs, 100 loops each) ``` `numpy` would pull ahead again in cases where you needed multiple results from within a single range as you could get them in a single call. EDIT: In the case that the input array is already sorted, and you're sure you can exploit that (taking `sorted()` out of `timeit`), the pure python method wins in the case of picking single values: ``` %timeit python_method(data, low, high) 5.06 µs ± 16.3 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each) ``` The un-modified `numpy` method gives: ``` %timeit numpy_method(data, low, high) 20.5 µs ± 668 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each) ``` So, as far as I can tell, in cases where the list is already sorted, and you only want one result, the pure-python method wins. If you wanted multiple results from within that range it might be different but I'm benchmarking against `randrange`. Upvotes: 3 [selected_answer]
2018/03/20
926
3,292
<issue_start>username_0: I'm trying to consume a webservice using JAX-WS client and https. The problem now is that i successfully created an instance of the service and get the port of the service but when trying to consume a service using that port it gives connection timeout. ``` > com.sun.xml.internal.ws.client.ClientTransportException: HTTP transport error: java.net.ConnectException: Connection timed out: connect at com.sun.xml.internal.ws.transport.http.client.HttpClientTransport.getOutput(Unknown Source) at com.sun.xml.internal.ws.transport.http.client.HttpTransportPipe.process(Unknown Source) at com.sun.xml.internal.ws.transport.http.client.HttpTransportPipe.processRequest(Unknown Source) at com.sun.xml.internal.ws.transport.DeferredTransportPipe.processRequest(Unknown Source) at com.sun.xml.internal.ws.api.pipe.Fiber.__doRun(Unknown Source) at com.sun.xml.internal.ws.api.pipe.Fiber._doRun(Unknown Source) at com.sun.xml.internal.ws.api.pipe.Fiber.doRun(Unknown Source) at com.sun.xml.internal.ws.api.pipe.Fiber.runSync(Unknown Source) at com.sun.xml.internal.ws.client.Stub.process(Unknown Source) at com.sun.xml.internal.ws.client.sei.SEIStub.doProcess(Unknown Source) at com.sun.xml.internal.ws.client.sei.SyncMethodHandler.invoke(Unknown Source) at com.sun.xml.internal.ws.client.sei.SyncMethodHandler.invoke(Unknown Source) at com.sun.xml.internal.ws.client.sei.SEIStub.invoke(Unknown Source) at com.sun.proxy.$Proxy31.manuallyAuthorizeWithValidation(Unknown Source) Caused by: java.net.ConnectException: Connection timed out: connect at java.net.DualStackPlainSocketImpl.connect0(Native Method) at java.net.DualStackPlainSocketImpl.socketConnect(Unknown Source) at java.net.AbstractPlainSocketImpl.doConnect(Unknown Source) at java.net.AbstractPlainSocketImpl.connectToAddress(Unknown Source) at java.net.AbstractPlainSocketImpl.connect(Unknown Source) at java.net.PlainSocketImpl.connect(Unknown Source) at java.net.SocksSocketImpl.connect(Unknown Source) at java.net.Socket.connect(Unknown Source) at java.net.Socket.connect(Unknown Source) at sun.net.NetworkClient.doConnect(Unknown Source) at sun.net.www.http.HttpClient.openServer(Unknown Source) at sun.net.www.http.HttpClient.openServer(Unknown Source) at sun.net.www.http.HttpClient.(Unknown Source) at sun.net.www.http.HttpClient.New(Unknown Source) at sun.net.www.http.HttpClient.New(Unknown Source) at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(Unknown Source) at sun.net.www.protocol.http.HttpURLConnection.plainConnect0(Unknown Source) at sun.net.www.protocol.http.HttpURLConnection.plainConnect(Unknown Source) at sun.net.www.protocol.http.HttpURLConnection.connect(Unknown Source) at sun.net.www.protocol.http.HttpURLConnection.getOutputStream0(Unknown Source) at sun.net.www.protocol.http.HttpURLConnection.getOutputStream(Unknown Source) ... 15 more ```<issue_comment>username_1: Make sure the web service is running. You can use a tool such as SoapUI (free) to test SOAP web services. Upvotes: 0 <issue_comment>username_2: Make sure service is running using SOAP, if it's then try increasing the time if still not working, try to change the port Upvotes: 1
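If the service does respond in SoapUI but the Java client still times out, the client-side timeouts and the endpoint address can be adjusted on the port's request context. The property names below are the ones commonly used with the JDK-bundled JAX-WS runtime — treat them as assumptions to verify for your runtime:

```java
import javax.xml.ws.BindingProvider;
import java.util.Map;

Map<String, Object> ctx = ((BindingProvider) port).getRequestContext();
// JDK-internal Metro property names (verify for your JAX-WS implementation):
ctx.put("com.sun.xml.internal.ws.connect.timeout", 30_000); // connect timeout, ms
ctx.put("com.sun.xml.internal.ws.request.timeout", 60_000); // request timeout, ms
// A wrong endpoint address in the WSDL is a common cause of connect timeouts;
// it can be overridden explicitly (URL below is a placeholder):
ctx.put(BindingProvider.ENDPOINT_ADDRESS_PROPERTY, "https://host:port/service");
```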
2018/03/20
1,418
4,460
<issue_start>username_0: I am using reactplayer for a youtube video which uses iframe. I am trying to scale the video to my div and I want it to be responsive. I put a `width` and `height` at `100%` on the ReactPlayer, and I have a wrapper div that I put height and width on, but the reactplayer does not fit the div. It is stuck at a height of 150px no matter how I resize the screen. ``` ``` .css ``` .video-wrapper { height: 100%; width: 100%; min-height: 225px; } ```<issue_comment>username_1: This can be easily achieved by further extending your CSS. Since most videos are shot in 16:9, following [this guide by <NAME>](https://css-tricks.com/aspect-ratio-boxes/) will make the process easily achievable. Since you're utilizing React-Player, I am working with the content located on their [demo page](https://cookpete.com/react-player/). ``` .player-wrapper { width: auto; /* Reset width */ height: auto; /* Reset height */ } .react-player { padding-top: 56.25%; /* Percentage ratio for 16:9 */ position: relative; /* Set to relative */ } .react-player > div { position: absolute; /* Scaling will occur since parent is relative now */ } ``` **Why does it work?** TL;DR - Padding in percentages is *based on width*. By setting an element's height to 0, we can utilize a percentage for 'padding-top' to scale content perfectly. **Generate 16:9's percentage** (9 / 16) * 100 = 56.25 Upvotes: 5 [selected_answer]<issue_comment>username_2: To force react-player to be fully responsive I did the following: CSS ```css .player-wrapper { position: relative; padding-top: 56.25%; /* 720 / 1280 = 0.5625 */ } .react-player { position: absolute; top: 0; left: 0; } ``` JSX ```js import React from "react"; import ReactPlayer from "react-player"; import "./Player.css"; const Player = () => ( ); export default Player; ``` Upvotes: 3 <issue_comment>username_3: You can keep the width fixed and allow the height to adjust to the video, since each video has its own size (e.g. 100*200): `.video-wrapper > video { width: 55vw; height: min-content; }` Upvotes: 0 <issue_comment>username_4: The easiest way to make it responsive is to set the width to 100%: ``` ``` Upvotes: 3 <issue_comment>username_5: You can fill with: ``` .react-player > video { position: absolute; object-fit: fill; } ``` Upvotes: 1 <issue_comment>username_6: For a more modern approach, just add a class to the react player component and set it to: ``` height: auto !important; aspect-ratio: 16/9; ``` <https://caniuse.com/mdn-css_properties_aspect-ratio> Upvotes: 2 <issue_comment>username_7: Here's how I did it. ``` video { object-fit: cover; } ``` Now, the size of the video can be adjusted by sizing the wrapper.
Upvotes: 0 <issue_comment>username_8: If you are using Tailwind CSS and Swiper, you can use this code. It applies the `aspect-video` class so the aspect ratio is always correct, and it ensures that when the user scrolls to the next slide, the previous player gets paused. You can also set a max width for the videos; here it is "max-w-6xl". ``` import React, { useEffect, useState } from 'react' import ReactPlayer from 'react-player' import { Swiper, SwiperSlide } from 'swiper/react'; import { Pagination } from "swiper"; const videos = [ { id: 1, url: "https://www.youtube.com/watch?v=1234" }, { id: 2, url: "https://www.youtube.com/watch?v=1234" }, { id: 3, url: "https://www.youtube.com/watch?v=1234" } ]; const IndexHero = () => { const [domLoaded, setDomLoaded] = useState(false); const [isPlaying, setIsPlaying] = useState(null); useEffect(() => { setDomLoaded(true); }, []); return ( <> {!domLoaded && ( {/** For CLS */} )} {domLoaded && ( { setIsPlaying(null); }} autoplay={false} watchSlidesProgress={true} > {videos.map((data) => ( { setIsPlaying(data.id); }} playing={isPlaying === data.id} /> ))} )} ); }; export default IndexHero; ``` Upvotes: 1 <issue_comment>username_9: Here is an easy fix that works for me; I hope it does it for you too. ``` const videoRef = useRef(null!) const beginningHandler = () => { const videoTag = videoRef.current.getInternalPlayer() as HTMLVideoElement; videoTag.style.objectFit = 'cover'; } return ( console.log('finish')} url={'./videos/time-travel.mp4'} width={'100%'} height={'100%'} playing onStart={beginningHandler} /> ) ``` Upvotes: 0
2018/03/20
544
1,858
<issue_start>username_0: I have a website that loops music on load, but it is too loud. I have a slider bar to change the music volume, but how would I default it to 25% of the slider? [WEBSITE](http://prova.city) ``` If you are reading this, it is because your browser does not support the audio element. function SetVolume(val) { var player = document.getElementById('music'); console.log('Before: ' + player.volume); player.volume = val / 100; console.log('After: ' + player.volume); } ```<issue_comment>username_1: Just create a script that sets the volume: ``` var audio = document.getElementById("music"); audio.volume = 0.25; ``` Upvotes: 1 <issue_comment>username_2: If you are using the `audio` tag, just get the DOM node in JavaScript and manipulate the `volume` property: ``` var audio = document.querySelector('audio'); // Getting console.log(audio.volume); // 1 // Setting audio.volume = 0.5; // Reduce the volume by half ``` The number that you set should be in the range `0.0` to `1.0`, where `0.0` is the quietest and `1.0` is the loudest. Upvotes: 0 <issue_comment>username_3: * `input` is a void element and as such does not need a closing tag * You're using `max` and `min`; use `value` as well. * Avoid using inline JavaScript. Keep your logic away from your markup. * Initialize the volume once by calling `setVolume()` * Use camelCase `setVolume` instead of PascalCase `SetVolume`, since it's a *normal* function, not a Method, Class or constructor... ```js const audio = document.getElementById('audio'), input = document.getElementById('volume'), setVolume = () => audio.volume = input.value / 100; input.addEventListener("input", setVolume); setVolume(); ``` ```html Audio is not supported on your browser. Update it. ``` I think you'd also like [this example](https://stackoverflow.com/a/20753900/383904). Upvotes: 0
2018/03/20
711
2,548
<issue_start>username_0: I'm trying to attach BluePrism to a PowerShell process and input some characters. [![enter image description here](https://i.stack.imgur.com/OhO5f.png)](https://i.stack.imgur.com/OhO5f.png) Currently, I can launch PowerShell and get the PID by using C#, yet I'd like to actually attach the PowerShell process to BluePrism and I'm not sure how to do this. I tried to use the `Application Modeller` (no C# code involved), but I can't get the process to attach or launch. Furthermore, when I press the `Diagnostics` button, then `Take Snapshot(Now)`, I get a `Not Connected` message.<issue_comment>username_1: I was able to reproduce the behavior you described above in not being able to attach to an open instance of PowerShell using the configuration from your screenshot. Using code made available on [this Blue Prism forum post](https://portal.blueprism.com/forums/general-discussion/general-discussion/unzip-folder-using-powershell), it appears that using a wildcard match for the window's title should work to attach to an open instance: **Application Modeller** [![Application modeler configuration](https://i.stack.imgur.com/Duhic.png)](https://i.stack.imgur.com/Duhic.png) **Navigation (Attach) Stage** [![Navigation Attach stage configuration](https://i.stack.imgur.com/QJl3p.png)](https://i.stack.imgur.com/QJl3p.png) Upvotes: 2 <issue_comment>username_2: Steps: 1. Utility - Environment. Start Process "C:\Windows\system32\WindowsPowerShell\v1.0\powershell.exe" 2. Wait until PowerShell (Block + recovery + resume + retry?) 3. Navigate + Attach (only windows text = "Windows PowerShell") 4. Spy and send text, etc. ![enter image description here](https://i.stack.imgur.com/ZvE9h.png) ![enter image description here](https://i.stack.imgur.com/BFIRf.png) ![enter image description here](https://i.stack.imgur.com/yiCK5.png) ![enter image description here](https://i.stack.imgur.com/LiWmT.png) ![enter image description here](https://i.stack.imgur.com/QKp1x.png) ![enter image description here](https://i.stack.imgur.com/vVuRX.png) Upvotes: 1 <issue_comment>username_3: It may be better to write code that drives PowerShell from Blue Prism directly; for example, in a C# code stage (`PSConnector` and `CustomResult` below appear to be custom classes from the poster's environment, not Blue Prism built-ins): ``` PSConnector t = new PSConnector(); CustomResult r = new CustomResult(); string command = Command; string argument = Argument; string member = Member; r=t.PSCExecuteSynchronously(command, argument, member); Result=r.Result; Status=r.Status; Exception=r.Exception; ``` Upvotes: 1
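If attaching keeps failing, the whole interaction can also be driven from a code stage using nothing but the standard `System.Diagnostics.Process` API (unlike the `PSConnector` snippet above, which relies on custom classes). A minimal C# sketch — the command sent is just an example:

```csharp
using System.Diagnostics;

var psi = new ProcessStartInfo
{
    FileName = @"C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe",
    RedirectStandardInput = true,   // lets us type commands into the session
    RedirectStandardOutput = true,
    UseShellExecute = false
};
using (var ps = Process.Start(psi))
{
    ps.StandardInput.WriteLine("Get-Date"); // send a command to the shell
    ps.StandardInput.WriteLine("exit");
    string output = ps.StandardOutput.ReadToEnd();
    ps.WaitForExit();
}
```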
2018/03/20
845
2,450
<issue_start>username_0: Please help me articulate this question better by reading my scenario and question. I am wondering if the following scenario is possible: I have a table like this: ``` ID # of APPLES 1 3 2 15 3 87 4 56 ``` And another table like this: ``` ID Description 1 I have %d Apples 2 You have %d Apples ``` What I want is to fetch data from these two tables such that my result would look like: ``` I have 3 Apples I have 15 Apples I have 87 Apples I have 56 Apples You have 3 Apples You have 15 Apples You have 87 Apples You have 56 Apples ``` My question is, can this be done in Oracle SQL? EDIT: modified how the result should look<issue_comment>username_1: Assuming your two tables are #temp3 and #temp4, try this: ``` select replace(description, '%d', apples) from #temp3 a join #temp4 b on a.id=b.id ``` Upvotes: 0 <issue_comment>username_2: If your result had two rows, then you can use `replace()` and a `join`: ``` select replace(t2.description, '%d', t1.num_apples) from t1 join t2 on t1.id = t2.id; ``` If it had eight rows, you could use a `cross join` (which takes no `ON` clause): ``` select replace(t2.description, '%d', t1.num_apples) from t1 cross join t2; ``` To get 4 rows, I suppose you could do: ``` select t.* from (select replace(t2.description, '%d', t1.num_apples) from t1 cross join t2 order by row_number() over (partition by t2.id order by t2.id) ) t where rownum <= (select count(*) from t2); ``` This matches the values arbitrarily. It is unclear to me if there is deterministic logic in the matching between the tables. Upvotes: 2 [selected_answer]<issue_comment>username_3: You may use `left` & `right outer join`s with the help of the `ceil`, `trunc`, `replace` & `lpad` functions and operators: ``` select result from ( select replace(l.Description,'%d',lpad(numApples,2,' ')) result, numApples from apples a left outer join lu_apples l on ( ceil(a.id/2) = l.id ) union all select replace(l.Description,'%d',lpad(numApples,2,' ')) result, numApples from apples a right outer join lu_apples l on ( trunc(3-a.id/2) = l.id ) ) order by result, numApples; RESULT --------------- I have 3 Apples I have 15 Apples I have 56 Apples I have 87 Apples You have 3 Apples You have 15 Apples You have 56 Apples You have 87 Apples ``` [Demo](http://sqlfiddle.com/#!4/31d59/28) Upvotes: 0
2018/03/20
951
2,403
<issue_start>username_0: New React enthusiast here needing a little bit of help. So I have this set of array items stored in a state called **projects**: ``` 0:{id: 1, title: "Business Web", category: "Web Design", deleted_at: "0000-00-00 00:00:00"} 1:{id: 2, title: "Social App", category: "Mobile Development", deleted_at: "0000-00-00 00:00:00"} 2:{id: 3, title: "Ecommerce", category: "Web Development", deleted_at: "0000-00-00 00:00:00"} 3:{id: 4, title: "1", category: "1", deleted_at: "0000-00-00 00:00:00"} 4:{id: 5, title: "123123", category: "123123", deleted_at: "0000-00-00 00:00:00"} 5:{id: 6, title: "new", category: "new", deleted_at: "0000-00-00 00:00:00"} 6:{id: 7, title: "sdasd", category: "sdawd", deleted_at: "0000-00-00 00:00:00"} 7:{id: 8, title: "sssss", category: "ssssss", deleted_at: "0000-00-00 00:00:00"} 8:{id: 9, title: "Irene", category: "Bae", deleted_at: "0000-00-00 00:00:00"} 9:{id: 10, title: "sssss", category: "sssss", deleted_at: "0000-00-00 00:00:00"} ``` And I'm using this to store an item into a state called **projectItem**: ``` this.setState({ projectItem: this.state.projects.filter(p => p.id === id) }) ``` which gives me a result of something like this.. > > 0: {id: 1, title: "Business Web", category: "Web Design", deleted\_at: > "0000-00-00 00:00:00"} > > > Now when I try to access this state and set it as the initial value for an input, it's not working. I set the value like this: ``` value={this.state.projectItem.title} ``` Can you please help me figure out where I went wrong? Or am I doing this the wrong way? Thank you in advance.<issue_comment>username_1: [Array.prototype.filter](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/filter) returns an array, so to get an item you may do something like: ``` const filtered = this.state.projects.filter(p => p.id === id); this.setState({ projectItem: filtered.length ? filtered[0] : null }); ``` Maybe you want to [find](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/find) the item instead: ``` this.setState({ projectItem: this.state.projects.find(p => p.id === id) }); ``` Upvotes: 2 [selected_answer]<issue_comment>username_2: Your issue is that filter returns an array and not a single object. Try assigning: ``` this.setState({ projectItem: this.state.projects.filter(p => p.id === id)[0] }) ``` Upvotes: 0
2018/03/20
502
1,794
<issue_start>username_0: I'm trying to write documentation for a Module function like this: ``` /** * Usage: * * ``` * @NgModule({ * imports: [ * BrowserModule, * ..., * ThisModule.forRoot({ * name: 'Name', * version: '1.0', * ], * }), * ``` * * @param config Service configuration parameters */ public static forRoot(config: SVConfig) { ``` The problem is with `@NgModule`. I've tried with: ``` * ``` * @NgModule ``` Seems that html entitites works well outside code (```), but not inside code block (it does something weird like making `NgModule` in bold and new line) Also tried `\@`, `{@literal @}`, `\u0064`, `@@` with no success. The most friendly I've found is `(@)NgModule`. Any suggestion please?<issue_comment>username_1: Sadly, special symbols are not supported in jsDoc inside `@example` block. They work only inside inline code blocks, like this one: ``` ```js @Module ``` ``` This will result in the proper `@Module` output. And unlike `@example`, you cannot place an inline code block after everything, because it is inline, which means it will be somewhere before your `@returns` section. Awkward, I know. The same goes when you want to use something like multi-line comment in your code example, etc. ``` ```js a.setParams(/* parameters here */); ``` ``` outputs: `a.setParams(/* parameters here */);` Upvotes: 3 <issue_comment>username_2: I've had luck using an alternate @ symbol in the unicode space: [U+FF20 (@)](https://www.compart.com/en/unicode/U+FF20). It makes the documentation look correct, but unfortunately won't work if someone copy/pastes the code block. This appears to be an open issue since at least 2012 so I'm not holding my breath for a better fix. Upvotes: 4
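To make the fullwidth-at workaround concrete, a sketch of the original doc comment with U+FF20 substituted (the `@` below is the fullwidth character; as noted above, copy/paste of the rendered docs will not produce valid code):

```ts
/**
 * Usage:
 *
 *     @NgModule({ ... })   // @ here is U+FF20, not a real at sign
 *
 * @param config Service configuration parameters
 */
public static forRoot(config: SVConfig) { /* ... */ }
```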
2018/03/20
326
1,300
<issue_start>username_0: I want to create a messagebox or ContentDialogue which can be easily dragged with mouse like this: [![enter image description here](https://i.stack.imgur.com/sSowf.png)](https://i.stack.imgur.com/sSowf.png) How can I do this ?<issue_comment>username_1: Unfortunately the `ContentDialog` design is built this way and cannot be separated into a "secondary view". The confirmation dialog you mention in Groove Music is actually a `MessageDialog`, which appears as a modal window, but this one is not very customizable (you can set custom buttons and text only). If you really wanted to build such dialog, you would have to do this yourself by creating a [new app view](https://learn.microsoft.com/en-us/windows/uwp/design/layout/show-multiple-views), that will act as if it is a dialog. However, this app view will not be a modal one, so you would also have to disable the first app view somehow (like overlying a semi-transparent border above the content). Upvotes: 2 <issue_comment>username_2: You can do this with a Popup control and manipulation event handlers on the contents. I wrote an example [here](https://github.com/mscherotter/UWPChildWindow) that demonstrates the method. The code also shows how to make an independent window with the same content. Upvotes: 0
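A rough sketch of the secondary-app-view approach username_1 describes, using the standard UWP view APIs; `DialogPage` and any dimming/disabling of the main view are assumptions left to the app (this must run inside an `async` method):

```csharp
using Windows.ApplicationModel.Core;
using Windows.UI.Core;
using Windows.UI.ViewManagement;
using Windows.UI.Xaml;
using Windows.UI.Xaml.Controls;

int newViewId = 0;
CoreApplicationView newView = CoreApplication.CreateNewView();
await newView.Dispatcher.RunAsync(CoreDispatcherPriority.Normal, () =>
{
    var frame = new Frame();
    frame.Navigate(typeof(DialogPage)); // DialogPage: your dialog-like page (assumption)
    Window.Current.Content = frame;
    Window.Current.Activate();
    newViewId = ApplicationView.GetForCurrentView().Id;
});
// Shows the new view as a separate, draggable window; it is NOT modal,
// so the main view's content must be disabled/dimmed by the app itself.
await ApplicationViewSwitcher.TryShowAsStandaloneAsync(newViewId);
```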
2018/03/20
1,321
3,916
<issue_start>username_0: ``` df.cleaned <- df[-which(str_detect(df, "Not found")),] ``` "df" refers to a data frame that consists of multiple columns and rows. A lot of the elements in this data frame have certain character words in them. What I'm looking to do is to remove all those values that contain the words "Not found", either as the whole element value or as part of it. So far, the above command is what I've come up with, with the stringr package. However, this command seems to remove entire rows. I don't want to remove the entire row; I simply want to remove that specific element that contains "Not found".<issue_comment>username_1: How to get the behavior: ``` toy[toy == "Not found"] <- "" toy # x y z n # 1 m f 6 # 2 z t a 3 # 3 m 4 # 4 j 9 # 5 e 5 # 6 f n k 2 # 7 q f p 1 # 8 n 8 # 9 n k h 7 # 10 d u l 10 ``` For matching vs. equality, you could try this. I'm not sure if it offers performance improvements over the @username_2 approach. EDIT: apparently, as @username_2 explains in the comments, the same conversion is done behind the scenes. In which case, it doesn't look as clean as the equality solution, but shouldn't drop in performance due to the conversion: ``` toy[matrix(grepl("Not found", as.matrix(toy)), nrow(toy))] <- "" toy # x y z n # 1 m f 6 # 2 z t a 3 # 3 m 4 # 4 j 9 # 5 e 5 # 6 f n k 2 # 7 q f p 1 # 8 n 8 # 9 n k h 7 # 10 d u l 10 ``` Create the data: ``` toy <- data.frame(x = sample(letters, 10), y = sample(letters, 10), z = sample(letters, 10), stringsAsFactors = FALSE) for (col in seq_along(toy)) toy[[col]][sample(10, 3)] <- "Not found" toy$n <- sample(10) toy # x y z n # 1 m Not found f 6 # 2 z t a 3 # 3 Not found m Not found 4 # 4 Not found j Not found 9 # 5 e Not found Not found 5 # 6 f n k 2 # 7 q f p 1 # 8 Not found Not found n 8 # 9 n k h 7 # 10 d u l 10 ``` Upvotes: 1 <issue_comment>username_2: It's often advantageous to write a simple function up front that does what you want, and then know how to *apply* that function to all of your columns. For instance (note the function must return the modified vector): ``` replace_notfound <- function(s, newstr="") { s[grepl("Not found", s)] <- newstr; s } ``` Now, let's apply that function to each column of your data: ``` # I'm assuming you want stringsAsFactors=FALSE df.cleaned <- as.data.frame(lapply(df, replace_notfound), stringsAsFactors=FALSE) ``` It's not always the case that all columns of a frame are `character`, so you might want to conditionally do this: ``` ischr <- sapply(df, is.character) df.cleaned <- df # just a copy df.cleaned[ischr] <- lapply(df.cleaned[ischr], replace_notfound) ``` Upvotes: 1 <issue_comment>username_3: Your thought was in the right direction. You need to apply it to each item. One option could be to use `sapply`: check every item with `str_detect` and replace it with `""` (or `NA`); otherwise just return the item's value. ``` library(stringr) df.clean <- as.data.frame(sapply(df, function(x)ifelse(str_detect(x, "Not found"), "",x))) df.clean # # A B # 1 A Good # 2 B # 3 C Good # 4 D # 5 E Good # 6 A # 7 B Good # 8 C # 9 D Good # 10 E ``` **Data** ``` df <- data.frame(A = rep(c("A", "B", "C", "D", "E"), 2), B = rep(c("Good","Bad with Not found"),5), stringsAsFactors = FALSE) df # A B # 1 A Good # 2 B Bad with Not found # 3 C Good # 4 D Bad with Not found # 5 E Good # 6 A Bad with Not found # 7 B Good # 8 C Bad with Not found # 9 D Good # 10 E Bad with Not found ``` Upvotes: 1 [selected_answer]
2018/03/20
1,434
4,248
<issue_start>username_0: Here's a short explanation of the situation: I have an Ubuntu 14.04 VPS from DigitalOcean and I'm trying to install an EV SSL from Comodo for my domain. I've installed Sentora which uses Apache - 2.4.3 which a bit older than 2.4.8 (there's a small difference to what we enter into the VirtualHost configuration file using the tutorial I've used) The tutorial I've followed: <https://www.digitalocean.com/community/tutorials/how-to-install-an-ssl-certificate-from-a-commercial-certificate-authority> The problem: When I go to the https:// page, my site doesn't load. It loads http:// tho (In the image attached below you can see the virtualhost conf file and the ports listening), Apache still listens on port 80 instead of 443, I've trying opening the port manually but that still doesnt work as you can see. [![Open ports and virtualhost config file](https://i.stack.imgur.com/9LA3z.png)](https://i.stack.imgur.com/9LA3z.png)
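Since the port listing shows nothing bound to 443, the SSL site was most likely never loaded at all. A hedged sketch of the pieces Apache needs for this (file paths and names are assumptions; the chain-file line is the "small difference" for Apache older than 2.4.8 that the question mentions):

```apache
# e.g. in ports.conf / httpd.conf
Listen 443

# e.g. in the site's SSL vhost file
# (then: a2enmod ssl && a2ensite <site> && service apache2 restart)
<VirtualHost *:443>
    ServerName example.com
    SSLEngine on
    SSLCertificateFile      /etc/ssl/example_com.crt
    SSLCertificateKeyFile   /etc/ssl/example_com.key
    SSLCertificateChainFile /etc/ssl/comodo_chain.crt  # Apache < 2.4.8 needs the chain separately
</VirtualHost>
```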
2018/03/20
1,435
4,667
<issue_start>username_0: UPDATE: Reformatted the whole question I have a database with individual Formula 1 race results. Each driver gets points according to the place they finish in. Now I want to build a table with all combined points from all races for every driver. So basically a "driver standings" table. I can get the summed up points easily of course, but the problem I face comes when multiple drivers have the exact same total points. In these cases I need to check which one of the drivers had the best (or more of them) single race result, as he should be above on the end table. Oh, and on the placement column the "0" means did not finish, so that's the worst outcome. --- Simple example: ``` Event | Driver | Placement | Points -------------------------------------- Race 1 | Driver A | 1 | 3 Race 1 | Driver B | 2 | 2 Race 1 | Driver C | 3 | 1 Race 2 | Driver A | 3 | 3 Race 2 | Driver B | 2 | 2 Race 2 | Driver C | 1 | 1 Race 3 | Driver A | 0 | 0 Race 3 | Driver B | 0 | 0 Race 3 | Driver C | 2 | 2 ``` And requested finished table ``` Driver | Points ------------------ Driver C | 4 Driver A | 4 Driver B | 4 ``` Order is because Driver C has one first place finish and a second place finish. Driver A has one first place finish and a third place finish, whereas Driver B only has second place finishes. (Assuming my head works, it's very late.) Hard example <http://sqlfiddle.com/#!9/16f9c5> Desired result ``` | Driver | Points | |------------------------| | <NAME> | 43 | | <NAME> | 43 | | <NAME> | 5 | | <NAME> | 5 | | <NAME> | 0 | | <NAME> | 0 | | <NAME> | 0 | | <NAME> | 0 | ``` From the fiddle data I dropped everything except the problem cases. Everyone has the same points as at least one other, and the tie breakers vary from simple best finish in a race to number of best finishes (four 6th place wins is better than 2 6th place wins like is the case with Hulkenberg and Massa). The result table is because: ``` 1. <NAME> - has four 6th place finishes 2. <NAME> - has two 6th place finishes 3. <NAME> - has 8th place finish 4. <NAME> - has 9th place finish 5. <NAME> - has 11th place finish 6. <NAME> - has 12th and 13th place finishes 7. <NAME> - has only one 12th place finish and nothing else 8. <NAME> - has 13th place finish ``` so the real problematic tie breakers are [Hulkenberg vs Massa] and [Gasly vs Giovinazzi] Any help appreciated.<issue_comment>username_1: Try a self-join. ``` select t1.driver, points, min_placement from (select driver, sum(points) points from f1 group by driver) t1 inner join (select driver, min(placement) min_placement from f1 group by driver) t2 on t1.driver = t2.driver order by points desc, min_placement asc ``` In your test data, most drivers have a lowest placement of zero, so you won't see the results you're looking for. Upvotes: 0 <issue_comment>username_2: I assume you'll need at least a second tie break after highest finish (note: the best finish is the *lowest* non-zero placement, so take MIN with DNFs mapped to 99): ``` SELECT driver ,SUM(points) AS points ,MIN(CASE WHEN placement = 0 THEN 99 ELSE placement END) AS highest_finish, -- or maybe this would work?
SUM(CASE WHEN placement <> 0 THEN POWER(20, placement) ELSE 0 END) AS weighted_finish FROM f1 GROUP BY driver ORDER BY points DESC, highest_finish ``` You might also have luck with an expression like this: ``` SELECT f1.driver, MAX((100 - f1.placement) * ( SELECT count(*) FROM f1 f2 WHERE f2.driver = f1.driver AND f2.placement = f1.placement AND f2.placement > 0 )) AS highest_placement_weight FROM f1 GROUP BY f1.driver ``` Upvotes: 0 <issue_comment>username_3: This should do the trick without too much fuzz. ``` SELECT driver ,SUM(points) AS points ,CONCAT(GROUP_CONCAT(IF(placement = 0, 99, LPAD(placement,2,0)) ORDER BY IF(placement = 0, 99, placement) SEPARATOR ','), ',99') AS best FROM f1 GROUP BY driver ORDER BY points DESC, best ``` Or, if you wanna hide the placement sorting data column: ``` SELECT driver ,SUM(points) AS points FROM f1 GROUP BY driver ORDER BY points DESC ,CONCAT(GROUP_CONCAT(IF(placement = 0, 99, LPAD(placement,2,0)) ORDER BY IF(placement = 0, 99, placement) SEPARATOR ','), ',99') ``` It simply takes advantage of alphanumerical sorting to determine best placement(s) in case of a tie in points. Upvotes: 3 [selected_answer]
2018/03/20
643
2,185
<issue_start>username_0: This is my code: ``` from random import shuffle def scramble(sentence): split = sentence.split() shuffle(split) return ' '.join(split) for i in range(20): reply = ['hi',scramble('me too thanks')] print(reply) ``` I'm trying to print either a scrambled version of the sentence "me too thanks" or just simply "hi". What I end up getting is this: ``` ['hi', 'me too thanks'] ['hi', 'too thanks me'] ['hi', 'thanks me too'] ['hi', 'too thanks me'] ['hi', 'me too thanks'] ``` and so on. It's working as expected, but it's printing the brackets as a whole when I only want it to print one or the other. I've worked with other programs where this usually works, but this isn't working for some reason. Sorry if this is not a good question, I'm relatively new to Python.<issue_comment>username_1: Your code is doing exactly what it is told: print the result, which is a list. To display one or the other from the list, use `random.choice`: ``` import random for i in range(20): reply = ['hi',scramble('me too thanks')] print(random.choice(reply)) ``` Upvotes: 2 <issue_comment>username_2: username_1's solution works but is discarding the result of `scramble` 50% of the time. It also creates an unnecessary list to choose from. That's a waste of CPU power. Just do: ``` import random for _ in range(20): print('hi' if random.randint(0,1) else scramble('me too thanks')) ``` It's a ternary expression to print one or the other string. The cool thing is that if `random.randint(0,1)` returns `1`, `scramble` isn't executed at all. If you have more cases, you could pick a *function* at random, so you're only executing the one you need, using `lambda` for instance: ``` import random for _ in range(20): reply = [lambda : 'hi',lambda: scramble('me too thanks'),lambda: 'hello'] print(random.choice(reply)()) # note the calling of the chosen function with () ``` Upvotes: 2 [selected_answer]<issue_comment>username_3: Try this: ``` import numpy as np def scramble(sentence): split = sentence.split() np.random.shuffle(split) return ' '.join(split) for i in range(20): reply = ['hi',scramble('me too thanks')] print(np.random.choice(reply)) ``` Upvotes: 0
2018/03/20
1,038
3,143
<issue_start>username_0: I'm trying to make a script/program where I can find matching IP Addresses in two text files: * One text file that contains a list of IP addresses (1.1.1.1) * One text file that contains a list of subnets (1.1.1.0/28) And I want to use regex and I'm not really sure how to do it. Example: ``` import re def check(fname1, fname2): f2 = open(fname2) f1 = open(fname1) pattern = ('\d{1,3}\.\d{1,3}\.\d{1,3}') for line in f1: p1 = re.match(pattern, line) out_p1 = p1.group(0) for item in f2: p2 = re.match(pattern, item) out_p2 = p2.group(0) if out_p1 in out_p2: print(line, item) ``` So I'm trying to match an IP address from the first text file with a subnet from the second text file. Then I want to output the IP address with its matching subnet. Like so: ``` #IP #Subnet 1.1.1.1, 1.1.1.0/28 172.16.17.32, 8.8.8.0/23 ```<issue_comment>username_1: By running that nested loop, you're going to do a lot of unnecessary processing; it'd make more sense to append all of the matches from the first file into a list, then check against that list with the matches from the second file. This is an approximation of the process here using two local lists: ``` import re input1 = ['1.1.1.1', '192.168.127.12'] input2 = ['1.1.1.1/123', '172.16.58.3/236'] pattern = ('^(\d{1,3}\.?){4}') matchlist = [] for line in input1: p1 = re.match(pattern, line) matchlist.append(p1.group(0)) print(matchlist) for item in input2: p2 = re.match(pattern, item) t = p2.group(0) if t in matchlist: print(t) ``` Upvotes: 1 <issue_comment>username_2: Leaving aside the pulling of lines of data from your two text files into program memory (simply `f1 = open(fname1, 'r').readlines()` for example), assume you have two lists of lines. ``` import re f1 = ['1.1.1.1', '192.168.1.1', '172.16.31.10', 'some other line not desired'] f2 = ['1.1.1.0/28', '1.2.2.0/28', '192.168.1.1/8', 'some other line not desired'] def get_ips(text): # this will match on any string containing three octets pattern = re.compile('\d{1,3}\.\d{1,3}\.\d{1,3}') out = [] for row in text: if re.match(pattern, row): out.append(re.match(pattern, row).group(0)) return out def is_match(ip, s): # this will return True if ip matches what it finds in string s return ip in s def check(first, second): # First iterate over each IP found in the first file for ip in get_ips(first): # now check that against each subnet line in the second file for subnet in second: if is_match(ip, subnet): print(f'IP: {ip} matches subnet: {subnet}') ``` Note that I have tried to break up some of the functionality to separate concerns. You should be able to modify each function separately. This is assuming that you get your lines into some lists of strings. I also am not certain what you really want to match in `F2`, so this should allow you to modify `is_match()` while leaving the other parts unaffected. Good luck. Upvotes: 1 [selected_answer]
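A caveat on the regex approach in general: matching octet prefixes only works when the subnet mask happens to align with a dot boundary (e.g. `1.1.1.0/28` covers just 16 addresses, not all of `1.1.1.x`). If true CIDR membership is what's wanted, the standard-library `ipaddress` module (Python 3.3+) does it without regex; a sketch, with the file contents assumed pre-stripped into lists:

```python
import ipaddress

ips = ['1.1.1.1', '1.1.1.200']          # lines from the first file
subnets = ['1.1.1.0/28', '8.8.8.0/23']  # lines from the second file

networks = [ipaddress.ip_network(s, strict=False) for s in subnets]
for ip_str in ips:
    ip = ipaddress.ip_address(ip_str)
    for net in networks:
        if ip in net:                   # true CIDR membership test
            print(ip_str, net)          # prints: 1.1.1.1 1.1.1.0/28 (not 1.1.1.200)
```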
2018/03/20
1,206
4,045
<issue_start>username_0: I'm building a BitTorrent client application in Java and I have 2 small questions: 1. Can a torrent contain folders? Recursively? 2. If a torrent contains `n` files (not directories - for simplicity), do I need to create `n` files with their corresponding size? When I receive a piece from a peer, how do I know to which file it (the piece) belongs? For example, here is a torrent which contains 2 files: ``` TorrentInfo{Created By: ruTorrent (PHP Class - <NAME>) Main tracker: http://tracker.hebits.net:35777/tracker.php?do=announce&passkey=5<PASSWORD> Comment: null Info_hash: c504216ca4a113d26f023a10a1249ca3a6217997 Name: Veronica.2017.1080p.BluRay.DTS-HD.MA.5.1.x264-HDH Piece Length: 16777216 Pieces: 787 Total Size: null Is Single File Torrent: false File List: TorrentFile{fileLength=13202048630, fileDirs=[Veronica.2017.1080p.BluRay.DTS-HD.MA.5.1.x264-HDH.mkv]} TorrentFile{fileLength=62543, fileDirs=[Veronica.2017.1080p.BluRay.DTS-HD.MA.5.1.x264-HDH.srt]} ``` The docs don't say much: <https://wiki.theory.org/index.php/BitTorrentSpecification><issue_comment>username_1: What you are doing is similar to my project... The bold parts below matter for your questions. 1. Yes; though not as a recursive structure — nesting is expressed through path lists (see below). **Info in Multiple File Mode** **name**: the name of the *directory* in which to store all the files. This is purely advisory. (string) **path**: a list containing one or more string elements that together represent the path and filename. Each element in the list corresponds to either a directory name or (in the case of the final element) the filename. For example, the file "dir1/dir2/file.ext" would consist of three string elements: "dir1", "dir2", and "file.ext". This is encoded as a bencoded list of strings such as l4:dir14:dir28:file.exte **Info in Single File Mode** **name**: the filename. This is purely advisory. (string) The filename includes the folder name. 2. Maybe; whether you need to create n files with their corresponding size depends on whether you need to download all n files. **Peer wire protocol (TCP)** **piece**: The piece message is variable length, where X is the length of the block. The payload contains the following information: index: integer specifying the zero-based piece index **begin**: integer specifying the zero-based byte offset within the piece block: block of data, which is a subset of the piece specified by index. For the purposes of piece boundaries in the **multi-file case**, consider the file data as **one long continuous stream**, composed of the concatenation of each file **in the order listed in the *files* list**. The number of pieces and their boundaries are then determined in the same manner as the case of a single file. Pieces may overlap file boundaries. *I am sorry for my English; I am not a native speaker...* Upvotes: 4 [selected_answer]<issue_comment>username_2: > > Can torrent contain folders? recursively? > > > Yes. Sort of. In BEP3, nested directories are mapped into path elements, i.e. `/dir1/dir2/dir3/file.ext` is represented as path: `["dir1", "dir2", "dir3", "file.ext"]` in the file list. BEP52 changes this to a tree-based structure more closely resembling a directory tree. > > If a torrent contains n files (not directories - for simplicity), do I need to create n files with their corresponding size? When I receive a piece from a peer, how do I know to which file it (the piece) belongs? > > > The bittorrent wire protocol deals with a contiguous address space of bytes which are grouped into fixed-sized pieces.
How a client stores those bytes locally is in principle up to the implementation. But if you want to store it in the file layout described in the `.torrent` then you have to calculate a mapping between the piece address space and file offsets. In BEP3 files are not aligned to piece boundaries, so a single piece can straddle multiple files. BEP 47 and BEP 52 aim to simplify this by introducing padding files or implicit alignment gaps respectively. Upvotes: 1
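Since the asker is writing the client in Java, here is a rough sketch of the "one long continuous stream" mapping username_2 describes; the method shape is illustrative, not from any library:

```java
// Maps an absolute position in the torrent's concatenated byte stream
// to (fileIndex, offsetWithinFile). fileLengths holds the lengths of the
// files in the order they appear in the metainfo's file list.
static long[] locate(long[] fileLengths, long pieceIndex, long pieceLength, long offsetInPiece) {
    long absolute = pieceIndex * pieceLength + offsetInPiece;
    for (int i = 0; i < fileLengths.length; i++) {
        if (absolute < fileLengths[i]) {
            return new long[] { i, absolute }; // file index, offset within that file
        }
        absolute -= fileLengths[i];
    }
    throw new IllegalArgumentException("offset past end of torrent");
}
// Note: a single block may straddle a file boundary, so a writer should
// write min(fileLengths[i] - offsetWithinFile, bytesRemaining) bytes,
// then call locate() again for the remainder.
```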
2018/03/20
803
2,817
<issue_start>username_0: I'm starting with OpenCL programming and learning about the differences between a texture buffer (also called image) and a regular buffer. From what I undersand, one of these differences is the fact that a texture fetch is cached, and with 2D locality. The question is: where is this texture cache located? Is it shared across threads or is it just useful for accesses within a single thread? For instance, consider this kernel: ``` __constant sampler_t sampler = CLK_NORMALIZED_COORDS_FALSE | CLK_ADDRESS_CLAMP_TO_EDGE | CLK_FILTER_NEAREST; __kernel void myCoolKernel( __read_only image2d_t image, __global float * dst) { const int i = get_global_id(0); const int j = get_global_id(1); dst[i+j*get_global_size(0)] = read_imagef(image, sampler, (int2){i,j}).x; } ``` Will *nearby* threads (on the same work-group I guess?) take advantage of the image cache? Or is it more useful for a kernel like the one below? ``` __kernel void myCoolKernel( __read_only image2d_t image, __global float * dst) { // Pray it's not a boundary const int i = get_global_id(0); const int j = get_global_id(1); float pixel1 = read_imagef(image, sampler, (int2){i+1,j}).x; float pixel2 = read_imagef(image, sampler, (int2){i,j-1}).x; float pixel3 = read_imagef(image, sampler, (int2){i-1,j}).x; float pixel4 = read_imagef(image, sampler, (int2){i,j+1}).x; dst[i+j*get_global_size(0)] = pixel1+pixel2+pixel3+pixel4; } ``` I hope I made myself clear. Thanks everyone.<issue_comment>username_1: It is shared between threads. It exists because of graphics API textures (e.g., OpenGL, DirectX, Vulkan, etc.) but OpenCL images can use it too. It typically has 2D locality, probably due to Z-order storage. It is much faster than non-coalesced buffer access, but maybe not as fast as coalesced buffer access. Upvotes: 2 <issue_comment>username_2: It'll be GPU dependent. For example, some systems may have several "independent" shader "units", each of which will be running a *subset* of all available threads. Each shader unit will probably have its own L0 texture cache so that all threads assigned to that unit will be sharing that cache. However, like a CPU, there will probably be a cache hierarchy such that there's an L1 cache that feeds the multiple shader unit L0s. So to answer your question... > > Will nearby threads (on the same work-group I guess?) take advantage of the image cache? > > > ... yes, if the accesses of the threads as a set are coherent, then it will take advantage of the cache hierarchy. FWIW there's a little more on texture caches usage on [the computer graphics site](https://computergraphics.stackexchange.com/questions/357/is-using-many-texture-maps-bad-for-caching) Upvotes: 1
2018/03/20
974
3,278
<issue_start>username_0: I am learning JavaScript and I am trying to program a small game. I want that something in my game moves consistently while I hold down a key. This is what I came up with so far: ``` document.onkeydown = onKeyDownListener; function onKeyDownListener(evt) { var keyCode = evt.which || evt.keyCode; if(keyCode == 37) { move(); } } ``` My problem is that when I press down the key move gets called once, then there is a pause and after that pause move gets called repeatedly as I intend. It goes like this: m......mmmmmmmmm when 'm' stands for move and'.' is the pause. Is there a way to get rid of the pause? Here is a GIF of what happens: [![GIF of my problem](https://i.stack.imgur.com/nxCkT.gif)](https://i.stack.imgur.com/nxCkT.gif)<issue_comment>username_1: You could try this: ``` document.onkeydown = onKeyDownListener; var keyDown = false; function onKeyDownListener(evt) { if(evt.keyCode == 37) { keyDown = true; } } document.onkeyup = function(evt){ if(evt.keyCode == 37) { keyDown = false; } } setInterval(function(){ if(keyDown){ player.x--; } },20) ``` It's not ideal, I know, but my best guess is it's an issue with keydown detection. This way, for as long as you're holding down that key, it will move until you let go of the key. Upvotes: 2 <issue_comment>username_2: I'm pretty sure you have this problem because you're not depending on a game loop to run your game. If you were, you wouldn't be running into this problem because the game loop at each interval would check if the key is pressed. Your web app is only reacting to each key press as they happen. I highly recommend you read this tutorial: **W3 HTML Game - Game Controllers** <https://www.w3schools.com/graphics/game_movement.asp> Read the "Keyboard as Controller" section. The "setInterval" method induces the game loop in this example. I used it as a reference to write game code for myself. This is code from the W3 tutorial. It also goes on to teach collision detection as well. ``` var myGameArea = { canvas : document.createElement("canvas"), start : function() { this.canvas.width = 480; this.canvas.height = 270; this.context = this.canvas.getContext("2d"); document.body.insertBefore(this.canvas, document.body.childNodes[0]); this.interval = setInterval(updateGameArea, 20); window.addEventListener('keydown', function (e) { myGameArea.keys = (myGameArea.keys || []); myGameArea.keys[e.keyCode] = true; }) window.addEventListener('keyup', function (e) { myGameArea.keys[e.keyCode] = false; }) }, clear : function(){ this.context.clearRect(0, 0, this.canvas.width, this.canvas.height); } } function updateGameArea() { myGameArea.clear(); myGamePiece.speedX = 0; myGamePiece.speedY = 0; if (myGameArea.keys && myGameArea.keys[37]) {myGamePiece.speedX = -1; } if (myGameArea.keys && myGameArea.keys[39]) {myGamePiece.speedX = 1; } if (myGameArea.keys && myGameArea.keys[38]) {myGamePiece.speedY = -1; } if (myGameArea.keys && myGameArea.keys[40]) {myGamePiece.speedY = 1; } myGamePiece.newPos(); myGamePiece.update(); } ``` Upvotes: 2
2018/03/20
1,026
3,723
<issue_start>username_0: I am running the following code. I want to calculate accuracy of my ANN for test data. I am using windows platfrom, python 3.5 ``` import numpy import pandas as pd from keras.models import Sequential from keras.layers import Dense from keras.wrappers.scikit_learn import KerasRegressor from sklearn.model_selection import cross_val_score from sklearn.model_selection import KFold from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline from sklearn.metrics import accuracy_score dataset=pd.read_csv('main.csv') dataset=dataset.fillna(0) X=dataset.iloc[:, 0:6].values #X = X[numpy.logical_not(numpy.isnan(X))] y=dataset.iloc[:, 6:8].values #y = y[numpy.logical_not(numpy.isnan(y))] #regr = LinearRegression() #regr.fit(numpy.transpose(numpy.matrix(X)), numpy.transpose(numpy.matrix(y))) from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test=train_test_split(X,y, test_size=0.24,random_state=0) ``` create model ============ ``` model = Sequential() model.add(Dense(4, input_dim=6, kernel_initializer='normal', activation='relu')) model.add(Dense(4, kernel_initializer='normal', activation='relu')) model.add(Dense(2, kernel_initializer='normal')) ``` Compile model ============= ``` model.compile(loss='mean_squared_error', optimizer='adam') model.fit(X_train, y_train, batch_size=5, epochs=5) y_pred=model.predict(X_test) ``` Now, i want to calculate the accuracy of y\_pred. Any help will be appreciated. The above code is self explanatory. I am currently using only 5 epochs just for experimenting.<issue_comment>username_1: Keras already implements metrics such as accuracy, so you just need to change the `model.compile` line to: ``` model.compile(loss='mean_squared_error', optimizer='adam', metrics = ["accuracy"]) ``` Then training and validation accuracy (in the [0, 1] range) will be presented at the progress bar during training, and you can compute accuracy with `model.evaluate` as well, which will return a tuple of loss and metrics (accuracy in this case). Upvotes: 0 <issue_comment>username_2: Besides the suggestion of using `keras`. You can compute the accuracy using `scikit-learn` as follows: ``` from sklearn.metrics import accuracy_score accuracy_score(y_test, y_pred) ``` For more information, check the documentation : [sklearn.metrics.accuracy\_score](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.accuracy_score.html) Upvotes: 0 <issue_comment>username_3: Although in a narrow technical sense both answers already provided are correct, there is a more general issue with your question which affects the essence of it: are you in a regression or a classification context? * If you are in a *regression* context (as implied by your `loss='mean_squared_error'` and the linear activation in your output layer), then the simple augmentation of model compilation ``` model.compile(loss='mean_squared_error', optimizer='adam', metrics = ["accuracy"]) ``` will, as Matias says, provide the accuracy. Nevertheless, **accuracy is meaningless in a regression setting**; see the answer & discussion [here](https://stackoverflow.com/questions/48775305/what-function-defines-accuracy-in-keras-when-the-loss-is-mean-squared-error-mse/48788577#48788577) for more details. 
* If you are in a *classification* context (as implied by your wish to calculate the accuracy, which is meaningful only in classification), then your loss function should not be the MSE but the cross-entropy instead, and the activation of your last layer should not be linear. Upvotes: 0 <issue_comment>username_4: To compute accuracy we can use the `model.evaluate` function. Upvotes: -1
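For contrast with the regression setup in the question, a sketch of what the model would look like in a classification setting (the class count and one-hot targets are assumptions):

```python
from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(4, input_dim=6, activation='relu'))
model.add(Dense(3, activation='softmax'))        # 3 = number of classes (assumption)
model.compile(loss='categorical_crossentropy',    # cross-entropy, not MSE
              optimizer='adam',
              metrics=['accuracy'])               # accuracy is meaningful here
# loss, acc = model.evaluate(X_test, y_test)      # y must be one-hot encoded
```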
2018/03/20
564
2,050
<issue_start>username_0: I am in the process of brushing up my C++ after 4-5 years and while going through this video (specific problem point screenshot attached), the speaker mentions that the code to **print a linked list** is actually doing a **call by value**. But as far as my knowledge of pointers goes, isn't the `head` parameter in the `Print` function already a pointer to the address of `head` in the main function? In which case, the two variables should share the same memory location. Could you please explain? ``` void Print(Node* head) { struct Node* temp = head; printf("List is: "); while (temp != NULL) { printf(" %d", temp->data); temp = temp->next; } printf("\n"); } int main() { Node* head = NULL; // empty list. printf("How many numbers?\n"); int n, i, x; scanf("%d", &n); for (i = 0; i < n; i++) { printf("Enter the number" \n); scanf("%d", &x); Insert(x); Print(head); } } ```<issue_comment>username_1: The `node*` is specifying that you want the pointer, and not a full copy of the nodes data to be passed in Upvotes: 0 <issue_comment>username_2: The speaker is right. ``` void Print (Node *head) { ... ``` is passing a pointer to head (by value) to the `Print` function. Don't confuse the C++ use of references (e.g. `void Print (Node& head) {...` with the plain old C "pass a pointer" as is done in your code. *Well is it pass by value or pass by reference?* C++ retains a lot of C heritage. In C there is no pass by reference -- it's all pass by value. When you use the plain old "pass a pointer" with `Node *head`, you are simply passing a pointer "by-value". The `Print` function receives a *copy of* the pointer. *What, a copy of the pointer?* Yes, but recall, a pointer is simply a variable that holds the address to something else as it value. So even though `Print` gets a copy of `head` the value held by the copy is the exact same address that `head` has in the calling function. Upvotes: 3 [selected_answer]
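A small self-contained experiment makes the distinction visible: reassigning the copied pointer inside a function does not affect the caller, while dereferencing it does reach the shared node:

```c
#include <stdio.h>

struct Node { int data; struct Node *next; };

void reseat(struct Node *head)  { head = NULL; }      /* changes only the local copy      */
void relabel(struct Node *head) { head->data = 42; }  /* follows the copy to shared node  */

int main(void) {
    struct Node n = { 7, NULL };
    struct Node *head = &n;
    reseat(head);
    printf("%d\n", head ? head->data : -1);  /* 7  -- caller's pointer untouched */
    relabel(head);
    printf("%d\n", head->data);              /* 42 -- shared node was modified   */
    return 0;
}
```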
2018/03/20
398
1,463
<issue_start>username_0: I've created a new site within IIS and pointed it to my local Documents directory

```
C:\Users\name\Documents
```

via

```
http://localhost:8080/
```

The error I'm getting is

> HTTP Error 401.3 - Unauthorized

I've checked the properties of the Documents folder under Security. For IIS\_IUSRS: Read & Execute, List folder contents and Read are checked. Others like SYSTEM, myUsername, Administrators have more rights to this folder.

Not sure what I'm missing here, please advise.<issue_comment>username_1: Download Procmon from [here](https://learn.microsoft.com/en-us/sysinternals/downloads/procmon "here") and start tracing. Reproduce the issue and stop procmon. Filter the procmon trace for "access denied". It'll tell you what permissions are needed and for which folder. Share a screenshot of the procmon trace if you find it difficult to analyze and I will try and guide you.

Upvotes: 1 <issue_comment>username_2: "C:\Users\name\Documents" would typically be a terrible place to put a web site. It is essentially the same as "My Documents", which is a special Windows folder. And it would have a number of non-web site folders.

Try using C:\Web (or something like that) and add IIS\_IUSRS to that folder.

BUT, if you really want to use your My Documents as the root of a web site, you can create a web.config file in C:\Users\name\Documents, starting with the usual declaration:

```
<?xml version="1.0" encoding="utf-8"?>
```

Upvotes: 0
2018/03/20
416
1,413
<issue_start>username_0: I have a string:

```
sen = '0.31431 0.64431 Using drugs is not cool Speaker2';
```

I am trying to write code that will generate:

```
cell = {'0.31431','0.64431', 'Using drugs is not cool', 'Speaker2'};
```

The problem is that I don't want to use the number of words in `'Using drugs is not cool'` because these will change in other examples.

I tried:

```
output = sscanf(sen,'%s %s %c %Speaker%d');
```

But it doesn't work as desired.
2018/03/20
1,262
3,680
<issue_start>username_0: As I know, k-fold cross validation is to partition the training dataset into k equal subsets, each of which is different. The R code for k-fold validation, which is from R-bloggers, is attached below. This data has 506 obs. and 14 variables. According to the code, they used 10 folds. **My question is whether each fold gets a different subset, or whether some data points are repeated between folds.** I want to make sure each data point is tested exactly once, without repeats, so my goal is for each fold to contain different data points.

```
set.seed(450)
cv.error <- NULL
k <- 10

library(plyr)
pbar <- create_progress_bar('text')
pbar$init(k)

for(i in 1:k){
    index <- sample(1:nrow(data),round(0.9*nrow(data)))
    train.cv <- scaled[index,]
    test.cv <- scaled[-index,]

    nn <- neuralnet(f,data=train.cv,hidden=c(5,2),linear.output=T)

    pr.nn <- compute(nn,test.cv[,1:13])
    pr.nn <- pr.nn$net.result*(max(data$medv)-min(data$medv))+min(data$medv)

    test.cv.r <- (test.cv$medv)*(max(data$medv)-min(data$medv))+min(data$medv)

    cv.error[i] <- sum((test.cv.r - pr.nn)^2)/nrow(test.cv)

    pbar$step()
}
```<issue_comment>username_1: You can shuffle the whole population outside of the loop. The following code might give you an idea of how to solve the problem.

```
set.seed(450)
cv.error <- NULL
k <- 10

library(plyr)
pbar <- create_progress_bar('text')
pbar$init(k)

total_index <- sample(1:nrow(data), nrow(data)) ## shuffle the whole index of samples
fold_size <- floor(nrow(data) / k)

for(i in 1:k){
    index <- total_index[((i - 1) * fold_size + 1):(i * fold_size)]
    ## pick the i-th block of fold_size samples from the shuffled indices,
    ## so you avoid picking overlapping data points in other validation sets

    train.cv <- scaled[-index,] ## pick the samples not in the index (-validation)
    test.cv <- scaled[index,]   ## pick the fold_size samples for validation

    nn <- neuralnet(f,data=train.cv,hidden=c(5,2),linear.output=T)

    pr.nn <- compute(nn,test.cv[,1:13])
    pr.nn <- pr.nn$net.result*(max(data$medv)-min(data$medv))+min(data$medv)

    test.cv.r <- (test.cv$medv)*(max(data$medv)-min(data$medv))+min(data$medv)

    cv.error[i] <- sum((test.cv.r - pr.nn)^2)/nrow(test.cv)

    pbar$step()
}
```

Upvotes: 2 [selected_answer]<issue_comment>username_2: That is not K-fold cross validation; with each fold, a new random sample is chosen, rather than assigning the samples into K folds up front and then cycling through, making each fold the test set in turn.

```
set.seed(450)
cv.error <- NULL
k <- 10

library(plyr)
pbar <- create_progress_bar('text')
pbar$init(k)

## Assign samples to K folds initially
index <- sample(letters[seq_len(k)], nrow(data), replace=TRUE)

for(i in seq_len(k)) {
    ## Make all samples assigned the current letter the test set
    test_ind <- index == letters[[i]]
    test.cv <- scaled[test_ind, ]
    ## All other samples are assigned to the training set
    train.cv <- scaled[!test_ind, ]

    ## It is bad practice to use T instead of TRUE,
    ## since T is not a reserved word, and can be overwritten
    nn <- neuralnet(f,data=train.cv,hidden=c(5,2),linear.output=TRUE)

    pr.nn <- compute(nn,test.cv[,1:13])
    pr.nn <- pr.nn$net.result*(max(data$medv)-min(data$medv))+min(data$medv)

    test.cv.r <- (test.cv$medv) * (max(data$medv) - min(data$medv)) + min(data$medv)

    cv.error[i] <- sum((test.cv.r - pr.nn) ^ 2) / nrow(test.cv)

    pbar$step()
}
```

Then, to produce error estimates with less variance, I would repeat this process multiple times and visualise the distribution of cross-validation error across repeated assays.
I think you would be better off using a package which accomplishes tasks like this for you, such as the excellent [caret](https://github.com/topepo/caret). Upvotes: 0
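For reference, a minimal sketch of that caret-based route (the data frame below is a random stand-in for the question's `scaled`; only the fold bookkeeping is shown):

```
library(caret)

set.seed(450)
scaled <- data.frame(medv = rnorm(506))  # illustrative stand-in

# createFolds assigns every row to exactly one of k folds: disjoint, no repeats
folds <- createFolds(scaled$medv, k = 10, list = TRUE)

stopifnot(sort(unlist(folds)) == seq_len(nrow(scaled)))  # each row is used exactly once
sapply(folds, length)  # roughly equal fold sizes
```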
2018/03/20
795
3,000
<issue_start>username_0: I'm currently following a rabbitmq tutorial and running into an issue. No matter how close I follow the tutorial I keep getting this error when trying to run my send.py and receive.py: ``` pika.exceptions.ConnectionClosed: Connection to 127.0.0.1:5672 failed: [Errno 61] Connection refused ``` This is the send.py: ``` #!/usr/bin/env python import pika connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost')) channel = connection.channel() channel.queue_declare(queue='hello') channel.basic_publish(exchange='', routing_key='hello', body='Hello World!') print(" [x] Sent 'Hello World!'") connection.close() ``` This is the receive.py: ``` #!/usr/bin/env python import pika connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost')) channel = connection.channel() channel.queue_declare(queue='hello') def callback(ch, method, properties, body): print(" [x] Received %r" % body) channel.basic_consume(callback, queue='hello', no_ack=True) print(' [*] Waiting for messages. To exit press CTRL+C') channel.start_consuming() ``` I can't for the life of me figure out what I'm doing wrong. I've looked at other post on here that ask a similar question but still no dice.<issue_comment>username_1: Are you using docker to run your rabbitmq? If yes, I suggest you to double check ports binding. For example: -p 5672:5672 Upvotes: 2 <issue_comment>username_2: I used the same tutorial it seems, and they did miss the dependency to install and run rabbitmq After doing `brew install rabbitmq` and then `brew services start rabbitmq` the connection to localhost on Pika then works Upvotes: 2 <issue_comment>username_3: If you are using docker to run rabbitmq and followed instructions in the tutorial and the docker page (<https://github.com/docker-library/docs/tree/master/rabbitmq>), you might run into this problem. When you run the container without specifying the port mapping option ("-p"), the port binding will be valid only within the container. You can verify by doing a "docker exec" into the container and then running netstat. So what you would want to do is to restart the rabbitmq container and specify a port mapping. Example: docker run -d --hostname my-rabbit --name some-rabbit -p 5672:5672 rabbitmq:latest Upvotes: 2 <issue_comment>username_4: The following steps fixed it for me: 1. In your Terminal, run `brew info rabbitmq`. It should show if all dependencies are installed (install any that is missing): 2. Make sure you have your ***.zshrc*** file in your usr directory. If you don't have it, then run `touch ~/.zshrc` in a terminal. 3. In your .zshrc file, make sure you have either of these two lines: (Mac Intel)`export PATH=$PATH:/usr/local/sbin` (MAC Silicon)`export PATH=$PATH:/opt/homebrew/sbin` 4. Finally, in your Terminal run `brew services restart rabbitmq` and test again Upvotes: 0
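As a quick way to tell a code problem from a broker problem, a small connectivity probe along these lines can help (illustrative sketch; adjust host and port to your setup):

```
import pika

# Hypothetical probe: succeeds only if a broker is listening on the port.
params = pika.ConnectionParameters(host='localhost', port=5672)
try:
    connection = pika.BlockingConnection(params)
    print("Broker reachable")
    connection.close()
except pika.exceptions.AMQPConnectionError as exc:
    print("Broker not reachable -- is rabbitmq-server running?", exc)
```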
2018/03/20
540
1,472
<issue_start>username_0: I have this matrix `mat=matrix(rnorm(15), 1, 15)` [1x15] and I want to use the function `apply` to calculate the sum of the rows of matrix `mat`, e.g.

```
     [,1] [,2] [,3] [,4]
[1,]    1    2    3    0   #the sum is then 6
```

Here is my code:

```
mat=matrix(rnorm(15), 1, 15)
apply(mat[,1:15],1,sum)
```

Here is the error:

**Error in apply(mat[, 1:15], 1, sum) : dim(X) must have a positive length**

If I create two or more rows, the `apply` function works, e.g.

```
mat=matrix(rnorm(15), 2, 15)
apply(mat[,1:15],1,sum) #this will work
```

What should I change so that the function works even for matrices with one row?<issue_comment>username_1: The problem is that when you select the elements 1 to 15 you are converting your matrix to a vector, so it doesn't have any dimensions. Just using `as.matrix` in the apply call will make it work.

```
mat=matrix(rnorm(15), 1, 15)
apply(as.matrix(mat[,1:15]),2,sum)
```

Upvotes: 3 [selected_answer]<issue_comment>username_2: Do this instead, and then go read the `?Extract` help page. That page has a wealth of information about the most fundamental functions of R, namely "[" and "[[".

```
apply(mat[ ,1:15, drop=FALSE],1,sum)  # preserves the matrix class
[1] -1.621488
```

Upvotes: 2 <issue_comment>username_3: Create a matrix and use the general apply formulation: matrix = m, MARGIN = 1 (for rows) or 2 (for columns), and FUN = sum.

```
m=matrix(1:9,3,3)
apply(m,1,sum)
```

Upvotes: 0
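A compact illustration of the `drop=FALSE` point from the answers (illustrative session):

```
mat <- matrix(rnorm(15), 1, 15)

dim(mat[, 1:15])               # NULL: subsetting dropped the result to a vector
dim(mat[, 1:15, drop = FALSE]) # 1 15: still a matrix, so apply() works

apply(mat[, 1:15, drop = FALSE], 1, sum)
```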
2018/03/20
432
1,293
<issue_start>username_0: Trying to parse the JSON content from AWS SQS. First converting a string to a JSON string and then to a JSON object. What is the correct way to handle this scenario?

```
// JSON from SQS
var x='{"Message":"{\"default\":{\\\"key1\\\":\\\"value1\\\",\\\"key2\\\":\\\"value2\\\"}\"}","Timestamp":"2018-03-20T03:21:32.136Z"}';

x1=JSON.stringify(x);
var obj = JSON.parse(x1);

console.log(obj.Message); // undefined
alert(obj["Message"]);    // undefined
```
2018/03/20
854
3,637
<issue_start>username_0: I'm learning React Native and I was looking up how to make a promise chain with an API call as referenced here: <https://facebook.github.io/react-native/docs/network.html#handling-the-response>. In looking at how to get user permissions, I looked at the docs page for it (<https://facebook.github.io/react-native/docs/permissionsandroid.html>) and wondered if the promise concept could be applied here to make it a little cleaner.

My main question is this: How do promises improve functionality (if at all), and what is the best way to adapt code to use promises?

Here is some other code that I would use for easy reference:

```
async requestLocationPermission() {
    const chckLocationPermission = PermissionsAndroid.check(PermissionsAndroid.PERMISSIONS.ACCESS_FINE_LOCATION);
    if (chckLocationPermission === PermissionsAndroid.RESULTS.GRANTED) {
        console.log("You've access for the location");
    } else {
        try {
            const granted = await PermissionsAndroid.request(PermissionsAndroid.PERMISSIONS.ACCESS_FINE_LOCATION, {
                'title': 'This App required Location permission',
                'message': 'We required Location permission in order to get device location ' +
                           'Please grant us.'
            })
            if (granted === PermissionsAndroid.RESULTS.GRANTED) {
                console.log("You've access for the location");
            } else {
                console.log("You don't have access for the location");
            }
        } catch (err) {
            console.log(err)
        }
    }
};
```<issue_comment>username_1: Simply:

* You write much cleaner code with async/await rather than with then(), catch(), etc.
* If you have experience with other programming languages, you probably miss try/catch blocks. Async/await allows you to write sync & async code in the same block.

My personal recommendation: get familiar with callbacks => promises => generators => async/await (learn them in that order). It's actually your decision which one best fits your purposes.

I'd suggest looking at a very detailed guide about these topics (he actually wrote a book for just these topics): [You Don't Know JS: Async & Performance](https://github.com/getify/You-Dont-Know-JS/blob/master/async%20%26%20performance/README.md)

Upvotes: 0 <issue_comment>username_2: The short answer is yes. If you need the data from your API request, then you need to use promises. For more on why you would need to use a promise, you'll want to learn more about the [call stack](https://medium.com/@gaurav.pandvia/understanding-javascript-function-executions-tasks-event-loop-call-stack-more-part-1-5683dea1f5ec).

I think there's a small confusion (which is pretty common), but the code you linked is also using promises. [Async/Await](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/async_function) is a new JavaScript feature that makes writing promises easier. There are more than a few ways you can write promises in JavaScript, e.g. [Fetch](https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API), [Promise](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise), [Async/Await](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/async_function). Not to mention libraries like AngularJS have their own implementation of this.

I'd recommend studying up on the call stack first and then going from there.

Upvotes: 2 [selected_answer]
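For a side-by-side feel of the two styles discussed above, here is an illustrative sketch built around the React Native permission call from the question (logging only; not the author's code):

```js
// Promise chaining
function getPermission() {
  return PermissionsAndroid.request(PermissionsAndroid.PERMISSIONS.ACCESS_FINE_LOCATION)
    .then(granted => console.log(granted))
    .catch(err => console.log(err));
}

// async/await -- same behaviour, flatter control flow
async function getPermissionAsync() {
  try {
    const granted = await PermissionsAndroid.request(
      PermissionsAndroid.PERMISSIONS.ACCESS_FINE_LOCATION);
    console.log(granted);
  } catch (err) {
    console.log(err);
  }
}
```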
2018/03/20
466
1,733
<issue_start>username_0: I'm trying to make an abstract class, and one of the methods that its children must override should return an instance of the child class.

```
class JsonSerializable {
public:
    virtual fromJson(string jsonStr) const = 0;
};

class ConcreteSerializable : public JsonSerializable {
public:
    ConcreteSerializable fromJson(string jsonStr) const {
        return ConcreteSerializable();
    }
};
```

I tried using templates following this [answer](https://stackoverflow.com/a/30687399/2599537), but I get an error that `templates may not be virtual`. Is there a way to do what I'm looking for without using raw pointers as the return type?<issue_comment>username_1: Are you trying to implement something that historically was done using [CRTP](https://en.wikipedia.org/wiki/Curiously_recurring_template_pattern)?

```
struct Interface {
    virtual Interface *inflate(std::string const &json) = 0;
    virtual ~Interface() {}
};

template <typename Child>
struct Base: public Interface {
    Interface *inflate(std::string const &json) {
        return new Child(json);
    }
};

struct Child: public Base<Child> {
    Child(std::string const &json);
};
```

Upvotes: 1 [selected_answer]<issue_comment>username_2: You cannot create an object of an abstract type. And because you cannot create such an object, you also cannot return it. This is the reason why all examples returning a base/derived object always return a pointer or some reference to the base class.

```
struct B {
    virtual B *fromJson(const std::string &jsonStr) const = 0;
};

struct D : public B {
    D(const std::string &jsonStr);
    D *fromJson(const std::string &jsonStr) const;
};

D *D::fromJson(const std::string &jsonStr) const {
    return new D(jsonStr);
}
```

Upvotes: 1
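One refinement the answers hint at but do not show: covariant return types only work with raw pointers or references, so a common pattern is to keep the covariant override private and expose a smart-pointer wrapper. A sketch (C++11; all names here are illustrative, not from the thread):

```
#include <memory>
#include <string>

// Public API returns unique_ptr; the covariant raw-pointer override stays private.
struct B {
    std::unique_ptr<B> clone(const std::string &json) const {
        return std::unique_ptr<B>(fromJson(json));
    }
    virtual ~B() = default;
private:
    virtual B *fromJson(const std::string &json) const = 0;
};

struct D : B {
    explicit D(const std::string &) {}
private:
    D *fromJson(const std::string &json) const override { return new D(json); }
};

int main() {
    D d("");
    std::unique_ptr<B> p = d.clone("{}"); // ownership handled, no naked new at the call site
}
```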
2018/03/20
520
1,984
<issue_start>username_0: I want to get the value of an input[text] in my controller when the data is changed, as in this source code:

```
(function () {
    'use strict';
    var app = angular.module('app', ['ngMaterial']);

    app.controller('ScanDataCtrl', function ($scope) {
        $scope.getScannedData = function () {
            console.log($scope.formScanData.scanDataIReceipt);
        };
    });

    app.directive('scanDataBScan', function ($mdDialog, $sce, $http) {
        return {
            restrict: 'C',
            link: function (scope, element, attrs) {
                element.on('click', function () {
                    scope.getScannedData(); // => the value is always undefined
                });
            }
        };
    });
});

Scan
```

The result of this source code is always undefined.

Please help me. I'm sorry if my English is bad. Thank you!
2018/03/20
617
2,207
<issue_start>username_0: I'm trying to access some webpage multiple times, but each time I'll use different parameters. Here's an example of what I'm trying:

```
var codes = [1, 2, 3, 4, 5]

for(i = 0; i <= codes.lenght; i++){
    window.open('http://localhost/applicationame/execute/cod/'+codes[i],'_self');
}
```

(I removed the real URL names because of work-related privacy)

But when I execute it in the browser console, the browser only accesses the page containing the last element of my array. I tried using setTimeout to make the browser wait and then visit the next page, but it just gives me the same behavior.

I tried googling some solutions but found no good answers that apply to my case. I'm open to using jQuery too if needed, or other tools that can help me with it.<issue_comment>username_1: That's happening because you open all instances in the same window. Try replacing '\_self' with the value of i, so you'll have a unique name parameter for each window

Upvotes: 2 <issue_comment>username_2: You have several problems in your code:

* \_self will open the target url in the same window - so your loop is interrupted at the same moment as you load the new page.
* `codes.lenght` should be `codes.length`

I have tried to create a working example - but was not able to. **Obviously the browser does not open windows in a loop.** At least Chrome shows a message that windows have been blocked. See my test here:

```js
function openWindows() {
  var codes = [1, 2, 3, 4, 5];
  for(i = 0; i < codes.length; i++){
    var params = {
      url: 'https://edition.cnn.com/?var='+codes[i],
      i: i
    };
    var delay = i * 2000;
    setTimeout(function(params) {
      console.log(params.url);
      window.open(params.url,'_blank', 'PopUp', params.i);
    }, delay, params);
  }
}
```

```html
Click to open windows
=====================

Start
```

If you want to test, please use Plunker as a playground, as the SO playground does not open new windows: <https://plnkr.co/edit/L318Q564spTqMVkqQUPa?p=preview>

In modern web development it's rarely seen that browser windows are opened. You could try to add new DIV layers with iFrames instead.

Upvotes: -1
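Combining both answers, a corrected version of the loop from the question might look like this (illustrative; pop-up blockers may still suppress all but the first window):

```js
var codes = [1, 2, 3, 4, 5];
for (var i = 0; i < codes.length; i++) {                      // < and .length, not <= and .lenght
  window.open('http://localhost/applicationame/execute/cod/' + codes[i],
              'win' + i);                                     // unique target instead of '_self'
}
```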
2018/03/20
1,145
3,671
<issue_start>username_0: As you can see below, I have a sample JSON array object. That's how I want the JSON structure to look at the end. I want the object generated dynamically based on the anchor tags which are inside a list. As you can see, the title is the name (html()) of the anchor tag and URL is the href attribute. So on clicking the Create JSON button, a function will loop through all the list items and their anchor tags and create an object like the sample below. My attempt did not go very far; I was trying to get the html of all the anchor tags, then use a.html() for the title and a.attr('href') for the link. How would I go about doing this?

*JSON ARRAY OBJECT SAMPLE*

```js
[
  {
    "id": "1",
    "Title": "Google",
    "URL": "https://www.google.com"
  },
  {
    "id": "1",
    "Title": "yahoo",
    "URL": "https://www.microsoft.com"
  }
]
```

*HTML*

```js
function createJson() {
  var linkjson=[];
  var listItemsA = $(".addlink_dynlist > li").find('a');
  listItemsA.each(function(a) {
    console.log(a);
    //linkjson.push({id:1,Title:a.html()})
  });
}
```

```html
Document

- [google](https://www.google.com/) | *Edit*
- [yahoo](https://www.yahoo.com/) | *Edit*
- [cnn](https://www.cnn.com/) | *Edit*
- [fox](http://www.foxnews.com/) | *Edit*

Create json
```<issue_comment>username_1: I'm not sure what you want id to be?

```js
function createJson() {
  var linkjson = $(".addlink_dynlist > li a").map((index, a) => {
    return {
      id: a.id,
      Title: a.innerText,
      URL: a.href
    };
  }).get();
  console.log(linkjson);
}
```

```html
Document

- [google](https://www.google.com/) | *Edit*
- [yahoo](https://www.yahoo.com/) | *Edit*
- [cnn](https://www.cnn.com/) | *Edit*
- [fox](http://www.foxnews.com/) | *Edit*

Create json
```

Upvotes: 2 <issue_comment>username_2: Try this.

```js
function createJson() {
  var linkjson=[];
  var listItemsA = $(".addlink_dynlist > li").find('a');
  listItemsA.each(function(a) {
    var x={};
    x.id=1;
    x.title=$(listItemsA[a]).html();
    x.url=$(listItemsA[a]).attr('href');
    linkjson.push(x)
  });
  console.log(linkjson);
}
```

```html
Document

- [google](https://www.google.com/) | *Edit*
- [yahoo](https://www.yahoo.com/) | *Edit*
- [cnn](https://www.cnn.com/) | *Edit*
- [fox](http://www.foxnews.com/) | *Edit*

Create json
```

Upvotes: 0 <issue_comment>username_3:

```
window.createJson = function createJson() {
  var linkjson=[];
  var listItemsA = $(".addlink_dynlist > li a")
  linkjson = listItemsA.map(function(index, node) {
    var $node = $(node);
    return {id: index, text:$node.text(), URL: $node.attr("href")}
  })
}
```

If I understand correctly, your function should look like this

Upvotes: 0 <issue_comment>username_4: You have to look at each element's attributes collection. Your function could accept a given selector and return an array of objects containing the attributes of all elements that matched the selector:

```js
function createJson(selector) {
  var linkjson=[];
  Array
    // Grab HTMLElements
    .from(document.querySelectorAll(selector))
    // Only keep attribute collections
    .map(x => x.attributes)
    // Add a new entry to linkjson for each element
    .forEach(at => {
      let item;
      linkjson[linkjson.length] = {};
      item = linkjson[linkjson.length-1];
      Array.from(at).forEach(a =>
        (item[a.name] = a.value)
      );
    });
  return linkjson
}

console.log(createJson('.target'));
```

```html
Element1
Element2
Element3
```

Upvotes: 0
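One small follow-up worth noting: the answers all build an array of JavaScript objects; if actual JSON text is needed (for example, to POST it somewhere), one extra step applies (illustrative):

```js
var json = JSON.stringify(linkjson);
// '[{"id":0,"Title":"google","URL":"https://www.google.com/"}, ...]'
```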
2018/03/20
2,184
7,095
<issue_start>username_0: I've a Zebra ZT610 and I want to print a label, in PDF format, containing multiple pages, and then have it cut on the last page.

I've tried using the delayed cut mode and sending the ~JK command, but I'm using a self-written Java application to do the invocation of printing. I've also tried to add the string "${^XB}$" into the PDF document before each page break, except the last, and used the pass-through setting in the driver to inhibit the cut command, but that does not seem to work either, as the Java print job renders such text as an image.

I've tried the official Zebra driver as well as the NiceLabel Zebra driver in the hope that they may have more "Custom Commands" options in the settings, but nothing has yet come to light.<issue_comment>username_1: After we had the same issues for several weeks and neither the vendor nor Google nor Zebra's own support came up with a FULL working solution, we've worked out the following EASY 5-step solution for this (apparently pretty common) Zebra cutter issue/problem:

**Step 1:** Set Cutter-Mode to Tear-Off in the settings. This will disable the auto-cutting after every single page.

**Step 2:** Go to Custom-Commands in the settings dialog (allows ZPL coding).

**Step 3:** Set the first drop-down to `"DOCUMENT"`.

**Step 4:** Set the Start-Section to `"TEXT"` and paste in

```
^XA^MMD^XZ^XA^JUS^XZ
```

`MMD` enables PAUSE-Mode. The `JK` command is only available in Pause-Mode and many Zebra printers do not support the much easier command `CN` (Cut-Now). `JUS` saves the setting to the printer.

**Step 5:** Set the End-Section to `"ANALYZED TEXT"` and paste in

```
˜JK˜PS
```

`JK` sets the cut command to the end of the document, `PS` disables the pause mode (and thus starts printing immediately).

When everything looks as described above, hit `"APPLY"` and your Zebra printer will automatically cut after the end of each document you send to it. You just send your PDF using Sumatra or whatever you prefer. The cutter handling is now automatically done by the printer settings.

**Alternatively,** if you want to do this programmatically, use the START and END codes at the corresponding positions in your ZPL code instead. Note that `˜CMDs` cannot be sent in combination with `^CMDs`; that's why there's no XA...XZ block to reset any settings (which is not necessary in this scenario, as it only affects the print session and `PS` turns the pause mode back to OFF).

Upvotes: 2 <issue_comment>username_2: I had a similar concern, but as the print server was CUPS, I wasn't able to use Windows drivers and utilities (settings dialog). So basically, I did the following:

1. On the printer, set `Cutter` mode. This will cut after each printed label.
2. In my Java code, thanks to the [Apache PDFBox](https://pdfbox.apache.org/) lib, open the PDF and for each page, render it as a monochrome `BufferedImage`, get its bytes array, and get its hex representation.
3. Write a few ZPL commands to download the hex as graphic data, and add the `^XB` command before the `^XZ` one, in order to prevent a cut here, except for the last page, so that there is a cut only at the end of the document.
4. Send the generated ZPL code to the printer. In my case, I send it as a raw document through IPP, using `application/vnd.cups-raw` as mime-type, thanks to the great lib [ipp-client-kotlin](https://github.com/gmuth/ipp-client-kotlin), but it is also possible to use the Java native printing API with bytes.
Below in a snippet of Java code, for demo purpose: ``` public void printPdfStream(InputStream pdfStream) throws IOException { try (PDDocument pdDocument = PDDocument.load(pdfStream)) { PDFRenderer pdfRenderer = new PDFRenderer(pdDocument); StringBuilder builder = new StringBuilder(); for (int pageIndex = 0; pageIndex < pdDocument.getNumberOfPages(); pageIndex++) { boolean isLastPage = pageIndex == pdDocument.getNumberOfPages() - 1; BufferedImage bufferedImage = pdfRenderer.renderImageWithDPI(pageIndex, 300, ImageType.BINARY); byte[] data = ((DataBufferByte) bufferedImage.getData().getDataBuffer()).getData(); int length = data.length; // Invert bytes for (int i = 0; i < length; i++) { data[i] ^= 0xFF; } builder.append("~DGR:label,").append(length).append(",").append(length / bufferedImage.getHeight()) .append(",").append(Hex.getString(data)); builder.append("^XA"); builder.append("^FO0,0"); builder.append("^XGR:label,1,1"); builder.append("^FS"); if (!isLastPage) { builder.append("^XB"); } builder.append("^XZ"); } IppPrinter ippPrinter = new IppPrinter("ipp://printserver/printers/myprinter"); ippPrinter.printJob(new ByteArrayInputStream(builder.toString().getBytes()), documentFormat("application/vnd.cups-raw")); } } ``` **Important:** hex data can (and should) be compressed, as mentioned in ZPL Programming Guide, section [Alternative Data Compression Scheme for ~DG and ~DB Commands](https://www.zebra.com/content/dam/zebra/manuals/printers/common/programming/zpl-zbi2-pm-en.pdf). Depending on the PDF content, it may drastically reduce the data size (by a factor 10 in my case!). Note that Zebra's support provides [a few more alternatives](https://support.zebra.com/cpws/docs/xi3plus/cutter_control.htm) in order to controller the cutter, but this one worked immediately. Upvotes: 2 <issue_comment>username_3: Zebra Automatic Cut - Found another solution. 1. Create a file with the name: **Delayed Cut Settings.txt** 2. Insert the following code: **^XA^MMC,N^XZ** 3. Send it to the printer After you do the 3 steps above, all the documents you send to the printer will be cut automatically. (To disable that function send again the 'Delayed Cut Setting.txt' with the following code:**^XA^MMD^XZ** ) The first document you send to the printer, you need to ADD (just once) the command **^MMC,N** before the **^XZ** My EXAMPLE TXT: ```none ^XA ^FX Top section with logo, name and address. ^CF0,60 ^FO50,50^GB100,100,100^FS ^FO75,75^FR^GB100,100,100^FS ^FO93,93^GB40,40,40^FS ^FO220,50^FDIntershipping, Inc.^FS ^CF0,30 ^FO220,115^FD1000 Shipping Lane^FS ^FO220,155^FDShelbyville TN 38102^FS ^FO220,195^FDUnited States (USA)^FS ^FO50,250^GB700,3,3^FS ^FX Second section with recipient address and permit information. ^CFA,30 ^FO50,300^FDJohn Doe^FS ^FO50,340^FD100 Main Street^FS ^FO50,380^FDSpringfield TN 39021^FS ^FO50,420^FDUnited States (USA)^FS ^CFA,15 ^FO600,300^GB150,150,3^FS ^FO638,340^FDPermit^FS ^FO638,390^FD123456^FS ^FO50,500^GB700,3,3^FS ^FX Third section with bar code. ^BY5,2,270 ^FO100,550^BC^FD12345678^FS ^FX Fourth section (the two boxes on the bottom). ^FO50,900^GB700,250,3^FS ^FO400,900^GB3,250,3^FS ^CF0,40 ^FO100,960^FDCtr. X34B-1^FS ^FO100,1010^FDREF1 F00B47^FS ^FO100,1060^FDREF2 BL4H8^FS ^CF0,190 ^FO470,955^FDCA^FS ^MMC,N ^XZ ``` Upvotes: 0
2018/03/20
6,000
19,776
<issue_start>username_0: I'm working on a project that involves a `boost::beast` websocket/http mixed server, which runs on top of `boost::asio`. I've heavily based my project off the [`advanced_server.cpp`](http://www.boost.org/doc/libs/1_66_0/libs/beast/example/advanced/server/advanced_server.cpp) example source.

It works fine, but right now I'm attempting to add a feature that requires the sending of a message to *all* connected clients.

I'm not very familiar with `boost::asio`, but right now I can't see any way to have something like "broadcast" events (if that's even the correct term).

My naive approach would be to see if I can have the construction of `websocket_session()` attach something like an event listener, and the destructor detach the listener. At that point, I could just fire the event, and have all the currently valid websocket sessions (to which the lifetime of `websocket_session()` is scoped) execute a callback.

There is <https://stackoverflow.com/a/17029022/268006>, which does more or less what I want by (ab)using a `boost::asio::steady_timer`, but that seems like a kind of horrible hack to accomplish something that should be pretty straightforward.

Basically, given a stateful `boost::asio` server, how can I do an operation on multiple connections?<issue_comment>username_1: First off: You can broadcast UDP, but that's not to connected clients. That's just... UDP.

Secondly, that link shows how to have a condition-variable (event)-like interface in Asio. That's only a tiny part of your problem. You forgot about the big picture: you need to know about the set of open connections, one way or the other:

1. e.g. keeping a container of session pointers (`weak_ptr`) to each connection
2. each connection subscribing to a signal slot (e.g. [Boost Signals2](https://www.boost.org/doc/libs/release/doc/html/signals2.html)).

Option 1. is great for performance, option 2. is better for flexibility (decoupling the event source from subscribers, making it possible to have heterogeneous subscribers, e.g. not from connections).

Because I think Option 1. is much simpler w.r.t. threading, better w.r.t. efficiency (you can e.g. serve all clients from one buffer without copying) and you probably don't need to doubly decouple the signal/slots, let me refer to an answer where I already showed as much for pure Asio (without Beast):

* [How to design proper release of a boost::asio socket or wrapper thereof](https://stackoverflow.com/questions/43239208/how-to-design-proper-release-of-a-boostasio-socket-or-wrapper-thereof/43243314#43243314)

It shows the concept of a "connection pool" - which is essentially a thread-safe container of `weak_ptr` objects with some garbage collection logic.

Demonstration: Introducing Echo Server
--------------------------------------

After [chatting about things](https://chat.stackoverflow.com/transcript/message/41744223#41744223) I wanted to take the time to actually demonstrate the two approaches, so it's completely clear what I'm talking about.
First let's present a simple, run-of-the mill asynchronous TCP server with * with multiple concurrent connections * each connected session reads from the client line-by-line, and echoes the same back to the client * stops accepting after 3 seconds, and exits after the last client disconnects **`[master branch on github](https://github.com/username_1/broadcast_to_sessions/blob/master/test.cpp)`** ``` #include #include #include #include namespace ba = boost::asio; using ba::ip::tcp; using boost::system::error\_code; using namespace std::chrono\_literals; using namespace std::string\_literals; static bool s\_verbose = false; struct connection : std::enable\_shared\_from\_this { connection(ba::io\_context& ioc) : \_s(ioc) {} void start() { read\_loop(); } void send(std::string msg, bool at\_front = false) { post(\_s.get\_io\_service(), [=] { // \_s.get\_executor() for newest Asio if (enqueue(std::move(msg), at\_front)) write\_loop(); }); } private: void do\_echo() { std::string line; if (getline(std::istream(&\_rx), line)) { send(std::move(line) + '\n'); } } bool enqueue(std::string msg, bool at\_front) { // returns true if need to start write loop at\_front &= !\_tx.empty(); // no difference if (at\_front) \_tx.insert(std::next(begin(\_tx)), std::move(msg)); else \_tx.push\_back(std::move(msg)); return (\_tx.size() == 1); } bool dequeue() { // returns true if more messages pending after dequeue assert(!\_tx.empty()); \_tx.pop\_front(); return !\_tx.empty(); } void write\_loop() { ba::async\_write(\_s, ba::buffer(\_tx.front()), [this,self=shared\_from\_this()](error\_code ec, size\_t n) { if (s\_verbose) std::cout << "Tx: " << n << " bytes (" << ec.message() << ")" << std::endl; if (!ec && dequeue()) write\_loop(); }); } void read\_loop() { ba::async\_read\_until(\_s, \_rx, "\n", [this,self=shared\_from\_this()](error\_code ec, size\_t n) { if (s\_verbose) std::cout << "Rx: " << n << " bytes (" << ec.message() << ")" << std::endl; do\_echo(); if (!ec) read\_loop(); }); } friend struct server; ba::streambuf \_rx; std::list \_tx; tcp::socket \_s; }; struct server { server(ba::io\_context& ioc) : \_ioc(ioc) { \_acc.bind({{}, 6767}); \_acc.set\_option(tcp::acceptor::reuse\_address()); \_acc.listen(); accept\_loop(); } void stop() { \_ioc.post([=] { \_acc.cancel(); \_acc.close(); }); } private: void accept\_loop() { auto session = std::make\_shared(\_acc.get\_io\_context()); \_acc.async\_accept(session->\_s, [this,session](error\_code ec) { auto ep = ec? tcp::endpoint{} : session->\_s.remote\_endpoint(); std::cout << "Accept from " << ep << " (" << ec.message() << ")" << std::endl; session->start(); if (!ec) accept\_loop(); }); } ba::io\_context& \_ioc; tcp::acceptor \_acc{\_ioc, tcp::v4()}; }; int main(int argc, char\*\* argv) { s\_verbose = argc>1 && argv[1] == "-v"s; ba::io\_context ioc; server s(ioc); std::thread th([&ioc] { ioc.run(); }); // todo exception handling std::this\_thread::sleep\_for(3s); s.stop(); // active connections will continue th.join(); } ``` Approach 1. Adding Broadcast Messages ------------------------------------- So, let's add "broadcast messages" that get sent to all active connections simultaneously. We add two: * one at each new connection (saying "Player ## has entered the game") * one that emulates a global "server event", like you described in the question). 
It gets triggered from within main: ``` std::this_thread::sleep_for(1s); auto n = s.broadcast("random global event broadcast\n"); std::cout << "Global event broadcast reached " << n << " active connections\n"; ``` Note how we do this by registering a weak pointer to each accepted connection and operating on each: ``` _acc.async_accept(session->_s, [this,session](error_code ec) { auto ep = ec? tcp::endpoint{} : session->_s.remote_endpoint(); std::cout << "Accept from " << ep << " (" << ec.message() << ")" << std::endl; if (!ec) { auto n = reg_connection(session); session->start(); accept_loop(); broadcast("player #" + std::to_string(n) + " has entered the game\n"); } }); ``` `broadcast` is also used directly from `main` and is simply: ``` size_t broadcast(std::string const& msg) { return for_each_active([msg](connection& c) { c.send(msg, true); }); } ``` **`[`using-asio-post` branch on github](https://github.com/username_1/broadcast_to_sessions/blob/using-asio-post/test.cpp)`** ``` #include #include #include #include namespace ba = boost::asio; using ba::ip::tcp; using boost::system::error\_code; using namespace std::chrono\_literals; using namespace std::string\_literals; static bool s\_verbose = false; struct connection : std::enable\_shared\_from\_this { connection(ba::io\_context& ioc) : \_s(ioc) {} void start() { read\_loop(); } void send(std::string msg, bool at\_front = false) { post(\_s.get\_io\_service(), [=] { // \_s.get\_executor() for newest Asio if (enqueue(std::move(msg), at\_front)) write\_loop(); }); } private: void do\_echo() { std::string line; if (getline(std::istream(&\_rx), line)) { send(std::move(line) + '\n'); } } bool enqueue(std::string msg, bool at\_front) { // returns true if need to start write loop at\_front &= !\_tx.empty(); // no difference if (at\_front) \_tx.insert(std::next(begin(\_tx)), std::move(msg)); else \_tx.push\_back(std::move(msg)); return (\_tx.size() == 1); } bool dequeue() { // returns true if more messages pending after dequeue assert(!\_tx.empty()); \_tx.pop\_front(); return !\_tx.empty(); } void write\_loop() { ba::async\_write(\_s, ba::buffer(\_tx.front()), [this,self=shared\_from\_this()](error\_code ec, size\_t n) { if (s\_verbose) std::cout << "Tx: " << n << " bytes (" << ec.message() << ")" << std::endl; if (!ec && dequeue()) write\_loop(); }); } void read\_loop() { ba::async\_read\_until(\_s, \_rx, "\n", [this,self=shared\_from\_this()](error\_code ec, size\_t n) { if (s\_verbose) std::cout << "Rx: " << n << " bytes (" << ec.message() << ")" << std::endl; do\_echo(); if (!ec) read\_loop(); }); } friend struct server; ba::streambuf \_rx; std::list \_tx; tcp::socket \_s; }; struct server { server(ba::io\_context& ioc) : \_ioc(ioc) { \_acc.bind({{}, 6767}); \_acc.set\_option(tcp::acceptor::reuse\_address()); \_acc.listen(); accept\_loop(); } void stop() { \_ioc.post([=] { \_acc.cancel(); \_acc.close(); }); } size\_t broadcast(std::string const& msg) { return for\_each\_active([msg](connection& c) { c.send(msg, true); }); } private: using connptr = std::shared\_ptr; using weakptr = std::weak\_ptr; std::mutex \_mx; std::vector \_registered; size\_t reg\_connection(weakptr wp) { std::lock\_guard lk(\_mx); \_registered.push\_back(wp); return \_registered.size(); } template size\_t for\_each\_active(F f) { std::vector active; { std::lock\_guard lk(\_mx); for (auto& w : \_registered) if (auto c = w.lock()) active.push\_back(c); } for (auto& c : active) { std::cout << "(running action for " << c->\_s.remote\_endpoint() << ")" << std::endl; 
f(\*c); } return active.size(); } void accept\_loop() { auto session = std::make\_shared(\_acc.get\_io\_context()); \_acc.async\_accept(session->\_s, [this,session](error\_code ec) { auto ep = ec? tcp::endpoint{} : session->\_s.remote\_endpoint(); std::cout << "Accept from " << ep << " (" << ec.message() << ")" << std::endl; if (!ec) { auto n = reg\_connection(session); session->start(); accept\_loop(); broadcast("player #" + std::to\_string(n) + " has entered the game\n"); } }); } ba::io\_context& \_ioc; tcp::acceptor \_acc{\_ioc, tcp::v4()}; }; int main(int argc, char\*\* argv) { s\_verbose = argc>1 && argv[1] == "-v"s; ba::io\_context ioc; server s(ioc); std::thread th([&ioc] { ioc.run(); }); // todo exception handling std::this\_thread::sleep\_for(1s); auto n = s.broadcast("random global event broadcast\n"); std::cout << "Global event broadcast reached " << n << " active connections\n"; std::this\_thread::sleep\_for(2s); s.stop(); // active connections will continue th.join(); } ``` Approach 2: Those Broadcast But With Boost Signals2 --------------------------------------------------- The Signals approach is a fine example of [Dependency Inversion](https://en.wikipedia.org/wiki/Inversion_of_control). Most salient notes: * signal slots get invoked on the thread invoking it ("raising the event") * the `scoped_connection` is there so subscriptions are **\*automatically** removed when the `connection` is destructed * there's [subtle difference in the wording of the console message](https://github.com/username_1/broadcast_to_sessions/compare/using-asio-post...using-signals2?expand=1#diff-dfe5c65cf4ac041daf85f8f5f3d6dd74L156) from "reached # active connections" to "reached # active **subscribers**". > > *The difference is key to understanding the added flexibility: the signal owner/invoker does not know anything about the subscribers. 
That's the decoupling/dependency inversion we're talking about* > > > **`[`using-signals2` branch on github](https://github.com/username_1/broadcast_to_sessions/blob/using-signals2/test.cpp)`** ``` #include #include #include #include #include namespace ba = boost::asio; using ba::ip::tcp; using boost::system::error\_code; using namespace std::chrono\_literals; using namespace std::string\_literals; static bool s\_verbose = false; struct connection : std::enable\_shared\_from\_this { connection(ba::io\_context& ioc) : \_s(ioc) {} void start() { read\_loop(); } void send(std::string msg, bool at\_front = false) { post(\_s.get\_io\_service(), [=] { // \_s.get\_executor() for newest Asio if (enqueue(std::move(msg), at\_front)) write\_loop(); }); } private: void do\_echo() { std::string line; if (getline(std::istream(&\_rx), line)) { send(std::move(line) + '\n'); } } bool enqueue(std::string msg, bool at\_front) { // returns true if need to start write loop at\_front &= !\_tx.empty(); // no difference if (at\_front) \_tx.insert(std::next(begin(\_tx)), std::move(msg)); else \_tx.push\_back(std::move(msg)); return (\_tx.size() == 1); } bool dequeue() { // returns true if more messages pending after dequeue assert(!\_tx.empty()); \_tx.pop\_front(); return !\_tx.empty(); } void write\_loop() { ba::async\_write(\_s, ba::buffer(\_tx.front()), [this,self=shared\_from\_this()](error\_code ec, size\_t n) { if (s\_verbose) std::cout << "Tx: " << n << " bytes (" << ec.message() << ")" << std::endl; if (!ec && dequeue()) write\_loop(); }); } void read\_loop() { ba::async\_read\_until(\_s, \_rx, "\n", [this,self=shared\_from\_this()](error\_code ec, size\_t n) { if (s\_verbose) std::cout << "Rx: " << n << " bytes (" << ec.message() << ")" << std::endl; do\_echo(); if (!ec) read\_loop(); }); } friend struct server; ba::streambuf \_rx; std::list \_tx; tcp::socket \_s; boost::signals2::scoped\_connection \_subscription; }; struct server { server(ba::io\_context& ioc) : \_ioc(ioc) { \_acc.bind({{}, 6767}); \_acc.set\_option(tcp::acceptor::reuse\_address()); \_acc.listen(); accept\_loop(); } void stop() { \_ioc.post([=] { \_acc.cancel(); \_acc.close(); }); } size\_t broadcast(std::string const& msg) { \_broadcast\_event(msg); return \_broadcast\_event.num\_slots(); } private: boost::signals2::signal \_broadcast\_event; size\_t reg\_connection(connection& c) { c.\_subscription = \_broadcast\_event.connect( [&c](std::string msg){ c.send(msg, true); } ); return \_broadcast\_event.num\_slots(); } void accept\_loop() { auto session = std::make\_shared(\_acc.get\_io\_context()); \_acc.async\_accept(session->\_s, [this,session](error\_code ec) { auto ep = ec? tcp::endpoint{} : session->\_s.remote\_endpoint(); std::cout << "Accept from " << ep << " (" << ec.message() << ")" << std::endl; if (!ec) { auto n = reg\_connection(\*session); session->start(); accept\_loop(); broadcast("player #" + std::to\_string(n) + " has entered the game\n"); } }); } ba::io\_context& \_ioc; tcp::acceptor \_acc{\_ioc, tcp::v4()}; }; int main(int argc, char\*\* argv) { s\_verbose = argc>1 && argv[1] == "-v"s; ba::io\_context ioc; server s(ioc); std::thread th([&ioc] { ioc.run(); }); // todo exception handling std::this\_thread::sleep\_for(1s); auto n = s.broadcast("random global event broadcast\n"); std::cout << "Global event broadcast reached " << n << " active subscribers\n"; std::this\_thread::sleep\_for(2s); s.stop(); // active connections will continue th.join(); } ``` > > See the diff between Approach 1. 
and 2.: **`[Compare View on github](https://github.com/username_1/broadcast_to_sessions/compare/using-asio-post...using-signals2?expand=1#diff-dfe5c65cf4ac041daf85f8f5f3d6dd74)`**

A sample of the output when run against 3 concurrent clients with:

```
(for a in {1..3}; do netcat localhost 6767 < /etc/dictionaries-common/words > echoed.$a& sleep .1; done; time wait)
```

[![enter image description here](https://i.stack.imgur.com/fNumi.png)](https://i.stack.imgur.com/fNumi.png)

Upvotes: 5 [selected_answer]<issue_comment>username_2: The answer from @username_1 was amazing, so I'll be brief. Generally speaking, to implement an algorithm which operates on all active connections you must do the following:

* Maintain a list of active connections. If this list is accessed by multiple threads, it will need synchronization (`std::mutex`). New connections should be inserted to the list, and when a connection is destroyed or becomes inactive it should be removed from the list.
* To iterate the list, synchronization is required if the list is accessed by multiple threads (i.e. more than one thread calling `asio::io_context::run`, or if the list is also accessed from threads that are not calling `asio::io_context::run`)
* During iteration, if the algorithm needs to inspect or modify the state of any connection, and that state can be changed by other threads, additional synchronization is needed. This includes any internal "queue" of messages that the connection object stores.
* A simple way to synchronize a connection object is to use `boost::asio::post` to submit a function for execution on the connection object's context, which will be either an explicit strand (`boost::asio::strand`, as in the advanced server examples) or an implicit strand (what you get when only one thread calls `io_context::run`). The Approach 1 provided by @username_1 uses `post` to synchronize in this fashion.
* Another way to synchronize the connection object is to "stop the world." That means call `io_context::stop`, wait for all the threads to exit, and then you are guaranteed that no other threads are accessing the list of connections. Then you can read and write connection object state all you want. When you are finished with the list of connections, call `io_context::restart` and launch the threads which call `io_context::run` again. Stopping the `io_context` does not stop network activity, the kernel and network drivers still send and receive data from internal buffers. TCP/IP flow control will take care of things so the application still operates smoothly even though it becomes briefly unresponsive during the "stop the world." This approach can simplify things but depending on your particular application you will have to evaluate if it is right for you.

Hope this helps!

Upvotes: 3 <issue_comment>username_3: Thank you @username_1 for the amazing answer. Still, I think there is a small but severe bug in the Approach 2. IMHO `reg_connection` should look like this:

```
size_t reg_connection(std::shared_ptr<connection> c) {
    c->_subscription = _broadcast_event.connect(
        [weak_c = std::weak_ptr<connection>(c)](std::string msg){
            if (auto c = weak_c.lock())
                c->send(msg, true);
        }
    );
    return _broadcast_event.num_slots();
}
```

Otherwise you can end up with a race condition leading to a server crash. In case the connection instance is destroyed during the call to the lambda, the reference becomes invalid.
Similarly, `connection::send()` should look like this, because otherwise `this` might be dead by the time the lambda is called:

```
void send(std::string msg, bool at_front = false) {
    post(_s.get_io_service(), [self=shared_from_this(), msg=std::move(msg), at_front] {
        if (self->enqueue(std::move(msg), at_front))
            self->write_loop();
    });
}
```

PS: I would have posted this as a comment on @username_1's answer, but unfortunately I don't have enough reputation.

Upvotes: 0
2018/03/20
1,359
5,634
<issue_start>username_0: Is there a programming language that doesn't compile, but rather just translates into another language? I apologize if this is a stupid question to ask, but I was just wondering if this would be a literal shortcut in creating a programming language. Wouldn't it be easier (probably not as fast), but still doable?<issue_comment>username_1: Today the most commonly translated language is JavaScript. The newer constructs of ECMAScript are translated to the old version to be compatible with older browsers. The translation is done by [Babel](https://babeljs.io/).

There are also other languages like [TypeScript](http://www.typescriptlang.org/) and [CoffeeScript](http://coffeescript.org/) that are translated to JavaScript.

Upvotes: 1 <issue_comment>username_2: > Is there a programming language that doesn't compile, but rather just translates into another language?

That makes no sense to me. My definition of compilation is "translating from one language (the source language) to another (the target language)". Usually the source language is something written by humans and the target language is machine code (or asm), but that's not a requirement. In fact, many compilers are structured as multiple layers, each translating to another intermediate language (until the final layer emits code in the target language).

And it's not directly related to a language, but to a particular implementation. We can take C, for example: There are C interpreters, C compilers that target assembler code, C compilers that target machine code (of various platforms), [C compilers that target JavaScript, C compilers that target Perl](http://cowlark.com/clue/), etc.

As for simplifying the implementation of a language: Yes, there are various kinds of code reuse that apply.

One way is to separate compiler front-ends (translate from source language to an internal abstract representation) and back-ends (translate from the internal abstract representation to machine code for a particular platform). This way you can keep the front-end and only write a new back-end if you want to support another target platform. You can also keep the back-end and only write a new front-end if you want to add support for another source language.

Another way is to use a full-blown programming language as the intermediate representation. For example, your new compiler might produce C code, which can then be compiled to machine code by any C compiler. The first implementation of C++ [did exactly this](https://en.wikipedia.org/wiki/Cfront). C has a number of drawbacks as a compiler target language; there have been efforts to create languages better suited for the task (see e.g. [C--](https://en.wikipedia.org/wiki/C--), which is used internally by [GHC](https://en.wikipedia.org/wiki/Glasgow_Haskell_Compiler) (a Haskell compiler)).

Upvotes: 4 [selected_answer]<issue_comment>username_3: [f2c](https://en.wikipedia.org/wiki/F2c) translates Fortran 77 to C code. So it is probably an example of what you are looking for.

Upvotes: 0 <issue_comment>username_4: All [general-purpose programming languages](https://en.wikipedia.org/wiki/General-purpose_programming_language) are [Turing complete](https://en.wikipedia.org/wiki/Turing_completeness). That means any one of them can be translated into another.

When creating a new programming language, many designers often have their first prototypes translate their new language into one they are familiar with.
This makes it easier to check if the translation is correct, that the new language is working correctly, and to share ideas with colleagues since it is machine independent. When their design becomes stable, they make a front end to an existing compiler to do the compiling. Using an existing compiler has several advantages. Optimization is instantly available. The new language can access existing libraries. Compiling can be targeted to all the existing back ends, making the language available on different architectures. Upvotes: 0 <issue_comment>username_5: Yes, this is one technique for creating new languages. The first experiments in what became C++ were translated to C for compilation. Taken from <http://wiki.c2.com/?CeeAsAnIntermediateLanguage>: > > Examples of using C in this fashion: > > > CeeFront; the original implementation of C++, translated to C. > > > Comeau C++ (<http://www.comeaucomputing.com/>) translates C++ to C. It > is the first C++ compiler to provide full core language support for > standard C++. > > > Several Java-to-C translators out there (some translate Java source; > others translate JavaByteCode to C) > > > Many experimental language compilers use C as a backend, rather than > emitting assembly language directly. > > > SqueakSmalltalk's VirtualMachine is written in a subset of Smalltalk > which gets translated to C and fed to the C compiler. The > VirtualMachine used by Scheme48 is written in a StaticallyTyped > SchemeLanguage dialect called PreScheme which is compiled to C. (The > PreScheme compiler itself is written in full Scheme.) > > > Several SchemeImplementations compile to C (e.g. RScheme, Bigloo and > Chicken). These Schemes often use the technique described in > CheneyOnTheMta to provide support for ProperTailRecursion. > > > More recently, compilers targeting a subset of JavaScript capable of efficient on-the-fly compilation have been created - [emscripten](https://github.com/kripken/emscripten/wiki). And if you count assembly language as well as high level languages, [WebAssembly](https://webassembly.org/) or other bytecode languages fit. Upvotes: 0
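As a tiny concrete illustration of the source-to-source translation discussed throughout this thread (TypeScript to JavaScript here; the emitted output is approximate and depends on compiler settings):

```ts
// TypeScript input
const greet = (name: string): string => `Hello, ${name}`;

// What a transpiler targeting ES5 emits, roughly:
// var greet = function (name) { return "Hello, " + name; };
```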
2018/03/20
949
3,262
<issue_start>username_0: My friends and I have a Minecraft server and we want to add a JavaMail plugin with `Maven`. We added 2 jar files:

Mail.jar Activation.jar

With this code:

```
package com.parlagames;

import java.util.Properties;

import javax.mail.Message;
import javax.mail.MessagingException;
import javax.mail.PasswordAuthentication;
import javax.mail.Session;
import javax.mail.Transport;
import javax.mail.internet.InternetAddress;
import javax.mail.internet.MimeMessage;

public class App {
    public void AppVoid(String host, String port,final String userName,final String password,
                        String[] toAddress, String subject, String message) {
        Properties props = new Properties();
        props.put("mail.smtp.auth", "true");
        props.put("mail.smtp.starttls.enable", "true");
        props.put("mail.smtp.host", host);
        props.put("mail.smtp.port",port);

        Session session = Session.getInstance(props, new javax.mail.Authenticator() {
            protected PasswordAuthentication getPasswordAuthentication() {
                return new PasswordAuthentication(userName, password);
            }
        });

        try {
            Message SendMessage = new MimeMessage(session);
            SendMessage.setFrom(new InternetAddress(userName));
            for(int i=0;i
```

When we start the plugin on the server it shows an error that it can't find the class:

```
java.lang.NoClassDefFoundError: com/parlagames/App
at java.lang.ClassLoader.defineClass1(Native Method) ~[?:1.8.0_161]
at java.lang.ClassLoader.defineClass(Unknown Source) ~[?:1.8.0_161]
at java.security.SecureClassLoader.defineClass(Unknown Source) ~[?:1.8.0_161]
at java.net.URLClassLoader.defineClass(Unknown Source) ~[?:1.8.0_161]
at java.net.URLClassLoader.access$100(Unknown Source) ~[?:1.8.0_161]
at java.net.URLClassLoader$1.run(Unknown Source) ~[?:1.8.0_161]
at java.net.URLClassLoader$1.run(Unknown Source) ~[?:1.8.0_161]
at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_161]
at java.net.URLClassLoader.findClass(Unknown Source) ~[?:1.8.0_161]
at org.bukkit.plugin.java.PluginClassLoader.findClass(PluginClassLoader.java:101) ~[spigot-1.11.2.jar:git-Spigot-3fb9445-6e3cec8]
```

Why does this happen? We need to use Maven, by the way.<issue_comment>username_1: Is `App` the main class of your plugin? If so, you need to make it extend the `JavaPlugin` class like this:

```
public class MyPlugin extends JavaPlugin {

    public void onEnable() {
    }

    public void onDisable() {
    }
}
```

If you have trouble understanding the Bukkit/Spigot API, I would suggest starting to learn from the docs ([here](https://www.spigotmc.org/wiki/creating-a-blank-spigot-plugin-in-eclipse) is a reference guide for the basics).

Upvotes: 0 <issue_comment>username_2: It seems that you need to add this method to the plugin:

```
public void onEnable()
```

and this one:

```
public void onDisable()
```

It also seems that you don't have a main class. A main class is declared in plugin.yml. Try finding the part that says "main:" and change it to the class that has the "onEnable()" and "onDisable()". Also add `extends JavaPlugin`, as someone said before.

Upvotes: 1
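Tying the two answers together, a minimal working layout might look like the sketch below (messages are illustrative assumptions, not from the thread). plugin.yml's `main:` entry, alongside its `name:` and `version:` entries, must name the class that extends JavaPlugin exactly, e.g. `main: com.parlagames.App`:

```
package com.parlagames;

import org.bukkit.plugin.java.JavaPlugin;

// Hypothetical main class; plugin.yml's "main:" entry must point at it.
public class App extends JavaPlugin {
    @Override
    public void onEnable() {
        getLogger().info("JavaMail plugin enabled");
    }

    @Override
    public void onDisable() {
        getLogger().info("JavaMail plugin disabled");
    }
}
```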
2018/03/20
746
3,199
<issue_start>username_0: I have a Web API where I use *OWIN Token Authentication*; as you know, you have this method for authentication by default ``` public override async Task GrantResourceOwnerCredentials(OAuthGrantResourceOwnerCredentialsContext context) { //here you get the context.UserName and context.Password // and validates the user } ``` This is the JavaScript call ``` $.ajax({ type: 'POST', url: Helper.ApiUrl() + '/token', data: { grant_type: 'password', username: UserName, password: <PASSWORD> }, success: function (result) { Helper.TokenKey(result.access_token); Helper.UserName(result.userName); }, error: function (result) { Helper.HandleError(result); } }); ``` This is perfect, but the problem is that I have a multi-customer database and I also have to send the **Customer**, so I need to send something like this ``` data: { grant_type: 'password', username: UserName, password: <PASSWORD>, customer: Customer } ``` And be able to receive it in the Web API ``` //here you get the context.UserName, context.Password and context.Customer ```<issue_comment>username_1: In *ValidateClientAuthentication* you can read the additional parameter and add it to the OWIN context ``` public override Task ValidateClientAuthentication(OAuthValidateClientAuthenticationContext context) { //Here we get the Custom Field sent in /Token string[] customer = context.Parameters.Where(x => x.Key == "customer").Select(x => x.Value).FirstOrDefault(); if (customer.Length > 0 && customer[0].Trim().Length > 0) { context.OwinContext.Set("Customer", customer[0].Trim()); } // Resource owner password credentials does not provide a client ID. if (context.ClientId == null) { context.Validated(); } return Task.FromResult(null); } ``` Then use it wherever you need it, for example in *GrantResourceOwnerCredentials* ``` public override async Task GrantResourceOwnerCredentials(OAuthGrantResourceOwnerCredentialsContext context) { //Here we use the Custom Field sent in /Token string customer = context.OwinContext.Get("Customer"); } ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: I found a solution ``` public override async Task GrantResourceOwnerCredentials(OAuthGrantResourceOwnerCredentialsContext context) { //here you read all the params var data = await context.Request.ReadFormAsync(); //here you get the param you want var param = data.Where(x => x.Key == "CustomParam").Select(x => x.Value).FirstOrDefault(); string customer = ""; if (param != null && param.Length > 0) { customer = param[0]; } } ``` What you send in the Ajax call is ``` data: { grant_type: 'password', username: user, password: pwd, CustomParam: 'MyParam' }, ``` You can download a running sample in [my github repository](https://github.com/vterceros/WepApiAuthentication) Upvotes: 2
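Putting the two answers together, the customer value can then take part in the actual credential check. A rough sketch follows; the `UserStore.ValidateAsync` call is an invented placeholder for whatever data access the application uses, and the OWIN pieces (`SetError`, `Validated`, `ClaimsIdentity` from `System.Security.Claims`) are the standard ones:

```csharp
public override async Task GrantResourceOwnerCredentials(OAuthGrantResourceOwnerCredentialsContext context)
{
    // Read the value stashed by ValidateClientAuthentication.
    string customer = context.OwinContext.Get<string>("Customer");

    // Hypothetical multi-tenant lookup: user name + password + customer.
    bool valid = await UserStore.ValidateAsync(context.UserName, context.Password, customer);
    if (!valid)
    {
        context.SetError("invalid_grant", "The user name, password or customer is incorrect.");
        return;
    }

    var identity = new ClaimsIdentity(context.Options.AuthenticationType);
    identity.AddClaim(new Claim("sub", context.UserName));
    identity.AddClaim(new Claim("customer", customer ?? ""));
    context.Validated(identity);
}
```

Adding the customer as a claim means every later request carrying the token also carries the tenant, so controllers don't need to ask for it again.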
2018/03/20
1,762
5,207
<issue_start>username_0: I'm working with a smaller piece of code to test functionality for a larger (beginner) program, but I have a problem displaying the token I've pulled out of a string. I found and used: ``` #include <stdio.h> #include <string.h> int main() { char *string, *found; string = strdup ("1/2/3"); printf("Original string: '%s'\n",string); while ((found = strsep(&string,"/")) != NULL ) printf ("%s\n",found); return (0); } ``` and this works fine, printing the tokens one at a time as strings. Then when I try to move to a user-entered string: ``` #include <stdio.h> #include <string.h> int main() { char string[13]; char *found, *cp = string; fprintf(stderr, "\nEnter string: "); scanf("%12s",string); printf("Original string: '%s'\n",string); while((found = strsep(&cp,"/,-")) != NULL ) printf("Test 1"); /*To pinpoint where the seg fault arises*/ printf("%s\n",found); return(0); } ``` I get a seg fault on the `printf("%s\n",found);` line. I'm getting the hang of the basics of pointers, arrays and strings, but clearly I'm missing something, and would love for someone to tell me what it is! Also, if I change the argument of `printf("%s\n",found);` e.g. to `printf("%i\n",found);` I get some randomness returned, but always the correct amount; e.g. if I enter `1/2/3` I get three lines of junk, entering `1111/2222` gives two lines. I tried %c, %i, %d, %p and they all do the same, but %s seg faults. I'm completely stumped.<issue_comment>username_1: The segfault is because you're missing braces around your `while`. You'll keep printing "Test 1" until `strsep` returns `NULL`, then you try to print that result (and segfault). With several warning flags (probably `-Wall`), gcc helps out here: ``` sep.c:13:3: warning: this ‘while’ clause does not guard... [-Wmisleading-indentation] while((found = strsep(&cp,"/,-")) != NULL ) ^~~~~ sep.c:15:5: note: ...this statement, but the latter is misleadingly indented as if it is guarded by the ‘while’ printf("%s\n",found); ^~~~~~ ``` With braces added around the `while` body, the program works as expected: ``` ./sep Enter string: abc/def Original string: 'abc/def' Test 1abc Test 1def ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: This is the problem: ``` while((found = strsep(&cp,"/,-")) != NULL ) printf("Test 1"); /*To pinpoint where the seg fault arises*/ printf("%s\n",found); ``` and you think you are doing both `printf`s inside the loop, but in reality this code is equivalent to ``` while((found = strsep(&cp,"/,-")) != NULL ) { printf("Test 1"); /*To pinpoint where the seg fault arises*/ } printf("%s\n",found); ``` That means `printf("%s\n",found);` is basically doing `printf("%s\n",NULL);`, which is undefined behaviour and may cause a segfault. Note that in C, indentation does not matter to the compiler. So you would need to use `{` and `}` around the code: ``` while((found = strsep(&cp,"/,-")) != NULL ) { printf("Test 1"); /*To pinpoint where the seg fault arises*/ printf("%s\n",found); } ``` Doing that I get ``` $ ./a Enter string: aa/bb/cc/dd Original string: 'aa/bb/cc/dd' Test 1aa Test 1bb Test 1cc Test 1dd ``` Also note that your first code is leaking memory: you are not freeing the allocated memory returned by `strdup`.
You would have to save a pointer to that: ``` #include <stdio.h> #include <string.h> #include <stdlib.h> // for the free function int main() { char *orig, *string, *found; orig = string = strdup ("1/2/3"); printf("Original string: '%s'\n",string); while ((found = strsep(&string,"/")) != NULL ) printf ("%s\n",found); free(orig); return 0; } ``` --- **edit** Neither [username_1](https://stackoverflow.com/a/49394464/1480131) nor I seem to have the same problem with the corrected version of the code. The OP provided a link to [onlinegdb.com](https://www.onlinegdb.com/fork/r1VBLGJ5G) showing that the corrected version ends with a segfault. I tried the same code on ideone.com and I also got the segfault. That seemed strange to me, so I opened my man page of `strsep` and found this: > > *man strsep* > > > **SYNOPSIS** > > > > ``` > #include <string.h> > > char *strsep(char **stringp, const char *delim); > > ``` > > Feature Test Macro Requirements for glibc (see feature\_test\_macros(7)): > > > `strsep()`: > > > > ``` > Since glibc 2.19: > _DEFAULT_SOURCE > Glibc 2.19 and earlier: > _BSD_SOURCE > > ``` > > The important part is this: **Since glibc 2.19: `_DEFAULT_SOURCE`** So if you add ``` #define _DEFAULT_SOURCE ``` before including any standard C header file, then it works on onlinegdb.com and ideone.com. So the code should be: ``` #define _DEFAULT_SOURCE // <-- important #include <stdio.h> #include <string.h> int main() { char string[13]; char *found, *cp = string; fprintf(stderr, "\nEnter string: "); scanf("%12s",string); printf("Original string: '%s'\n",string); while((found = strsep(&cp,"/,-")) != NULL ) { printf("Test 1"); /*To pinpoint where the seg fault arises*/ printf("%s\n",found); } return(0); } ``` See: * [corrected onlinegdb.com version](https://onlinegdb.com/S1851my5f) * [corrected ideone.com version](https://ideone.com/5sVad1) Upvotes: 2
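One more `strsep` detail worth knowing, not raised in the thread but a frequent follow-up surprise: unlike `strtok`, `strsep` returns empty strings for adjacent delimiters, so input such as `1//3` yields an empty middle token. A small sketch:

```c
#define _DEFAULT_SOURCE  /* for strsep with glibc >= 2.19 */
#include <stdio.h>
#include <string.h>
#include <stdlib.h>

int main(void) {
    char *orig, *cp, *tok;
    orig = cp = strdup("1//3");          /* note the adjacent slashes */
    while ((tok = strsep(&cp, "/")) != NULL)
        printf("token: '%s'\n", tok);    /* prints '1', '', '3' */
    free(orig);
    return 0;
}
```

Code that assumes every token is non-empty should check `*tok != '\0'` before using it.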
2018/03/20
849
2,677
<issue_start>username_0: I have a collection of Python scripts that I would like to execute with a button press from a web browser. Currently, I run `python -m http.server 8000` to start a server on port 8000. It serves up HTML pages well, but that's about all it does. Is it possible to have it execute a Python script (via AJAX) and return the output, instead of just returning the full text of the .py file? Additionally, if not, is there a simple (as in only 1 or 2 files) way to make this work? I'm looking for the equivalent of `php -S`, but for Python. For completeness, this is my HTML (reconstructed here, since the original markup lost its tags): ``` <h1>Hello World</h1> <button>Click me!</button> <script> $('button').click(function(){ $.get('/gui/run_bash.py'); }); </script> ```<issue_comment>username_1: `http.server` merely serves static files; it does not do any serverside processing or execute any code when you hit a Python file. If you want to run some Python code, you'll have to write an application to do that. Flask is a Python web framework that is probably well-suited to this task. Your Flask application might look something like this for executing scripts... ``` import os import subprocess from flask import Flask app = Flask(__name__) SCRIPTS_ROOT = '/path/to/script_dir' @app.route('/run/<script_name>') def run_script(script_name): fp = os.path.join(SCRIPTS_ROOT, script_name) try: output = subprocess.check_output(['python', fp]) except subprocess.CalledProcessError as call: output = call.output # if exit code was non-zero return output.decode('utf-8') # check_output returns bytes; decode with your system encoding if __name__ == '__main__': app.run(host='127.0.0.1', port=8000) ``` And of course, I should include an obligatory warning: 'having a webserver execute commands like this is insecure', etc, etc. Check out the [Flask quickstart](http://flask.pocoo.org/docs/0.12/quickstart/) for more details. Upvotes: 2 <issue_comment>username_2: Add `--cgi` to your command line. ``` python -m http.server --cgi 8000 ``` Then place your python scripts in ./cgi-bin and mark them as executable. ``` $ mkdir cgi-bin $ cp hello.py cgi-bin/hello.py $ chmod +x cgi-bin/hello.py ``` You may need to slightly modify your python scripts to support the CGI protocol. Here is the server running: ```none $ cat cgi-bin/hello.py #! /usr/bin/env python3 print("Content-Type: application/json") print() print('{"hello": "world"}') radams@wombat:/tmp/z/h$ python -m http.server --cgi Serving HTTP on 0.0.0.0 port 8000 (http://0.0.0.0:8000/) ... 127.0.0.1 - - [20/Mar/2018 18:04:16] "GET /cgi-bin/hello.py HTTP/1.1" 200 - ``` Reference: <https://docs.python.org/3/library/http.server.html#http.server.CGIHTTPRequestHandler> Upvotes: 4 [selected_answer]
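Building on the `--cgi` approach, CGI scripts can also receive input from the request, which is handy once the button needs to pass arguments. A small sketch using only the standard library; the script name and parameter are made up for the example:

```python
#!/usr/bin/env python3
# cgi-bin/greet.py -- call as /cgi-bin/greet.py?name=alice
import cgi

form = cgi.FieldStorage()          # parses QUERY_STRING (or a POST body)
name = form.getfirst("name", "world")

print("Content-Type: application/json")
print()                            # blank line ends the CGI headers
print('{"hello": "%s"}' % name)
```

The AJAX call then becomes `$.get('/cgi-bin/greet.py', {name: 'alice'})`, with the same caveat as above about exposing script execution over HTTP.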
2018/03/20
432
1,493
<issue_start>username_0: This is my first question and first website ever, a total beginner, so I hope I'm doing it right :) I made a table with pictures. I added height and width to each, like this (the original markup lost its tags; it was roughly): ``` <td><img src="..." width="300" height="300"></td> ``` It worked, but then I tried to delete the sizes and use my external CSS file instead. So I changed it to: ``` <td><img src="..." class="topics"></td> ``` And then added in the CSS file: ``` .topics {width: 300px; height: 300px;} ``` It didn't work, and the pictures are now showing with the original size of the picture file itself. I also tried adding the class to the "td" part instead of the "img"; that one didn't work either. What am I doing wrong? After being able to do this, with your answers I hope, I'd like another tip for adjusting the pictures to the mobile version as well. I tried using percentage (%) and it didn't work. So any insights on that will be great :)<issue_comment>username_1: You forgot to say px in the styling to specify that it's 300 pixels ``` .topics { width: 300px; height: 300px; } ``` Upvotes: 2 [selected_answer]<issue_comment>username_2: 1. Add "px" to your pixel attribute 2. Be sure to link to your external stylesheet ```css .topics { height:300px; width:300px; } ``` ```html <td><img class="topics" src="..."></td> ``` A follow-up comment from username_0: Thanks! I've just noticed that I forgot the px, indeed. Sadly, it still doesn't work. I know the link to the stylesheet is ok since it works on other elements, like the headlines. I'm trying to figure out what else I am missing here... Upvotes: 0
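For reference, a working combination would look roughly like this; the file names are placeholders, and the key points are that the `class` sits on the `img` element and the stylesheet is linked in `<head>`:

```html
<!-- index.html -->
<head>
  <link rel="stylesheet" href="style.css">
</head>
<table>
  <tr>
    <td><img class="topics" src="picture1.jpg" alt=""></td>
  </tr>
</table>
```

```css
/* style.css */
.topics {
  width: 300px;
  height: 300px;
}

/* One possible answer to the mobile part of the question:
   let the image shrink with the viewport on narrow screens. */
@media (max-width: 600px) {
  .topics { width: 100%; height: auto; }
}
```

Percentages only work relative to a sized parent, which is a common reason the `%` attempt mentioned in the question appears to do nothing.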
2018/03/20
782
2,831
<issue_start>username_0: All: I am pretty new to Apache POI. If I want to use a built-in function from Excel VBA, how can I do that? For example, in VBA I can write something like: ``` Dim month as String month = MonthName( Month( Sheets1.Range("C12") ) ) ``` How can I use that function in Apache POI? (I do not want to assign that formula to a cell and evaluate it.) Thanks,<issue_comment>username_1: POI does not provide a VBA engine, so you cannot execute VBA with POI. But there's a way to implement equivalent code in Java and use it with POI. The link below gives an example of how to do that. <https://poi.apache.org/spreadsheet/user-defined-functions.html> Upvotes: 2 <issue_comment>username_2: `Apache poi` cannot interpret `VBA`, but of course it has a formula evaluator. So you can evaluate `Excel` formulas directly in `apache poi` code. But the `VBA` function `MonthName` is not implemented by default. So either you get the date from the cell, which is clearly possible using `apache poi`, and then get the month and the month name out of that date using `Java` code. This is called the first approach in the following example. Or you use an implemented `Excel` function, `TEXT` for example. This is called the second approach in the following example. ``` import java.io.FileInputStream; import org.apache.poi.ss.usermodel.*; import org.apache.poi.xssf.usermodel.*; import org.apache.poi.ss.formula.WorkbookEvaluator; import org.apache.poi.ss.formula.eval.*; public class ExcelEvaluateMonthFunctions{ public static void main(String[] args) throws Exception { Workbook workbook = WorkbookFactory.create(new FileInputStream("workbook.xlsx")); Sheet sheet = workbook.getSheet("Sheet2"); java.util.Locale.setDefault(java.util.Locale.US); //first approach: String monthname = null; java.util.Date date = sheet.getRow(11).getCell(2).getDateCellValue(); //Sheet2!C12 must contain a date java.time.LocalDate localDate = date.toInstant().atZone(java.time.ZoneId.systemDefault()).toLocalDate(); monthname = localDate.getMonth().getDisplayName(java.time.format.TextStyle.FULL, java.util.Locale.getDefault()); System.out.println(monthname); //second approach: monthname = null; CreationHelper helper = workbook.getCreationHelper(); XSSFFormulaEvaluator formulaevaluator = (XSSFFormulaEvaluator)helper.createFormulaEvaluator(); WorkbookEvaluator workbookevaluator = formulaevaluator._getWorkbookEvaluator(); ValueEval valueeval = null; valueeval = workbookevaluator.evaluate("TEXT(" + sheet.getSheetName() +"!C12, \"MMMM\")", null); //Sheet2!C12 must contain a date if (valueeval instanceof StringValueEval) { monthname = ((StringValueEval)valueeval).getStringValue(); } System.out.println(monthname); workbook.close(); } } ``` Upvotes: 2
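Following username_1's link, a third option is to register your own worksheet function with POI's user-defined-function mechanism and evaluate it like any other formula. The sketch below follows the pattern from the POI UDF documentation; treat it as an outline rather than tested code, and note that `MonthNameUDF` is an invented class name:

```java
import org.apache.poi.ss.formula.OperationEvaluationContext;
import org.apache.poi.ss.formula.eval.EvaluationException;
import org.apache.poi.ss.formula.eval.OperandResolver;
import org.apache.poi.ss.formula.eval.StringEval;
import org.apache.poi.ss.formula.eval.ValueEval;
import org.apache.poi.ss.formula.functions.FreeRefFunction;
import org.apache.poi.ss.formula.udf.AggregatingUDFFinder;
import org.apache.poi.ss.formula.udf.DefaultUDFFinder;
import org.apache.poi.ss.formula.udf.UDFFinder;
import org.apache.poi.ss.usermodel.DateUtil;
import org.apache.poi.ss.usermodel.Workbook;

public class MonthNameUDF {

    // MONTHNAME(serialDate): returns the month name of an Excel date value.
    static final FreeRefFunction MONTH_NAME = (ValueEval[] args, OperationEvaluationContext ec) -> {
        try {
            ValueEval v = OperandResolver.getSingleValue(args[0], ec.getRowIndex(), ec.getColumnIndex());
            double serial = OperandResolver.coerceValueToDouble(v);
            java.util.Calendar cal = java.util.Calendar.getInstance();
            cal.setTime(DateUtil.getJavaDate(serial));
            return new StringEval(cal.getDisplayName(
                    java.util.Calendar.MONTH, java.util.Calendar.LONG, java.util.Locale.US));
        } catch (EvaluationException e) {
            return e.getErrorEval();
        }
    };

    public static void register(Workbook workbook) {
        UDFFinder udfs = new DefaultUDFFinder(
                new String[] { "MONTHNAME" },
                new FreeRefFunction[] { MONTH_NAME });
        workbook.addToolPack(new AggregatingUDFFinder(udfs));
        // After this, "MONTHNAME(Sheet2!C12)" can be passed to the
        // WorkbookEvaluator just like the TEXT(...) example above.
    }
}
```

This keeps the VBA-style `MonthName` spelling available in formulas without ever writing the formula into a cell.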
2018/03/20
2,155
7,351
<issue_start>username_0: I have an email field that only gets shown if a checkbox is selected (boolean value is `true`). When the form gets submitted, I only want this field to be required if the checkbox is checked (boolean is true). This is what I've tried so far: ```js const validationSchema = yup.object().shape({ email: yup .string() .email() .label('Email') .when('showEmail', { is: true, then: yup.string().required('Must enter email address'), }), }) ``` I've tried several other variations, but I get errors from Formik and Yup: ```none Uncaught (in promise) TypeError: Cannot read property 'length' of undefined at yupToFormErrors (formik.es6.js:6198) at formik.es6.js:5933 at yupToFormErrors @ formik.es6.js:6198 ``` And I get validation errors from Yup as well. What am I doing wrong?<issue_comment>username_1: You probably aren't defining a validation rule for the **showEmail** field. I've made a CodeSandbox to test it out, and as soon as I added: ``` showEmail: yup.boolean() ``` the form started validating correctly and no error was thrown. This is the URL: <https://codesandbox.io/s/74z4px0k8q> And for future reference, this is the correct validation schema: ``` validationSchema={yup.object().shape({ showEmail: yup.boolean(), email: yup .string() .email() .when("showEmail", { is: true, then: yup.string().required("Must enter email address") }) }) } ``` Upvotes: 8 <issue_comment>username_2: Totally agree with @username_1's answer. Just a supplement for the use case of a radio button. When we use a radio button as the condition, we can check a string value instead of a boolean, e.g. `is: 'Phone'` ``` const ValidationSchema = Yup.object().shape({ // This is the radio button. preferredContact: Yup.string() .required('Preferred contact is required.'), // This is the input field. contactPhone: Yup.string() .when('preferredContact', { is: 'Phone', then: Yup.string() .required('Phone number is required.'), }), // This is another input field. contactEmail: Yup.string() .when('preferredContact', { is: 'Email', then: Yup.string() .email('Please use a valid email address.') .required('Email address is required.'), }), }); ``` This is the radio button written in React; the onChange method is the key to triggering the condition check. ``` this.handleRadioButtonChange('Email', setFieldValue)} /> Email this.handleRadioButtonChange('Phone', setFieldValue)} /> Phone ``` And here's the callback function for when the radio button changes. If we are using Formik, `setFieldValue` is the way to go. ``` handleRadioButtonChange(value, setFieldValue) { this.setState({'preferredContact': value}); setFieldValue('preferredContact', value); } ``` Upvotes: 5 <issue_comment>username_3: Formik author here... To make `Yup.when` work properly, you would have to add `showEmail` to `initialValues` and to your Yup schema shape. In general, when using `validationSchema`, it is best practice to ensure that all of your form's fields have initial values so that Yup can see them immediately. The result would look like: ``` ``` Upvotes: 6 <issue_comment>username_4: You can even use a function for complex cases.
The function form helps with complex validations: ``` validationSchema={yup.object().shape({ showEmail: yup.boolean(), email: yup .string() .email() .when("showEmail", (showEmail, schema) => { if(showEmail) return schema.required("Must enter email address") return schema }) }) } ``` Upvotes: 5 <issue_comment>username_5: ``` email: Yup.string() .when(['showEmail', 'anotherField'], { is: (showEmail, anotherField) => { return (showEmail && anotherField); }, then: Yup.string().required('Must enter email address') }), ``` Upvotes: 4 <issue_comment>username_6: I use yup with vee-validate --------------------------- [vee-validate](https://vee-validate.logaretm.com/v4/v4/guide/composition-api/validation#validation-schemas-with-yup) Here is sample code from a project: ``` const schema = yup.object({ first_name: yup.string().required().max(45).label('Name'), last_name: yup.string().required().max(45).label('Last name'), email: yup.string().email().required().max(255).label('Email'), self_user: yup.boolean(), company_id: yup.number() .when('self_user', { is: false, then: yup.number().required() }) }) const { validate, resetForm } = useForm({ validationSchema: schema, initialValues: { self_user: true } }) const { value: self_user } = useField('self_user') const handleSelfUserChange = () => { self_user.value = !self_user.value } ``` Upvotes: 1 <issue_comment>username_7: It works very well for me: ``` Yup.object().shape({ voyageStartDate:Yup.date(), voyageEndDate:Yup.date() .when( 'voyageStartDate', (voyageStartDate, schema) => (moment(voyageStartDate).isValid() ? schema.min(voyageStartDate) : schema), ), }) ``` Upvotes: 2 <issue_comment>username_8: This code works for me; try to use it: ``` const validation = () => { try { let userSchema = Yup.object().shape({ date: Yup.string().required(), city: Yup.string().required(), gender: Yup.string().required(), email: Yup.string() .matches( /^(([^<>()\[\]\\.,;:\s@"]+(\.[^<>()\[\]\\.,;:\s@"]+)*)|(".+"))@((\[[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}])|(([a-zA-Z\-0-9]+\.)+[a-zA-Z]{2,}))$/, ) .nullable(true), userName: Yup.string().min(3), }); userSchema.validateSync({ email: email, userName: userName, gender: gender.name, city: city.name, date: date, }); } catch (error) { console.log(error); setError({ userName: '', email: '', gender: '', city: '', date: '', [error.path]: error.name, }); } }; ``` Upvotes: 0 <issue_comment>username_9: Checking for a specific value without using the function notation: if the select `choice` has the value `date`, then the input field `availableDate` is required: ``` availableDate: yup.string().when('choice', { is: (v) => v === 'date', then: (schema) => schema.required('date is required') }) ``` Upvotes: 1 <issue_comment>username_10: Attention anyone using Yup `v1` and above (`v1.2` in my case). According to the [official docs](https://github.com/jquense/yup) you have to do `(schema) => ...` in your conditions. From the official docs: for schema with dynamic components (references, lazy, or conditions), describe requires more context to accurately return the schema description. In these cases provide options. ``` import { ref, object, string, boolean, number } from 'yup'; let schema = object({ isBig: boolean(), count: number().when('isBig', { is: true, then: (schema) => schema.min(5), otherwise: (schema) => schema.min(0), }), }); schema.describe({ value: { isBig: true } }); ``` Upvotes: 2
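To sanity-check any of these schemas outside a form, yup can be exercised directly in plain Node with no Formik involved. A quick sketch using the pre-v1 `then:` object syntax discussed above:

```js
const yup = require('yup');

const schema = yup.object().shape({
  showEmail: yup.boolean(),
  email: yup.string().email().when('showEmail', {
    is: true,
    then: yup.string().required('Must enter email address'),
  }),
});

// Passes: email is not required while the box is unchecked.
schema.validate({ showEmail: false })
  .then(() => console.log('ok'));

// Fails with "Must enter email address".
schema.validate({ showEmail: true, email: '' })
  .catch(err => console.log(err.errors));
```

Running the schema standalone like this makes it easy to tell whether a bug lives in the schema or in the Formik wiring.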
2018/03/20
2,909
6,967
<issue_start>username_0: Trips ``` id,timestamp 1008,2003-11-03 15:00:31 1008,2003-11-03 15:02:38 1008,2003-11-03 15:03:04 1008,2003-11-03 15:18:00 1009,2003-11-03 22:00:00 1009,2003-11-03 22:02:53 1009,2003-11-03 22:03:44 1009,2003-11-14 10:00:00 1009,2003-11-14 10:02:02 1009,2003-11-14 10:03:10 ``` prompts ``` id,timestamp ,mode 1008,2003-11-03 15:18:49,car 1009,2003-11-03 22:04:20,metro 1009,2003-11-14 10:04:20,bike ``` Read the csv files: ``` coordinates = pd.read_csv('coordinates.csv') mode = pd.read_csv('prompts.csv') ``` I have to assign each mode to the row at the end of its trip. Results: ``` id, timestamp, mode 1008, 2003-11-03 15:00:31, null 1008, 2003-11-03 15:02:38, null 1008, 2003-11-03 15:03:04, null 1008, 2003-11-03 15:18:00, car 1009, 2003-11-03 22:00:00, null 1009, 2003-11-03 22:02:53, null 1009, 2003-11-03 22:03:44, metro 1009, 2003-11-14 10:00:00, null 1009, 2003-11-14 10:02:02, null 1009, 2003-11-14 10:03:10, bike ``` **Note** I use a large dataset for trips (4GB) and a small dataset for modes (500MB)<issue_comment>username_1: This would be a naive solution which assumes that your coordinates DataFrame is already sorted by timestamp, that ids are unique and that your data set fits into memory. If the latter is not the case, I recommend using [dask](http://dask.pydata.org/en/latest/) and partitioning your DataFrames by id. Imports: ``` import pandas as pd import numpy as np ``` First we join the two DataFrames. This will fill the whole mode column for each id. We join on the index because that will speed up the operation, see also "[Improve Pandas Merge performance](https://stackoverflow.com/questions/40860457/improve-pandas-merge-performance)". ``` mode = mode.set_index('id') coordinates = coordinates.set_index('id') merged = coordinates.join(mode, how='left') ``` We need the index to be unique values in order for our groupby operation to work. ``` merged = merged.reset_index() ``` Then we apply a function that will replace all but the last row in the mode column for each id. ``` def clean_mode_col(df): cleaned_mode_col = df['mode'].copy() cleaned_mode_col.iloc[:-1] = np.nan df['mode'] = cleaned_mode_col return df merged = merged.groupby('id').apply(clean_mode_col) ``` As mentioned above, you can use dask to parallelize the execution of the merge code like this: ``` import dask.dataframe as dd dd_coordinates = dd.from_pandas(coordinates, npartitions=8).set_index('id') # npartitions chosen arbitrarily dd_mode = dd.from_pandas(mode, npartitions=1).set_index('id') merged = dd.merge(dd_coordinates, dd_mode, left_index=True, right_index=True) merged = merged.compute() #returns pandas DataFrame ``` The set\_index operations are slow but make the merge way faster. I did not test this code. Please provide copy-pasteable code that includes your DataFrames so that I don't have to copy and paste all those files you have in your description (hint: use pd.DataFrame.to\_dict to export your DataFrame as a dictionary and copy and paste that into your code). Upvotes: 0 <issue_comment>username_2: Based on your updated example, you can denote a trip by finding the first prompt timestamp that is greater than the trip timestamp. All rows with the same prompt timestamp will then correspond to the same trip. Then you want to set the mode for the greatest of the trip timestamps for each group. One way to do this is by using 2 [`pyspark.sql.Window`](http://spark.apache.org/docs/2.1.0/api/python/pyspark.sql.html#pyspark.sql.Window)s.
Suppose you start with the following two PySpark DataFrames, `trips` and `prompts`: ```python trips.show(truncate=False) #+----+-------------------+ #|id |timestamp | #+----+-------------------+ #|1008|2003-11-03 15:00:31| #|1008|2003-11-03 15:02:38| #|1008|2003-11-03 15:03:04| #|1008|2003-11-03 15:18:00| #|1009|2003-11-03 22:00:00| #|1009|2003-11-03 22:02:53| #|1009|2003-11-03 22:03:44| #|1009|2003-11-14 10:00:00| #|1009|2003-11-14 10:02:02| #|1009|2003-11-14 10:03:10| #|1009|2003-11-15 10:00:00| #+----+-------------------+ prompts.show(truncate=False) #+----+-------------------+-----+ #|id |timestamp |mode | #+----+-------------------+-----+ #|1008|2003-11-03 15:18:49|car | #|1009|2003-11-03 22:04:20|metro| #|1009|2003-11-14 10:04:20|bike | #+----+-------------------+-----+ ``` Join these two tables together using the `id` column with the condition that the prompt timestamp is greater than or equal to the trip timestamp. For some trip timestamps, this will result in multiple prompt timestamps. We can eliminate this by selecting the minimum prompt timestamp for each `('id', 'trip.timestamp')` group- I call this temporary column `indicator`, and I used the Window `w1` to compute it. Next do a window over `('id', 'indicator')` and find the maximum trip timestamp for each group. Set this value equal to the `mode`. All other rows will be set to [`pyspark.sql.functions.lit(None)`](http://spark.apache.org/docs/2.1.0/api/python/pyspark.sql.html#pyspark.sql.functions.lit). Finally you can compute all of the entries in `trips` where the trip timestamp was greater than the max prompt timestamp. These would be trips that did not match to a prompt. Union the matched and the unmatched together. ```python import pyspark.sql.functions as f from pyspark.sql import Window w1 = Window.partitionBy('id', 'trips.timestamp') w2 = Window.partitionBy('id', 'indicator') matched = trips.alias('trips').join(prompts.alias('prompts'), on='id', how='left')\ .where('prompts.timestamp >= trips.timestamp' )\ .select( 'id', 'trips.timestamp', 'mode', f.when( f.col('prompts.timestamp') == f.min('prompts.timestamp').over(w1), f.col('prompts.timestamp'), ).otherwise(f.lit(None)).alias('indicator') )\ .where(~f.isnull('indicator'))\ .select( 'id', f.col('trips.timestamp').alias('timestamp'), f.when( f.col('trips.timestamp') == f.max(f.col('trips.timestamp')).over(w2), f.col('mode') ).otherwise(f.lit(None)).alias('mode') ) unmatched = trips.alias('t').join(prompts.alias('p'), on='id', how='left')\ .withColumn('max_prompt_time', f.max('p.timestamp').over(Window.partitionBy('id')))\ .where('t.timestamp > max_prompt_time')\ .select('id', 't.timestamp', f.lit(None).alias('mode'))\ .distinct() ``` Output: ```python matched.union(unmatched).sort('id', 'timestamp').show() +----+-------------------+-----+ | id| timestamp| mode| +----+-------------------+-----+ |1008|2003-11-03 15:00:31| null| |1008|2003-11-03 15:02:38| null| |1008|2003-11-03 15:03:04| null| |1008|2003-11-03 15:18:00| car| |1009|2003-11-03 22:00:00| null| |1009|2003-11-03 22:02:53| null| |1009|2003-11-03 22:03:44|metro| |1009|2003-11-14 10:00:00| null| |1009|2003-11-14 10:02:02| null| |1009|2003-11-14 10:03:10| bike| |1009|2003-11-15 10:00:00| null| +----+-------------------+-----+ ``` Upvotes: 3 [selected_answer]
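Since the question was originally framed with pandas (`pd.read_csv`), it may be worth noting an alternative the answers don't cover: the "attach each prompt to the last trip row at or before it" logic maps naturally onto `pandas.merge_asof`. A sketch, assuming both frames fit in memory (or are processed per id chunk), are sorted by timestamp, and have their timestamp columns parsed as datetimes:

```python
import pandas as pd

trips = pd.read_csv('coordinates.csv', parse_dates=['timestamp'])
prompts = pd.read_csv('prompts.csv', parse_dates=['timestamp'])

# For each prompt, find the last trip row at or before the prompt time.
matched = pd.merge_asof(
    prompts.sort_values('timestamp'),
    trips.sort_values('timestamp').rename(columns={'timestamp': 'trip_ts'}),
    left_on='timestamp', right_on='trip_ts', by='id', direction='backward')

# Attach the mode to exactly that trip row; all other rows stay NaN.
trips = trips.merge(
    matched[['id', 'trip_ts', 'mode']].rename(columns={'trip_ts': 'timestamp'}),
    on=['id', 'timestamp'], how='left')
```

For the 4 GB trips file this still needs enough memory or explicit chunking, which is where the Spark solution above has the advantage.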
2018/03/20
2,008
5,953
<issue_start>username_0: I would like to pull 1000 samples from a custom distribution in R. I have the following custom distribution: ``` library(gamlss) mu <- 1 sigma <- 2 tau <- 3 kappa <- 3 rate <- 1 Rmax <- 20 x <- seq(1, 2e1, 0.01) points <- Rmax * dexGAUS(x, mu = mu, sigma = sigma, nu = tau) * pgamma(x, shape = kappa, rate = rate) plot(points ~ x) ``` [![enter image description here](https://i.stack.imgur.com/dS5hE.png)](https://i.stack.imgur.com/dS5hE.png) How can I randomly sample via Monte Carlo simulation from this distribution? My first attempt was the following code, which produced a histogram shape I did not expect. ``` hist(sample(points, 1000), breaks = 51) ``` [![enter image description here](https://i.stack.imgur.com/FhW3M.png)](https://i.stack.imgur.com/FhW3M.png) This is not what I was looking for, as it does not follow the same distribution as the pdf.<issue_comment>username_1: If you want a Monte Carlo simulation, you'll need to sample from the distribution a large number of times, not take a large sample one time. Your object, `points`, has values that increase as the index increases to a threshold around `400`, level off, and then decrease. That's what `plot(points ~ x)` shows. It may describe a distribution, but the actual distribution of values in `points` is different: a histogram shows how often values fall within a certain range. You'll notice your x axis for the histogram is similar to the y axis for the `plot(points ~ x)` plot. The actual distribution of values in the `points` object is easy enough to see, and it is similar to what you're seeing when sampling 1000 values at random, without replacement, from an object with `1900` values in it. Here's the distribution of values in `points` (no simulation required): ``` hist(points, 100) ``` I used 100 breaks on purpose so you could see some of the fine details. [![enter image description here](https://i.stack.imgur.com/Goo99.jpg)](https://i.stack.imgur.com/Goo99.jpg) Notice the little bump in the tail at the top, which you may not be expecting if you want the histogram to look like the plot of the values vs. the index (or some increasing x). That means that there are more values in `points` that are around `2` than there are around `1`. See if you can look at how the curve of `plot(points ~ x)` flattens when the value is around `2`, and how it's very steep between `0.5` and `1.5`. Notice also the large hump at the low end of the histogram, and look at the `plot(points ~ x)` curve again. Do you see how most of the values (whether they're at the low end or the high end of that curve) are close to `0`, or at least less than `0.25`?
If you look at those details, you may be able to convince yourself that the histogram is, in fact, exactly what you should expect :) If you want a Monte Carlo simulation of a sample from this object, you might try something like: ``` samples <- replicate(1000, sample(points, 100, replace = TRUE)) ``` If you want to generate data using `points` as a probability density function, that question has been asked and answered [here](https://stackoverflow.com/questions/32871602/r-generate-data-from-a-probability-density-distribution) Upvotes: 3 [selected_answer]<issue_comment>username_2: You invert the ECDF of the distribution: ``` ecd.points <- ecdf(points) invecdfpts <- with( environment(ecd.points), approxfun(y,x) ) samp.inv.ecd <- function(n=100) invecdfpts( runif(n) ) plot(density (samp.inv.ecd(100) ) ) plot(density(points) ) png(); layout(matrix(1:2,1)); plot(density (samp.inv.ecd(100) ),main="The Sample" ) plot(density(points) , main="The Original"); dev.off() ``` [![enter image description here](https://i.stack.imgur.com/Og0Ak.png)](https://i.stack.imgur.com/Og0Ak.png) Upvotes: 1 <issue_comment>username_3: Let's define your (not normalized) probability density function as a function: ``` library(gamlss) fun <- function(x, mu = 1, sigma = 2, tau = 3, kappa = 3, rate = 1, Rmax = 20) Rmax * dexGAUS(x, mu = mu, sigma = sigma, nu = tau) * pgamma(x, shape = kappa, rate = rate) ``` Now one approach is to use some [MCMC](https://en.wikipedia.org/wiki/Markov_chain_Monte_Carlo) (Markov chain Monte Carlo) method. For instance, ``` simMCMC <- function(N, init, fun, ...) { out <- numeric(N) out[1] <- init for(i in 2:N) { pr <- out[i - 1] + rnorm(1, ...) r <- fun(pr) / fun(out[i - 1]) out[i] <- ifelse(runif(1) < r, pr, out[i - 1]) } out } ``` It starts from point `init` and gives `N` draws. The approach can be improved in many ways, but I'm simply going to start from `init = 5`, include a burn-in period of 20000 draws, and keep only every second draw to reduce the number of repeated values: ``` d <- tail(simMCMC(20000 + 2000, init = 5, fun = fun), 2000)[c(TRUE, FALSE)] plot(density(d)) ``` [![enter image description here](https://i.stack.imgur.com/jdnPe.png)](https://i.stack.imgur.com/jdnPe.png) Upvotes: 2 <issue_comment>username_4: Here's another way to do it that draws from [R: Generate data from a probability density distribution](https://stackoverflow.com/questions/32871602/r-generate-data-from-a-probability-density-distribution) and [How to create a distribution function in R?](https://stackoverflow.com/questions/40988621/how-to-create-a-distribution-function-in-r?rq=1): ``` x <- seq(1, 2e1, 0.01) points <- 20*dexGAUS(x,mu=1,sigma=2,nu=3)*pgamma(x,shape=3,rate=1) f <- function (x) (20*dexGAUS(x,mu=1,sigma=2,nu=3)*pgamma(x,shape=3,rate=1)) C <- integrate(f,-Inf,Inf) > C$value [1] 11.50361 # normalize by C$value f <- function (x) (20*dexGAUS(x,mu=1,sigma=2,nu=3)*pgamma(x,shape=3,rate=1)/11.50361) pdf <- data.frame(x = x, y = f(x)) # the original snippet used `pdf` without defining it; this is the evident intent random.points <- approx(cumsum(pdf$y)/sum(pdf$y),pdf$x,runif(10000))$y hist(random.points,1000) ``` [![histfrommydist](https://i.stack.imgur.com/aWFz8.jpg)](https://i.stack.imgur.com/aWFz8.jpg) `hist((random.points*40),1000)` will get the scaling like your original function. Upvotes: 0
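One classic method the thread doesn't show is rejection sampling, which also works directly on the unnormalized density. A short sketch: the envelope here is a simple uniform box over [0, 20], and the `1.1 * max(f(x))` bound is an assumption chosen by eyeballing the plot, not derived analytically:

```r
library(gamlss)

f <- function(x) 20 * dexGAUS(x, mu = 1, sigma = 2, nu = 3) * pgamma(x, shape = 3, rate = 1)

x <- seq(0, 20, 0.01)
M <- 1.1 * max(f(x))          # envelope height (assumed safe bound)

rejection_sample <- function(n) {
  out <- numeric(0)
  while (length(out) < n) {
    cand <- runif(n, 0, 20)            # proposals from the uniform box
    keep <- runif(n, 0, M) < f(cand)   # accept with probability f(x)/M
    out <- c(out, cand[keep])
  }
  out[1:n]
}

hist(rejection_sample(1000), breaks = 51)
```

Unlike MCMC, the accepted draws are independent, at the cost of discarding proposals wherever the density is far below the envelope.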
2018/03/20
2,235
6,407
<issue_start>username_0: I'm trying to do 2 things in a carousel in Bootstrap 4: 1- a shadow on the image text, using a dark effect below the text in carousel-caption; 2- indicators in circle format with a number inside. These 2 things, just like this image: <https://i.stack.imgur.com/SAfg5.png> I've tried this: ``` /* shadow */ .carousel-caption { background: rgba(0,0,0,0.5); background-color: rgba(0,0,0,.5); text-shadow: 0 1px 4px rgba(0, 0, 0, 0.85); } /* circle indicators and numbers */ .carousel-indicators-numbers { li { text-indent: 0; margin: 0 2px; width: 30px; height: 30px; border: none; border-radius: 100%; line-height: 30px; color: #fff; background-color: #999; transition: all 0.25s ease; &.active, &:hover { margin: 0 2px; width: 30px; height: 30px; background-color: #337ab7; } } } 1. 1 2. 2 3. 3 ![Third slide](http://placehold.it/850x500) [### News 1](#) Bla bla bla bla bla bla bla bla bla bla bla ![Third slide](http://placehold.it/850x500) [### News 2](#) Bla bla bla bla bla bla bla bla bla bla bla ![Third slide](http://placehold.it/850x500) [### News 3](#) Bla bla bla bla bla bla bla bla bla bla bla [Previous](#carouselExampleIndicators) [Next](#carouselExampleIndicators) ``` But with no effect; it just shows the default Bootstrap carousel: <https://i.stack.imgur.com/RxWzS.png>
2018/03/20
575
2,217
<issue_start>username_0: I have been working on a Spring Boot project using Maven. I need to write logs for system activity. I have been following this example for log4j: [Example](https://www.mkyong.com/logging/log4j-hello-world-example/) This is the code: ``` import org.apache.log4j.Logger; public class HelloExample{ final static Logger logger = Logger.getLogger(HelloExample.class); public static void main(String[] args) { HelloExample obj = new HelloExample(); obj.runMe("mkyong"); } private void runMe(String parameter){ if(logger.isDebugEnabled()){ logger.debug("This is debug : " + parameter); } if(logger.isInfoEnabled()){ logger.info("This is info : " + parameter); } logger.warn("This is warn : " + parameter); logger.error("This is error : " + parameter); logger.fatal("This is fatal : " + parameter); } } ``` I face 2 issues: 1. The main compiles, runs & gives the following output as in the image without having the log4j.properties [![enter image description here](https://i.stack.imgur.com/uVb0G.png)](https://i.stack.imgur.com/uVb0G.png) 2. Even after having the log4j.properties file in the resources folder, the logs are not stored to the log file. The file is created if the application is not Spring Boot. I don't understand where I am going wrong. P.S: I am a beginner in Java & Spring.<issue_comment>username_1: For Spring Boot with log4j, you need to use the spring-boot-starter-log4j2 dependency. Please follow the link below for the solution. <https://www.callicoder.com/spring-boot-log4j-2-example/> Upvotes: 3 [selected_answer]<issue_comment>username_2: I suggest you have a look at the [official documentation](https://docs.spring.io/spring-boot/docs/current/reference/html/boot-features-logging.html). This explains how logging works in a Spring Boot application. The paragraph about [file output](https://docs.spring.io/spring-boot/docs/current/reference/html/boot-features-logging.html#boot-features-logging-file-output) shows how you can write logs into specific files. Upvotes: 0
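Concretely, switching Spring Boot to Log4j 2 means replacing the default Logback starter. A sketch of the relevant pom.xml fragment, following the pattern in the Spring Boot docs (versions are managed by the Boot parent; `spring-boot-starter-web` stands in for whichever starter the project already uses):

```xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
    <exclusions>
        <exclusion>
            <!-- Remove the default Logback binding -->
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-logging</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-log4j2</artifactId>
</dependency>
```

Note also that the Log4j 2 starter looks for a `log4j2.xml` (or `log4j2.properties`) file on the classpath, not the classic `log4j.properties` from the mkyong example, which is likely why the file in `resources` was silently ignored.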
2018/03/20
1,944
4,858
<issue_start>username_0: I know this is a recurrent question, so I apologize in advance for cross-posting. I am having trouble reading a model output `.csv` file that looks more or less like this (the original file has about 14,000 rows, but the columns are as they look here): ``` time x y z w r s t 1980 1 0.8327 0.3402 0.2021 0 1.1729 0 1980 2 0.7886 0.3399 0.2019 0 2.3014 0 1980 3 0.7909 0.3396 0.2017 0 3.4319 0 1980 4 0.7846 0.3394 0.2016 0 4.5559 0 1980 5 0.8103 0.3392 0.2014 0 5.7053 0 1980 6 0.8207 0.339 0.2013 0 6.865 0 1980 7 0.8263 0.3388 0.2012 0 8.0301 0 1980 14 0.9112 10.3411 20.6821 3.1175 60.4644 3.1175 1980 15 0.9092 8.878 17.756 2.734 70.2517 5.8515 1980 16 0.9001 9.5232 19.0464 2.9655 80.6749 8.817 1980 17 1.0313 7.59 15.18 2.4332 89.2962 11.2502 1980 18 1.0333 6.8859 13.7718 2.266 97.2154 13.5162 ``` For the command: ``` read.csv("df", header = TRUE, sep = ",", blank.lines.skip = FALSE) ``` I get the following error message: ``` Error in read.table(file = file, header = header, sep = sep, quote = quote, : duplicate 'row.names' are not allowed ``` From what I've understood from answers to similar questions, a possible problem may be that the `read.csv` command is not recognizing the zeros in the last column as values, so the program reads the first row as if it contained one fewer field than the number of columns, and hence uses the first column for the row names. However, when I create a "fake" table with actual zeros, blanks, or "NA" in the same positions as shown in the example above, the program has no trouble recognizing them and reading the file. e.g. ``` df <- data.frame(x=c(1,2,3,3,3,4,5,2,2,6,7,3,8,9,10)) df$y <- c(4,8,9,1,1,5,8,8,3,2,0,9,4,4,7) df$z <- c(" "," "," ",4,5,6,7,8,9,10,11,12,13,14,15) OR: df$z <- c(0,0,0,4,5,6,7,8,9,10,11,12,13,14,15) OR: df$z <- c("NA","NA","NA",4,5,6,7,8,9,10,11,12,13,14,15) ``` Could anyone tell me why this is happening? I have solved the issue as suggested by other users: ``` df <- read.csv("df.csv", header = TRUE, row.names = NULL) colnames(df) <- c(colnames(df)[-1],NULL) write.table(df, "df.csv", sep = ",", col.names = TRUE, row.names = FALSE) ``` and I start working as normal from there. It works just fine, but I was wondering if there is a more direct solution to this problem, or if there is something I am missing. Thank you,<issue_comment>username_1: From the help page for `read.csv`: > > If there is a header and the first row contains one fewer > field than the number of columns, the first column in the > input is used for the row names. Otherwise if ‘row.names’ is > missing, the rows are numbered. > > > Without seeing the CSV it is difficult to tell, but it does seem that it must fulfil the criteria listed (ie, there is one fewer entry in the first row, possibly due to an empty column name). Upvotes: 0 <issue_comment>username_2: Here are two ways. **The first** uses an external package, `data.table`. Function `fread` does the job, with a warning. Because the first row has fewer fields than the other rows, `fread` discards it, so the column names are messed up. ``` data.table::fread("test.csv", sep = ";") # V1 V2 V3 #1: A 1 6 #2: A 2 7 #3: A 3 8 #4: A 4 9 #5: A 5 10 ``` > > Warning message: > > In data.table::fread("test2.csv", sep = ";") : > > Starting data input on line 2 and discarding line 1 because it has too > > few or too many items to be column names or data: Col1;Col2 > > > **The second** way is more complicated.
If you don't want to load an extra package, I have written a function that uses `readLines` to read in the first row with the column names and then reads the rest of the file with `read.table`. ``` myread <- function(file, sep = ",", ...){ nm <- readLines(file, n = 1) nm <- unlist(strsplit(nm, sep)) DF <- read.table(file, skip = 1, sep = sep, ...) if(length(names(DF)) > length(nm)){ names(DF)[(length(names(DF)) - length(nm) + 1):length(names(DF))] <- nm } else names(DF) <- nm DF } myread("test.csv", sep = ";") # V1 Col1 Col2 #1 A 1 6 #2 A 2 7 #3 A 3 8 #4 A 4 9 #5 A 5 10 ``` **FILE** Here are the contents of the file. Note that the column separator is a semi-colon: in most of continental Europe, the comma is used as a decimal marker, so the CSV format separates the columns with a semi-colon. ``` Col1;Col2 A;1;6 A;2;7 A;3;8 A;4;9 A;5;10 ``` Upvotes: 2 <issue_comment>username_3: My 'row.names' problem was that I was simply using the wrong 'sep'. Upvotes: 0
2018/03/20
886
2,663
<issue_start>username_0: I am trying to center the AddThis inline follow buttons. By default they left align. The code for the follow buttons is: ``` ``` I tried adding them within a div like so: ``` ``` I added the following CSS but it made no difference, they still align to the left. ``` .center_aift { width: 100%; text-align: center; } ``` I am using SquareSpace with a custom code block but do not think that is the problem.