date
stringlengths
10
10
nb_tokens
int64
60
629k
text_size
int64
234
1.02M
content
stringlengths
234
1.02M
2018/03/20
600
2,333
<issue_start>username_0: I'm using Angular 5 and have a couple of routes defined. When I navigate routes and put a breakpoint on the canActivate function guard, I see it is executed twice. I'm asking because I have to call an external service on each canActivate call, and executing the same service uselessly multiple times causes unnecessary overhead.<issue_comment>username_1: It runs several times because the router has several events to handle. You could test whether this is the last event with something like this. ``` this.router.events.subscribe(event => { if(event instanceof NavigationEnd) { // Last event, do your thing } }); ``` Upvotes: 2 <issue_comment>username_2: It is not the `canActivate` guard being executed multiple times; it is the event that you have subscribed to. In your case it would be the `router` event. You can watch for only the last event, which is `NavigationEnd`, as explained in the other answer by @trichetriche. Upvotes: 1 <issue_comment>username_3: Just happened to me, and the problem was this snippet in my `AuthGuard`: ``` setActiveWebsite(website: string): void { this._website.next(website); this.router.navigate([]); <-- This line here } ``` It executed twice because it sets a value, redirects, and then skips auth because it resolves to `true` immediately. --- Are you, maybe, using `route.navigate` or a redirect in one of your `guards` or `resolvers`? Upvotes: 2 <issue_comment>username_4: Angular 7.1 solves this problem with the *router.parseUrl()* method, which takes the path name (as set in the routing module) as an argument. So instead of `router.navigate(['__'])` or `router.navigateByUrl('__')`, which by the way should be followed by a [false return](https://angular.io/guide/router#milestone-5-route-guards) if used as a redirect, you would do the elegant `return router.parseUrl('____')`. I exhausted so many variations of the previous methods that I now plan my days around a calendar composed strictly of angular guard logic. 
In removing the need for me to travel home every time I wanted to check that calendar, this new parseUrl() upgrade has substantially improved my life. Hat tip to [<NAME> and the fantastic Angular In Depth](https://blog.angularindepth.com/new-in-angular-v7-1-updates-to-the-router-fd67d526ad05). Upvotes: 1
2018/03/20
368
1,465
<issue_start>username_0: Is there any tool that can be used to search for the location in which a symbol (e.g. a function) is defined within a Haskell project built with stack, which contains multiple packages? I'm not looking for any IDE integration; anything better than: ```bash egrep --color -R --include="*.hs" "someSymbol" sys/*/src ``` will do. If, additionally, this tool can provide other information such as uses of a symbol, that'd be even better.<issue_comment>username_1: My suggestion would be ```bash ag --haskell "someSymbol" sys/*/src ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: If you're looking for an IDE-like experience, check out [Haskell IDE engine](https://github.com/haskell/haskell-ide-engine), which integrates with editors that [support Language Server Protocol](http://langserver.org/#implementations-client). Upvotes: 1 <issue_comment>username_3: [Hoogle](http://hackage.haskell.org/package/hoogle) can be locally installed and its database populated with all of the information from the packages in your project. Additionally, [haddock](http://hackage.haskell.org/package/haddock) can generate documentation, and part of the documentation it generates is an index of all symbols, in alphabetical order, together with information about which module(s) define that symbol. It also has generally good tool support, so I expect a single `stack` command would be able to build all the haddocks for your project. Upvotes: 2
2018/03/20
484
1,827
<issue_start>username_0: I am facing a problem installing Serilog.Sinks packages in an old project; e.g. when I try installing Serilog.Sinks.MSSqlServer I get the following error: [An error occurred while retrieving package metadata for Serilog.2.6.0](https://i.stack.imgur.com/Jh0fn.png) I have already installed package Serilog.2.6.0 and the process went fine. I tried running the code that uses this library and it works okay, but if I try to uninstall this package I get the error: [Object reference not set to an instance of an object](https://i.stack.imgur.com/066qc.png) Thus, I think that something might be wrong with Serilog.2.6.0 on my machine, since I cannot uninstall it or install any of its sinks, but I can't figure out the problem. Details: * Visual Studio 2015 * .NET Framework 4.5 Web Forms
2018/03/20
298
1,048
<issue_start>username_0: tslint raises an error: > > Line 1: 'use strict' is unnecessary inside of modules (strict) > > > This is my code: ``` "use strict"; function Foo() {} Foo.prototype.sayHello = function () { console.log("hello!"); } if (typeof module !== 'undefined' && typeof module.exports !== 'undefined') { module.exports = { Foo: Foo }; } ``` How do I fix this error? **Side note** My code is used as both a `module` and vanilla JavaScript. I want to use `"strict mode"` only for vanilla JavaScript. Maybe I can use ``` if (typeof module !== 'undefined') { "use strict"; } ``` to enable `strict mode` only for vanilla JavaScript?<issue_comment>username_1: Remove `'use strict'`. As the error mentions, it's unnecessary. Modules are expected to execute in strict mode. Compilers will add it for you when you export the module into a script for non-module consumption (i.e. UMD/CJS). See the `--alwaysStrict` option for TS. Upvotes: 4 <issue_comment>username_2: ES6 modules are always in strict mode. Upvotes: 3
2018/03/20
353
1,338
<issue_start>username_0: ``` class Seller(object): type = ... name = ... cars = models.ManyToManyField(Car) class PotentialBuyer(object): name = ... cars = models.ManyToManyField(Car) class Car(object): extra_field = ... extra_field2 = ... ``` Suppose I have a relationship like this. I would like to use the [extra](https://docs.djangoproject.com/en/2.0/ref/models/querysets/#django.db.models.query.QuerySet.extra) queryset modifier to get the list of cars that have already been picked out by PotentialBuyers when I fetch a seller object. I suppose the queryset will look something like this. ``` def markPending(self): return self.extra(select={'pending': 'select images from PotentialBuyer as t ...'}) ``` How can I accomplish this? Is there a better way? I could fetch the seller object and the PotentialBuyer objects and do sets, but I'd think it would be cleaner to have it handled by the database. I am using PostgreSQL 9.5.
2018/03/20
382
1,198
<issue_start>username_0: In Python I have got this string ``` string = "<NAME>" ``` I need to get only the first character of it. But when I tried `string[0]` it returned `�`. When I tried `string[:2]` it worked well. My question is why? I need to run this for several strings, and when a string does not start with a diacritic character, it returns a substring of two characters. I am also using `# encoding=utf8` and Python 2.7<issue_comment>username_1: You're dealing with a byte-string (assuming you're using Python 2.x). Convert the byte-string to a unicode-string using [`str.decode`](https://docs.python.org/2/library/stdtypes.html#str.decode), get the first character, then convert it back to a binary string using [`str.encode`](https://docs.python.org/2/library/stdtypes.html#str.encode) (optional, unless you need a byte-string) ``` >>> string = "<NAME>" >>> print(string.decode('utf-8')[0].encode('utf-8')) Ľ ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: Try converting the string to Unicode and then encoding to "utf-8" **Ex:** ``` string = u"<NAME>" print string[0].encode('utf-8') ``` **Output:** ``` Ľ ``` Tested in **python2.7** Upvotes: 0
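For illustration, here is a minimal Python 3 sketch of the same pitfall (the sample string "Ľudovít" is made up, since the original name is elided): indexing the UTF-8 bytes of a string yields a single byte, which is only a fragment of a multi-byte character, while indexing the decoded string yields a whole character.

```python
# Minimal sketch (Python 3): indexing UTF-8 bytes vs. decoded text.
raw = "Ľudovít".encode("utf-8")  # a byte-string, like Python 2's str

# The first *byte* is only half of the two-byte character "Ľ".
print(raw[:1])            # b'\xc4' -- not a valid character on its own
print(raw[:2].decode())   # Ľ -- the first two bytes form one character

# Decoding first gives real characters, so [0] works as expected.
text = raw.decode("utf-8")
print(text[0])            # Ľ
```

This is exactly why `string[:2]` "worked" in the question: the first character happened to occupy two bytes.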
2018/03/20
442
1,464
<issue_start>username_0: I'm trying to retrieve data from the database using Yii2 methods. My problem is that when I build the query using the Yii2 query builder (e.g. innerJoin), only the columns related to the model I'm starting from are returned. ``` $Categories = CategoryOfItemsTrans::find() ->innerJoin('category_of_items','category_of_items_trans.CATEGORY_OF_ITEM_ID = category_of_items.CATEGORY_OF_ITEM_ID') ->where('category_of_items.CATEGORY_FLAG = \''.$CategoryFlag.'\' AND category_of_items_trans.LANGUAGE_ID ='.$LanguageID) ->all(); ``` In this example I just receive columns related to the 'CategoryOfItemsTrans' model. What I need is to retrieve all columns of both tables.
2018/03/20
764
2,964
<issue_start>username_0: I'm downloading all the ".htm" files of some directories by doing more or less: `wget http://some/url/ -r --accept="*.htm" -nv --show-progress` in which I turned off wget's printing but kept the progress bar, which is useful in my case (`-nv --show-progress`). This works great but outputs a progress bar for *every* downloaded file. Is it possible to have a *single* progress bar that would take into account the sum of all the sizes of the files? I looked at the `--progress=TYPE` option, but this only seems to set the style of the progress bar, not report the total amount of downloaded data.<issue_comment>username_1: No, currently there is no way in Wget to have a single aggregate bar. However, you can try the alpha version of [Wget 2.0](https://gitlab.com/gnuwget/wget2). It's not exactly what you're looking for, but comes very close. It has been packaged as Wget2 in Debian and is available on Arch Linux's AUR. I'm not sure about other distros. `wget2` supports parallel downloads and HTTP/2 by default, and shows a line under the progress bars with some aggregate stats. 
For example: ``` $ wget2 --progress=bar "example.com/?"{0,1,2,3,4,5,6,7,8} index.html?8 100% [========================================================================================================================>] 606 32,88KB/s index.html?5 100% [========================================================================================================================>] 606 18,49KB/s index.html?6 100% [========================================================================================================================>] 606 31,15KB/s index.html?7 100% [========================================================================================================================>] 606 32,88KB/s index.html?4 100% [========================================================================================================================>] 606 34,81KB/s [Files: 9 Bytes: 5,33K [11,78KB/s] Redirects: 0 Todo: 0 Errors: 0 ] ``` You see 5 progress bars because 5 threads were used to download the 9 files in parallel. The last bar indicates aggregate stats. You can easily build Wget2 from git or using the v1.99 tarball available here: <https://alpha.gnu.org/gnu/wget/wget2-1.99.0.tar.gz> DISCLAIMER: I maintain both GNU Wget and Wget2. Upvotes: 4 [selected_answer]<issue_comment>username_2: here is a snippet from a script I wrote to move files across multiple protocols. ###### URI -> STD.OUT wget -O - --reject "index.\*" -q --show-progress --no-parent "${url}" 2>/dev/null ###### URI -> F/S (solution) echo "mget : ${url} ==> ${destination}" 2>&1 wget -r --reject "index.\*" -q --show-progress --no-parent "${url}" -P "${destination}" 2>&1 | pv --progress 1>/dev/null it works for me. Upvotes: 0
2018/03/20
544
2,261
<issue_start>username_0: I came across a problem with the Mule database connector. I have a flow that receives JSON with some data; I take some of that data and use it to insert a record into a database. The database generates its own ID when inserting the record. Now in the same flow I want to select that ID, but sometimes it doesn't return any results. Is it possible that when I'm executing the 'select' query right after the insert, the record hasn't been inserted/hasn't generated an ID yet? What can I do to make it work properly? How can I make the 'select' database connector wait for that data?<issue_comment>username_1: First of all: have you checked that the database connector returned 1 in the response (or however your database shows the number of rows affected)? If your flow is configured as asynchronous (Mule's default) and it has a proper connection timeout, it should have added the information by the time it goes into the next connector. Also, did you check if it's indeed inserting that row in the DB? And did you check that you are in fact doing the select with the correct ID generated by the DB? I recommend you do a step-by-step debugging session in order to check this information. Hope it helps! Upvotes: 0 <issue_comment>username_2: I did similar steps using the Oracle connector and it worked fine with stored procedures. The steps would be ``` a. Call Stored procedure 1 1. Call stored procedure to insert record based on parameters. 2. Commit the transaction. 3. Return 1(success) or 0(fail) based on outcome b. Call Stored procedure 2 only if you get 1 (success) from Stored procedure 1. Retrieve id for the record inserted in stored procedure 1 ``` Upvotes: 2 [selected_answer]<issue_comment>username_3: When in Insert mode the Database Connector has three properties: **autoGeneratedKeys** and **autoGeneratedKeysColumnIndexes** or **autoGeneratedKeysColumnNames**. 
As well as the row count in the payload, it returns the auto-generated values of the Insert [Mule doc here](https://docs.mulesoft.com/mule-user-guide/v/3.9/database-connector-reference#attributes-of-insert) Upvotes: 2 <issue_comment>username_4: You should do a commit after inserting a record, otherwise the record is not available for an immediate select. I agree with username_2's answer. Upvotes: 0
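The underlying pattern the answers describe (commit the insert, then read back the auto-generated key instead of re-querying for it) is language-agnostic. Here is a minimal sketch using Python's `sqlite3` module purely for illustration; the `orders` table and column names are made up:

```python
import sqlite3

# Minimal sketch: insert a record, read back the DB-generated ID,
# then select the row. Table/column names are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, item TEXT)")

cur = conn.execute("INSERT INTO orders (item) VALUES (?)", ("widget",))
conn.commit()            # make the row visible to subsequent reads
new_id = cur.lastrowid   # the auto-generated key -- no second query needed

row = conn.execute("SELECT item FROM orders WHERE id = ?", (new_id,)).fetchone()
print(new_id, row[0])    # 1 widget
```

Reading the generated key from the insert result (as Mule's autoGeneratedKeys properties do) avoids the race the question describes, where a follow-up select runs before the insert is committed.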
2018/03/20
1,313
4,828
<issue_start>username_0: I have a Powershell script that returned an output that's close to what I want, however there are a few lines and HTML-style tags I need to remove. I already have the following code to filter out: ``` get-content "atxtfile.txt" | select-string -Pattern '' -Context 1 ``` However, if I attempt to pipe that output into a second `"select-string"`, I won't get any results back. I was looking at the REGEX examples online, but most of what I've seen involves the use of coding loops to achieve their objective. I'm more used to the Linux shell where you can pipe output into multiple `greps` to filter out text. Is there a way to achieve the same thing or something similar with PowerShell? Here's the file I'm working with as requested: ``` xml version="1.0" encoding="UTF-8"? Accept Default CancelEdit Default Today Default View Default SYSTEM false ActivityDate ActivityDateTime Guid Description ``` So, I only want the text between the descriptor and I have the following so far: ``` get-content "txtfile.txt" | select-string -Pattern '' -Context 1 ``` This will give me everything between the descriptor, however I essentially need the line without the XML tags.<issue_comment>username_1: Ok. So if you have that file then: ``` [xml]$xml = Get-Content atextfile.txt $xml.CustomObject.fields | select fullname ``` Upvotes: 1 <issue_comment>username_2: The simplest **PSv3+ solution** is to **use PowerShell's built-in XML DOM support**, which makes an XML document's **nodes accessible as a *hierarchy of objects* with *dot notation***: ```bash PS> ([xml] (Get-Content -Raw txtfile.txt)).CustomObject.fields.fullName ActivityDate ActivityDateTime Guid Description ``` Note: Even though the `[xml] (Get-Content -Raw ...)` approach to parsing an XML document is *convenient*, it isn't fully robust with respect to character encoding; see [this answer](https://stackoverflow.com/a/71848130/45375). 
Note how even though `.fields` is an *array* - representing all child elements of top-level element - `.fullName` was directly applied to it and returned the values of child elements *across all array elements* ( elements) as an *array*. This ability to access a property on a *collection* and have it implicitly applied to the collection's *elements*, with the results getting collected in an *array*, is a generic PSv3+ feature called [member-access enumeration](https://stackoverflow.com/a/44620191/45375). --- As an **alternative**, consider using the **[`Select-Xml` cmdlet](https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.utility/select-xml)** (available in PSv2 too), which **supports [XPath queries](https://en.wikipedia.org/wiki/XPath)** that generally allow for **more complex extraction logic** (though not strictly needed here); `Select-Xml` is a high-level wrapper around the `[xml]` .NET type's [`.SelectNodes()` method](https://learn.microsoft.com/en-us/dotnet/api/system.xml.xmlnode.selectnodes). The following is the equivalent of the solution above: ```bash $namespaces = @{ ns="http://soap.force.com/2006/04/metadata" } $xpathQuery = '/ns:CustomObject/ns:fields/ns:fullName' (Select-Xml -LiteralPath txtfile.txt $xpathQuery -Namespace $namespaces).Node.InnerText ``` Note: Unlike with dot notation, ***XML namespaces* must be considered when using `Select-Xml`**. Given that and all its descendants are in namespace `xmlns`, identified via URI `http://soap.force.com/2006/04/metadata`, you must: * **define this namespace in a *hashtable* you pass as the `-Namespace` argument** + *Caveat*: Default namespace `xmlns` is special in that it *cannot* be used as the key in the hashtable; instead, choose an *arbitrary* key name such as `ns`, but be sure to use that chosen key name as the node-name prefix (see next point). 
* **prefix all node names in the XPath query with the namespace name followed by `:`**; e.g., `ns:CustomObject` Upvotes: 3 [selected_answer]<issue_comment>username_3: username_2 has provided the best solution to the problem. But to answer the question about filtering text twice using Select-String. If we pipe the results of `Select-String` into `Out-String -Stream` we can pass it to `Select-String` again. This can all be done on one line but I used a variable to try and make it more readable. ``` $Match = Get-Content "atxtfile.txt" | Select-String -Pattern '' -Context 1 $Match | Out-String -Stream | Select-String -Pattern "Guid" ``` If we pipe `$match` to `Get-Member`, we will find a couple of interesting properties. ``` $Match.Matches.Value ``` This will display all the instances of (the pattern match). ``` $Matches.Context.PostContext $Matches.Context.PreContext ``` This will contain the lines before and after (the context before and after). Upvotes: 1
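The namespace point above generalizes beyond PowerShell. For illustration only, here is a minimal Python sketch (standard `xml.etree` module, with a made-up XML fragment that reuses the namespace URI from the question) showing that namespaced elements need a prefix mapping in XPath-style queries, just as `Select-Xml` needs its `-Namespace` hashtable:

```python
import xml.etree.ElementTree as ET

# Minimal sketch: querying namespaced elements requires a prefix mapping,
# analogous to the -Namespace hashtable for Select-Xml. The XML below is a
# made-up fragment using the namespace URI from the question.
doc = """
<CustomObject xmlns="http://soap.force.com/2006/04/metadata">
  <fields><fullName>ActivityDate</fullName></fields>
  <fields><fullName>Description</fullName></fields>
</CustomObject>
"""
root = ET.fromstring(doc)

# "ns" is an arbitrary prefix chosen for the query, exactly as in the answer.
ns = {"ns": "http://soap.force.com/2006/04/metadata"}
names = [e.text for e in root.findall("ns:fields/ns:fullName", ns)]
print(names)  # ['ActivityDate', 'Description']

# Without the mapping, the unprefixed path matches nothing:
print(root.findall("fields/fullName"))  # []
```

The empty result in the last line is the same failure mode you hit with `Select-Xml` when the default namespace is not declared in the `-Namespace` argument.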
2018/03/20
701
2,241
<issue_start>username_0: I am self-learning Ruby by watching a few videos and reading through blogs. I am currently in a situation where I have to parse the below parameters, which I receive from an external CRM system, in Ruby, find the relevant records in the database and delete them. It works well when I get a single record through the code below: ``` temp_id = params[:salesforce_id] Table.find_or_initialize_by(:id => temp_id).destroy ``` but I am struggling when I have to parse through multiple records like below; I get the error "ArgumentError (wrong number of arguments (given 0, expected 1)):" I might have to loop through this, but any sample code will really help. ``` {"_json"=>[{"id"=>"1"}, {"id"=>"2"}, {"id"=>"3"}], "delete_record"=>{"_json"=>[{"id"=>"1"}, {"id"=>"2"}, {"id"=>"3"}]}} ``` Thanks<issue_comment>username_1: Try using: ``` temp_id = {"_json"=>[{"id"=>"1"}, {"id"=>"2"}, {"id"=>"3"}], "delete_record"=>{"_json"=>[{"id"=>"1"}, {"id"=>"2"}, {"id"=>"3"}]}} temp_id["delete_record"]["_json"].pluck("id").each do |x| Table.find_or_initialize_by(:id => x.to_i).destroy end ``` Upvotes: 0 <issue_comment>username_2: The structure you have is a `Hash` (A) having a key `"_json"` and a key `"delete_record"` corresponding to a `Hash` (B) value. This `Hash` contains a key/value pair with `"_json"` as the key, and an `Array` as the value. This `Array` contains (several) `Hash` (C) objects. What you want here is: 1. Get the value inside Hash (A) for key `"delete_record"`, which is Hash (B) 2. Get the value inside Hash (B) for key `"_json"`, which is an Array of Hash objects (C) 3. For each Hash (C) inside the Array, get the value for the key `"id"`. 
You can do the following: ``` data = { "_json"=> [{"id"=>"1"}, {"id"=>"2"}, {"id"=>"3"}], "delete_record"=> {"_json"=>[{"id"=>"1"}, {"id"=>"2"}, {"id"=>"3"}]} } # step 1 data_for_delete_record_key = data['delete_record'] # step 2 array_of_hashes_with_id_key = data_for_delete_record_key['_json'] # step 3 ids_to_destroy = array_of_hashes_with_id_key.map { |hash| hash['id'] } ``` Then, you can simply use the following to destroy the matching records: ``` Table.where(id: ids_to_destroy).destroy_all ``` Upvotes: 2 [selected_answer]
2018/03/20
3,792
13,584
<issue_start>username_0: I have two events that work fine on their own, but one blocks the other. Full Code and GUI at bottom Goal: I am trying to DragDrop a TreeNode into a DataGridView and have the Cell I am hovering over be selected/highlighted. [![MouseDown and DragDrop events vb.net](https://i.stack.imgur.com/cel4j.png)](https://i.stack.imgur.com/cel4j.png) Event that highlights cell: ``` Public Sub DataGridView1_CellMouseEnter(ByVal sender As Object, ByVal e As System.Windows.Forms.DataGridViewCellEventArgs) Handles DataGridView1.CellMouseEnter Try Me.DataGridView1.CurrentCell = Me.DataGridView1(DataGridView1.Columns(e.ColumnIndex).DisplayIndex, DataGridView1.Rows(e.RowIndex).Index) Catch End Try End Sub ``` Events for drag and Drop: * TreeView1\_MouseDown - Sets boolean to determine whether mouse is down or not * TreeView1\_MouseMove - gets the item to be dragged * DataGridView1\_DragEnter - styling while dragging * DataGridView1\_Dragdrop - Drops the item into the datagridview My issue is that (any) MouseDown event blocks the CellMouseEnter event. I tried mousedown-ing elsewhere in the Form and then hovered over the DataGridView and my `CellMouseEnter` event did not work. The result is that the item is dropped into the cell that was selected BEFORE the MouseDown (technically this Cell is still selected b/c CellMouseEnter doesn't update the selected Cell to the hovered Cell) So the way I see it, I need to build a custom event similar to `CellMouseEnter` that isn't blocked by MouseDown, but I wouldn't even know where to start. I tried Peek Definition, but I couldn't find the actual method for `CellMouseEnter`, just `Public Event CellMouseEnter As DataGridViewCellEventHandler`. Is there a better way? 
Here my drag and drop events: ``` Private Sub TreeView1_MouseDown(sender As Object, e As MouseEventArgs) Handles TreeView1.MouseDown move_item = True End Sub Private Sub TreeView1_MouseMove(sender As Object, e As MouseEventArgs) Handles TreeView1.MouseMove If move_item Then On Error GoTo quit Dim item2move As New Label item2move.Text = TreeView1.SelectedNode.Text item2move.DoDragDrop(item2move.Text, DragDropEffects.Copy) Debug.Print(TreeView1.SelectedNode.Text) End If move_item = False ``` quit: Exit Sub End Sub ``` Private Sub DataGridView1_DragEnter(sender As Object, e As DragEventArgs) Handles DataGridView1.DragEnter If (e.Data.GetDataPresent(DataFormats.Text)) Then e.Effect = DragDropEffects.Copy Else e.Effect = DragDropEffects.None End If End Sub Private Sub DataGridView1_DragDrop(sender As Object, e As DragEventArgs) Handles DataGridView1.DragDrop With DataGridView1 Dim Col As Integer = .CurrentCell.ColumnIndex Dim row As Integer = .CurrentCell.RowIndex .Item(Col, row).Value = e.Data.GetDataPresent(DataFormats.Text) End With End Sub ``` Full Code: Note Some sample JSON is in the comment at the end, put that in the RichtextBox - `rtb_inputjson`: ``` Public Class Form1 Dim move_item As Boolean 'Add some rows to the treemap on load Private Sub Form1_Load(sender As Object, e As EventArgs) Handles Me.Load For i = 1 To 20 DataGridView1.Rows.Add() Next End Sub 'JSON to treemap Private Sub btn_jsontotreemap_Click(sender As Object, e As EventArgs) Handles btn_jsontotreemap.Click Try Dim json As String = rtb_inputjson.Text Dim obj As New JObject obj = JObject.Parse(json) TreeView1.Nodes.Clear() Dim parent As TreeNode = Json2Tree(obj) parent.Text = "Root Object" TreeView1.Nodes.Add(parent) Catch ex As Exception MsgBox(ex.Message) End Try End Sub Private Function Json2Tree(ByVal obj As JObject) As TreeNode 'creates the parent node Dim parent As TreeNode = New TreeNode() 'loop through the obj all token should be pair For Each token In obj parent.Text = 
token.Key.ToString() 'create child node Dim child As TreeNode = New TreeNode() child.Text = token.Key.ToString() 'self call :) If token.Value.Type.ToString() = "Object" Then Dim o As JObject = CType(token.Value, JObject) child = Json2Tree(o) child.Text = token.Key.ToString() parent.Nodes.Add(child) 'but if it is an array... ElseIf token.Value.Type.ToString() = "Array" Then Dim ix As Integer = -1 For Each itm In token.Value 'check each item of the array to see if they are objects or arrays If itm.Type.ToString() = "Object" Then Dim objTN As TreeNode = New TreeNode() ix += 1 Dim o As JObject = CType(itm, JObject) 'self call :) objTN = Json2Tree(o) objTN.Text = token.Key.ToString() & "[" & ix & "]" child.Nodes.Add(objTN) ElseIf itm.Type.ToString() = "Array" Then ix += 1 Dim dataArray As TreeNode = New TreeNode() For Each i In itm dataArray.Text = token.Key.ToString() & "[" & ix & "]" dataArray.Nodes.Add(i.ToString()) Next child.Nodes.Add(dataArray) Else child.Nodes.Add(itm.ToString()) End If Next parent.Nodes.Add(child) Else If token.Value.ToString() = "" Then child.Nodes.Add("N/A") Else child.Nodes.Add(token.Value.ToString()) parent.Nodes.Add(child) End If Next Return parent End Function 'drag & drop to datagridview Private Sub TreeView1\_MouseDown(sender As Object, e As MouseEventArgs) Handles TreeView1.MouseDown move\_item = True End Sub Private Sub TreeView1\_MouseMove(sender As Object, e As MouseEventArgs) Handles TreeView1.MouseMove If move\_item Then On Error GoTo quit Dim item2move As New Label item2move.Text = TreeView1.SelectedNode.Text item2move.DoDragDrop(item2move.Text, DragDropEffects.Copy) Debug.Print(TreeView1.SelectedNode.Text) End If move\_item = False quit: Exit Sub End Sub Public Sub DataGridView1\_CellMouseEnter(ByVal sender As Object, ByVal e As System.Windows.Forms.DataGridViewCellEventArgs) Handles DataGridView1.CellMouseEnter Debug.Print("CellMouseEnter event raised") Try Me.DataGridView1.CurrentCell = 
Me.DataGridView1(DataGridView1.Columns(e.ColumnIndex).DisplayIndex, DataGridView1.Rows(e.RowIndex).Index) Catch End Try End Sub Private Sub DataGridView1\_DragDrop(sender As Object, e As DragEventArgs) Handles DataGridView1.DragDrop With DataGridView1 Dim Col As Integer = .CurrentCell.ColumnIndex Dim row As Integer = .CurrentCell.RowIndex .Item(Col, row).Value = e.Data.GetDataPresent(DataFormats.Text) End With End Sub Private Sub DataGridView1\_DragEnter(sender As Object, e As DragEventArgs) Handles DataGridView1.DragEnter If (e.Data.GetDataPresent(DataFormats.Text)) Then e.Effect = DragDropEffects.Copy Else e.Effect = DragDropEffects.None End If End Sub End Class ``` Sample JSON for textbox: ``` { "data":[ { "symbol":"A", "name":"Agilent Technologies Inc.", "date":"2018-03-19", "isEnabled":true, "type":"cs", "iexId":"2" }, { "symbol":"AA", "name":"Alcoa Corporation", "date":"2018-03-19", "isEnabled":true, "type":"cs", "iexId":"12042" }, { "symbol":"AABA", "name":"Altaba Inc.", "date":"2018-03-19", "isEnabled":true, "type":"cs", "iexId":"7653" }, { "symbol":"AAC", "name":"AAC Holdings Inc.", "date":"2018-03-19", "isEnabled":true, "type":"cs", "iexId":"9169" }, { "symbol":"AADR", "name":"AdvisorShares <NAME>", "date":"2018-03-19", "isEnabled":true, "type":"et", "iexId":"5" }, { "symbol":"AAL", "name":"American Airlines Group Inc.", "date":"2018-03-19", "isEnabled":true, "type":"cs", "iexId":"8148" }, { "symbol":"AAMC", "name":"Altisource Asset Management Corp Com", "date":"2018-03-19", "isEnabled":true, "type":"cs", "iexId":"7760" }, { "symbol":"AAME", "name":"Atlantic American Corporation", "date":"2018-03-19", "isEnabled":true, "type":"cs", "iexId":"7" }, { "symbol":"AAN", "name":"Aaron's Inc.", "date":"2018-03-19", "isEnabled":true, "type":"cs", "iexId":"8" }, { "symbol":"AAOI", "name":"Applied Optoelectronics Inc.", "date":"2018-03-19", "isEnabled":true, "type":"cs", "iexId":"7790" }, { "symbol":"AAON", "name":"AAON Inc.", "date":"2018-03-19", "isEnabled":true, 
"type":"cs", "iexId":"9" }, { "symbol":"AAP", "name":"Advance Auto Parts Inc W/I", "date":"2018-03-19", "isEnabled":true, "type":"cs", "iexId":"10" }, { "symbol":"AAPL", "name":"Apple Inc.", "date":"2018-03-19", "isEnabled":true, "type":"cs", "iexId":"11" }, { "symbol":"AAT", "name":"American Assets Trust Inc.", "date":"2018-03-19", "isEnabled":true, "type":"cs", "iexId":"12" }, { "symbol":"AAU", "name":"Almaden Minerals Ltd.", "date":"2018-03-19", "isEnabled":true, "type":"cs", "iexId":"13" }, { "symbol":"AAV", "name":"Advantage Oil & Gas Ltd", "date":"2018-03-19", "isEnabled":true, "type":"cs", "iexId":"14" }, { "symbol":"AAWW", "name":"Atlas Air Worldwide Holdings", "date":"2018-03-19", "isEnabled":true, "type":"cs", "iexId":"15" } ] } ``` Controls and their Names: [![enter image description here](https://i.stack.imgur.com/wQN7W.png)](https://i.stack.imgur.com/wQN7W.png)<issue_comment>username_1: I ended up using the HittestInfo to get the right cell and instead of selecting the cell with CellMouseEnter, I decided to deselect all cells at the beginning of the sequence. I also added a hover to select the proper treeview item. 
``` Private Sub TreeView1_MouseDown(sender As Object, e As MouseEventArgs) Handles TreeView1.MouseDown move_item = True End Sub Private Sub TreeView1_MouseMove(sender As Object, e As MouseEventArgs) Handles TreeView1.MouseMove If move_item Then On Error GoTo quit 'Deselect all cells in datagridview DataGridView1.ClearSelection() DataGridView1.CurrentCell = Nothing 'Begin Drag/drop Dim item2move As New Label item2move.Text = TreeView1.SelectedNode.Text item2move.DoDragDrop(item2move.Text, DragDropEffects.Copy) End If move_item = False quit: Exit Sub End Sub Private Sub DataGridView1_DragEnter(ByVal sender As Object, ByVal e As System.Windows.Forms.DragEventArgs) Handles DataGridView1.DragEnter If (e.Data.GetDataPresent(DataFormats.Text)) Then e.Effect = DragDropEffects.Copy Else e.Effect = DragDropEffects.None End If End Sub Private Sub DataGridView1_DragDrop(ByVal sender As Object, ByVal e As System.Windows.Forms.DragEventArgs) Handles DataGridView1.DragDrop Try Dim clickedcell As DataGridViewCell Dim ptscreen As New Point(e.X, e.Y) Dim ptclient = DataGridView1.PointToClient(ptscreen) Dim hit As DataGridView.HitTestInfo = DataGridView1.HitTest(ptclient.X, ptclient.Y) clickedcell = DataGridView1.Rows(hit.RowIndex).Cells(hit.ColumnIndex) DataGridView1.Item(clickedcell.ColumnIndex, clickedcell.RowIndex).Value = e.Data.GetData(DataFormats.Text).ToString Catch End Try End Sub Private Sub TreeView1_NodeMouseHover(sender As Object, e As TreeNodeMouseHoverEventArgs) Handles TreeView1.NodeMouseHover TreeView1.SelectedNode = e.Node End Sub ``` Upvotes: 0 <issue_comment>username_2: Using the `DragOver()` event, you have the advantage that a user can have a visual feedback of the cell when the `Drop()` operation will take effect, since the cell where the mouse is hovering becomes active and follows the mouse movement. You could also verify whether the Cell under the mouse pointer can receive the drop and avoid selecting it if not. 
``` Private Sub dataGridView1_DragEnter(sender As Object, e As DragEventArgs) e.Effect = If(e.Data.GetDataPresent(DataFormats.Text) = True, DragDropEffects.Copy, DragDropEffects.None) End Sub Private Sub dataGridView1_DragDrop(sender As Object, e As DragEventArgs) 'Check whether the current Cell supports this Drop() DataGridView1.CurrentCell.Value = e.Data.GetData(DataFormats.Text).ToString() End Sub Private Sub dataGridView1_DragOver(sender As Object, e As DragEventArgs) Dim ClientAreaLocation As Point = dataGridView1.PointToClient(New Point(e.X, e.Y)) Dim CellPosition As DataGridView.HitTestInfo = dataGridView1.HitTest(ClientAreaLocation.X, ClientAreaLocation.Y) If CellPosition.ColumnIndex > -1 AndAlso CellPosition.RowIndex > -1 Then 'Enable if this Cell supports this Drop() dataGridView1.CurrentCell = dataGridView1(CellPosition.ColumnIndex, CellPosition.RowIndex) End If End Sub ``` Upvotes: 2 [selected_answer]
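The `DragOver()` answer hinges on mapping the drop point to a cell index. As a language-neutral illustration of that hit test — not the real `DataGridView.HitTest` API, and with made-up fixed column widths and row heights — a Python sketch:

```python
def hit_test(x, y, col_widths, row_heights):
    """Map a client-area point to a (row, col) pair.

    Returns (-1, -1) when the point falls outside the grid, mirroring
    the -1 indices DataGridView.HitTest reports for misses.
    """
    col, acc = -1, 0
    for index, width in enumerate(col_widths):
        acc += width
        if x < acc:
            col = index
            break
    row, acc = -1, 0
    for index, height in enumerate(row_heights):
        acc += height
        if y < acc:
            row = index
            break
    if col < 0 or row < 0:
        return (-1, -1)
    return (row, col)
```

The VB answer's guard `If CellPosition.ColumnIndex > -1 AndAlso CellPosition.RowIndex > -1` corresponds to the `(-1, -1)` miss case here.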
2018/03/20
729
2,587
<issue_start>username_0: I am fetching the courses from the Firebase database as so: ``` this.fetchItems() .subscribe((res) => { this.coursesFiltered = res.filter((filtered: any) => { return filtered.courseStatus === 2 || filtered.courseStatus === 3 }); }); ``` The function fetchItems() ``` fetchItems() { return this.afDB.list('courses', (ref) => ref.orderByChild('courseSemCode')) .snapshotChanges() .map((arr) => { return arr.map((snap: any) => { return snap.status = snap.payload.val(); }); }); } ``` Then I want to group them by courseSemCode so that it displays properly in the cards ``` ###### {{item.courseSemester + " " + item.courseYear}} {{item.courseName}} ``` The groupBy pipe is a custom pipe that I got from this thread: [How to group data in Angular 2?](https://stackoverflow.com/questions/37248580/how-to-group-data-in-angular-2) Which is this: ``` @Pipe({name: 'groupBy'}) export class GroupByPipe implements PipeTransform { transform(value: Array, field: string): Array { if(!value || !value.length) { return value; }else{ const groupedObj = value.reduce((prev, cur)=> { if(!prev[cur[field]]) { prev[cur[field]] = [cur]; } else { prev[cur[field]].push(cur); } return prev; }, {}); return Object.keys(groupedObj).map(key => ({ key, value: groupedObj[key] })); } } } ``` But unfortunately in the end I'm faced with the problem TypeError: Cannot read property 'reduce' of undefined I believe this has to do with the .subscribe which I've written in the constuctor, which only means that the coursesFiltered still has no data. How can I make sure it does and solve this issue? Thank you<issue_comment>username_1: I think it is failing when it is evaluating before coursesFiltered is set. Maybe initialise coursesFiltered to an empty array. 
``` coursesFiltered: any[] = []; ``` Upvotes: 2 <issue_comment>username_2: In your pipe, you can simply test for it: ``` transform(value: any, args?: any) { if(!value || !value.length) { return value; } return value.reduce(...); } ``` **EDIT** In your loop you do this ``` *ngFor="let item of coursesFiltered | groupBy:'courseSemCode'" ``` But your structure for the grouped array is this ``` [{key: 'your key', value: [...]}] ``` This means that you have to make two loops : ``` ###### {{item.courseSemester + " " + item.courseYear}} {{item.courseName}} ``` Upvotes: 1 [selected_answer]
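The pipe's `reduce` step is an ordinary group-by-field; the same algorithm, sketched in Python with invented record and field names, returning the `[{key, value}]` shape the pipe emits:

```python
def group_by(items, field):
    """Group a list of dicts by one field, returning the same
    [{'key': ..., 'value': [...]}] shape the Angular pipe emits."""
    grouped = {}
    for item in items:
        grouped.setdefault(item[field], []).append(item)
    return [{"key": key, "value": value} for key, value in grouped.items()]
```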
2018/03/20
398
1,577
<issue_start>username_0: I have weird caching issue on Drupal site. First D8 page caching is turned off. Then I have some content on page that changes for every page loading - to make it simple I'm printing current time from twig template: ``` {{ "now"|date("H:i:s") }} ``` It works like this: * After clearing Drupal's cache and opening the page I can see current time, as expected * When I reload the page I still see previous time (I'm logged in as master admin). No matter how many times I reload the page I see time of the first page loading after clearing the cache * When I open the same page from another browser for the first time I see current time and after that for any reload I see time of the first page opening in that browser * When I inspect page loading from browser it says that page is loaded from server - not cached by browser * Clearing cookies doesn't help. For the first time I clear the cookie I get logged out and I see current time once, but for every other cookie clearing I see the same time. Any idea what is happening here and how to disable that caching?<issue_comment>username_1: You need juste to disable Internal Page Cache Module and clear cache Upvotes: 0 <issue_comment>username_2: Add this to your themename.theme file ``` function themename_preprocess(&$vars, $hook) { $vars['#cache']['max-age'] = 0; } ``` and clear cache. Edit, on drupal.org you can read good guide how to prepare your development environment and disable cache during development. [Guide](https://www.drupal.org/node/2598914) Upvotes: 2 [selected_answer]
2018/03/20
767
2,777
<issue_start>username_0: I'm wondering why my git alias got stuck and how it caused a rebase. **Steps to reproduce:** `git config --global alias.am "commit --amend"` I can see that the alias is added with: `git config --global -e` ``` [alias] am = commit --amend --no-edit ``` Create new test repo: `> git init` `> git am` `Ctrl` + `c`, `Ctrl` + `c` Nothing happens so I have to close it manually. The expected result would be `fatal: You have nothing to amend.` in this case since there are no commits. But what happens is that git is now in rebase mode: `(master|AM/REBASE)` When running in a repo with a commit, the expected result is to open my default git editor so I can edit the commit message but the same unexpected behaviour explained above happens. **The question** How come my git am alias gets stuck as it does, and how does it put me in a rebase situation? **What I have tried** I guess that my git alias is faulty in some way and that it causes it to start the `git commit --amend` but in a way so its unable to start the editor. And since a git commit --amend probably does a rebase in the background to do its thing it is left in that state when I force it to abort? However, I have tried to add --no-edit to no avail so it seems not to be an editor error.. This issue caused me to accidentally do a `git rebase --abort` and lose some local non-staged changes. **System** I'm on Windows 10 using Git Bash. I have tested using both emacs and notepad as my default git editors, both with the same result.<issue_comment>username_1: The issue was that `git am` [already is a command](https://git-scm.com/docs/git-am). In general when dealing with git alias, note that. * Git gives no warning when you try to overwrite an already existing keyword. * Git still saves the alias in the config file. When writing a new git alias remember to test if the alias you intend to use already exists and hope that it's not added as a keyword in the future. 
Upvotes: 2 <issue_comment>username_2: According to the document on [alias.\*](https://www.git-scm.com/docs/git-config#git-config-alias) in `git-config`, > > To avoid confusion and troubles with script usage, aliases that hide > existing Git commands are ignored. > > > `am` is an existing Git command, so `alias.am` is ignored. `git am` is waiting for some input but nothing is received, neither from stdin or a patch file. So it looks stuck. When you press `Ctrl`+`C` to exit, the process of `git am` is interrupted as if it encounters a conflict. A temporary folder `rebase-apply` is created under `.git`. With this folder existing, Git knows it's in the status `(master|AM/REBASE)`. You could either remove `.git/rebase-apply` or run `git am --abort` to get rid of it. Upvotes: 1
2018/03/20
589
2,298
<issue_start>username_0: A file has a filename and an extension and these are usually written with a dot (a full stop) between them as a delimiter. This description tells us that the dot is not a part of the extension, but when stating the extension we usually include the dot. For example, a PNG file with filename "image" is usually written as image.png. But we usually say the .png extension as if the dot is a part of the extension. I'm just curious, is the dot a delimiter between them or a part of the extension, i.e. is the extension "png" or is it ".png"?<issue_comment>username_1: `png` is the extension, although without the `.` it would just be a default file format, which can’t be opened Upvotes: -1 <issue_comment>username_2: According to the [Wikipedia article](https://en.wikipedia.org/wiki/Filename_extension) on File extensions: > > The exact definition, giving the criteria for deciding what part of the file name is its extension, belongs to the rules of the specific filesystem used; usually the extension is the **substring which *follows* the last occurrence, if any, of the dot character**. > > > "png" is technically the file extension, though you always look for the file extension after the *final* full-stop. Upvotes: 3 <issue_comment>username_3: In the .NET APIs, the dot is considered part of the extension. * [System.IO.Path.GetExtension(string)](https://learn.microsoft.com/en-us/dotnet/api/system.io.path.getextension?view=net-5.0#System_IO_Path_GetExtension_System_String_) > > This method obtains the extension of path by searching path for a period (.), starting with the last character in path and continuing toward the first character. If a period is found before a `DirectorySeparatorChar` or `AltDirectorySeparatorChar` character, **the returned string contains the period and the characters after it**; otherwise, `String.Empty` is returned.
> > > ```cs // GetExtension('C:\mydir.old\myfile.ext') returns '.ext' // GetExtension('C:\mydir.old\') returns '' ``` This was important information for me since I'm modeling a class that relies on a configurable file extension for a certain activity, so I included the `.` character as part of the text for the property value: ```cs public string CacheFileExtension { get; } = ".cache" ``` Upvotes: 0
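Python's standard library takes the same position as .NET: `os.path.splitext` splits at the last dot and keeps the dot with the extension. A small sketch:

```python
import os.path

# splitext splits at the final dot and keeps the dot in the extension part
name, ext = os.path.splitext("image.png")    # ("image", ".png")

# Only the substring after the *last* dot counts as the extension
tar_ext = os.path.splitext("archive.tar.gz")[1]   # ".gz"

# No dot at all -> empty extension
no_ext = os.path.splitext("README")[1]            # ""
```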
2018/03/20
1,010
2,693
<issue_start>username_0: I have a list of genes with 1-3 probes for each gene, and an intensity value for each probe. An example is as follows: ``` GENE_ID Probes Intensity GENE:JGI_V11_100009 GENE:JGI_V11_1000090102 253.479375 GENE:JGI_V11_100009 GENE:JGI_V11_1000090202 712.235625 GENE:JGI_V11_100036 GENE:JGI_V11_1000360103 449.065625 GENE:JGI_V11_100036 GENE:JGI_V11_1000360203 641.341875 GENE:JGI_V11_100036 GENE:JGI_V11_1000360303 1237.07125 GENE:JGI_V11_100044 GENE:JGI_V11_1000440101 456.133125 GENE:JGI_V11_100045 GENE:JGI_V11_1000450101 369.790625 GENE:JGI_V11_100062 GENE:JGI_V11_1000620102 2839.97375 GENE:JGI_V11_100062 GENE:JGI_V11_1000620202 6384.55125 ``` I want to determine the variance between the probes for each individual gene (so for every gene I have a variance value) I am aware that I should use the tapply() function but don't know how to accomplish this other than: ``` tapply( , , var) ```
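In R, the call the question is reaching for is likely `tapply(df$Intensity, df$GENE_ID, var)`. As a cross-language illustration of the same grouped sample-variance computation, a Python sketch (the helper name is made up; the intensities in the test come from the question's example):

```python
from collections import defaultdict
from statistics import variance

def variance_per_gene(rows):
    """rows is an iterable of (gene_id, intensity) pairs; returns the
    sample variance per gene, for genes with at least two probes
    (variance is undefined for a single observation)."""
    groups = defaultdict(list)
    for gene_id, intensity in rows:
        groups[gene_id].append(intensity)
    return {g: variance(vals) for g, vals in groups.items() if len(vals) >= 2}
```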
2018/03/20
531
1,736
<issue_start>username_0: I am trying to export some records from a teradata table into a csv file using BTEQ Export. While doing this I face 3 issues: 1. The leading zeroes of a few columns get dropped. Please help as to how I can retain them. 2. I am not sure how to make the headers bold, to make it look better. 3. Is it possible to put grid lines only in the records which contain data? Please do not suggest to use Teradata Export. I need to use UNIX. Thanks, Aswath<issue_comment>username_1: 1. Leading zeros are added with an explicit format-phrase (see [Formatting Characters for Non-Monetary Numeric Information](https://www.info.teradata.com/HTMLPubs/DB_TTU_16_00/index.html#page/SQL_Reference%2FB035-1143-160K%2Fzcj1472241389161.html%23wwID0E32BR)) `select 4 (format '999,99') ;` 2. There are no terminal capabilities built into `Basic Teradata Query`. `bteq` is mainly built for command-line usage with some basic reporting/formatting abilities. You may add terminal escape codes in your SQL query, but I wouldn't recommend it, especially because you mention `BTEQ export` (I assume to a file). 3. The `.set separator '|'` command may help, but I doubt that you can get `grid lines`, assuming you want horizontal and vertical lines between your data. By the way, creating a csv file with grid lines is somewhat contradictory. Upvotes: 2 <issue_comment>username_2: `Unix` is an operating system; `TPT` or `FastExport` are programs *running* on an OS like Unix, exactly like `BTEQ`. There's a [CSV function](https://info.teradata.com/HTMLPubs/DB_TTU_15_10/SQL_Reference/B035_1145_151K/CSV.html) to create delimited/quoted data. But as @username_1 already stated, `csv` doesn't have *grid lines* or *bold headers*. Upvotes: 1
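The leading-zero point generalizes beyond BTEQ's FORMAT phrase: whenever fixed-width numeric codes go into a CSV, write them as zero-padded strings so a downstream consumer cannot strip the zeros. A small Python sketch (the column names and widths are invented):

```python
import csv
import io

# Zero-pad numeric codes so 7 is written as "000007" instead of "7"
rows = [(123456, "A"), (7, "B")]
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["code", "label"])            # header row
for code, label in rows:
    writer.writerow([str(code).zfill(6), label])
output = buf.getvalue()
```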
2018/03/20
719
1,993
<issue_start>username_0: I have a URL with query parameters and need to pick one particular value out of it. Ex: /abc/xyz?gcode=123456&pcode=21314u925 I need to pick this value (gcode=123456). I used a condition like the one below ``` ``` The issue is that in the URI the order of the query parameters is not fixed: gcode is not always followed by pcode; after gcode there can be any other parameter. How can I get that value?<issue_comment>username_1: My first thoughts are that you should try to implement this sequence or something similar: * get the string of your parameters (you can use the substring after '/abc/xyz?') * use a tokenize function (to get a list of parameters) * get the element of the list which contains gcode [Here](https://stackoverflow.com/questions/7425071/split-function-in-xslt-1-0) you can find some samples of how to use tokenize in XSLT 1.0 or XSLT 2.0. I hope that helps you solve your problem Upvotes: 1 <issue_comment>username_2: An XSLT-1.0 solution can use an `xsl:choose` to handle both possibilities: 1. Another parameter follows 2. `gcode=` is the last parameter in the string So some code could look like this: ``` ``` Upvotes: 0 <issue_comment>username_3: You were close with your attempt. What I would do is: * drop the `?` from `?gcode=` * change `&pcode=` to just `&` * add a concat to add a trailing `&` just in case `gcode` is the last query param example... ``` ``` You could also use a named template (XSLT 1.0+) or an xsl:function (XSLT 2.0+) if you need to get the value of a query param more than once. Examples... **XML Input** ``` /abc/xyz?gcode=123456&pcode=21314u925 /abc/xyz?pcode=21314u925&gcode=123456 /abc/xyz?gcode=123456&pcode=21314u925 /abc/xyz?pcode=21314u925&gcode=123456 ``` **XSLT 1.0** (fiddle: <http://xsltfiddle.liberty-development.net/b4GWVm>) ``` ``` **XSLT 2.0** (fiddle: <http://xsltfiddle.liberty-development.net/b4GWVm/1>) ``` ``` **XML Output** (from either stylesheet above) ``` 123456 123456 123456 123456 ``` Upvotes: 0
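Outside XSLT, the order-independent lookup the question asks for is exactly what standard query-string parsers provide; for comparison, a Python sketch using the standard library:

```python
from urllib.parse import parse_qs, urlparse

def get_param(url, name):
    """Return the first value of a query parameter, wherever it
    appears in the query string (None when absent)."""
    query = urlparse(url).query
    return parse_qs(query).get(name, [None])[0]
```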
2018/03/20
933
2,559
<issue_start>username_0: I want to create a matrix C where each element is equal to the minimum of its corresponding row and column index. For example: the element corresponding to the first row and second column should have a value of 1, the element corresponding to the eighth row and the third column should have a value of 3, etc. I have written the following code that returns to me what I want. Running the following code: ``` from numpy import empty C = empty(shape=(32,32)) for j in range(1,33): for i in range(1,33): minimum = min(i,j) C[i-1][j-1] = minimum print(C) ``` Results in ``` [[ 1. 1. 1. ..., 1. 1. 1.] [ 1. 2. 2. ..., 2. 2. 2.] [ 1. 2. 3. ..., 3. 3. 3.] ..., [ 1. 2. 3. ..., 30. 30. 30.] [ 1. 2. 3. ..., 30. 31. 31.] [ 1. 2. 3. ..., 30. 31. 32.]] ``` **Question:** Is this the most efficient way of doing this? If not; how can this method be improved?
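The double loop in the question can be replaced by a single expression; with NumPy, `np.minimum.outer(np.arange(1, 33), np.arange(1, 33))` builds the same matrix in one vectorized call. A dependency-free sketch of the same construction:

```python
# Same 32x32 matrix as the question's loop:
# C[i-1][j-1] = min(i, j) for 1-based row/column indices i, j.
n = 32
C = [[min(i, j) for j in range(1, n + 1)] for i in range(1, n + 1)]
```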
2018/03/20
515
2,032
<issue_start>username_0: I have an enum named PinpadModelCallback, declared like this: ``` enum PinpadModelCallback { case Connecting case SelectDevice case DevicesUpdated case Configuring case Stopping case Stopped case CardWaiting case TerminalProcessing case TransactionStarted case SignatureRequest case ApplicationSelection case TransactionSuccess(result: LTransactionResult) case TransactionError(msg: String) case AuthError(msg: String) } ``` I have a function in which I pass the enum as an argument and try to check it: ``` private func setInstuction(_ msg: String, callback: PinpadModel.PinpadModelCallback? = nil) { guard let callback = callback else { return } if callback == .Connecting { } ``` The following code produces an error: > > `Binary operator '==' cannot be applied to operands of type > 'PinpadModel.PinpadModelCallback' and '_'` > > > It works with a switch statement: ``` switch callback { case .Connecting: break default: break } ``` But I want to use `if else`. Why does it produce an error?<issue_comment>username_1: You can use `if case` for this, like ``` if case .Connecting = callback { // True } else { // false } ``` You can also extract a case's associated value: ``` if case .TransactionError(let msg) = callback { print(msg) } else { // No error } ``` Hope it is helpful Upvotes: 1 <issue_comment>username_2: As the error message tells you, the problem is the `==`. You cannot use equals comparison on an enumeration that has associated values for any of its cases. Such an enum is not Equatable. Your enum has associated values for the last three of its cases. Remove the associated values, or adopt Equatable and define `==` yourself, or use another mode of expression. This will change in Swift 4.1, where you can declare your enum Equatable and get an autogenerated definition of `==`. You can try it out by downloading the Xcode 9.3 beta. Upvotes: 3 [selected_answer]
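For readers outside Swift: the "equality over cases with associated values" behaviour that Swift 4.1's synthesized Equatable provides can be sketched in Python with frozen dataclasses, which auto-generate `__eq__`. The case names mirror the question's enum; this is only an illustrative analogue, not Swift:

```python
from dataclasses import dataclass

# Each "case" is a frozen dataclass; dataclasses generate __eq__, so
# cases carrying associated values compare by payload.
@dataclass(frozen=True)
class Connecting:
    pass

@dataclass(frozen=True)
class TransactionError:
    msg: str

def describe(callback):
    if callback == Connecting():            # fine: __eq__ is generated
        return "connecting"
    if isinstance(callback, TransactionError):
        return "error: " + callback.msg     # extract the associated value
    return "other"
```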
2018/03/20
634
2,132
<issue_start>username_0: When attempting to upload and install a file created via `./mvnw clean install` I'm running into the error below: ``` Install Package: /etc/packages/apple-cms/cms-content-v1.zip Mon Mar 19 2018 15:53:13 GMT-0400 (EDT) Installing content Collecting import information... Installing node types... Installing privileges... Importing content... - / ... (contents of installation truncated) saving approx 1 nodes... Package imported (with errors, check logs!) ``` Content is missing from the crx/de tree under /etc/map/publish/http - so I wish to check the logs. Attempting to tail all of the logs under ../author/crx-quickstart/logs but not getting any output at all - same for both publish and authoring instance. `filter.xml` ``` xml version="1.0" encoding="UTF-8"? ```<issue_comment>username_1: There are no real log-entries created by the package manager. You have 3 options to see the package manager output. 1. Go to the pacakge manager <http://localhost:4502/crx/packmgr/index.jsp> and re-install the package manually. 2. Set the content-package-maven-plugin in verbose mode on the command line with: `mvn clean install -PautoInstallPackage -Dvault.verbose=true` 3. Enable the verbose mode in the pom.xml file permanently. `com.day.jcr.vault content-package-maven-plugin true true ...` Upvotes: 2 [selected_answer]<issue_comment>username_2: You can see what happens during installation of a package in packagemanager by creating a custom log on *org.apache.jackrabbit.vault.packaging.impl* at DEBUG or TRACE level. You can find how to create a custom log [here](https://helpx.adobe.com/experience-manager/6-4/sites/deploying/using/configure-logging.html#CreatingYourOwnLoggersandWriters), basically e.g. directly in the OSGi-Felix-Console in [Log Support](http://localhost:4502/system/console/slinglog). 
* Log Level Debug * Log File logs/packagemanager.log * Message Pattern {0,date,dd.MM.yyyy HH:mm:ss.SSS} *{4}* [{2}] {3} {5} * Logger org.apache.jackrabbit.vault.packaging.impl The log entries will show up in the folder cq-quickstart/logs/packagemanager.log. Upvotes: 3
2018/03/20
555
1,915
<issue_start>username_0: I need to make a Web page as simple as HTML + simple JS/CSS but it needs to read list of objects (files) from a specific S3 bucket. Front end part I finished but in order to get a list of s3 objects - I need to issue a command `aws s3 ls` -> in order to get a list of files from S3 but that requires AWS tokens every 1-2h. Or I can run the command from EC2 with adding user to some role and then run from there without tokens. Ideally website will be hosted on some AWS service (S3, CloudFront...) and not on EC2. How to achieve above (without hosting it on EC2)?
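One token-free option, assuming the bucket's policy grants public `s3:ListBucket` (and CORS is configured if the call is made from browser JS): the bucket's REST endpoint returns the object list as XML via `GET /?list-type=2`. A Python sketch of fetching and parsing that listing — the bucket URL is a placeholder, and the public-listing policy is the key assumption:

```python
import urllib.request
import xml.etree.ElementTree as ET

S3_NS = "{http://s3.amazonaws.com/doc/2006-03-01/}"  # S3's XML namespace

def parse_keys(xml_text):
    """Pull the object keys out of a ListBucketResult XML document."""
    root = ET.fromstring(xml_text)
    return [el.text for el in root.iter(S3_NS + "Key")]

def list_bucket(bucket_url):
    """Fetch a publicly listable bucket's contents (no credentials)."""
    with urllib.request.urlopen(bucket_url + "?list-type=2") as resp:
        return parse_keys(resp.read().decode("utf-8"))

# Example (hypothetical bucket):
# keys = list_bucket("https://my-bucket.s3.amazonaws.com/")
```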
2018/03/20
719
2,661
<issue_start>username_0: I have reverted a pull request from GitHub by following this article <https://help.github.com/articles/reverting-a-pull-request/>. Now even after reverting when I am comparing the two branch it shows same. How can I raise a pull request again? Here is what I did 1. I raised a pull request from **prod\_bug\_fix** branch to **release/13.0.0** and went to github and merged. 2. Then I followed the above article and unmerged the pull request. Now I thought release/13.0.0 code would be back as before I raised the pull request. 3. I tried raising a pull request again from prod\_bug\_fix to release/13.0.0 but it says "There isn’t anything to compare." . But I can see there are code differences between the two branches. What I did wrong and how I can make release/13.0.0 to same state as before?<issue_comment>username_1: A little late on this answer, but I encountered this issue myself and finally tracked down that the revert behaves weirdly in that the original commit being reverted is still in the history, so when you go to create a new PR it still thinks it is in there so you see no difference when doing the diff. [This](https://stackoverflow.com/questions/35407770/github-changes-ignored-after-revert-git-cherrypick-git-rebase/35408117#35408117) StackOverflow answers gives some more details about it. Upvotes: 2 <issue_comment>username_2: Since the merge you reverted will always be in the history, you'll need to replay that merge onto **prod\_bug\_fix** before submitting your new PR. First, get **prod\_bug\_fix** up to date by checking it out and doing `git merge **release/13.0.0**` (this will probably be a simple fast forward merge). Now let's say that the merge you reverted had hash `abc123`; you would replay all the changes to that merge onto **prod\_bug\_fix** with `git cherry-pick -m 1 abc123`. 
The `-m 1` tells git "replay the changes that were made to the first parent of the merge (**release/13.0.0**) as opposed to the changes made to the second parent (**prod\_bug\_fix**). Once you have done this, a PR from **prod\_bug\_fix** to **release/13.0.0** will work as expected. Upvotes: 0 <issue_comment>username_3: Revert will create a new commit that undoes the specified commit. It doesn't remove the specified commit, it just creates a new commit on top of your commit to remove your changes. That's why when you check diff between the branch there will be none because the commit is already present in the branch. For eg: Branch A => commit1 => commit2 => commit3 after reverting, commit order will be Branch A => commit1 => commit2 => commit3 => commit4(this is your revert commit) Upvotes: 0
2018/03/20
326
1,115
<issue_start>username_0: The html code is as follows for the image : ``` ![](file:///res/drawable/jobs.jpg) ``` The image is located in the "drawable" folder and this html document is in the "assets" folder. The directory structure is such : App->src->main->assets; and App->src->main->res->drawable [Android - local image in webview](https://stackoverflow.com/questions/6127696/android-local-image-in-webview) In this link, they say to put the image file in the assets folder and then it would load, but if I do that, an error pops up saying that this folder can only hold .xml files. I also tried ../drawable/jobs.jpg but that didn't work either. Even tried ../../drawable/jobs.jpg, no luck there either. Can someone help me out here? I'm really stuck!<issue_comment>username_1: Try the code below ``` web_object.loadDataWithBaseURL("file:///android_res/drawable/", "![](test.jpg)", "text/html", "utf-8", null); ``` Upvotes: 0 <issue_comment>username_2: Try this. ```html ![](file:///android_asset/jobs.jpg) ``` Put your image file in app->src->main->assets Have your html page in the assets folder as well. Upvotes: 1
2018/03/20
566
2,065
<issue_start>username_0: My code seems to crash due to an infinite loop, but I can't find the error. Could someone help me out and look over the code? **Here is my controller:** ``` override func touchesMoved(_ touches: Set, with event: UIEvent?) { for touch in touches{ let location = touch.location(in: self.view) let x = Float(location.x) let y = Float(location.y) let newTouch = Touch(latitude: x,longitude: y) TouchService().addTouch(touch: newTouch, grid: \_grid) } } ``` **And my model:** ``` import Foundation class Touch { var _lat: Float var _long: Float var _startingPoint: Touch init(latitude lat: Float, longitude long: Float){ self._lat = lat self._long = long self._startingPoint = Touch(latitude: lat, longitude: long) } } ``` I guess there is something wrong with the way I am using the init() function. Kind regards and thanks in advance, Chris<issue_comment>username_1: This is how your code is working ``` class Touch { var _lat: Float var _long: Float var _startingPoint: Touch init(latitude lat: Float, longitude long: Float){ self._lat = lat self._long = long self._startingPoint = Touch(latitude: lat, longitude: long) // It invokes recursively your Touch class with no end, so it causes infinite loop } } ``` **Solution** You have to create different class for `StartingPoint`. Upvotes: 0 <issue_comment>username_2: **Problem** ----------- Your initializer creates an infinite loop. ``` //Touch(latitude: lat, longitude: long) calls the initializer again. //Since you are inside the initializer, it creates an infinite loop. self._startingPoint = Touch(latitude: lat, longitude: long) ``` Comment that out and you'll see. **Solution** ------------ Create a separate class or struct for your values (composition). ``` class AngularLocation { var _lat: Float var _long: Float ``` and ``` class Touch { var destination: AngularLocation var startingPoint: AngularLocation ``` Tweak that to your needs. Upvotes: 2 [selected_answer]
2018/03/20
533
1,883
<issue_start>username_0: I just updated to play 2.6, and now none of my reusable templates work. For example In `index.scala.html` I had ``` @(word: String) @main("My index Page"){ @word } ``` And in my `main.scala.html` I had ``` @(title: String, stuff: HTML) //and then things here that did stuff that aren't needed for this example and, anyway, i made the whole thing up ``` The gist is that I get errors that say: ``` C:\file\path\index.scala.html:3: not found: value main [error] @main("My index Page") { ``` I found this is resolved if I add `@this(path.to.main)` at the beginning, but I don't see that written in the play docs, so I'm not sure if I'm doing something right, or something wrong that just happens to work.<issue_comment>username_1: This is how your code is working ``` class Touch { var _lat: Float var _long: Float var _startingPoint: Touch init(latitude lat: Float, longitude long: Float){ self._lat = lat self._long = long self._startingPoint = Touch(latitude: lat, longitude: long) // It invokes recursively your Touch class with no end, so it causes infinite loop } } ``` **Solution** You have to create different class for `StartingPoint`. Upvotes: 0 <issue_comment>username_2: **Problem** ----------- Your initializer creates an infinite loop. ``` //Touch(latitude: lat, longitude: long) calls the initializer again. //Since you are inside the initializer, it creates an infinite loop. self._startingPoint = Touch(latitude: lat, longitude: long) ``` Comment that out and you'll see. **Solution** ------------ Create a separate class or struct for your values (composition). ``` class AngularLocation { var _lat: Float var _long: Float ``` and ``` class Touch { var destination: AngularLocation var startingPoint: AngularLocation ``` Tweak that to your needs. Upvotes: 2 [selected_answer]
2018/03/20
1,066
4,027
<issue_start>username_0: I could not find an answer to my question, so I hope someone can help me. I want to validate, when I add a new appointment, that the chosen employee has not already been chosen on the day of the appointment, so that I can't double-book someone on a day. I'm using Laravel 5.6 and MySQL with a table `appointments` that has the following columns: id, day, employee_id and resource_id. My controller is a resource controller (with the index, create, store, ... functions). So if $appointmentExist is 1, I need to throw an error and stay on the same page of the create form. ``` public function store(Request $request) { $appointmentExist = \DB::table('appointments') ->where([ ['day','=',$request->day], ['employee_id','=',$request->employee_id], ]) ->exists(); $request->validate([ 'day' => 'required|min:1|max:10', 'employee_id' => 'required', 'resource_id' => 'required', $appointmentExist => 'in:0', ]); $appointment = Appointment::create(['day' => $request->day, 'employee_id' => $request->employee_id, 'resource_id' => $request->resource_id]); return redirect('/appointments/' . $appointment->id); } ``` I hope someone can help.<issue_comment>username_1: So I found the answer myself; maybe someone else can use it: ``` if(\DB::table('appointments') ->where([ ['day','=',$request->day], ['employee_id','=',$request->employee_id], ]) ->exists()){ return redirect()->back()->withErrors(['This employee is already busy for that day, select another employee or another day.']); }; ``` So now I respond with the error 'this employee is already busy for that day,...'. I have not found how to return the errors from $request->validate(), but I don't need that on this occasion. If you know, feel free to let me know.
Upvotes: 1 <issue_comment>username_2: Your problem is this line: ``` $appointmentExist => 'in:0', ``` That's checking that `in_array($request->input($appointmentExist), [0])`, but `$request->input($appointmentExist)` would be checking for `$request->input(0)` or `$request->input(1)`, neither of which technically exists. I would change it to use Request additions: ``` $exists = \DB::table(...)->exists(); // Same query, just assigned to a variable $request->request->add(["exists" => $exists]); $request->validate([ ..., "exists" => "in:0" ]); ``` By adding the key `"exists"` to the request payload, you can then validate it as you would the actual data being sent in the request, and return all errors at once. Following @N.B.'s comment, the above would only prevent double-booking for this situation, since, should the validation fail, `Appointment::create()` would never be called, and the data would not be inserted. With that in mind, should the validation pass unexpectedly, it's best to have a fallback, in this case a `unique` constraint on the combination of `employee_id` and `day`, if you truly want to prevent double-booking, and handle it like so: ``` try { Appointment::create(...); } catch (\Illuminate\Database\QueryException $qex){ \Log::error("Unable to Create Appointment: ".$qex->getMessage()); // Handle individual codes if($qex->getCode() == "23000"){ return redirect()->back()->withErrors(...); } } ``` Upvotes: 1 <issue_comment>username_3: ``` $request->validate([ 'day' => 'required|min:1|max:10', 'employee_id' => 'required', 'resource_id' => 'required', $appointmentExist => 'in:0', ]); ``` This is invalid code. The validator will search for the key 1 or 0 ($appointmentExist) in the request data. These keys will never be included in this request. Try using the Rule class.
Example: ``` $day = $request->day; $request->validate([ 'day' => 'required|min:1|max:10', 'employee_id' => [ 'required', Rule::unique('appointments')->where(function ($query) use ($day) { return $query->where('day', $day); }) ], 'resource_id' => 'required' ]); ``` Upvotes: 0
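Framework aside, all of the answers implement the same guard: treat the (employee_id, day) pair as a unique key and reject the insert when it is already taken. A minimal plain-Python sketch of that logic (illustration only — the `book` function and the `set`-based store are invented here, not Laravel code):

```python
def book(appointments, employee_id, day):
    """Reject double-booking: at most one appointment per employee per day."""
    key = (employee_id, day)
    if key in appointments:   # the ->exists() check
        return False          # the real app would redirect back with an error
    appointments.add(key)     # the Appointment::create(...) step
    return True

booked = set()
print(book(booked, 7, "2018-03-20"))  # True  (first booking succeeds)
print(book(booked, 7, "2018-03-20"))  # False (same employee, same day)
print(book(booked, 7, "2018-03-21"))  # True  (another day is fine)
```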
2018/03/20
840
2,465
<issue_start>username_0: In my application I have a need to create a single-row DataFrame from a Map. So that a Map like ``` ("col1" -> 5, "col2" -> 10, "col3" -> 6) ``` would be transformed into a DataFrame with a single row and the map keys would become names of columns. ``` col1 | col2 | col3 5 | 10 | 6 ``` In case you are wondering why would I want this - I just need to save a single document with some statistics into MongoDB using MongoSpark connector which allows saving DFs and RDDs.<issue_comment>username_1: here you go : ``` val map: Map[String, Int] = Map("col1" -> 5, "col2" -> 6, "col3" -> 10) val df = map.tail .foldLeft(Seq(map.head._2).toDF(map.head._1))((acc,curr) => acc.withColumn(curr._1,lit(curr._2))) df.show() +----+----+----+ |col1|col2|col3| +----+----+----+ | 5| 6| 10| +----+----+----+ ``` Upvotes: 2 [selected_answer]<issue_comment>username_2: I thought that sorting the column names doesn't hurt anyway. ``` import org.apache.spark.sql.types._ val map = Map("col1" -> 5, "col2" -> 6, "col3" -> 10) val (keys, values) = map.toList.sortBy(_._1).unzip val rows = spark.sparkContext.parallelize(Seq(Row(values: _*))) val schema = StructType(keys.map( k => StructField(k, IntegerType, nullable = false))) val df = spark.createDataFrame(rows, schema) df.show() ``` Gives: ``` +----+----+----+ |col1|col2|col3| +----+----+----+ | 5| 6| 10| +----+----+----+ ``` The idea is straightforward: convert map to list of tuples, unzip, convert the keys into a schema and the values into a single-entry row RDD, build dataframe from the two pieces (the [interface for `createDataFrame`](https://spark.apache.org/docs/2.2.1/api/scala/index.html#org.apache.spark.sql.SparkSession) is a bit strange there, accepts `java.util.List`s and kitchen sinks, but doesn't accept the usual scala `List` for some reason). Upvotes: 3 <issue_comment>username_3: A slight variation to Rapheal's answer. 
You can create a dummy column DF (1x1), then add the map elements using foldLeft and then finally delete the dummy column. That way, your foldLeft is straightforward and easy to remember. ``` val map: Map[String, Int] = Map("col1" -> 5, "col2" -> 6, "col3" -> 10) val f = Seq("1").toDF("dummy") map.keys.toList.sorted.foldLeft(f) { (acc,x) => acc.withColumn(x,lit(map(x)) ) }.drop("dummy").show(false) +----+----+----+ |col1|col2|col3| +----+----+----+ |5 |6 |10 | +----+----+----+ ``` Upvotes: 0
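Outside Spark, the core transformation in the second answer is simply "split the map into sorted column names and a single row of values". That step can be sketched in plain Python (no Spark involved; the function name is made up for illustration):

```python
def map_to_columns_and_row(d):
    # Sort by key so the column order is deterministic, then unzip
    # the (key, value) pairs into two parallel lists.
    keys, values = zip(*sorted(d.items()))
    return list(keys), list(values)

columns, row = map_to_columns_and_row({"col1": 5, "col2": 10, "col3": 6})
print(columns)  # ['col1', 'col2', 'col3']
print(row)      # [5, 10, 6]
```

The `columns` list would become the schema and `row` the single-entry row RDD in the Spark version.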
2018/03/20
552
1,633
<issue_start>username_0: I need to concatenate a few cells' values into one cell that will always be next to the concatenated ones. Here is an example: ![My excel sheet](https://i.stack.imgur.com/C7i5a.jpg) There will always be 4 main columns with data, and the 5th one, "Wynik", will be used to concatenate the values from the 4 cells in the same row. So, for example, the final result in each row (in the "Wynik" column) will be "REG-15_S1_2018-01-20_12333". I want to create a macro to do it instantly, because there will be thousands of records in each column, but I didn't find any example for it, nor am I an expert in VBA... Do you have any solution for how to do it, or any popular macro that can be used? I will be thankful for any help. Alex<issue_comment>username_1: In the column of the combined cells, put the formula: ``` =A2&B2&C2&D2 ``` Copy the formula down in the column. Upvotes: 0 <issue_comment>username_2: Try, ``` =A2&"_"&B2&"_"&text(C2, "yyyy-mm-dd")&"_"&D2 'optional for leading zeroes in column D =A2&"_"&B2&"_"&text(C2, "yyyy-mm-dd")&"_"&text(D2, "00000") ``` Upvotes: 1 <issue_comment>username_3: try this ``` Option Explicit Sub main() Dim data As Variant With Range("A1").CurrentRegion data = .Resize(.Rows.Count - 1, 4).Offset(1).Value End With ReDim wynik(1 To UBound(data)) As Variant Dim i As Variant For i = 1 To UBound(data) data(i, 3) = Format(data(i, 3), "yyyy-mm-dd") wynik(i) = Join(Application.index(data, i, 0), "_") Next Range("E2").Resize(UBound(data)).Value = Application.Transpose(wynik) End Sub ``` Upvotes: 2 [selected_answer]
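The row-building logic above — join four cell values with "_", formatting the date cell as yyyy-mm-dd — can be sketched in plain Python for comparison (the `wynik` helper and the sample row are illustrative, using the values from the question's example):

```python
from datetime import date

def wynik(row):
    # Join the cell values with "_"; date cells are rendered as yyyy-mm-dd,
    # everything else is converted to its string form.
    return "_".join(
        v.strftime("%Y-%m-%d") if isinstance(v, date) else str(v)
        for v in row
    )

print(wynik(["REG-15", "S1", date(2018, 1, 20), 12333]))
# REG-15_S1_2018-01-20_12333
```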
2018/03/20
702
2,736
<issue_start>username_0: I have this code. At the very end we have a `List` of anonymous objects with fields `FILE_NAME`, `DATE`, `REPORT_LINK`. ``` public ServiceResult Get(string userIdString) { var userId = Int32.Parse(userIdString); var downloadLink = _serviceConfigManager.Parameters[ServiceResources.DownloadService].FirstOrDefault(); var query = SharedUnitOfWork.Context.ReportRequest .Where(item => item.IS_DELETED == 0 && item.TENANT_ID == TenantId && item.REPORT_STATE_ID == 2 && item.CREATED_BY_USER_ID == userId) .OrderByDescending(item => item.ID) .Take(10) .ToList() .Select(item => new { FILE_NAME = item.FILENAME, DATE = item.CREATION_DATE, REPORT_LINK = string.Format("{0}/{1}", downloadLink, item.ID) } ); return new ServiceResult(query.ToList()); } ``` First of all, as you can see, we return a `ServiceResult` object. Second, the return type of the function is `ServiceResult` (NOT GENERIC). I didn't write this code, SO I CAN'T CHANGE THE SOURCE CODE of this method. The task: in my unit tests I need to get the returned object and check whether it is populated with correct data. I wrote my test in such a way that the function `Get` returns 2 records, so I need to check every field in these records. How to do that? In my tests I get a variable named `serviceResult` from this function. How do I "unwrap" `serviceResult` so that it looks like a `List` and I can check `List[0].FILE_NAME`, `List[1].FILE_NAME`, etc.? `serviceResult` has a `Result` field that equals `query.ToList()`. **TO MODERATORS**: I read all the suggested similar topics; none of them describes or solves my task, because I can't rewrite the source code of the Get function.<issue_comment>username_1: You can't get hold of the anonymous type. 
Your options are: * use reflection (`GetType().GetProperty()` etc.) to read the data * use `dynamic` The latter is probably more convenient: ``` foreach(dynamic obj in listOfAnonObjects) { string filename = obj.FILE_NAME; DateTime date = obj.DATE; string link = obj.REPORT_LINK; // TODO: now use filename, date and link } ``` You could use the above to project the data into your *own* list type that has well-advertised properties. Upvotes: 2 <issue_comment>username_2: Problem solved using reflection. Using the pattern `targetType.GetType().GetProperty(propertyName).GetValue(targetType,index:optional)` I can get the value of any field of the anonymous type. In the case of an indexer I use `GetProperty("Item")` and `GetValue(targetType,new object[]{0})`, where the second parameter is the needed index. Upvotes: 1 [selected_answer]
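For comparison only: the reflective access pattern from the accepted approach has a direct analogue in Python, where `getattr` plays the role of `GetType().GetProperty(name).GetValue(obj)`. This sketch is illustrative (the record values and helper name are invented), since the question itself is about C#:

```python
from types import SimpleNamespace

# Stand-in for the anonymous objects inside the service result.
records = [
    SimpleNamespace(FILE_NAME="a.pdf", DATE="2018-03-20", REPORT_LINK="dl/1"),
    SimpleNamespace(FILE_NAME="b.pdf", DATE="2018-03-21", REPORT_LINK="dl/2"),
]

def get_field(obj, name):
    # Reflection-style access by property name, like GetProperty(name).GetValue(obj)
    return getattr(obj, name)

print(get_field(records[0], "FILE_NAME"))    # a.pdf
print(get_field(records[1], "REPORT_LINK"))  # dl/2
```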
2018/03/20
817
2,212
<issue_start>username_0: I have multiple tables created with `astropy.table.Table`, for example: ``` from astropy.table import Table import numpy as np #table 1 ta=Table() ta["test1"]=np.arange(0,100.) #table 2 tb=Table() tb["test2"]=np.arange(0,100.) ``` I can save them individually to `.fits` files using ``` ta.write('table1.fits') tb.write('table2.fits') ``` But I would like to have them saved to the same `.fits` file, each of them with a different `hdu`. How can I do that?<issue_comment>username_1: There's an example of how to do this [here](http://docs.astropy.org/en/stable/io/fits/#creating-a-new-table-file). So, you could do something like the following: ``` import numpy as np from astropy.io import fits ta = Table() ta['test1'] = np.arange(0, 100.) col1 = fits.Column(name=ta.colnames[0], format='E', array=ta) tb = Table() tb['test2'] = np.arange(0, 100.) col2 = fits.Column(name=tb.colnames[0], format='E', array=tb) cols = fits.ColDefs([col1, col2]) hdu = fits.BinTableHDU.from_columns(cols) hdu.writeto('table.fits') ``` which only has one binary table HDU, but with two columns. Alternatively, to add them as separate HDUs, you could do something like ``` ta = Table() ta['test1'] = np.arange(0, 100.) col1 = fits.Column(name=ta.colnames[0], format='E', array=ta) hdu1 = fits.BinTableHDU.from_columns(fits.ColDefs([col1])) tb = Table() tb['test2'] = np.arange(0, 100.) col2 = fits.Column(name=tb.colnames[0], format='E', array=tb) hdu2 = fits.BinTableHDU.from_columns(fits.ColDefs([col2])) # create a header hdr = fits.Header() hdr['Author'] = 'Me' primary_hdu = fits.PrimaryHDU(header=hdr) # put all the HDUs together hdul = fits.HDUList([primary_hdu, hdu1, hdu2]) # write it out hdul.writeto('table.fits') ``` Upvotes: 2 [selected_answer]<issue_comment>username_2: There's a utility function [astropy.io.fits.table\_to\_hdu](http://docs.astropy.org/en/stable/io/fits/api/tables.html#table-to-hdu). 
To continue your example, if you have two table objects `ta` and `tb`: ``` from astropy.io import fits hdu_list = fits.HDUList([ fits.PrimaryHDU(), fits.table_to_hdu(ta), fits.table_to_hdu(tb), ]) hdu_list.writeto('tables.fits') ``` Upvotes: 3
2018/03/20
725
2,945
<issue_start>username_0: I have been writing code in JavaScript for a long time, using jQuery, ReactJS and NodeJS on the server side (ExpressJS). I learned partially from MDN and other sources; however, there is one question I cannot find an answer for. Why does the following code require the `prototype` object's property? Why can I not use `forEach` directly from the `Array` object? I mean, in terms of OOP, if the `Array` class extends a class that contains a `forEach` method, I can call it directly from an object instantiated from `Array`; I don't need to find the base class using reflection, instantiate it and call the method from the base class. ``` Array.prototype.forEach.call(document.querySelectorAll('.klasses'), function(el){ el.addEventListener('click', someFunction); }); ``` Example taken from here: <https://www.smashingmagazine.com/2014/01/understanding-javascript-function-prototype-bind/><issue_comment>username_1: Your code: ``` Array.prototype.forEach.call(document.querySelectorAll('.klasses'), function(el){ el.addEventListener('click', someFunction); }); ``` could indeed be written as ``` [].forEach.call(document.querySelectorAll('.klasses'), function(el){ el.addEventListener('click', someFunction); }); ``` Going through the Array prototype object is just a way to access the functions available to any array instance without having to actually create an array. Using one or the other is mostly a matter of preference, and though the second form does involve the creation of an array object, the performance implications are minimal. Upvotes: 3 [selected_answer]<issue_comment>username_2: > > Why can I not use forEach directly from the Array object? > > > The Array object is a constructor function used to create arrays. It doesn't have a `forEach` property of its own. You can access the `forEach` property either from an instance of Array (which you can create with `new Array` or, more idiomatically, with `[]`), or by accessing the `prototype` that gets applied to instances of Array. 
Upvotes: 1 <issue_comment>username_3: `querySelectorAll` doesn't return an Array; it returns a `NodeList` (an array-like object) instead. The NodeList has a property called `.length` that indicates the count of elements inside it. Some browsers/engines/backend techs are incompatible because that `NodeList` will not necessarily provide the `forEach` function. So, one alternative is invoking the Array method on that `NodeList` through the Array prototype: ``` Array.prototype.forEach.call(document.querySelectorAll('.klasses'), function(el){...}) ``` Or, you can use the `Array.from` function, which will use the `.length` property to create the array: ``` Array.from(document.querySelectorAll('.klasses')); ``` From MDN docs ------------- > > Although NodeList is not an Array, it is possible to iterate on it using `forEach()`. Several older browsers have not implemented this method yet. You can also convert it to an Array using `Array.from`. > > > Upvotes: 1
2018/03/20
782
3,216
<issue_start>username_0: I want to download a photo that was previously uploaded from Firebase. I am using the following code: ``` fbUser = Auth.auth().currentUser let storage = Storage.storage() let storageRef = storage.reference(forURL: "somepath.some") let profilePicRef = storageRef.child(fbUser.uid+"_profile_pic.jpg") var imageFB : UIImage? = nil profilePicRef.downloadURL(completion: { (url, error) in if error != nil { print(error!) return } URLSession.shared.dataTask(with: url!, completionHandler: { (data, response, error) in if error != nil { print(error!) return } guard let imageData = UIImage(data: data!) else { return } DispatchQueue.main.async { imageFB = imageData } }).resume() }) ``` The photo is there - if I delete the photo, then I get an error that the file is missing. However, after the download, imageFB is always equal to nil, even if the photo is there. Any suggestions on how to fix that?
2018/03/20
185
726
<issue_start>username_0: I know that in Fortran 90 the symbol for greater than (for example) has been changed from `.gt.` to `>`. Is this also the case for other logical expressions like `.and.` and `.or.`? Are there equivalent symbolic expressions for these as well?<issue_comment>username_1: No, there are no such symbols, nor have I noticed any interest or need from anyone who would call for such symbols before reading your question. So no, there are none in Fortran 90, none in Fortran 2018, and there are no plans for them in future Fortran AFAIK. Upvotes: 4 [selected_answer]<issue_comment>username_2: With ifort there are `==` and `/=`, but many will point to 'non standard' as a rebuttal. I use them routinely. Upvotes: 0
2018/03/20
372
1,482
<issue_start>username_0: I'm having trouble navigating the Laravel documentation. I use the static method `\Illuminate\Support\Facades\Validator::make`, and I would like to find out more information about it. However, I cannot find it in the documentation page for `\Illuminate\Support\Facades\Validator` (<https://laravel.com/api/5.1/Illuminate/Validation/Validator.html>). I have had similar experiences with other static methods that I find in Laravel code snippets. Where can such things be found (specifically where can documentation on `\Illuminate\Support\Facades\Validator::make` be found)?<issue_comment>username_1: If you see the actual class itself (`\Illuminate\Support\Facades\Validator`) in your IDE you will see this: ``` /** * @see \Illuminate\Validation\Factory <-- TELLS YOU WHERE TO LOOK */ class Validator extends Facade { ... } ``` The `make` method is in `\Illuminate\Validation\Factory` and the docs for it is found here <https://laravel.com/api/5.1/Illuminate/Validation/Factory.html#method_make> Upvotes: 3 [selected_answer]<issue_comment>username_2: The class you mentioned is a so-called facade. This means that it has an underlying class. You can find more information about facades [in the official documentation](https://laravel.com/docs/5.6/facades). You can find the underlying class in the comment above the facade class. ``` /** * @see \Illuminate\Validation\Factory <- HERE */ class Validator extends Facade ... ``` Upvotes: 2
2018/03/20
304
1,060
<issue_start>username_0: For example, ``` for (i in Cow.array){...} ``` If either `Cow` or `Cow.array` is removed, what will happen? Also, if an error occurs, how can I fix it?
2018/03/20
410
1,434
<issue_start>username_0: Suppose I have ``` const bold = content => <b>{content}</b>; const aside = content => <aside>{content}</aside>; ``` Now, I write: ``` aside("hello world"); ``` And everything performs correctly. --- Next, suppose I want to render a list of components in aside: ``` const bold = content => <b>{content}</b>; const aside_items = ??? ``` Such that when I write: ``` aside_items([bold("hello"), bold("world")]); ``` The following HTML is produced: ``` <aside><b>hello</b> <b>world</b></aside> ``` How does one implement `const aside_items = ???`?
2018/03/20
303
1,105
<issue_start>username_0: The original code is: ``` modelBuilder.Entity<OriginalClass>().Map<OtherClass>(x => x.Requires("Field1").HasValue("Value1")); ``` It doesn't compile and I can't seem to find the equivalent code.
2018/03/20
438
1,464
<issue_start>username_0: Given three or more variables in my `DTL` template, what is the most convenient way to ensure that an interpunct always sits between two variables? ``` {{ person.name }} · {{ person.phone }} · {{ person.city }} ``` Expected: ``` John · 1234567 · New York John · 1234567 ``` Is there an easy way to solve this with built-in functionality? I'd like to avoid writing custom `Django` filters/template tags.<issue_comment>username_1: You can't do this easily in the Django template language. Either pass a list of non-empty values from your view, e.g. `["John", "1234567"]`, or write a custom tag or filter so that you can do something like `{% interpunct person.name person.phone person.city %}` or `{{ person|display_person }}`. Upvotes: 2 [selected_answer]<issue_comment>username_2: You can (ab)use the `cycle` tag for this purpose. Place it in front of each list item, then use it to emit nothing on the first pass, then the separator thereafter. Use in a `for` loop is straightforward. Use outside one needs this pattern: ``` {% cycle '' '|' '|' '|' as itemsep silent %} {% if foo %} {{ itemsep }}{% cycle itemsep %} Foo! {% endif %} {% if bar %} {{ itemsep }}{% cycle itemsep %} Bar! {% endif %} {% if foobar %} {{ itemsep }}{% cycle itemsep %} FooBar! {% endif %} ``` Or, skip the `silent` complexity by just using an extra `''` ``` {% cycle '' '' '|' '|' '|' as itemsep %} {% if foo %} {% cycle itemsep %} Foo! {% endif %} ... ``` Upvotes: 0
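The accepted suggestion — build the list of non-empty values in the view and let the template just render them — leaves only a one-line joining step. That step, sketched in plain Python outside Django purely for illustration:

```python
def interpunct(*values):
    # Keep only the values that are actually present, then place
    # exactly one separator between each adjacent pair.
    return " · ".join(str(v) for v in values if v)

print(interpunct("John", "1234567", "New York"))  # John · 1234567 · New York
print(interpunct("John", "1234567", ""))          # John · 1234567
print(interpunct("John", None, "New York"))       # John · New York
```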
2018/03/20
988
3,496
<issue_start>username_0: I am trying to implement the cluster estimation method using EM found in [Weka](http://weka.sourceforge.net/doc.dev/weka/clusterers/EM.html), more precisely the following description: > > The cross validation performed to determine the number of clusters is > done in the following steps: > > > 1. the number of clusters is set to 1 > 2. the training set is split randomly into 10 folds. > 3. EM is performed 10 times using the 10 folds the usual CV way. > 4. the loglikelihood is averaged over all 10 results. > 5. if loglikelihood has increased the number of clusters is increased by 1 and the program continues at step 2. > > > My current implementation is as follows: ``` def estimate_n_clusters(X): "Find the best number of clusters through maximization of the log-likelihood from EM." last_log_likelihood = None kf = KFold(n_splits=10, shuffle=True) components = range(50)[1:] for n_components in components: gm = GaussianMixture(n_components=n_components) log_likelihood_list = [] for train, test in kf.split(X): gm.fit(X[train, :]) if not gm.converged_: raise Warning("GM not converged") log_likelihood = np.log(-gm.score_samples(X[test, :])) log_likelihood_list += log_likelihood.tolist() avg_log_likelihood = np.average(log_likelihood_list) if last_log_likelihood is None: last_log_likelihood = avg_log_likelihood elif avg_log_likelihood+10E-6 <= last_log_likelihood: return n_components last_log_likelihood = avg_log_likelihood ``` I am getting a similar number of clusters both trough Weka and my function, however, using the number of clusters `n_clusters` estimated by the function ``` gm = GaussianMixture(n_components=n_clusters).fit(X) print(np.log(-gm.score(X))) ``` Results in NaN, since the `-gm.score(X)` yields negative results (about -2500). While Weka reports `Log likelihood: 347.16447`. 
My guess is that the likelihood mentioned in step 4 of Weka is not the same as the one computed by the [`score_samples()` function](https://github.com/scikit-learn/scikit-learn/blob/a24c8b46/sklearn/mixture/base.py#L289). Can anyone tell me where I am going wrong? Thanks<issue_comment>username_1: As per the documentation, `score` returns the average **log**-likelihood. You don't want to take the log of a log, obviously. Upvotes: 2 [selected_answer]<issue_comment>username_2: For future reference, the fixed function looks like: ``` def estimate_n_clusters(X): "Find the best number of clusters through maximization of the log-likelihood from EM." last_log_likelihood = None kf = KFold(n_splits=10, shuffle=True) components = range(50)[1:] for n_components in components: gm = GaussianMixture(n_components=n_components) log_likelihood_list = [] for train, test in kf.split(X): gm.fit(X[train, :]) if not gm.converged_: raise Warning("GM not converged") log_likelihood = -gm.score_samples(X[test, :]) log_likelihood_list += log_likelihood.tolist() avg_log_likelihood = np.average(log_likelihood_list) print(avg_log_likelihood) if last_log_likelihood is None: last_log_likelihood = avg_log_likelihood elif avg_log_likelihood+10E-6 <= last_log_likelihood: return n_components-1 last_log_likelihood = avg_log_likelihood ``` Upvotes: 0
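The accepted answer's point — `score_samples` already returns log-likelihoods, so wrapping a (typically negative) score in another `log` produces NaN — can be illustrated without scikit-learn. A stdlib-only sketch with a 1-D Gaussian (the sample values are arbitrary, chosen just for the demonstration):

```python
import math

def gaussian_log_pdf(x, mu=0.0, sigma=1.0):
    # Per-sample log-density of N(mu, sigma^2): the analogue of what
    # score_samples() returns -- already a log, no extra log needed.
    return -0.5 * math.log(2 * math.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)

samples = [-1.0, 0.0, 0.5, 2.0]
# score() is the *average* log-likelihood over the samples.
score = sum(gaussian_log_pdf(x) for x in samples) / len(samples)
print(score < 0)  # True: log-densities below 1 are negative

# Taking another log of a negative quantity -- the np.log(...) wrapper in the
# question -- is undefined: NaN in numpy, a ValueError in the math module.
try:
    math.log(score)
except ValueError:
    print("log of a negative average log-likelihood is undefined")
```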
2018/03/20
268
1,104
<issue_start>username_0: I have some Python code which is quite heavy; my computer can't run it efficiently, so I want to run the code in the cloud. Please tell me how to do it. Is any step-by-step tutorial available? Thanks<issue_comment>username_1: You can try Heroku. It's free and they have their own tutorials, but it's good enough only if you will use it for studying. AWS, Azure or Google Cloud are much better for production. Upvotes: 0 <issue_comment>username_2: Based on my experience I would recommend Amazon Web Services: <https://aws.amazon.com/>. I would suggest that you look into creating an EC2 instance and running your code there. An EC2 instance is basically a kind of server, and you can automate your Python script there as well. Now, there's a tutorial that helped me a lot to get a clearer picture of running a Python script using AWS (specifically EC2): <https://www.youtube.com/watch?v=WE303yFWfV4>. For further information about Amazon's cloud services and products, see: <https://aws.amazon.com/products/>. Upvotes: 3 [selected_answer]
2018/03/20
473
1,831
<issue_start>username_0: I'm trying to test my service with Jest, mocking the endpoint with nock. The service looks like this: ``` export async function get(id) { const params = { mode: 'cors', headers: { 'Accept': 'application/json', 'Content-Type': 'application/json' } }; let response = await fetch(`{$API}/projects/${id}`, params); return response.json(); } ``` Test: ``` import { get } from './project'; import nock from 'nock'; const fetchNockProject = nock($API) .get('/projects/1') .reply('200', {}); const data = await get(1); expect(data).resolves.toEqual(project); ``` When I run the test I get an error: *console.error node_modules/jsdom/lib/jsdom/virtual-console.js:29 Error: Cross origin null forbidden* *TypeError: Network request failed* Any idea why virtual-console is throwing this, as this is only a service?<issue_comment>username_1: I found a solution for my problem, which was connected with CORS. The nock mock should be: ``` fetchNockProject = nock($API) .defaultReplyHeaders({ 'access-control-allow-origin': '*', 'access-control-allow-credentials': 'true' }) .get('/projects/1') .reply('200', project); ``` Upvotes: 5 <issue_comment>username_2: 
2018/03/20
1,497
6,063
<issue_start>username_0: I wrap some content into a ScrollPane because I want a horizontal scroll bar if the content does not fit on screen. As long as the scroll bar is not needed, everything is fine: [![Without scrollbar, everything is well visible](https://i.stack.imgur.com/WM6NP.png)](https://i.stack.imgur.com/WM6NP.png) Yet, when the scroll bar is shown, it (vertically) hides parts of the content: [![With scroll bar, parts of the content are not visible](https://i.stack.imgur.com/Vfnyc.png)](https://i.stack.imgur.com/Vfnyc.png) How can I prevent this behavior? The content should always be shown completely. I tried to use `fitToHeight="true"`, yet this did not help. --- --- Following is some example FXML (the multiple layers of HBox and VBox are added to mimic my real application's structure): ``` ```<issue_comment>username_1: You could work around this by setting the minHeight of your VBox to a size at which it would show the text fully; alternatively, you can add padding. ex. (Padding) ``` <?xml version="1.0" encoding="UTF-8"?> <?import java.lang.*?> <?import javafx.scene.control.*?> <?import javafx.scene.layout.*?> <?import javafx.scene.control.ColorPicker?> <?import javafx.scene.layout.VBox?> <?import javafx.scene.text.Text?> <?import javafx.geometry.Insets?> ``` ex. (min Height) ``` <?xml version="1.0" encoding="UTF-8"?> <?import javafx.scene.control.ScrollPane?> <?import javafx.scene.control.TitledPane?> <?import javafx.scene.layout.BorderPane?> <?import javafx.scene.layout.HBox?> <?import javafx.scene.layout.VBox?> <?import javafx.scene.text.*?> ``` Upvotes: 1 <issue_comment>username_2: Looks like a bug ([reported](https://bugs.openjdk.java.net/browse/JDK-8199934)) in ScrollPaneSkin: its computePrefHeight method doesn't take the scrollBar's height into account if the policy is AS_NEEDED and the scrollBar is visible. 
So the workaround is a custom skin that does ;) Note, that doing so isn't quite enough if the policy is changed from ALWAYS to AS\_NEEDED (at the time of calling computeXX, the bar is visible - not quite sure why), so we are listening to changes in the policy and hide the bar .. rude but effective. The custom skin (beware: not formally testet!) and a driver to play with: ``` public class ScrollPaneSizing extends Application{ public static class DebugScrollPaneSkin extends ScrollPaneSkin { public DebugScrollPaneSkin(ScrollPane scroll) { super(scroll); registerChangeListener(scroll.hbarPolicyProperty(), p -> { // rude .. but visibility is updated in layout anyway getHorizontalScrollBar().setVisible(false); }); } @Override protected double computePrefHeight(double x, double topInset, double rightInset, double bottomInset, double leftInset) { double computed = super.computePrefHeight(x, topInset, rightInset, bottomInset, leftInset); if (getSkinnable().getHbarPolicy() == ScrollBarPolicy.AS_NEEDED && getHorizontalScrollBar().isVisible()) { // this is fine when horizontal bar is shown/hidden due to resizing // not quite okay while toggling the policy // the actual visibilty is updated in layoutChildren? computed += getHorizontalScrollBar().prefHeight(-1); } return computed; } } private Parent createContent() { HBox inner = new HBox(new Text("somehing horizontal and again again ........")); TitledPane titled = new TitledPane("my title", inner); ScrollPane scroll = new ScrollPane(titled) { @Override protected Skin createDefaultSkin() { return new DebugScrollPaneSkin(this); } }; scroll.setVbarPolicy(NEVER); scroll.setHbarPolicy(ALWAYS); // scroll.setFitToHeight(true); Button policy = new Button("toggle HBarPolicy"); policy.setOnAction(e -> { ScrollBarPolicy p = scroll.getHbarPolicy(); scroll.setHbarPolicy(p == ALWAYS ? 
AS_NEEDED : ALWAYS); }); HBox buttons = new HBox(10, policy); BorderPane content = new BorderPane(); content.setTop(scroll); content.setBottom(buttons); return content; } @Override public void start(Stage stage) throws Exception { stage.setScene(new Scene(createContent(), 400, 200)); stage.setTitle(FXUtils.version()); stage.show(); } public static void main(String[] args) { launch(args); } @SuppressWarnings("unused") private static final Logger LOG = Logger .getLogger(ScrollPaneSizing.class.getName()); } ``` Upvotes: 3 [selected_answer]<issue_comment>username_3: I wrote a openJDK 8 version because the accepted answer only works since 9 ``` public class ScrollPaneHSkin extends ScrollPane { ScrollBar hbar; public ScrollPaneHSkin() { super(); } public ScrollPaneHSkin(Node content) { super(content); } @Override protected Skin createDefaultSkin() { return new HSkin(); } private class HSkin extends ScrollPaneSkin { HSkin() { super(ScrollPaneHSkin.this); hbarPolicyProperty().addListener((ov, old, current) -> // rude .. but visibility is updated in layout anyway hsb.setVisible(false) ); } @Override protected double computePrefHeight(double x, double topInset, double rightInset, double bottomInset, double leftInset) { double computed = super.computePrefHeight(x, topInset, rightInset, bottomInset, leftInset); if (getSkinnable().getHbarPolicy() == ScrollBarPolicy.AS_NEEDED && hsb.isVisible()) { // this is fine when horizontal bar is shown/hidden due to resizing // not quite okay while toggling the policy // the actual visibilty is updated in layoutChildren? computed += hsb.prefHeight(-1); } return computed; } } } ``` Upvotes: 1
2018/03/20
836
2,980
<issue_start>username_0: I've a class `Testendride`: ``` public Testendride guranteBoneses; ``` set the values on `response` ``` JSONObject jsonObject = new JSONObject(s); Gson gc = new Gson(); guranteBoneses = gc.fromJson(s, Testendride.class); ``` I'd like to send this `Testendride` object to the next `Activity` ``` Intent intent = new Intent(ActivityGuaranteeBonesesOffers.this, ActivityDetailGaurantee.class); intent.putExtra("gurantee","gurantee"); intent.putExtras("key", guranteBoneses); startActivity(intent); ```<issue_comment>username_1: You need to convert you `guaranteeBonueses` object to parcelable and pass it to your desired activity. You can read about parcelable in this [codepath article](https://guides.codepath.com/android/using-parcelable) You can refer following articles too <https://code.tutsplus.com/tutorials/how-to-pass-data-between-activities-with-android-parcelable--cms-29559> <https://www.101apps.co.za/articles/passing-data-between-activities.html> If you want to pass the network response to your desired activity , prefer to create a new model and pass it , as it is a good practice to seperate your network layer. Upvotes: 0 <issue_comment>username_2: You should make your `Object` implement `Parcelable`. 
A short example of how to do it: ``` public class MyObject implements Parcelable { private final String item1; private final String item2; public MyObject(String item1, String item2) { this.item1 = item1; this.item2 = item2; } public String getItem1() { return item1; } public String getItem2() { return item2; } private MyObject(Parcel in) { item1 = in.readString(); item2 = in.readString(); } public static final Creator<MyObject> CREATOR = new Creator<MyObject>() { @Override public MyObject createFromParcel(Parcel in) { return new MyObject(in); } @Override public MyObject[] newArray(int size) { return new MyObject[size]; } }; @Override public int describeContents() { return 0; } @Override public void writeToParcel(Parcel dest, int flags) { dest.writeString(item1); dest.writeString(item2); } } ``` Then you can use your object like this: ``` MyObject object = new MyObject("first", "second"); intent.putExtra("myObject", object); ``` To get the extra, use: `intent.getParcelableExtra("myObject");` Upvotes: 1 <issue_comment>username_3: I simply do it like this ``` Gson gc = new Gson(); Intent intent = new Intent(ActivityGuaranteeBonesesOffers.this, ActivityDetailGaurantee.class); intent.putExtra("gurantee","gurantee"); intent.putExtra("key", gc.toJson(guranteBoneses)); startActivity(intent); ``` And receive it in the second Activity as a String, then convert it into the object class: ``` Bundle b = new Bundle(); b = getIntent().getExtras(); Gson gc = new Gson(); guranteBoneses = gc.fromJson(b.getString("key"), Testendride.class); ``` Upvotes: 0
2018/03/20
971
3,657
<issue_start>username_0: I am using this dependency in a spring boot application: ``` org.springframework.boot spring-boot-devtools true ``` The [documentation](https://docs.spring.io/spring-boot/docs/2.0.0.RELEASE/reference/htmlsingle/#using-boot-devtools-restart-exclude) says: > > By default, changing resources in /META-INF/maven, > /META-INF/resources, /resources, **/static**, /public, or /templates does > not trigger a restart but **does trigger a live reload**. > > > The [live reload documentation](https://docs.spring.io/spring-boot/docs/2.0.0.RELEASE/reference/htmlsingle/#using-boot-devtools-livereload) says: > > The spring-boot-devtools module includes an embedded LiveReload server > that can be used to trigger a browser refresh when a resource is > changed. LiveReload browser extensions are freely available for > Chrome, Firefox and Safari from livereload.com. > > > Now, I am using Maven and my static folder is under src/main/resources, so my folder structure is: src/main/resources/static/index.html This is what's in my index.html file: ``` Getting Started: Serving Web Content HI THERE! Get your greeting [here](/greeting) ``` I using chrome browser but am not using any live reload browser extension in chrome. I run the application with this Powershell command (since I'm using spring boot maven plugin): ``` mvn clean package; java -jar target\project-name-version.jar ``` This starts up the server on localhost:8080 and displays the contents of index.html in a web page. However, when I then make changes to index.html in Eclipse IDE and save the file, and I then refresh the browser page, **I do not see the new changes.** How come live reload isn't working for me? What am I doing wrong?<issue_comment>username_1: Running the application from the IDE is not a constraint to make Developer tools working. Your problem is somewhere else. 
These commands : ``` mvn clean package; java -jar target\project-name-version.jar ``` mean that you don't use the spring-boot maven plugin to run your application. You run the self-executable jar of your fully packaged application. Consequently, Spring Boot devtools are disabled as stated by the [documentation](https://docs.spring.io/spring-boot/docs/current/reference/html/using-boot-devtools.html#using-boot-devtools) : > > **Developer tools are automatically disabled when running a fully > packaged application**. If your application is launched from java -jar > or if it is started from a special classloader, then it is considered > a “production application”. Flagging the dependency as optional is a > best practice that prevents devtools from being transitively applied > to other modules that use your project. Gradle does not support > optional dependencies out-of-the-box, so you may want to have a look > at the propdeps-plugin. > > > To run the application from the command line in exploded/dev mode, `mvn clean package` is neither required nor helpful. Just execute `mvn spring-boot:run` Upvotes: 5 [selected_answer]<issue_comment>username_2: I fixed this issue with the approach below: * First, open settings.json (Ctrl+Shift+P) and define a property there as "java.autobuild.enabled": true. * Second, open application.properties and define a property as spring.devtools.restart.pollInterval=10s Hope it works. Upvotes: 2 <issue_comment>username_3: Set Build project automatically. In IntelliJ: File –> Settings –> Build, Execution, Deployment –> Compiler –> check Build project automatically In Eclipse: Window -> Preferences -> Workspace -> General -> Build -> check ‘Build automatically’ Upvotes: 0
2018/03/20
800
2,501
<issue_start>username_0: I want to find the indices of the maximum values along axis 0 in a multi dimensional array. This can be done in `python` using `numpy`: <https://docs.scipy.org/doc/numpy/reference/generated/numpy.argmax.html> In `python`: ``` a = array([[1,2,3],[4,3,1]]) a.argmax(axis=0) ``` Will return: ``` array([1, 1, 0]) ``` How would one do the exact same thing in `swift`?<issue_comment>username_1: Try the following code: ``` let a = Array([[1,2,4],[4,3,1]]) func findMax(on axis: Int, in array: [[Int]]) -> [Int] { let result: [Int] if axis == 0 { result = array.map() { $0.index(of: $0.max()!)! // Make sure the array is not empty } } else { result = findMax(on: 0, in: transpose(array: array)) } return result } func transpose(array: [[Int]]) -> [[Int]] { let d1 = array.count let d2 = array[0].count // Take the length of the first sub-array (first row). var result: [[Int]] = Array(repeating: Array.init(repeating: 0, count: d1), count: d2) for (k, a) in array.enumerated() { for (i, v) in a.enumerated() { result[i][k] = v } } return result } findMax(on: 0, in: a) // [2, 0] findMax(on: 1, in: a) // [1, 1, 0] ``` It's not a rigorous demo. But I think this can inspire you.
Upvotes: 0 <issue_comment>username_2: One simple implementation, not actually very safe because it is based on the precondition that all subarrays have the same length and are not empty: ``` extension Array where Element == [Int] { func axisItems(axis: Int) -> [[Int]] { // let's suppose all elements have the same length guard !isEmpty else { return [] } switch axis { case 0: // transpose return self[0].indices.map { index in self.map { $0[index] } } case 1: return self default: fatalError("Invalid axis: \(axis)") } } func argMax(axis: Int) -> [Int] { // add indices to items, find max item and return its index return axisItems(axis: axis).map { $0.enumerated().max { $0.1 < $1.1 }!.0 } } } let array: [[Int]] = [[1, 2, 3], [4, 3, 1]] print(array.argMax(axis: 0)) // [1, 1, 0] print(array.argMax(axis: 1)) // [2, 0] ``` I am transposing the matrix for `axis = 0`. This is not easily extensible for arrays with higher dimensions. For that it would be better to use a custom object instead of an array. Upvotes: 3 [selected_answer]
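For a quick cross-check of the expected outputs, here is the same transpose-then-argmax idea in plain Python without numpy (a sketch, mirroring the accepted answer's axis handling):

```python
def argmax_axis(matrix, axis):
    """Index of the max along the given axis; axis=0 is column-wise,
    matching numpy's a.argmax(axis=0)."""
    if axis == 0:
        # transpose so each column becomes a row, then take per-row argmax
        matrix = [list(col) for col in zip(*matrix)]
    return [row.index(max(row)) for row in matrix]

a = [[1, 2, 3], [4, 3, 1]]
print(argmax_axis(a, 0))  # [1, 1, 0]
print(argmax_axis(a, 1))  # [2, 0]
```

Like the Swift versions above, this assumes non-empty rows of equal length.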
2018/03/20
648
2,130
<issue_start>username_0: Hi, I am new to curl but need it for a particular project. Could you help me format this code to work? I would like to get the results and print out the raw JSON on the page. Here is the code I am using: ``` $curl = curl_init(); curl_setopt($curl, CURLOPT_URL, "curl -u api key: https://api.companieshouse.gov.uk/search/companies"); $x = curl_exec($curl); curl_close($curl); print($x); ``` This is a link to the API page I am trying to use: <https://developer.companieshouse.gov.uk/api/docs/search/companies/companysearch.html> This is the example they give on the page: ``` curl -uYOUR_APIKEY_FOLLOWED_BY_A_COLON: https://api.companieshouse.gov.uk/search/companies ``` These are the parameters for the call; if possible I would like to set them as well: ``` q (required) items_per_page (optional) start_index (optional) ```<issue_comment>username_1: I simplified your request. You can try the code below. ``` $params = array("q"=>"","items_per_page"=>"","start_index"=>""); $curl = curl_init(); curl_setopt($curl, CURLOPT_USERPWD, "api key:"); curl_setopt($curl, CURLOPT_URL, "https://api.companieshouse.gov.uk/search/companies?".http_build_query($params)); $x = curl_exec($curl); curl_close($curl); print($x); ``` Upvotes: 1 <issue_comment>username_2: Thanks for this opportunity to learn more of curl :-) I tested the code below, it works. Of course you must be registered on the site and you must have got your API key. You then provide it via HTTP Basic auth, with the key as the username and an empty password (this is written [here on the site](https://developer.companieshouse.gov.uk/api/docs/index/gettingStarted/apikey_authorisation.html)).
``` $searchvalue= 'Test'; $curl = curl_init("https://api.companieshouse.gov.uk/search/companies?q=$searchvalue"); curl_setopt($curl, CURLOPT_USERPWD, "your_api_key_provided_by_the_site"); // curl_setopt($curl, CURLOPT_HEADER, true); // when debugging : to show returning header $rest = @curl_exec($curl); print_r($rest); if(curl_errno($curl)) { echo 'Curl error : ' . curl_error($curl); } @curl_close($curl); ``` Upvotes: 1 [selected_answer]
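The `curl -uYOUR_APIKEY_FOLLOWED_BY_A_COLON:` form in the question is plain HTTP Basic auth: the key goes in the username field and the password is empty. A sketch of the Authorization header this produces, using only Python's standard library (the key is a made-up placeholder, not a real Companies House key):

```python
import base64

api_key = "YOUR_APIKEY"  # placeholder, not a real key
credentials = f"{api_key}:"  # username = key, password = empty
token = base64.b64encode(credentials.encode("ascii")).decode("ascii")
auth_header = "Basic " + token
print(auth_header)  # the header value curl builds for you from -u
```

This is exactly what `CURLOPT_USERPWD` does behind the scenes in the PHP answers above.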
2018/03/20
728
2,914
<issue_start>username_0: I have a question that might be a matter of pure taste, however I just like to get some opinions from different angles to become more clear about this. A class reads and writes status values from and into a database. There are currently 5 status values, they might become a few more. For brevity, I only use 3 status values in the example. Also, though this is Java code, the concept can be transferred to other languages as well. ``` class Status { private enum Value { FAILED, FINISHED, RUNNING } // getters (read from db) // ... // setters (write into db) // ... } ``` I can't decide whether I should use generic getters and setters like ``` Value get() void set(Value value) ``` or if I should create specific getters and setters with more semantics like ``` boolean isRunning() boolean hasFailed() boolean hasFinished() // could also be named isFinished()... void setFailed() void setFinished() void setRunning() ``` The component is not performance-critical, still the obvious drawback of the semantic getters is that you need multiple database reads for `if-else if` statements and `switch` cannot be used. On the other hand, it hides the internal details of the implementation from the consumer and there is no need to work with the enum values outside this class: ``` if (status.get() == Status.Value.RUNNING) { ... } ``` vs. ``` if (status.isRunning()) { ... } ``` Any comments are appreciated.<issue_comment>username_1: If clients have to write multiple times comparisons such as : ``` if (status.get() == Status.Value.RUNNING) { ... } ``` You provide probably a too much fine grained API for the enum field containing in the `Status` class. It will force each client to have duplicate code or otherwise create their own util methods such as `boolean isRunning(Status status){return status.get() == Status.Value.RUNNING}`. It is not a good idea for two reasons : * the boiler plate code has to be in the internal/implementation of the API, not in the client side. 
* a method has to be located in the class that suits best to hold it. In this case, this is `Status`. Don't constraint clients classes to take the responsibility of this method. Note that you should not provide both : Value getter/setter + specific methods for them. It could be misleading. Upvotes: 2 <issue_comment>username_2: I agree with Mena that this is opinion based, I would prefer using isRunning, hasFailed etc. Because names give more meaning and is much shorter and more understandable. As you wrote: ``` if (status.get() == Status.Value.RUNNING) { ... } if (status.isRunning()) { ... } ``` The second option is much shorter and gives better overall sense of what is happening, but again this is more of personal thing. Also you could consider having more functional approach, something similar to Optional, for example `ifFailed(Consumer)`. Upvotes: 1
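Since the question notes the concept transfers to other languages, here is a sketch of one middle ground in Python: keep the semantic predicates callers like, but take a snapshot so an `if/elif` chain costs a single read of the underlying store (the `load_status` callable is a stand-in for the real database access):

```python
from enum import Enum


class Value(Enum):
    FAILED = "failed"
    FINISHED = "finished"
    RUNNING = "running"


class Status:
    def __init__(self, load_status):
        self._load = load_status  # e.g. a DB query returning a Value

    def snapshot(self):
        """One read; reuse the snapshot for a whole decision."""
        return Snapshot(self._load())


class Snapshot:
    def __init__(self, value):
        self._value = value

    def is_running(self):
        return self._value is Value.RUNNING

    def has_failed(self):
        return self._value is Value.FAILED

    def has_finished(self):
        return self._value is Value.FINISHED


snap = Status(lambda: Value.RUNNING).snapshot()
if snap.is_running():
    print("running")
```

The same shape works in Java: a value object returned by one `get()` call, exposing `isRunning()` etc., so callers keep the readable form without repeated reads.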
2018/03/20
508
2,116
<issue_start>username_0: In Git pull I was getting merge conflict. So I deleted all the folders and files inside it which was giving conflict in my local workspace. After which I did : ``` git pull ``` Now no conflict came and my local branch became sync to remote branch. Now when I am doing ``` git status ``` Its returning hundreds of deleted files which I removed. I assumed that most of the deleted files would be replaced by the existing files in remote but I was wrong. How can I remove the deleted files in the git status. Its because I cannot read properly the actual files which I am going to modify or delete because of this list of files.
2018/03/20
454
1,740
<issue_start>username_0: How to convert the below string to XML, ``` SET @string = ' '+@Cname+' System.String ' ``` I tried `SET @xmlstring = CONVERT(XML,@string)` but it displays below error Msg 9459, Level 16, State 1, Line 17 XML parsing: line 4, character 13, undeclared prefix
2018/03/20
900
3,328
<issue_start>username_0: I am working on a project using Spring Boot 1.5.2.RELEASE and have been tasked with adding database backed HTTP sessions. So, I got this working in the 2.0.0.RELEASE easily enough and the application started and created the tables *spring\_session* and *spring\_session\_attributes* Here's the properties I added to the later version that got things working: ``` spring.session.store-type=jdbc spring.session.jdbc.initialize-schema=ALWAYS ``` Looking at spring-boot 1.5.2.RELEASE it seems to use spring-session 1.3.0.RELEASE as the managed version so I found the docs here: <https://docs.spring.io/spring-session/docs/1.3.0.RELEASE/reference/html5/guides/httpsession-jdbc.html> No matter what I try I get the following error: > > Whitelabel Error Page This application has no explicit mapping for > /error, so you are seeing this as a fallback. Tue Mar 20 14:02:06 GMT > 2018 There was an unexpected error (type=Internal Server Error, > status=500). PreparedStatementCallback; bad SQL grammar [SELECT > S.SESSION\_ID, S.CREATION\_TIME, S.LAST\_ACCESS\_TIME, > S.MAX\_INACTIVE\_INTERVAL, SA.ATTRIBUTE\_NAME, SA.ATTRIBUTE\_BYTES FROM > SPRING\_SESSION S LEFT OUTER JOIN SPRING\_SESSION\_ATTRIBUTES SA ON > S.SESSION\_ID = SA.SESSION\_ID WHERE S.SESSION\_ID = ?]; nested exception > is org.postgresql.util.PSQLException: ERROR: relation "spring\_session" > does not exist Position: 127 > > > Here is my application.properties (I am trying to get the tables to appear in my PostgreSQL database)- these tables should be created for me automatically, right? ``` spring.datasource.url=jdbc:postgresql://localhost:5432/sandbox spring.datasource.password=<PASSWORD> spring.datasource.username=sandbox spring.thymeleaf.cache=false spring.template.cache=false spring.session.store-type=jdbc spring.session.jdbc.initializer.enabled=true ```
2018/03/20
689
2,723
<issue_start>username_0: Is there a way to run a python that can **process inputs interactively**? Without user input. That way methods can be called **without needed to import** and initialize the script. What I have: ``` import very_heavy_package very_heavy_package.initialize() if very_heavy_package.check(input_file): do_something() else: do_something_else() ``` I want something like: ``` import very_heavy_package very_heavy_package.initialize() @entry_point() def check_something(input_file): if very_heavy_package.check(input_file): do_something() else: do_something_else() ``` `import` and `initialize()` lines take a very long time, but `check_something()` is pretty much instantaneous. I want to be able to `check_something()` multiple times on demand, without executing the full script all over. --- I know this could be achieved with a server built in flask, but it seems a little overkill. Is there a more "local" way of doing this? This example in particular is about running some *Google Vision* processing in an image from a surveillance camera on a Raspberry Pi Zero. Initializing the script takes a while (~10s), but making the API request is very fast(<100ms). I'm looking to achieve fast response time.
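A minimal sketch of the pattern the question describes: pay the heavy initialization once, then answer each check on demand from a long-running loop. Every name below is hypothetical — the real `very_heavy_package` is simulated, and in real use you would pass `sys.stdin` as the input stream:

```python
import io


def initialize():
    # stands in for the slow import/initialize step (~10 s in the question)
    return {"ready": True}


def check(state, input_file):
    # stands in for the fast per-request check (<100 ms)
    return input_file.endswith(".jpg")


def serve(lines, state, out):
    # one file name per line; respond immediately, never re-initialize
    for line in lines:
        name = line.strip()
        if name:
            out.write(f"{name}: {'match' if check(state, name) else 'no match'}\n")


# demo with an in-memory stream instead of sys.stdin
buf = io.StringIO()
serve(["cam1.jpg", "notes.txt"], initialize(), buf)
print(buf.getvalue(), end="")
```

The same idea scales up to a local socket or named pipe if the checks must come from another process.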
2018/03/20
635
2,165
<issue_start>username_0: Is there a way to create a table footer in pdfMake? I have a big table that only fits on multiple pages. I want to repeat the table footer on each page.<issue_comment>username_1: Seems like it is still not implemented: <https://github.com/bpampuch/pdfmake/issues/620> Upvotes: 3 [selected_answer]<issue_comment>username_2: Add this to your document: ``` footer: { columns: [ '', { alignment: 'right', text: 'Footer text' } ], margin: [10, 0] } ``` Upvotes: 2 <issue_comment>username_3: I finally figured out how to do this. (1.) Increase your current "headerRows" count by 1. (2.) Put the footer row between your header row and data rows. (3.) Set marginTop of footer row to a little less than the height of the page. (4.) Set marginBottom of footer row to ((marginTop + height of row) \* -1). This pushes the table footer outside the space of the table into the footer space of the page. Example: ``` table: { headerRows: 2, widths: [700], body: [ [{ text: 'My Header' }], [{ marginTop: 480, marginBottom: -490, text: 'My Footer' }], //480 roughly fits Landscape-Letter [{ text: 'My Data Row 1' }], [{ text: 'My Data Row 2' }], // more data rows [{ text: 'My Data Row N' }] ]} ``` Upvotes: 1 <issue_comment>username_4: Try this footer: ``` footer: function (currentPage, pageCount) { return currentPage.toString() + " of " + pageCount; }, ``` or ``` footer: { columns: [ 'Report generated on 8/30/2021 6:02:31PM', { text: 'Page 1 of 2', alignment: 'right' }, // { text: 'Developed by KRATOS Fitness Software ', alignment: 'left' }, ], }, ``` ``` footer: { columns: [ 'Report generated on 8/30/2021 6:02:31PM', 'Developed by XYZ Fitness Software', { alignment: 'right', text: 'Page 1 of 2' }, { alignment: 'right', text: 'www.xyz.com' }, ], margin: [60, 10, 60, 10 ] }, ``` It works. Upvotes: -1
2018/03/20
220
704
<issue_start>username_0: I set a breakpoint in Razor code but I am not able to debug the code. Here is a picture which clearly says **the breakpoint will not currently be hit**, so please tell me what option I need to turn on in the VS2013 IDE. [![enter image description here](https://i.stack.imgur.com/k1Jlm.png)](https://i.stack.imgur.com/k1Jlm.png)<issue_comment>username_1: 1. Close your cshtml file 2. Rebuild your project 3. Reopen your cshtml file; now you can debug your C# code Upvotes: 2 <issue_comment>username_2: Make sure you are in **debug** mode [![enter image description here](https://i.stack.imgur.com/dzbRW.png)](https://i.stack.imgur.com/dzbRW.png) Upvotes: 0
2018/03/20
1,294
3,680
<issue_start>username_0: [![enter image description here](https://i.stack.imgur.com/UI1n3.png)](https://i.stack.imgur.com/UI1n3.png) How can i add text at each line end like 97th, 95th etc using **charts.js**. Any help would be great. I tried documentation and did googling.<issue_comment>username_1: You can [extend](http://www.chartjs.org/docs/latest/developers/charts.html#extending-existing-chart-types) line controller drawing serie labels at the end of each line: ```js var chartData = { datasets: [{ label: 'serie1', backgroundColor: 'rgba(255, 255, 255, 0.0)', borderColor: 'rgba(0, 119, 290, 0.6)', data: [{x: 1, y: 10}, {x: 2, y: 12}, {x: 3, y: 18}, {x: 4, y: 5}, {x: 5, y: 25}, {x: 6, y: 30}] }, { label: 'serie2', backgroundColor: 'rgba(255, 255, 255, 0.0)', borderColor: 'rgba(120, 0, 190, 0.6)', data: [{x: 1, y: 8}, {x: 2, y: 9}, {x: 3, y: 16}, {x: 4, y: 8}, {x: 5, y: 12}, {x: 6, y: 20}] }, { label: 'serie3', backgroundColor: 'rgba(255, 255, 255, 0.0)', borderColor: 'rgba(0, 200, 10, 0.6)', data: [{x: 1, y: 10}, {x: 2, y: 5}, {x: 3, y: 26}, {x: 4, y: 18}, {x: 5, y: 19}, {x: 6, y: 10}] }] }; var originalController = Chart.controllers.line; Chart.controllers.line = Chart.controllers.line.extend({ draw: function() { originalController.prototype.draw.call(this, arguments); drawLabels(this); } }); function drawLabels(t) { ctx.save(); ctx.font = Chart.helpers.fontString(12, Chart.defaults.global.defaultFontStyle, Chart.defaults.global.defaultFontFamily); ctx.fillStyle = 'red'; ctx.textBaseline = 'bottom'; var chartInstance = t.chart; var datasets = chartInstance.config.data.datasets; datasets.forEach(function(ds, index) { var label = ds.label; var meta = chartInstance.controller.getDatasetMeta(index); var len = meta.data.length-1; //console.log(ds, meta.data[len]._model); var xOffset = meta.data[len]._model.x+10; var yOffset = meta.data[len]._model.y; ctx.fillText(label, xOffset, yOffset); }); ctx.restore(); } var ctx = 
document.getElementById("myChart").getContext("2d"); var myBar = new Chart(ctx, { type: 'line', data: chartData, options: { legend: { display: false }, scales: { xAxes: [{ type: 'linear', scaleLabel: { display: true, labelString: 'x' } }], yAxes: [{ ticks: { min: 0 }, scaleLabel: { display: true, labelString: 'y' } }] }, layout: { padding: { left: 0, right: 60, top: 20, bottom: 0 } } } }); ``` ```css #myChart { border: solid 1px rgba(255, 0, 0, 0.5); } ``` ```html ``` Here is a jsfiddle: <http://jsfiddle.net/username_171/q62yv516/> Upvotes: 4 [selected_answer]<issue_comment>username_2: ``` plugins = [ { id: 'customPlugin', beforeDraw: chart => { const datasets = chart.config.data.datasets; if (datasets) { const { ctx } = chart.chart; ctx.save(); ctx.fillStyle = 'black'; ctx.font = '400 12px Open Sans, sans-serif'; for (let i = 0; i < datasets.length - 1; i++) { const ds = datasets[i]; const label = ds.label; const meta = chart.getDatasetMeta(i); const len = meta.data.length - 1; const xOffset = chart.canvas.width - 26; const yOffset = meta.data[len]._model.y; ctx.fillText(label, xOffset, yOffset); } ctx.restore(); } } } ]; ``` Same as above, but as a plugin Upvotes: 2
2018/03/20
2,018
6,985
<issue_start>username_0: Why did I get -4 from the community? Nobody knows the answer, so I get -4 for asking a question you cannot figure out? Please explain... The question is highly technical and hard, and deserves a much better reaction from the community. Unless I should start asking easy questions... Wrong answers I got: 1. "Just call the OAuth endpoint with your user and password" No, I tracked all browser requests from login to the follow action (when the bearer token is needed) and at no time does the browser call the server to get the bearer token, so it is not that easy. Nonetheless the browser has the bearer token, as it sends it after login when you want to do an action such as follow. (This is why I believe it generates it in JS from login data.) 2. "Just use the auth\_token" No, the auth\_token and the bearer are both needed and sent when you follow; if one is missing the call will fail, and they are very different from each other... 3. Please do not answer that twitter has an API; I know, and it is not the question. This is a hard question for really advanced developers. I am trying to research how Twitter creates the bearer\_token. This token is provided later on, after login, to follow or to do any other actions. 
I put fiddler on, and after a successful login through the twitter website (not an app or anything) Twitter calls: <https://twitter.com/sessions> This function returns 5 cookies: --------------------------------- ``` set-cookie: fm=0; Expires=Mon, 19 Mar 2018 23:58:22 GMT; Path=/; Domain=.twitter.com; Secure; HTTPOnly set-cookie: ads_prefs="HBERAAA="; Expires=Thu, 16 Mar 2028 23:58:32 GMT; Path=/; Domain=.twitter.com set-cookie: kdt=1lMiTzCNWYEh4IrZmXmF1t9cjAT4LNVfUhfvBIa5; Expires=Tue, 17 Sep 2019 23:58:32 GMT; Path=/; Domain=.twitter.com; Secure; HTTPOnly set-cookie: _twitter_sess=BAh7CiIKZmxhc2hJQzonQWN0aW9uQ29udHJvbGxlcjo6Rmxhc2g6OkZsYXNo%250ASGFzaHsABjoKQHVzZWR7ADoPY3JlYXRlZF9hdGwrCMcps0BiAToMY3NyZl9p%250AZCIlZWM2ZmU1M2RlOGU3NmI4Yzc4ZDY4MTg3YmFiMmNlZTk6B2lkIiUyOTg4%250AOGE4NTY0MDNmNTY5NGU0YzM0MTI2ZjNkNWNhZjoJdXNlcmwrB5ynikA%253D--dc71be55646bbc98ff0043e50e7fe15c4fa80cc4; Path=/; Domain=.twitter.com; Secure; HTTPOnly set-cookie: remember_checked_on=1; Expires=Thu, 16 Mar 2028 23:58:32 GMT; Path=/; Domain=.twitter.com set-cookie: twid="u=1082828700"; Expires=Thu, 16 Mar 2028 23:58:32 GMT; Path=/; Domain=.twitter.com; Secure ``` set-cookie: auth\_token=<PASSWORD>; Expires=Thu, 16 Mar 2028 23:58:32 GMT; Path=/; Domain=.twitter.com; Secure; HTTPOnly ------------------------------------------------------------------------------------------------------------------------------------------------------- Later on, when I follow somebody, twitter calls: <https://api.twitter.com/1.1/friendships/create.json> Among the headers sent on this POST request, one important header is: "authorization: Bearer" AAAAAAAAAAAAAAAAAAAAAPYXBAAAAAAACLXUNDekMxqa8h%2F40K4moUkGsoc%3DTY<KEY> My question is simple but important for my research: how is this Bearer token created? 1. Obviously it is created on the client side through JS: inspecting all the twitter calls from login to follow (where the bearer is used), no other call goes to the server to get the bearer token. 
This is a fact established after long research. 2. I suspect that through some sort of encoding of the auth\_token, twid or kdt, or a combination of those headers sent to me after login, the bearer token is created on the client side and then sent to the server. 3. After investigating all the js files of twitter (and unminifying them using an online tool) I came up with some functions that create it, but I still cannot find a way to create the bearer token on my side. Any help will be highly appreciated. I am sure the concept is simple: a bearer token generated on the client side from data obtained from the login cookies, but I cannot figure out how, and the js is hard to understand.... Here are some snippets taken from 3 files: ``` this._addBearerTokenToUrl = function(t, e) { return t ? h.a.addToUrl(e, { bearer_token: t }) : e function n() { var t = arguments.length > 0 && void 0 !== arguments[0] ? arguments[0] : u; return function(e, i) { var n = { authorization: "Bearer " + t }; a.a.user.id && (n["x-twitter-auth-type"] = "OAuth2Session"); var o = Object(r.a)(c); return o && (n["x-csrf-token"] = o), e.headers = s({}, e.headers, n), i(e) } } function n() { var t = arguments.length > 0 && void 0 !== arguments[0] ? arguments[0] : u; return function(e, i) { var n = { authorization: "Bearer " + t }; a.a.user.id && (n["x-twitter-auth-type"] = "OAuth2Session"); var o = Object(r.a)(c); return o && (n["x-csrf-token"] = o), e.headers = s({}, e.headers, n), i(e) } } if (i.useOAuthSessionAuth) { var E = this.getCsrfToken(); S = { authorization: "Bearer " + k, "x-twitter-auth-type": "OAuth2Session", "x-csrf-token": E }, x = { withCredentials: !0 } } ```<issue_comment>username_1: > > How is this Bearer token created? > > > That is specific to the server implementation, but for sure it is not generated on the client. The token contains some information that can lead back that token to your user (similar to what a logon cookie would do for you). You can't generate such a token yourself. 
If you could, you could create one for me without my knowing, which would be a huge security loophole. You can however obtain a token through some sort of authentication endpoint, usually something OAuth-like (on which you can read more in the [Twitter docs](https://developer.twitter.com/en/docs/basics/authentication/overview/oauth)). As far as I know, OWIN (the ASP.NET OAuth server implementation) uses an encryption mechanism that takes the machine key registered in the `machine.config` as input to ensure only the proper servers can decrypt the token. Upvotes: 2 <issue_comment>username_2: Well, when you load the home page in twitter, the bearer is in the page content; I usually extract it from there, and I did not find any request to server-side code that it comes from. Also, you have to wait until the page is loaded and then extract the auth token > > Search for : bearer\_token > > > Also, the token is only received once on login and then cached; if you extract the cache you will probably find it there. Try to monitor the login requests and make sure that you cleared the cache to find the request Upvotes: 0
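For contrast with the in-page token the question is about: Twitter's *documented* application-only flow (the OAuth route the first answer points to) obtains a bearer token by POSTing `grant_type=client_credentials` to `https://api.twitter.com/oauth2/token`, authenticated with a Base64-encoded consumer key and secret. A minimal sketch of building that request's headers — the credentials below are placeholders, not real keys:

```python
import base64

def bearer_token_request_headers(consumer_key: str, consumer_secret: str) -> dict:
    """Build the headers for Twitter's documented application-only auth
    request (POST https://api.twitter.com/oauth2/token).  The body of that
    request is always 'grant_type=client_credentials'."""
    credentials = f"{consumer_key}:{consumer_secret}".encode("utf-8")
    basic = base64.b64encode(credentials).decode("ascii")
    return {
        "Authorization": f"Basic {basic}",
        "Content-Type": "application/x-www-form-urlencoded;charset=UTF-8",
    }
```

This is a different mechanism from the session-bound bearer the browser sends; it only illustrates the sanctioned way to get a bearer token from the server.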
2018/03/20
461
1,549
<issue_start>username_0: my Object has this structure: ``` { "head" : { "nestedObject" : {...}, "someMoreObjects": {...} } } ``` how can I change the key "head" into i.e. "newHead" without loosing the child values?<issue_comment>username_1: I think you can do like that ? ```js const oldObj = { "head": { "nestedObject": { // ... }, "someMoreObjects": { // ... } } } const newObj = { "newHead": { ...oldObj.head } } console.log(newObj) ``` Upvotes: 2 <issue_comment>username_2: I believe something like this would do the trick: ```js const myobj = { "head": { "nestedObject": { // ... }, "someMoreObjects": { // ... } } }; myobj.newHead = myobj.head; delete myobj.head; console.log(myobj) ``` Basically just assign the value of `head` to the new property name (`newHead`), then delete the old property name. Upvotes: 2 <issue_comment>username_3: To achieve expected result, use below option of using JSON.stringify, replace, JSON.parse 1. JSON.stringify will change object to string format 2. replace method looks for the head and replaces with newHead 3. JSON.parse will parse data to javascript object ```js var Obj = { "head" : { "nestedObject" : {}, "someMoreObjects": {} } } Obj = JSON.parse(JSON.stringify(Obj).replace("head", 'newHead')); console.log(Obj) ``` code sample - <https://codepen.io/nagasai/pen/oqBKRb?editors=1010> Upvotes: 2 [selected_answer]
2018/03/20
833
2,700
<issue_start>username_0: I am trying to view a list of items from the database (I am using Entity Framework). My Repository method: ``` public List<string> getListOfItems(int i) { return (from x in db.Items where x.ID == i select x.Text).ToList(); } ``` My Controller: ``` public ActionResult Index() { var itemOutput = repo.getListOfItems(1); // I just put 1 since I didn't know how to specify "i" - However theoretically it should return the first item in the database, but it's not ViewBag.itemOutput = itemOutput ; return View(); } ``` Model: ``` public class items { [Key] public int ID{ get; set; } public string name{ get; set; } public string quantity{ get; set; } } ``` ItemModel: ``` public class itemModels { public List<List<string>> itemData{ get; set; } } ``` View: ``` @foreach (var item in ViewBag.itemOutput ) { | | | --- | | @item.name | } ```<issue_comment>username_1: getListOfItems() returns a list of strings, but you're referring to an actual object in ViewBag.itemOutput Instead of select x.Text do a select x, and make the return value `List<items>` ``` public List<items> getListOfItems(int i) { return (from x in db.Items where x.ID == i select x).ToList(); } ``` Then you can keep your razor template the same to refer to @item.name Upvotes: 0 <issue_comment>username_2: Answer: ------- `ViewBag.itemOutput` is a `List<string>` which makes `item` a `string`. Therefore, use `@item` instead of `@item.name` (as `string` does not have a `.name` property) in your view: ``` @foreach (var item in ViewBag.itemOutput ) { | | | --- | | @item | } ``` Also, to get the full list, you could do: ``` public List<string> getListOfItems() { return (from x in db.Items select x.Text).ToList(); } ``` And then just call `getListOfItems()` with no param. 
--- Random comments: ---------------- 1) Don't use a plural class name, *unless the class is somewhat a collection of things* ``` --> public class item // without s ``` 2) You said in comments that `items` are full `varchar`, which is untrue as per your class definition (you have `ID`, `name` & `quantity`). 3) Using `string` for `quantity` is a bit weird. 4) You could indeed change your `getListOfItems` method to: ``` public List<items> getListOfItems() { return (from x in db.Items select x).ToList(); // which can be simplified to: // return db.Items.ToList(); } ``` but you should then change your view back to `@item.name`. This would however allow you to do: ``` @foreach (var item in ViewBag.itemOutput ) { | | | | --- | --- | | @item.name | @item.quantity | } ``` 5) You have an `ItemModel` but you are not using it. You could modify it and use it instead of the `ViewBag`. Upvotes: 3 [selected_answer]
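The difference point 4 describes — projecting one field versus returning the full row — can be sketched language-agnostically (Python as an illustrative stand-in for the LINQ query; `Item` and the sample data are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class Item:
    id: int
    name: str
    quantity: str

# stand-in for db.Items
db_items = [Item(1, "hammer", "3"), Item(1, "nails", "200"), Item(2, "saw", "1")]

def get_names(i):
    """Mirrors `select x.name` -> a plain list of strings."""
    return [x.name for x in db_items if x.id == i]

def get_items(i):
    """Mirrors `select x` -> full objects, so both name and quantity stay available."""
    return [x for x in db_items if x.id == i]
```

With the projected version the view can only print the string itself; with the full-object version it can read any column.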
2018/03/20
801
2,682
<issue_start>username_0: I am trying to install the latest Meteor 1.6 using Chocolatey on windows. My Windows configuration is > > Processor - core i7, 2nd gen, 2.2 Ghz > > > RAM - 8 GB, > > > System Type - 64 bit > > > I also used the below command in COMMAND PROMPT to start the installation, ``` choco install meteor -y --execution-timeout 10000 ``` still I am facing the below issue. ``` meteor v0.0.2 [Approved] meteor package files install completed. Performing other installation steps. Downloading meteor 64 bit from 'https://packages.meteor.com/bootstrap-link?arch=os.windows.x86_64' Progress: 4% - Saving 8.12 MB of 171.74 MB Download of meteor-bootstrap-os.windows.x86_64.tar.gz (171.74 MB) completed. ERROR: Chocolatey expected a file at 'C:\Users\HP\AppData\Local\Temp\chocolatey\ meteor\0.0.2\meteor-bootstrap-os.windows.x86_64.tar.gz' to be of length '180086547' but the length was '8552029'. The install of meteor was NOT successful. Error while running 'C:\ProgramData\chocolatey\lib\meteor\tools\chocolateyinstall.ps1'. See log for details. Chocolatey installed 0/1 packages. 1 packages failed. See the log for details (C:\ProgramData\chocolatey\logs\chocolatey.log). Failures - meteor (exited -1) - Error while running 'C:\ProgramData\chocolatey\lib\meteo r\tools\chocolateyinstall.ps1'. See log for details. ```<issue_comment>username_1: I have just attempted the installation of the meteor package on my own virtual machine, and I can tell you that the installation was indeed successful. Looking at the log, it would appear that the download of the tar.gz did not complete successfully, and as a result, the file is not the same size as expected based on the response headers. The best advice would be to try the installation again. Upvotes: 0 <issue_comment>username_2: I suggest not using Windows **CMD**; use **Power Shell** instead and the issue will be resolved. Chocolatey is behaving very strangely, as you can see from the error log in my question. 
Below is the success I got using Power Shell [![enter image description here](https://i.stack.imgur.com/LezBg.png)](https://i.stack.imgur.com/LezBg.png) Upvotes: 3 [selected_answer]<issue_comment>username_3: I had exactly the same problem (same error message while installing meteor 1.6 on the windows cmd shell running in admin mode) until I tried installing with powershell running in administrator mode and suspending windows defender. Hope this helps. Upvotes: 0 <issue_comment>username_4: I would suggest making the timeout value about 100000. This will at least save you from the occasional termination of the download process due to changes in internet speed. Upvotes: 0
2018/03/20
1,237
4,902
<issue_start>username_0: It's been a decade since I've written VBA, and I'm trying to see what I broke. I wrote a macro which copies data from one sheet to another, 1 column at a time for 4 different columns, and pastes it in the next free cell. This macro worked, but I would like to adjust it to only copy certain data. Below is an example: I am trying to only copy column A if the date value in E is equal to the input date value you enter when the macro starts. I am having the most trouble balancing the `If/Then` with the `For/Next`. Every time I place an `End If` or `Next`, I receive errors. ``` Dim DateValue As Variant DateValue = InputBox("Enter the date to copy") 'copy and paste column A to column A if E = input date For Each Cell In Worksheets("Enrichment Report").Range("E:E") If Cell.Value = DateValue Then Sheets("Enrichment Report").Select iMaxRow = 100 For iCol = 1 To 1 For iRow = 2 To iMaxRow With Worksheets ("Enrichment Report").Cells(iRow, iCol) If .Value = "" Then 'empty row, do nothing Else .Copy Sheets("Intake Form").Select Range (A" & Rows.Count).End(xlUp).Offset(1).Select Activesheet.Paste End If End With Next End If Next iRow Next iCol ```<issue_comment>username_1: Obviously, with the proper indentation done by CallumDA, it should be written as below. 
Also there is a typo in the `Range (A"`, it should be `Range ("A"`: ``` For Each Cell In Worksheets("Enrichment Report").Range("E:E") If Cell.Value = DateValue Then Sheets("Enrichment Report").Select iMaxRow = 100 For iCol = 1 To 1 For iRow = 2 To iMaxRow With Worksheets ("Enrichment Report").Cells(iRow, iCol) If .Value = "" Then 'empty row, do nothing Else .Copy Sheets("Intake Form").Select Range ("A" & Rows.Count).End(xlUp).Offset(1).Select Activesheet.Paste End If End With Next iRow Next iCol End If Next ``` Upvotes: 0 <issue_comment>username_2: I think the following code will be much easier for you to follow Also, it will be much faster looping through occupied cells with data in Column E, and not the entire column. ***Code*** ``` Option Explicit Sub Test() Dim LastRow As Long, iMaxRow As Long, iCol As Long, iRow As Long Dim DateValue As Variant Dim Cell As Range DateValue = InputBox("Enter the date to copy") With Worksheets("Enrichment Report") ' get last row with data in column E LastRow = .Cells(.Rows.Count, "E").End(xlUp).Row 'copy and paste column A to column A if E = input date For Each Cell In .Range("E1:E" & LastRow) If Cell.Value = DateValue Then iMaxRow = 100 For iCol = 1 To 1 For iRow = 2 To iMaxRow If .Cells(iRow, iCol).Value = "" Then 'empty row, do nothing Else .Cells(iRow, iCol).Copy Sheets("Intake Form").Range("A" & Rows.Count).End(xlUp).Offset(1).PasteSpecial xlPasteAll End If Next iRow Next iCol End If Next Cell End With End Sub ``` Upvotes: 1 <issue_comment>username_3: you could use `AutoFilter()` and avoid looping also, use `Application.InputBox()` method instead of VBA `InputBox()` function to exploit its `Type` parameter and force a numeric input ``` Sub mmw() Dim targetSht As Worksheet Set targetSht = Sheets("Intake Form") Dim DateValue As Variant DateValue = Application.InputBox("Enter the date to copy", , , , , , , 2) With Worksheets("Enrichment Report") ' reference your "source" sheet With .Range("A1", .Cells(.Rows.Count, 
"E").End(xlUp)) ' reference its columns A:E cells from row 1 down to column E last not empty cell .AutoFilter Field:=1, Criteria1:="<>" 'filter on referenced range 1st column with not empty cells .AutoFilter Field:=5, Criteria1:=CStr(CDate(DateValue)) If Application.WorksheetFunction.Subtotal(103, .Cells) > 1 Then _ .Resize(.Rows.Count - 1).Offset(1).SpecialCells(xlCellTypeVisible).Copy _ Sheets("Intake Form").Cells(Sheets("Intake Form").Rows.Count, "A").End(xlUp).Offset(1) '<--| if any cell filtered other than headers (which get always filtered) then copy filtered values to "paste" sheet End With .AutoFilterMode = False End With End Sub ``` Upvotes: 1 [selected_answer]
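Stripped of the worksheet plumbing, what all three answers implement is a plain filter: keep the column-A value of every row whose column-E date matches the input, skipping empty cells. A Python illustration of that logic (hypothetical data shape, not Excel code):

```python
def copy_matching(rows, target_date):
    """Keep the column-A value of every (col_a, col_e) row whose
    column-E date equals the requested date, skipping empty cells."""
    return [a for a, e in rows if e == target_date and a != ""]

rows = [("alpha", "2018-03-20"), ("", "2018-03-20"), ("beta", "2018-03-21")]
```

The VBA versions differ only in *how* they walk the rows (explicit loop vs. `AutoFilter`), not in this underlying predicate.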
2018/03/20
945
3,622
<issue_start>username_0: Is a tabulator a valid character for whitespace between elements in XML/SOAP? ``` \*TAB\* \*TAB\***test**\*TAB\* *TAB* stands for \t = 0x09 ```
2018/03/20
1,060
3,839
<issue_start>username_0: **input**: ``` random adsf 845 asdsf.$Ecdda-string0.rand me39 84abd dee rand a[s% 845 a1sEdsf.$cdNda-string1.rand me39 84abd ``` **output** - keep everything between `da-` and `.` ``` string0 string1 ``` **what I have tried so far**: ``` sed -e 's/.*da-\(.*\)./\1/' file grep -o -P '(?<=da-).*(?=.)' file grep -o -P '(?<=da-).*(?=\.)' file ```
2018/03/20
1,548
4,992
<issue_start>username_0: There are a lot of matching "X" and "Y" questions in R on this site but I think I have a new one. I have two datasets, one is shorter (500 rows) and has one entry per individual. The second is bigger (~20,000 rows) and an individual can have multiple entries. Both have columns for date of birth and gender. My goal is to find people represented in both datasets and to start by finding date of birth and gender matches. My python influenced brain came up with this horrifically slow solution: ``` dob_big <- c('1975-05-04','1968-02-16','1985-02-28','1980-12-12','1976-06-06','1979-06-24','1981-01-28', '1985-01-16','1984-03-04','1979-06-26','1988-12-22','1975-10-02','1968-02-04','1972-02-01', '1981-08-06','1989-01-21','1956-06-25','1986-01-19','1980-03-24','1965-08-16') gender_big <- c(0,0,0,0,1,1,1,1,0,0,0,0,1,1,1,1,0,0,0,0) big_df <- data_frame(date_birth = dob_big, gender = gender_big) dob_small <- c('1985-01-16','1984-03-04','1979-06-26') gender_small <- c(1,0,1) small_df <- data_frame(date_birth = dob_small, gender = gender_small) for (i in 1:length(big_df$date_birth)) { save_row <- FALSE for (j in 1:length(small_df$date_birth)) { if (big_df$date_birth[i] == small_df$date_birth[j] & big_df$gender[i] == small_df$gender[j]) { print(paste("Match found at ",i,",",j)) save_row <- TRUE } } if (save_row == TRUE) { matches <- c(matches,i) } } ``` Is there a more functional solution that would run faster in R?
2018/03/20
388
1,267
<issue_start>username_0: I have to deliver an application running in several containers to a customer. I can export all single images to files with: ``` docker save --output app1.tar app1 docker save --output app2.tar app2 ... ``` My goal is to deliver a ZIP-folder with only the docker-compose.yml and the images (app1.tar,app2.tar...) needed to run the containers. Is there any way to reference an image-file in the `docker-compose.yml`?<issue_comment>username_1: No, you can't reference a tarball from a docker-compose file. You need to provide an additional script with the ZIP folder. This script would load the images, and start docker-compose. It can be more or less like below: ``` docker load --input app1.tar docker load --input app2.tar docker-compose up ``` (Note that archives created with `docker save` are restored with `docker load`; `docker import` is for filesystem tarballs created with `docker export`.) Upvotes: 4 [selected_answer]<issue_comment>username_2: Try to export zip the plesk onyx docker. Use this code: ``` `FROM wpcache MAINTAINER wpCache® <EMAIL> ENV WPCACHE_VERSION="2.0" # download wpcache RUN curl -O "https://github.com/wpCache/wpCache/raw/master/version/2.0.zip" # unarchive wpcache RUN unzip 2.0.zip` ``` ``` find PATH PLESK export zip export per domain how to make dir main root example: domain.com /wpCache docker export zip /wpCache ``` Upvotes: -1
2018/03/20
879
3,185
<issue_start>username_0: I want to enable or configure https on my tomcat 8 server. This requires me to configure certificate paths. I have received a .pem file; how do I use this .pem file to configure https on tomcat?<issue_comment>username_1: To enable `https` on your project, follow the steps below: **1-** Go to your `JAVA_HOME` and run the following command: (your java directory may be different) ``` "C:\Program Files\Java\jre1.8.0_161\bin\keytool" -genkey -alias tomcat -keyalg RSA -keystore \path\to\your\directory\keystore.exe ``` It will take you through a process, and will ask for a password for the `keystore.exe`. Remember this **password**. **2-** At `\path\to\your\directory`, you should have your `keystore.exe`. **3-** Now, in your apache tomcat directory, open `server.xml` and write the following code: ``` ``` **4-** Restart the server and open your project with `https` and port 9999. You will find your project on `https` now. Upvotes: -1 <issue_comment>username_2: If the certificate files, including the Root, Intermediate and Primary certificates, were received in PEM format from the Certificate Authority for your domain, then import the certificate files into the Java Keystore using the following commands in the keytool command line utility: ``` "%JAVA_HOME%\bin\keytool" -import -trustcacerts -alias root -file RootCertFileName.crt -keystore keystore.key "%JAVA_HOME%\bin\keytool" -import -trustcacerts -alias intermediate -file IntermediateCertFileName.crt -keystore keystore.key "%JAVA_HOME%\bin\keytool" -import -trustcacerts -alias tomcat -file PrimaryCertFileName.crt -keystore keystore.key ``` **Note:** If you did not specify the alias during the keystore creation, the default value will be 'mykey'. Upon executing the commands successfully, you will have a .keystore file that needs to be copied to the home directory. Now open the Tomcat configuration file (server.xml) in a text editor and locate the element whose port is 8443. 
Specify keystoreFile and keystorePass as follows: ``` ``` And, save your configuration file and restart the server to enable SSL on Tomcat using the .pem file. You can follow the instructions in the post: <https://tomcat.apache.org/tomcat-8.0-doc/ssl-howto.html> to enable SSL on a Tomcat 8 server. Upvotes: -1 <issue_comment>username_3: While most answers concentrate on versions 7.0 and 8.0 of Tomcat that were supported at the time of the question, since version 8.5.2 (May 2016) it is possible to use PEM files directly without conversion to a PKCS12 file. You can either: * put the PEM encoded private key and all certificates in the order from leaf to root into a single file (let's say `conf/cert.pem`) and use: ```xml ``` Storing both private key and certificate in the same file is highly **discouraged**. * put the private key in `conf/privkey.pem` and the certificates (in the usual order) in `conf/cert.pem` and use: ```xml ``` * use three separate files: e.g. `conf/privkey.pem` for the private key, `conf/cert.pem` for the server certificate and `conf/chain.pem` for the intermediary certificates and use: ```xml ``` This configuration is supported for all three connector types: `NIO`, `NIO2` and `APR`. Upvotes: 2
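For reference, a sketch of the three-separate-files variant from the last answer, using the `Certificate` attribute names documented for Tomcat 8.5+ (the port, protocol and file paths here are placeholders — adjust them to your installation):

```xml
<Connector port="8443" protocol="org.apache.coyote.http11.Http11NioProtocol"
           maxThreads="150" SSLEnabled="true">
    <SSLHostConfig>
        <Certificate certificateKeyFile="conf/privkey.pem"
                     certificateFile="conf/cert.pem"
                     certificateChainFile="conf/chain.pem" />
    </SSLHostConfig>
</Connector>
```

For the two-file variant, drop `certificateChainFile` and point `certificateFile` at the file holding the full chain; the exact semantics are in the Tomcat SSL/TLS configuration reference.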
2018/03/20
295
954
<issue_start>username_0: Is it possible to count the ion-options inside the ion-select? ``` ``` `console.log(something)` and I will get 3? Thanks.<issue_comment>username_1: Something like ``` @ViewChildren('ion-option') ionOptions: QueryList; ``` and then ``` ionOptions.length; ``` Upvotes: 0 <issue_comment>username_2: [EDIT] Sorry for my last answer, which was wrong. You can also use [Viewchild](https://angular.io/api/core/ViewChild) to access the DOM element if you don't want to use document.getElementById('mySelect') So, the solution with ViewChild: myPage.html ``` Bacon Black Olives Extra Cheese Mushrooms Pepperoni Sausage ``` First, import it in your component: ``` import { Component, ViewChild } from '@angular/core'; ``` Then declare your variable: ``` @ViewChild('mySelect') selectDom; ionViewDidLoad(){ console.log(this.selectDom._options.length); // = 6 in my case } ``` Upvotes: 3 [selected_answer]
2018/03/20
490
1,684
<issue_start>username_0: I have a web application with heavy JS scripts (by heavy I mean lots of client processing which can't be done on the server side). After 1 hour or so (not constant) of processing I get Chrome's *"Aw, Snap!"* error, I have debugged as suggest in <https://superuser.com/questions/607563/how-to-determine-what-is-causing-chrome-to-show-the-aw-snap-dialogue> and I noticed that everytime I get the error, the log is prompting `WARNING:audio_sync_reader.cc(177)] ASR: No room in socket buffer.` I strongly believe that I am kind of running out of memory, because if I open other tabs after this error I get others *"Aw, Snap!"*. However, considering that my JS script is long and it takes a long time to throw the error, how can I identify which piece of code is raising it? PS.: I also have many DOM manipulations (mainly insertions on a table) Thanks!
2018/03/20
1,405
5,232
<issue_start>username_0: I understand why fetching a value from an array using an ordered index is O(1). But suppose you are fetching by value? If the value is the last in array.count == 100, you have to iterate through the whole collection. Wouldn't that be O(n)?<issue_comment>username_1: **Simple Answer** ----------------- Locating a value in an array by index is O(1). Searching for a value in an **unsorted array** is O(n). **[Big O notation](https://en.wikipedia.org/wiki/Big_O_notation)** ------------------------------------------------------------------ Array is an abstract data structure in computer programming. When discussing Big O notation for an abstract data structure, what we’re asking is: how long does an algorithm take when applied to the abstract data structure given any number of elements. Put another way: how fast is your algorithm as a function of the data within the abstract structure. **Swift Context** ----------------- This isn’t [Machine Code](https://en.wikipedia.org/wiki/Low-level_programming_language#Machine_code). Swift is a [high-level programming language](https://en.wikipedia.org/wiki/High-level_programming_language). Meaning Swift code is itself an abstraction from the internal details of the computer it’s running on. So, when we discuss Big O notation for an array in Swift, we aren’t really talking about how it’s stored in memory because we’re assuming the way it’s stored in memory is a reasonable representation of an [Array data structure](https://en.wikipedia.org/wiki/Array_data_structure). I think this is important to understand, and it’s why an array in [Ruby](https://en.wikipedia.org/wiki/Ruby_(programming_language)) will be slower than an Array in [C](https://en.wikipedia.org/wiki/C_(programming_language)) for example, because one language is a greater abstraction of the computer it’s running on. 
We can now move forward knowing the discussion about Big O in the context of a Swift Array is indeed affected by its underlying implementation in the computer, but we’re still referring to a perfect version of an instantiated Array when using Big O. **Searching for a value in an unsorted array is O(n)** ------------------------------------------------------ **| 2 | 6 | 10 | 0 |** The above text is a visual representation of an array containing 4 elements. Suppose every time you look at an element, that is a single action with a complexity of 1. Then I say: I want you to look at each element from left to right only once. What’s the complexity? 4. Good. --- **| 2 | 6 | 10 | 0 |** in Swift is: ``` let array: Array<Int> = [2, 6, 10, 0] ``` > > I want you to look at each element from left to right only once. > > > I am a computer programmer writing Swift, and you are the computer. ``` for element in array { print(element) } ``` --- Let's make it more general. Let's say we didn’t know what was inside our array and we named it something… How about `book`. I now ask: I want you to look at each element from left to right only once. What’s the complexity? Well, it’s the number of elements in book! Good. **The number of pages in our book is n**. 
**In the worst case, whether we find the exclamation point or not, we have to look, at least once, at every single page in book until we find the page with !** **O(n)** **Bringing it all together in an example** ------------------------------------------ ``` struct Page { let content: String func contains(_ character: Character) -> Bool { return content.contains(character) } init(_ content: String) { self.content = content } } typealias Book = Array<Page> let emotionalBook: Book = [Page("this"), Page("is"), Page("a"), Page("simple"), Page("book!")] let unemotionalBook: Book = [Page("this"), Page("book"), Page("lacks"), Page("emotion.")] enum EmotionalBook: Error { case lacking } func findExclamationPoint(within book: Book) throws -> Page { //Look at every single page once. for page in book { if page.contains("!") { return page } } //Didn't find a page with ! inside it throw EmotionalBook.lacking } do { //Discovers the last page! let emotionalPage = try findExclamationPoint(within: emotionalBook) print(emotionalPage.content) } catch { print("This book is \(error) in emotion!") } do { //Doesn't discover a page, but still checks all of them! let emotionalPage = try findExclamationPoint(within: unemotionalBook) print(emotionalPage.content) } catch { print("This book is \(error) in emotion!") } ``` I hope this helps! Upvotes: 2 [selected_answer]<issue_comment>username_2: This is the best answer I could find on `stackoverflow` itself: An `array` starts at a specific `memory` address `start`. An element occupies a certain amount of bytes `element_size`. The `array` elements are located one after another in the `memory` from the `start` address on. So you can calculate the memory address of the element `i` with `start + i * element_size`. This computation is independent of the array size and is therefore `O(1)`. The original answer is here: [click me](https://stackoverflow.com/questions/23103690/why-is-accessing-any-single-element-in-an-array-done-in-constant-time-o1) Upvotes: 0
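The same distinction can be shown with a small runnable sketch (Python here rather than Swift, purely for brevity; `find_value` is an illustrative helper, not from the answers): index access computes a position directly, while searching an unsorted array must inspect elements one by one.

```python
# Sketch: index access computes a position directly (no scan), while
# searching an unsorted array inspects elements one by one.
def find_value(items, target):
    """Linear search: O(n) comparisons in the worst case."""
    comparisons = 0
    for i, item in enumerate(items):
        comparisons += 1
        if item == target:
            return i, comparisons
    return -1, comparisons

items = [2, 6, 10, 0]
index_access = items[2]            # O(1): direct lookup, no scan
pos, steps = find_value(items, 0)  # worst case: target is the last element
```

Whether the target is found (last position) or absent, the search touches all n elements.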
2018/03/20
1,429
5,273
<issue_start>username_0: I am searching for an regex which would match exactly 7 occurences of the .\*: (7 fields colon separated) unfortunatelly, what I combined: ``` grep -E '(.*:){7}' ... ``` does also print the same lines when I decrase number in {}. how to test it for fixed exactly 7 occurences?
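For what it's worth, a common fix (my suggestion, not from this thread) is to anchor the pattern and forbid colons inside each field: `(.*:){7}` alone also matches lines with more than 7 colons, because the unanchored `.*` can swallow extra separators.

```shell
# Hypothetical helper: succeed only for lines with exactly 7 colons
# (i.e. 8 colon-separated fields). The anchors plus [^:] stop the
# greedy .* from absorbing additional separators.
has7fields() { grep -qE '^([^:]*:){7}[^:]*$'; }

printf 'a:b:c:d:e:f:g:h\n' | has7fields && echo "exactly 7 colons: match"
```

Decreasing the `{7}` then correctly stops matching, since the anchors make the count exact rather than a lower bound.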
2018/03/20
606
2,039
<issue_start>username_0: I have a CakePHP 3.5 website. It was necessary to set slugs without any URL prefixes like /pages/slug, so I wrote the following rule: ``` $routes->connect( '/:slug', ['controller' => 'Pages', 'action' => 'redirectPage'] ) ->setMethods(['GET', 'POST']) ->setPass(['slug']) ->setPatterns([ 'slug' => '[a-z0-9\-_]+' ]); ``` It works nicely, but in some cases I want CakePHP to route as default (Controller/Action/Params). For example, I want /admin/login to call the 'login' action in 'AdminController'. I have two ideas that don't need exact routes, but I can't make either of them work: 1. Filtering some strings by pattern: it would be nice if I could filter some strings, so that if the slug doesn't match the pattern, the routing rule is simply skipped. 2. Create a '/admin/:action' routing rule, but then I can't use :action as an action variable. It causes errors. ``` $routes->connect( '/admin/:action', ['controller' => 'Admin', 'action' => ':action'] ) ``` Any ideas? Thanks<issue_comment>username_1: Try this: ``` $routes->connect( '/admin/:action', ['controller' => 'Admin'], ['action' => '(login|otherAllowedAction|someOtherAllowedAction)'] ); ``` Also, your slug route seems not to catch `/admin/:action` routes, because a slash is not allowed in the pattern: `[a-z0-9\-_]+` Upvotes: 1 <issue_comment>username_2: You can use `prefix` for an admin-restricted area. Example: ``` Router::prefix('admin', function ($routes) { $routes->connect('/', ['controller' => 'Users', 'action' => 'login']); $routes->fallbacks(DashedRoute::class); }); $routes->connect('/:slug' , [ 'controller' => 'Pages', 'action' => 'display', 'plugin' => NULL ], [ 'slug' => '[A-Za-z0-9/-]+', 'pass' => ['<PASSWORD>'] ]); ``` Now, for example, the path `/admin/dashboard/index` will execute the method in the `Admin` "subnamespace": `\App\Controller\Admin\DashboardController::index()`. It's nicely described in the docs: <https://book.cakephp.org/3.0/en/development/routing.html#prefix-routing> Upvotes: 2
2018/03/20
343
1,388
<issue_start>username_0: I'm new to programming so please don't bite my head off! I'm using Xcode with GitHub, but it's suddenly stopped working. Xcode is not showing the current branch in source control, it doesn't load remote repositories, and the git status is 'HEAD detached'. I've gone to the command line and done 'sudo git --version', and accepted the new license. This project is for fun, but I've put a lot of work into it. I don't mind just losing the history of the project if need be, I'd just like to get GitHub's source control back to a working state. Any suggestions? [![enter image description here](https://i.stack.imgur.com/dXSC1.png)](https://i.stack.imgur.com/dXSC1.png)<issue_comment>username_1: Your screenshot is showing that you checked out a specific commit. The long hexadecimal number partially obscured by the red line in the screenshot is the commit number. When you check out a specific commit, you don't have a current branch, so the git HEAD is detached. The fix is to check out a branch so you have a current branch again. Select a branch inside the Branches folder, the BrianTests branch in your screenshot, right-click, and choose Checkout. Upvotes: 0 <issue_comment>username_2: If you want to add a commit made with a detached HEAD to a branch, you should use the cherry-pick command: ``` git checkout "branchname" git cherry-pick "commitnumber" ``` Upvotes: 2
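The whole recovery can be sketched end-to-end in a throwaway repository (a sketch of the general command-line approach, not tied to Xcode; the branch and commit names are invented):

```shell
# Create a scratch repo, detach HEAD, commit while detached, then
# reattach HEAD to the original branch and rescue the detached commit.
dir=$(mktemp -d) && cd "$dir"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "first commit"
branch=$(git symbolic-ref --short HEAD)     # remember the branch name
git checkout -q "$(git rev-parse HEAD)"     # now: "HEAD detached"
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "made while detached"
git branch rescue-work                      # give the detached commit a name
git checkout -q "$branch"                   # HEAD is attached again
git merge -q rescue-work                    # bring the detached commit over
```

`git cherry-pick <commitnumber>` in place of the branch-and-merge step achieves the same thing for a single commit.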
2018/03/20
722
3,101
<issue_start>username_0: I need to call functionB from within a promise in functionA ``` constructor(props) { super(props); this.functionA = this.functionA.bind(this); this.functionB = this.functionB.bind(this); } functionA(canvas) { let data = new FormData(); canvas.toBlob(function(blob) { data.append('data', blob); axios .post(`https://api.graph.cool/file/v1/${CONST.projectID}`, data, { headers: { 'Content-Type': 'multipart/form-data', }, }) .then(function(res) { console.log('res ', res); // res is as expected console.log('this ', this); // this is null this.functionB(); // Errors }); }); } ``` However, I'm getting this error: ``` Uncaught (in promise) TypeError: Cannot read property 'functionB' of undefined ``` This is within a React component but I don't think that's important.<issue_comment>username_1: You need to bind the function in your constructor ('this' is undefined): this.functionB = this.functionB.bind(this); Upvotes: 0 <issue_comment>username_2: Use arrow functions like this; regular functions have their own context (`this`), while arrow functions take their parent's context: ```js functionA(canvas) { let data = new FormData(); canvas.toBlob(blob => { data.append('data', blob); axios .post(`https://api.graph.cool/file/v1/${CONST.projectID}`, data, { headers: { 'Content-Type': 'multipart/form-data', }, }) .then(res => { console.log('res ', res); // res is as expected console.log('this ', this); // this is not null this.functionB(); // no Error :) }); }); } ``` Upvotes: 4 [selected_answer]<issue_comment>username_3: Inside .then(), `this` refers to a different context. Assigning `this` to a variable ahead of time and then using that variable inside the .then callback will do the trick. 
``` constructor(props) { super(props); this.functionA = this.functionA.bind(this); this.functionB = this.functionB.bind(this); } functionA(canvas) { let data = new FormData(); let ctrl = this; canvas.toBlob(function(blob) { data.append('data', blob); axios .post(`https://api.graph.cool/file/v1/${CONST.projectID}`, data, { headers: { 'Content-Type': 'multipart/form-data', }, }) .then(function(res) { console.log('res ', res); // res is as expected console.log('this ', this); // this is null (expected) console.log('ctrl', ctrl); //this is the controller ctrl.functionB(); }); }); } ``` Upvotes: 0
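The difference the accepted answer relies on can be reduced to a runnable sketch (`Uploader` is an invented stand-in for the React component, with no axios or canvas involved):

```javascript
// Arrow functions capture the enclosing `this`; plain function
// expressions get their own (undefined in strict mode, which class
// bodies always use).
class Uploader {
  constructor() {
    this.name = "uploader";
  }
  functionB() {
    return `called on ${this.name}`;
  }
  withPlainCallback() {
    return [1].map(function () {
      return typeof this; // NOT the Uploader instance
    })[0];
  }
  withArrowCallback() {
    return [1].map(() => this.functionB())[0]; // lexical `this`
  }
}
```

Calling `this.functionB()` inside the plain callback would throw exactly the `Cannot read property 'functionB' of undefined` error from the question.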
2018/03/20
911
2,216
<issue_start>username_0: I am trying to plot a bar chart and attach labels to each bar. I can plot the chart with this code: ``` y = np.array([ 0.06590843, 0.10032079, 0.03295421, 0.12277632, 0.04257801, 0.00641586, 0.05774278, 0.15106445, 0.13852435, 0.03732867, 0.05570137, 0.11548556, 0.22834646, 0.09477982, 0.12569262, 0.09711286, 0.05920093, 0.03295421, 0.11286089, 0.05453485, 0.08486439, 0.09857101, 0.00641586]) x= ['1', '2', '3', '4', '5', '6', '7', '8', '9', '10', '11', '20', '40', '60', '80', '100', '300', '500', '700', '900', '1000', '1300', '1512'] plt.figure(figsize = (16, 2)) plt.bar(range(23), y) ``` but when I try to add the labels with this: ``` plt.xticks(x) ``` I get the following error: ``` AttributeError: 'NoneType' object has no attribute 'seq' ``` If I do: ``` plt.bar(x, y) ``` The x labels get jumbled and I get a figure like this: [![enter image description here](https://i.stack.imgur.com/rN25H.png)](https://i.stack.imgur.com/rN25H.png) I am using %matplotlib inline for my backend.<issue_comment>username_1: I got it to work with this modification to plt.xticks(x) ``` plt.xticks(range(23), x) ``` Upvotes: 3 <issue_comment>username_2: The issue with your first code is that you are failing to specify the positions for the ticks. If you look up Matplotlib's documentation, you'll see that parameters for the xticks function are: ``` matplotlib.pyplot.xticks(ticks=None, labels=None, **kwargs) ``` **ticks : array\_like** A list of positions at which ticks should be placed. You can pass an empty list to disable xticks. **labels : array\_like, optional** A list of explicit labels to place at the given locs. **\*\*kwargs** Text properties can be used to control the appearance of the labels. (Source: <https://matplotlib.org/api/_as_gen/matplotlib.pyplot.xticks.html>) Hence, in your original post you are incorrectly calling the function, as opposed to your second post where you give the ticks positions and labels. 
Upvotes: 2 <issue_comment>username_3: In your specific case, if `x` is float, you don't need ticks. ``` x = np.array([1., 2., ...]) ``` And go simply with `plt.bar(x,y)` Upvotes: 0
2018/03/20
567
1,988
<issue_start>username_0: If I have a variable of type `std::unordered_map`, how can I access the `std::unordered_map::iterator` typedef without having to always write the type itself but, for example, making the compiler infer it from an existing variable like `decltype(my_map)::iterator`? **EDIT:** Example code: ``` #include #include std::unordered\_map test() { return std::unordered\_map(); } int main() { const auto& map = test(); decltype(map)::const\_iterator it; return 0; } ``` Compile output: ``` main.cpp: In function 'int main()': main.cpp:13:5: error: decltype evaluates to 'const std::unordered_map&', which is not a class or enumeration type decltype(map)::const_iterator it; ^ main.cpp:13:35: error: expected initializer before 'it' decltype(map)::const_iterator it; ^ ```
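For what it's worth (this explanation is mine, not from the thread): the compile error arises because `map` is bound as `const auto&`, so `decltype(map)` names a reference type, and member lookup such as `::const_iterator` is only valid on class or enum types. A sketch of the usual fix, stripping the reference with `std::decay_t`; `make_map` stands in for the question's `test()`:

```cpp
#include <cassert>
#include <type_traits>
#include <unordered_map>

// Stand-in for the question's test(): returns a map by value.
std::unordered_map<int, int> make_map() {
    return std::unordered_map<int, int>();
}

// decltype(map) is "const std::unordered_map<int, int>&", which is not a
// class type. std::decay_t removes the reference and cv-qualifiers, after
// which the nested typedef is accessible again.
template <typename T>
using const_iter_of = typename std::decay_t<T>::const_iterator;
```

With this alias, `const_iter_of<decltype(map)> it;` compiles where `decltype(map)::const_iterator it;` did not.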
2018/03/20
982
3,449
<issue_start>username_0: I am wondering how matrix multiplication can be supported in numpy with arrays of `dtype=object`. I have homomorphically encrypted numbers that are encapsulated in a class `Ciphertext` for which I have overriden the basic math operators like `__add__`, `__mul__` etc. I have created numpy array where each entry is an instance of my class `Ciphertext` and numpy understands how to broadcast addition and multiplication operations just fine. ``` encryptedInput = builder.encrypt_as_array(np.array([6,7])) # type(encryptedInput) is encryptedOutput = encryptedInput + encryptedInput builder.decrypt(encryptedOutput) # Result: np.array([12,14]) ``` However, numpy won't let me do matrix multiplications ``` out = encryptedInput @ encryptedInput # TypeError: Object arrays are not currently supported ``` I don't quite understand why this happens considering that addition and multiplication works. I guess it has something to do with numpy not being able to know the shape of the object, since it could be a list or something fance. **Naive Solution**: I could write my own class that extends `ndarray` and overwrite the `__matmul__` operation, but I would probably lose out on performance and also this approach entails implementing broadcasting etc., so I would basically reinvent the wheel for something that should work as it is right now. **Question**: How can I use the standard matrix multiplication provided by numpy on arrays with `dtype=objects` where the objects behave exactly like numbers? Thank you in advance!<issue_comment>username_1: For whatever reason matmul doesn't work, but the tensordot function works as expected. ``` encryptedInput = builder.encrypt_as_array(np.array([6,7])) out = np.tensordot(encryptedInput, encryptedInput, axes=([1,0])) # Correct Result: [[ 92. 105.] # [120. 137.]] ``` Now it's just a hassle to adjust the axes. I still wonder whether this is actually faster than a naive implementation with for loops. 
Upvotes: 3 [selected_answer]<issue_comment>username_2: `tensordot` has an extended example using `object` dtype and string concatenation. It's actually using `np.dot` for this: ``` In [89]: np.dot(np.array([['a'],['b']],object),np.array([[2,3]])) Out[89]: array([['aa', 'aaa'], ['bb', 'bbb']], dtype=object) ``` This example is small, but it does suggest that the `object` version is taking a slower route (than the equivalent numeric one): ``` In [98]: timeit np.dot(np.array([[1],[2]]),np.array([[2,3]])) 7.3 µs ± 20.8 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each) In [99]: timeit np.dot(np.array([[1],[2]],object),np.array([[2,3]])) 12 µs ± 121 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each) ``` `np.dot` code is compiled, so it'll take more work to verify the difference. For 1 and 2d arrays, `np.dot` is just as good as `np.matmul`. `matmul` was introduced for the `@` operator convenience, and for its extension to 3d and higher. Previously that 3d+ behavior could only be achieved with `einsum` or with iteration on the upper dimensions. `matmul` for 2 3d arrays is effectively: ``` for i in range(a.shape[0]): data[i,:,:] = a[i,:,:].dot(b[i,:,:]) ``` Upvotes: 1 <issue_comment>username_3: You can use the `ndarray.dot` method, which apparently works for `np.object` dtypes even when the `@` operator fails: ``` out = encryptedInput.dot(encryptedInput) ``` Upvotes: 0
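To see why the object-dtype path is inherently slower (my own sketch, not from the thread): with `dtype=object` NumPy cannot dispatch to BLAS and effectively falls back to element-wise Python-level `__mul__`/`__add__` calls, which a plain nested loop reproduces. `Cipher` below is a hypothetical stand-in for the question's `Ciphertext` class:

```python
# A toy operator-overloading wrapper, mimicking an encrypted number.
class Cipher:
    def __init__(self, v):
        self.v = v

    def __add__(self, other):
        return Cipher(self.v + other.v)

    def __mul__(self, other):
        return Cipher(self.v * other.v)


def object_matmul(a, b):
    """Naive matrix product over any objects supporting + and *,
    roughly what an object-dtype dot has to do under the hood."""
    rows, inner, cols = len(a), len(b), len(b[0])
    result = []
    for i in range(rows):
        row = []
        for j in range(cols):
            acc = a[i][0] * b[0][j]
            for k in range(1, inner):
                acc = acc + a[i][k] * b[k][j]
            row.append(acc)
        result.append(row)
    return result
```

Every `*` and `+` here is a full Python method call, which is why `np.dot` over object arrays cannot match the numeric fast path.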
2018/03/20
1,331
4,655
<issue_start>username_0: Let's say I have three MySQL tables: ``` ╔═════════════════╗ ╔════════════╗ ╔══════════════════════╗ ║ animals ║ ║ colors ║ ║ animal_color ║ ╠════╦════════════╣ ╠════╦═══════╣ ╠═══════════╦══════════╣ ║ id ║ name ║ ║ id ║ name ║ ║ animal_id ║ color_id ║ ╠════╬════════════╣ ╠════╬═══════╣ ╠═══════════╬══════════╣ ║ 1 ║ Panther ║ ║ 1 ║ black ║ ║ 1 ║ 1 ║ ║ 2 ║ Zebra ║ ║ 2 ║ white ║ ║ 2 ║ 1 ║ ║ 3 ║ Polar bear ║ ║ 3 ║ blue ║ ║ 2 ║ 2 ║ ╚════╩════════════╝ ╚════╩═══════╝ ║ 3 ║ 2 ║ ╚═══════════╩══════════╝ ``` Each animal can have one or more colors assigned to it. How would I look for an animal that has both **black** and **white** color assigned (zebra)? I know there are multiple ways to do it, but what are some simple ones? Perhaps without any subqueries? I came out with something like that: ``` SELECT * FROM `animals` WHERE `id` IN (SELECT `animal_id` FROM `animal_color` WHERE `color_id` IN ( 1, 2 ) GROUP BY `animal_id` HAVING COUNT(*) = 2) ``` But I feel like it's clumsy and I also don't like that I need to count the number of colors I'm interested in. It seems like there is some simple way to do it. **UPDATE** It would be great to know simple ways to both versions of this problem: 1. Selecting animal that is black and white **without any other color** 2. Selecting animal that is black, white **and possibly more colors, but not necessarily**<issue_comment>username_1: For animals with black and white colours: ``` SELECT a.name FROM animals a INNER JOIN animal_color ac ON (a.id = ac.animal_id) INNER JOIN colors c ON (ac.color_id = c.id AND c.name IN ('black', 'white')) GROUP BY a.id HAVING COUNT(DISTINCT ac.color_id) = 2 ``` That being said, your method is not inherently "incorrect". Using joins to filter could actually be argued to be less readable than inside a where statement, however I prefer the lack of subqueries, that's opinion based though and both methods will be planned without performance issue. 
Also note that if you don't mind specifying the colors by ID, you can remove a join: ``` SELECT a.name FROM animals a INNER JOIN animal_color ac ON (a.id = ac.animal_id AND ac.color_id IN (1, 2)) GROUP BY a.id HAVING COUNT(DISTINCT ac.color_id) = 2 ``` Upvotes: 0 <issue_comment>username_2: I don't think you have too bad of a method, along with the unique index you mention to ensure the result is correct. There is a 'simpler' version without subqueries: you can join twice to animal_color, once for black, once for white, and it would self-filter; but this method is not extensible to cover 3-colour animal queries etc., so I'm not keen on it as a solution. As long as the query plan your SQL produces is efficient, and the meaning of the query is obvious and maintainable, that should be sufficient. Those should be the goals of the SQL statement. Upvotes: 1 <issue_comment>username_3: Check if the animal has exactly 2 colors and those colors are white and black. ``` SELECT a.id, a.name FROM animals a JOIN animal_color c on c.animal_id = a.id GROUP BY a.id, a.name HAVING Count(color_id) = 2 AND Count(CASE WHEN c.color_id IN (1,2) THEN 1 END) = 2 ``` Upvotes: 0 <issue_comment>username_4: I came up with my own solutions. Selecting animals that are black&white, but may also have other colors: ``` SELECT * FROM animals WHERE (SELECT COUNT(DISTINCT colors.id) FROM colors JOIN animal_color ON animal_color.color_id = colors.id WHERE animal_color.animal_id = animals.id AND colors.name IN ( 'black', 'white' )) = 2 ``` Selecting animals that are black&white only: ``` SELECT * FROM animals WHERE (SELECT GROUP_CONCAT(DISTINCT colors.name ORDER BY colors.name) FROM colors JOIN animal_color ON animal_color.color_id = colors.id WHERE animal_color.animal_id = animals.id) = 'black,white' ``` As you see, I fetch by color, because that's sometimes the more useful case. It's easy to change it to color ID. Also, I can easily fetch any other data I want, because finding animals happens in the `WHERE` clause only. 
And finally, it works correctly with duplicated colors. Fetching black&white only requires the script to sort the needed colors alphabetically, but that's okay. I'm happy with relatively simple queries. Upvotes: 0
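The grouping approach can be sanity-checked against the sample data with an in-memory SQLite database (a sketch; SQLite rather than MySQL, but this query is portable between the two):

```python
import sqlite3

# Recreate the question's three tables and run the "count the matched
# colors" query: animals having BOTH black (1) and white (2), with other
# colors allowed.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE animals (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE colors  (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE animal_color (animal_id INTEGER, color_id INTEGER);
    INSERT INTO animals VALUES (1,'Panther'),(2,'Zebra'),(3,'Polar bear');
    INSERT INTO colors  VALUES (1,'black'),(2,'white'),(3,'blue');
    INSERT INTO animal_color VALUES (1,1),(2,1),(2,2),(3,2);
""")

both = [name for (name,) in con.execute("""
    SELECT a.name
    FROM animals a
    JOIN animal_color ac ON ac.animal_id = a.id
    WHERE ac.color_id IN (1, 2)
    GROUP BY a.id
    HAVING COUNT(DISTINCT ac.color_id) = 2
""")]
```

Only the zebra carries both color IDs, so `both` contains a single row.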
2018/03/20
832
2,999
<issue_start>username_0: I have made a simple WP plugin that gets some documents from `calameo.com` and presents them in a nice manner. I have made a custom **shortcode** for it. Let's call it "Shortcode"… I will have `[Shortcode vendor=vendor1]` to show only the documents related to a vendor, and I know how to do that. What I need to do is **to pass argument values from the URL to the shortcode**, but I haven't yet found a way to do it. Any help is appreciated.<issue_comment>username_1: Agreed with Derek, the question is quite unclear. From what I understand, you want to extract the parameters passed in the URL of the page that contains your shortcode (say, a 'vendor' parameter), and have the shortcode parameters take this value dynamically? If that's the case, note that shortcodes are used to generate your page's code (HTML, JavaScript, whatever runs in the browser) and have completely disappeared in the resulting page, meaning the page can't behave differently based on the value of the URL parameter… **unless** your shortcode generates code (JavaScript) that contains some 'vendor' variable that could take its value from the parameter and in turn generate something (HTML, SVG…); in brief, some kind of tricky code. Upvotes: -1 <issue_comment>username_2: To pass some variables from a URL, use `$_GET` in your shortcode, as in this example: ``` if( ! function_exists('this_is_my_shortcode') ) { function this_is_my_shortcode( $atts, $content = null ) { // Attributes $atts = shortcode_atts( array( 'vendor' => '', 'thekey1' => isset($_GET['thekey1']) ? sanitize_key($_GET['thekey1']) : '', 'thekey2' => isset($_GET['thekey2']) ? sanitize_key($_GET['thekey2']) : '', ), $atts, 'my_shortcode' ); // Variables to be used $vendor_value = $atts['vendor']; $value1 = $atts['thekey1']; // the value from "thekey1" in the url $value2 = $atts['thekey2']; // the value from "thekey2" in the url // Your code … / … if( ! empty( $value1 ) ) $value1 = ' | Value 1: ' . $value1; if( ! empty( $value2 ) ) $value2 = ' | Value 2: ' . $value2; // Output: Always use return (never echo or print) return 'Vendor: ' . $vendor\_value . $value1 . $value2 . ''; } add\_shortcode("my\_shortcode", "this\_is\_my\_shortcode"); } ``` Code goes in the functions.php file of your active child theme (or theme); tested and works. **USAGE:** * The URL, like: `http://www.example.com/your-page/?thekey1=document1&thekey2=document2` *(each argument key/value pair is separated by the `&` character)* * The shortcode: + In the WordPress page / post text editor: `[my_shortcode vendor="vendor1"]` + In PHP code: `echo do_shortcode( "[my_shortcode vendor='vendor1']" );` You will get as generated HTML output: ``` Vendor: vendor1 | Value 1: document1 | Value 2: document2 ```
2018/03/20
673
2,230
<issue_start>username_0: I do not completely understand how the whole Avro serialization ecosystem is built. Originally I imagined something like this: [![enter image description here](https://i.stack.imgur.com/eGupE.png)](https://i.stack.imgur.com/eGupE.png) However, I tried to post a binary Avro message to a topic and later read it using the REST proxy, and got an error: `{"error_code":50002,"message":"Kafka error: java.io.CharConversionException: Invalid UTF-32 character 0xa126572(above 10ffff) at char #1, byte #7)"}`. Where am I wrong? If it matters, I used the example from [here](https://github.com/confluentinc/confluent-kafka-dotnet/tree/master/examples/AvroSpecific) to write, and from [here](https://github.com/confluentinc/kafka-rest) to read.<issue_comment>username_1: Use the Kafka command line tools to produce a new Avro message to a new topic and read it back: `/usr/bin/kafka-avro-console-producer` to produce an Avro message. `/usr/bin/kafka-avro-console-consumer` to consume an Avro message. These tools are documented in the quickstart here: <https://docs.confluent.io/current/quickstart.html> Once you have that working, make sure your custom consumer works with the official command line producer, and that your custom producer works with the official command line consumer. If you break it into small steps like that, you will see exactly where your problem is and have a narrower problem to solve. Upvotes: -1 <issue_comment>username_2: In the end it turned out that I had defined the consumer with the [wrong format (json instead of avro)](https://docs.confluent.io/current/kafka-rest/docs/api.html). Otherwise everything works as expected. Thanks to @cricket_007 for the hint.
When creating the consumer I was doing ``` curl -X POST -H "Content-Type: application/vnd.kafka.v2+json" \ --data '{"name": "my_consumer_instance", "format": "json", "auto.offset.reset": "earliest"}' \ http://192.168.99.101:8082/consumers/my_json_consumer ``` while the correct version is ``` curl -X POST -H "Content-Type: application/vnd.kafka.v2+json" \ --data '{"name": "my_consumer_instance", "format": "avro", "auto.offset.reset": "earliest"}' \ http://192.168.99.101:8082/consumers/my_json_consumer ``` Upvotes: 1
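As a quick sanity check, the request body for the consumer-creation call can be built and inspected before posting it. The sketch below is in Python; the helper function name is invented for this illustration, and the proxy address is the one from the curl examples above.

```python
import json

# Illustrative helper (not part of any Kafka library): build the JSON body
# for creating a REST-proxy consumer instance. The "format" field must match
# how the messages were produced -- "avro" here, not "json".
def consumer_request_body(name, fmt="avro", offset_reset="earliest"):
    return json.dumps({
        "name": name,
        "format": fmt,
        "auto.offset.reset": offset_reset,
    })

body = consumer_request_body("my_consumer_instance")
print(body)

# This body would then be POSTed to
# http://192.168.99.101:8082/consumers/my_json_consumer
# with the header "Content-Type: application/vnd.kafka.v2+json".
```

Printing the body before sending it makes a json/avro mix-up like the one above easy to spot.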
2018/03/20
1,527
4,979
<issue_start>username_0: ``` PROGRAM VOTING_SLIP (INPUT,OUTPUT); CONST Array_size = 20; VAR Counter, Increment, i, j : Integer; Found, full_name, District_ID : string; DistrictArea : Array[1..20] of string; DistrictID : Array[1..20] of string; BEGIN WRITELN('Please Populate District Area Array'); FOR i := 1 to 20 DO READLN(DistrictArea[i]); WRITELN('Input Values for District ID'); FOR j := 1 to 20 DO READLN(DistrictID[i]); WRITELN('Please Enter Voter Name'); READLN(full_name); IF (full_name <> 'END') THEN DO READLN(District_ID); Increment := 1; Counter := 0; Found := 'FALSE'; WHILE(Found = 'FALSE') AND (Increment <= Array_size) DO IF (DistrictID[Increment] = District_ID) THEN Found := 'TRUE'; ELSE Increment := Increment + 1; IF (Found = 'TRUE') THEN WRITELN (full_name,'you have been registered to vote in', DistrictArea[Increment]); Counter := Counter + 1; ELSE Writeln ('Error! Invalid District ID'); ELSE Writeln ('you have', Counter, 'registered voters. Goodbye!'); END. ``` The above code will not execute past the 10th line. Compiler says: ``` 1. 26 / 3 oneste~1.pas Error: Illegal expression 2. 26 / 3 oneste~1.pas Error: Illegal expression 3. 26 / 3 oneste~1.pas Fatal: Syntax error, ; expected but identifier READLN found ``` Can anyone help? Thank you.<issue_comment>username_1: Why do you say line 10? The error message says line 26. There is not supposed to be a DO after IF-THEN. You should encompass some of your statements in BEGIN-END blocks. You have superfluous semicolons before the keywords ELSE and END. Near the end you have two ELSE clauses. What is that supposed to do?
Upvotes: 2 [selected_answer]<issue_comment>username_2: Below is the updated code with respect to comments: ``` PROGRAM VOTING_SLIP (INPUT,OUTPUT); uses crt; CONST Array_size = 20; VAR Counter, Increment, i, j : Integer; Found, full_name, District_ID : string; DistrictArea : Array[1..20] of string; DistrictID : Array[1..20] of string; BEGIN WRITELN('Please Populate District Area Array'); FOR i := 1 to 20 DO READLN(DistrictArea[i]); WRITELN('Input Values for District ID'); FOR j := 1 to 20 DO READLN(DistrictID[j]); WRITELN('Please Enter Voter Name'); READLN(full_name); IF (full_name <> 'END') THEN BEGIN READLN(District_ID); Increment := 1; Counter := 0; Found := 'FALSE'; END WHILE (Found = 'FALSE') AND (Increment <= Array_size) DO BEGIN IF (DistrictID[Increment] = District_ID) THEN BEGIN Found := 'TRUE'; END ELSE BEGIN Increment := Increment + 1; END END IF (Found = 'TRUE') THEN BEGIN WRITELN (full_name,'you have been registered to vote in', DistrictArea[Increment]); Counter := Counter + 1; END ELSE BEGIN Writeln ('Error! Invalid District ID'); END ELSE BEGIN Writeln ('you have', Counter, 'registered voters. Goodbye!'); END END. ``` New compiler error: 35 / 3 oneste~1.pas Fatal: Syntax error, ; expected but WHILE found The code is supposed to allow entering values for a District Area and District ID into arrays, then accept a voter name. If END is entered, the program closes and displays the number of voters using the counter variable; otherwise it allows a district ID to be entered for that voter name. After this is done, the code checks the district ID array to ensure it is a correct district ID; if yes, it prints the voter name and corresponding district area to the screen, if no, it notifies that the district ID is invalid.
Upvotes: 0 <issue_comment>username_2: ``` PROGRAM VOTING_SLIP (INPUT,OUTPUT); uses crt; CONST Array_size = 20; VAR Counter, Increment, i, j, k : Integer; Found, full_name, District_ID : string; DistrictArea : Array[1..20] of string; DistrictID : Array[1..20] of string; BEGIN WRITELN('Please Populate District Area Array'); FOR i := 1 to 20 DO BEGIN READLN(DistrictArea[i]); END; WRITELN('Input Values for District ID'); FOR j := 1 to 20 DO BEGIN READLN(DistrictID[j]); END; For k := 1 TO 10 DO BEGIN WRITELN('Please Enter Voter Name'); READLN(full_name); IF (full_name <> 'END') THEN BEGIN WRITELN ('Please enter District ID'); READLN(District_ID); Increment := 1; Counter := 0; Found := 'FALSE'; END; WHILE (Found = 'FALSE') AND (Increment <= Array_size) DO BEGIN IF (DistrictID[Increment] = District_ID) THEN BEGIN Found := 'TRUE'; END ELSE BEGIN Increment := Increment + 1; END END; IF (Found = 'TRUE') THEN BEGIN WRITELN (full_name,' - you have been registered to vote in - ', DistrictArea[Increment]); Counter := Counter + 1; END ELSE BEGIN Writeln ('Error! Invalid District ID'); END; IF (full_name = 'END') THEN BEGIN Writeln ('you have - ', Counter, '- registered voters. Goodbye!'); END END; END. ``` This is the final working code with the functionality I originally intended. I figured out the remaining issues and added some enhancements. A big thank you to everyone who assisted me. Upvotes: 0
2018/03/20
1,064
4,675
<issue_start>username_0: I have a `function` called by a `UITableView` which updates a wallet value based on what it contains (hence the call by the tableView). This is part of the `WalletViewController` where the user can update what his wallet contains. I would like to be able to call this function from the `MainViewController`, where the wallet value is also displayed, so it is up-to-date when the user refreshes the data; for now, the wallet value update only happens when you load the `WalletViewController` and `reloadData()` is called. What can I do to be able to refresh the value from the `MainViewController`? **WalletViewController code for the value update function:** ``` func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell { let cell = tableView.dequeueReusableCell(withIdentifier: "Cell", for: indexPath) as! WalletTableViewCell cell.delegate = self cell.amountTextField.delegate = self updateCellValueLabel(cell, atRow: indexPath.row) return cell } func updateCellValueLabel(_ walletTableViewCell: WalletTableViewCell, atRow row: Int) { //... calculations for each wallet's object are happening here ... CoreDataHandler.editObject(editObject: selectedAmount, amount: amount, amountValue: currentAmountValue) // <--- calculations result for each object is saved to CoreData updateWalletValue() updateWalletLabel() } func updateWalletValue() { var items : [CryptosMO] = [] if CoreDataHandler.fetchObject() != nil { items = CoreDataHandler.fetchObject()! // <--- Calculations result are fetched from CoreData } total = items.reduce(0.0, { $0 + Double($1.amountValue)!
} ) // <--- Objects values are added together to get the wallet's grand total WalletTableViewController.staticTotal = total } func updateWalletLabel() { walletValueLabel.text = formatter.string(from: NSNumber(value: total)) } ``` **MainViewController wallet update function so far:** ``` func updateWalletLabel() { WalletTableViewController.init().updateWalletValue() walletValue.text = formatter.string(from: NSNumber(value: WalletTableViewController.staticTotal)) } ``` I guess I should get the wallet objects' amounts from Core Data and redo the calculations from the `MainViewController` to get up-to-date values? Or is there a simpler solution to call the entire `tableView` functions?<issue_comment>username_1: Use this line wherever you want to trigger an update in another view: ``` NotificationCenter.default.post(name: Notification.Name("NotificationIdentifier"), object: nil) ``` Add this line in the view controller where the update should happen (`reloadTableView` stands for whatever method you want called on the change): ``` NotificationCenter.default.addObserver(self, selector: #selector(reloadTableView), name: Notification.Name("NotificationIdentifier"), object: nil) ``` Upvotes: 1 <issue_comment>username_2: There are a couple of ways you could do this. @username_1's answer will work if you want to go the 'post a notification' route. The other option, which is what I usually do, is to get a reference to your `WalletViewController` from your `MainViewController` and then just make the call directly on the `WalletViewController` instance: ``` class MainViewController: UIViewController { // Local variable to cache instance of WalletViewController var walletVC: WalletViewController? func functionWhereWalletViewControllerIsInstantiated() { // Save the reference to the WalletViewController instance walletVC = self.storyboard?.instantiateViewController( withIdentifier: "WalletViewControllerIdentifier") as!
WalletViewController } func someFuncWhereYouWantToUpdateWalletViewController() { // Call the function on your cached WalletViewController // instance that performs the refresh walletVC?.doTheUpdate() } } ``` If you instantiate `walletVC` where you push it, you can do: ``` func functionWhereWalletVcIsPushed() { if walletVC == nil { walletVC = self.storyboard?.instantiateViewController( withIdentifier: "WalletViewControllerIdentifier") as! WalletViewController } // Push the view controller } ``` You could also make the property lazy: ``` lazy var walletVC: WalletViewController = { return self.storyboard?.instantiateViewController( withIdentifier: "WalletViewControllerIdentifier") as! WalletViewController }() ``` (I'm no Swift expert, so you'll want to double check the syntax) Upvotes: 2
2018/03/20
6,406
10,879
<issue_start>username_0: When I try to open `http://myip:8889/topologies` after running the `heron-tracker` and `heron-ui` commands, the page responds very slowly, and sometimes I can't open it at all. The 304 response codes are shown below: ``` [2018-03-20 07:08:48 +0000] [INFO]: 302 GET / (172.16.17.32) 0.61ms [2018-03-20 07:08:48 +0000] [INFO]: 304 GET /topologies (172.16.17.32) 22.92ms [2018-03-20 07:09:37 +0000] [INFO]: 304 GET /topologies (172.16.17.32) 19.46ms [2018-03-20 07:09:44 +0000] [INFO]: 302 GET / (172.16.17.32) 0.50ms [2018-03-20 07:09:44 +0000] [INFO]: 304 GET /topologies (172.16.17.32) 20.93ms [2018-03-20 07:10:35 +0000] [INFO]: 302 GET / (172.16.17.32) 0.59ms [2018-03-20 07:10:35 +0000] [INFO]: 304 GET /topologies (172.16.17.32) 21.38ms [2018-03-20 07:12:08 +0000] [INFO]: 302 GET / (172.16.17.32) 0.67ms [2018-03-20 07:12:08 +0000] [INFO]: 304 GET /topologies (172.16.17.32) 26.07ms [2018-03-20 07:12:51 +0000] [INFO]: 302 GET / (172.16.17.32) 0.49ms [2018-03-20 07:12:51 +0000] [INFO]: 304 GET /topologies (172.16.17.32) 19.26ms [2018-03-20 07:13:35 +0000] [INFO]: 302 GET / (172.16.17.32) 0.74ms [2018-03-20 07:13:35 +0000] [INFO]: 304 GET /topologies (172.16.17.32) 20.34ms [2018-03-20 07:15:25 +0000] [INFO]: 302 GET / (172.16.17.32) 0.59ms [2018-03-20 07:15:26 +0000] [INFO]: 304 GET /topologies (172.16.17.32) 22.73ms [2018-03-20 07:18:26 +0000] [INFO]: 304 GET /topologies (172.16.17.32) 22.43ms [2018-03-20 07:22:17 +0000] [INFO]: 304 GET /topologies (172.16.17.32) 24.61ms ``` When I use the `heron-tracker --verbose` command to run the heron-tracker.
The output as following: ``` yitian@heron04:~$ heron-tracker --verbose [2018-03-27 04:53:41 +0000] [INFO]: Connecting to zk hostports: [('heron04', 2181)] rootpath: /heron [2018-03-27 04:53:41 +0000] [DEBUG]: ZK loop started [2018-03-27 04:53:41 +0000] [DEBUG]: Skipping state change [2018-03-27 04:53:41 +0000] [INFO]: Connecting to heron04:2181 [2018-03-27 04:53:41 +0000] [DEBUG]: Using session_id: None session_passwd: <PASSWORD> [2018-03-27 04:53:41 +0000] [INFO]: Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=10000, session_id=0, passwd='\<PASSWORD>', read_only=None) [2018-03-27 04:53:41 +0000] [DEBUG]: Read response Connect(protocol_version=0, last_zxid_seen=0, time_out=10000, session_id=99755698212438030, passwd='\<PASSWORD>', read_only=False) [2018-03-27 04:53:41 +0000] [DEBUG]: Session created, session_id: 99755698212438030 session_passwd: <PASSWORD> negotiated session timeout: 10000 connect timeout: 10000 read timeout: 6666.66666667 [2018-03-27 04:53:41 +0000] [INFO]: Zookeeper connection established, state: CONNECTED [2018-03-27 04:53:41 +0000] [INFO]: Sending request(xid=1): Exists(path='/heron/topologies', watcher=None) [2018-03-27 04:53:41 +0000] [DEBUG]: Reading for header ReplyHeader(xid=1, zxid=1596, err=0) [2018-03-27 04:53:41 +0000] [INFO]: Received response(xid=1): ZnodeStat(czxid=96, mzxid=96, ctime=1520689567390, mtime=1520689567390, version=0, cversion=26, aversion=0, ephemeralOwner=0, dataLength=0, numChildren=0, pzxid=1518) [2018-03-27 04:53:41 +0000] [INFO]: Adding children watch for path: /heron/topologies [2018-03-27 04:53:41 +0000] [INFO]: Sending request(xid=2): GetChildren(path='/heron/topologies', watcher=>) [2018-03-27 04:53:41 +0000] [DEBUG]: Reading for header ReplyHeader(xid=2, zxid=1596, err=0) [2018-03-27 04:53:41 +0000] [INFO]: Received response(xid=2): [] [2018-03-27 04:53:41 +0000] [INFO]: State watch triggered for topologies. 
[2018-03-27 04:53:41 +0000] [DEBUG]: Topologies: [] [2018-03-27 04:53:41 +0000] [DEBUG]: Existing topologies: [] [2018-03-27 04:53:41 +0000] [INFO]: Tracker has started [2018-03-27 04:53:41 +0000] [INFO]: Running on port: 8888 [2018-03-27 04:53:41 +0000] [INFO]: Using config file: /home/yitian/.herontools/conf/heron\_tracker.yaml [2018-03-27 04:53:41 +0000] [INFO]: Using state manager: type: zookeeper name: aurorazk hostport: heron04:2181 rootpath: /heron tunnelhost: 127.0.0.1 [2018-03-27 04:53:44 +0000] [DEBUG]: Sending request(xid=-2): Ping() [2018-03-27 04:53:44 +0000] [DEBUG]: Received Ping [2018-03-27 04:53:47 +0000] [DEBUG]: Sending request(xid=-2): Ping() [2018-03-27 04:53:47 +0000] [DEBUG]: Received Ping [2018-03-27 04:53:50 +0000] [DEBUG]: Sending request(xid=-2): Ping() [2018-03-27 04:53:50 +0000] [DEBUG]: Received Ping [2018-03-27 04:53:53 +0000] [DEBUG]: Sending request(xid=-2): Ping() [2018-03-27 04:53:53 +0000] [DEBUG]: Received Ping [2018-03-27 04:53:56 +0000] [DEBUG]: Sending request(xid=-2): Ping() [2018-03-27 04:53:56 +0000] [DEBUG]: Received Ping [2018-03-27 04:54:00 +0000] [DEBUG]: Sending request(xid=-2): Ping() [2018-03-27 04:54:00 +0000] [DEBUG]: Received Ping [2018-03-27 04:54:03 +0000] [DEBUG]: Sending request(xid=-2): Ping() [2018-03-27 04:54:03 +0000] [DEBUG]: Received Ping [2018-03-27 04:54:06 +0000] [DEBUG]: Sending request(xid=-2): Ping() [2018-03-27 04:54:06 +0000] [DEBUG]: Received Ping [2018-03-27 04:54:09 +0000] [DEBUG]: Sending request(xid=-2): Ping() [2018-03-27 04:54:09 +0000] [DEBUG]: Received Ping [2018-03-27 04:54:12 +0000] [DEBUG]: Sending request(xid=-2): Ping() [2018-03-27 04:54:12 +0000] [DEBUG]: Received Ping [2018-03-27 04:54:16 +0000] [DEBUG]: Sending request(xid=-2): Ping() [2018-03-27 04:54:16 +0000] [DEBUG]: Received Ping [2018-03-27 04:54:19 +0000] [DEBUG]: Sending request(xid=-2): Ping() [2018-03-27 04:54:19 +0000] [DEBUG]: Received Ping [2018-03-27 04:54:22 +0000] [DEBUG]: Sending request(xid=-2): Ping() 
[2018-03-27 04:54:22 +0000] [DEBUG]: Received Ping [2018-03-27 04:54:25 +0000] [DEBUG]: Sending request(xid=-2): Ping() [2018-03-27 04:54:25 +0000] [DEBUG]: Received Ping [2018-03-27 04:54:28 +0000] [DEBUG]: Sending request(xid=-2): Ping() [2018-03-27 04:54:28 +0000] [DEBUG]: Received Ping [2018-03-27 04:54:31 +0000] [DEBUG]: Sending request(xid=-2): Ping() [2018-03-27 04:54:31 +0000] [DEBUG]: Received Ping [2018-03-27 04:54:34 +0000] [DEBUG]: Sending request(xid=-2): Ping() [2018-03-27 04:54:34 +0000] [DEBUG]: Received Ping [2018-03-27 04:54:38 +0000] [DEBUG]: Sending request(xid=-2): Ping() [2018-03-27 04:54:38 +0000] [DEBUG]: Received Ping [2018-03-27 04:54:41 +0000] [DEBUG]: Sending request(xid=-2): Ping() [2018-03-27 04:54:41 +0000] [DEBUG]: Received Ping [2018-03-27 04:54:44 +0000] [DEBUG]: Sending request(xid=-2): Ping() [2018-03-27 04:54:44 +0000] [DEBUG]: Received Ping [2018-03-27 04:54:47 +0000] [DEBUG]: Sending request(xid=-2): Ping() [2018-03-27 04:54:47 +0000] [DEBUG]: Received Ping [2018-03-27 04:54:50 +0000] [DEBUG]: Sending request(xid=-2): Ping() [2018-03-27 04:54:50 +0000] [DEBUG]: Received Ping [2018-03-27 04:54:53 +0000] [DEBUG]: Sending request(xid=-2): Ping() [2018-03-27 04:54:53 +0000] [DEBUG]: Received Ping [2018-03-27 04:54:57 +0000] [DEBUG]: Sending request(xid=-2): Ping() [2018-03-27 04:54:57 +0000] [DEBUG]: Received Ping [2018-03-27 04:55:00 +0000] [DEBUG]: Sending request(xid=-2): Ping() [2018-03-27 04:55:00 +0000] [DEBUG]: Received Ping [2018-03-27 04:55:03 +0000] [INFO]: 302 GET / (172.16.17.32) 0.47ms [2018-03-27 04:55:03 +0000] [INFO]: 200 GET /topologies (172.16.17.32) 1.11ms [2018-03-27 04:55:03 +0000] [DEBUG]: Sending request(xid=-2): Ping() [2018-03-27 04:55:03 +0000] [DEBUG]: Received Ping [2018-03-27 04:55:06 +0000] [DEBUG]: Sending request(xid=-2): Ping() [2018-03-27 04:55:06 +0000] [DEBUG]: Received Ping [2018-03-27 04:55:09 +0000] [DEBUG]: Sending request(xid=-2): Ping() [2018-03-27 04:55:09 +0000] [DEBUG]: Received Ping 
[2018-03-27 04:55:12 +0000] [DEBUG]: Sending request(xid=-2): Ping() [2018-03-27 04:55:12 +0000] [DEBUG]: Received Ping [2018-03-27 04:55:15 +0000] [DEBUG]: Sending request(xid=-2): Ping() [2018-03-27 04:55:15 +0000] [DEBUG]: Received Ping [2018-03-27 04:55:19 +0000] [DEBUG]: Sending request(xid=-2): Ping() [2018-03-27 04:55:19 +0000] [DEBUG]: Received Ping [2018-03-27 04:55:20 +0000] [INFO]: 200 GET /clusters (127.0.0.1) 1.13ms [2018-03-27 04:55:22 +0000] [DEBUG]: Sending request(xid=-2): Ping() [2018-03-27 04:55:22 +0000] [DEBUG]: Received Ping [2018-03-27 04:55:25 +0000] [DEBUG]: Sending request(xid=-2): Ping() [2018-03-27 04:55:25 +0000] [DEBUG]: Received Ping [2018-03-27 04:55:28 +0000] [DEBUG]: Sending request(xid=-2): Ping() [2018-03-27 04:55:28 +0000] [DEBUG]: Received Ping [2018-03-27 04:55:31 +0000] [DEBUG]: Sending request(xid=-2): Ping() [2018-03-27 04:55:31 +0000] [DEBUG]: Received Ping [2018-03-27 04:55:35 +0000] [DEBUG]: Sending request(xid=-2): Ping() [2018-03-27 04:55:35 +0000] [DEBUG]: Received Ping [2018-03-27 04:55:38 +0000] [DEBUG]: Sending request(xid=-2): Ping() [2018-03-27 04:55:38 +0000] [DEBUG]: Received Ping [2018-03-27 04:55:41 +0000] [DEBUG]: Sending request(xid=-2): Ping() [2018-03-27 04:55:41 +0000] [DEBUG]: Received Ping [2018-03-27 04:55:44 +0000] [DEBUG]: Sending request(xid=-2): Ping() [2018-03-27 04:55:44 +0000] [DEBUG]: Received Ping [2018-03-27 04:55:47 +0000] [DEBUG]: Sending request(xid=-2): Ping() [2018-03-27 04:55:47 +0000] [DEBUG]: Received Ping [2018-03-27 04:55:50 +0000] [DEBUG]: Sending request(xid=-2): Ping() [2018-03-27 04:55:50 +0000] [DEBUG]: Received Ping [2018-03-27 04:55:54 +0000] [DEBUG]: Sending request(xid=-2): Ping() [2018-03-27 04:55:54 +0000] [DEBUG]: Received Ping [2018-03-27 04:55:57 +0000] [DEBUG]: Sending request(xid=-2): Ping() [2018-03-27 04:55:57 +0000] [DEBUG]: Received Ping ``` `heorn-ui`: ``` yitian@heron04:~$ heron-ui [2018-03-27 04:54:56 +0000] [INFO]: Listening at http://0.0.0.0:8889 
[2018-03-27 04:54:56 +0000] [INFO]: Using tracker url: http://127.0.0.1:8888 [2018-03-27 04:54:56 +0000] [INFO]: Using base url: [2018-03-27 04:54:56 +0000] [INFO]: static/ [2018-03-27 04:55:20 +0000] [INFO]: 302 GET / (172.16.17.32) 1.10ms [2018-03-27 04:55:20 +0000] [INFO]: 304 GET /topologies (172.16.17.32) 32.23ms [2018-03-27 04:56:18 +0000] [INFO]: 304 GET /topologies (172.16.17.32) 16.55ms ``` What's more, I can't find `LOG-FILES` of heron cluster that deployed in `Aurora`, `Zookeeper` and `Mesos`. Can you tell me it if you know it? And I don't know what's wrong with it? Thanks for your help!<issue_comment>username_1: Wondering are you running Heron in a cluster or running in local laptop? If you are running in a cluster - are you running large topologies? If you are running in local laptop - it should be fast. Can you do the following? ``` heron-tracker --verbose ``` and check the logs which might give a clue. Upvotes: 1 <issue_comment>username_2: I have solved this problem by checking the Internet connection is normal. Because I open the Heron UI in VM, there is something wrong with Internet. Upvotes: 1 [selected_answer]
2018/03/20
2,887
8,618
<issue_start>username_0: I have been trying to download a Kaggle dataset using Python. However, I was facing issues using the `requests` method, and the downloaded output .csv file is a corrupted HTML file. ``` import requests # The direct link to the Kaggle data set data_url = 'https://www.kaggle.com/crawford/gene-expression/downloads/actual.csv' # The local path where the data set is saved. local_filename = "actsual.csv" # Kaggle Username and Password kaggle_info = {'UserName': "myUsername", 'Password': "<PASSWORD>"} # Attempts to download the CSV file. Gets rejected because we are not logged in. r = requests.get(data_url) # Login to Kaggle and retrieve the data. r = requests.post(r.url, data = kaggle_info) # Writes the data to a local file one chunk at a time. f = open(local_filename, 'wb') for chunk in r.iter_content(chunk_size = 512 * 1024): # Reads 512KB at a time into memory if chunk: # filter out keep-alive new chunks f.write(chunk) f.close() ``` Output file ``` Gene expression dataset (Golub et al.)
| Kaggle (function () { var originalError = window.onerror; window.onerror = function (message, url, lineNumber, columnNumber, error) { var handled = originalError && originalError(message, url, lineNumber, columnNumber, error); var blockedByCors = message && message.toLowerCase().indexOf("script error") >= 0; return handled || blockedByCors; }; })(); var appInsights=window.appInsights||function(config){ function i(config){t[config]=function(){var i=arguments;t.queue.push(function(){t[config].apply(t,i)})}}var t={config:config},u=document,e=window,o="script",s="AuthenticatedUserContext",h="start",c="stop",l="Track",a=l+"Event",v=l+"Page",y=u.createElement(o),r,f;y.src=config.url||"https://az416426.vo.msecnd.net/scripts/a/ai.0.js";u.getElementsByTagName(o)[0].parentNode.appendChild(y);try{t.cookie=u.cookie}catch(p){}for(t.queue=[],t.version="1.0",r=["Event","Exception","Metric","PageView","Trace","Dependency"];r.length;)i("track"+r.pop());return i("set"+s),i("clear"+s),i(h+a),i(c+a),i(h+v),i(c+v),i("flush"),config.disableExceptionTracking||(r="onerror",i("\_"+r),f=e[r],e[r]=function(config,i,u,e,o){var s=f&&f(config,i,u,e,o);return s!==!0&&t["\_"+r](config,i,u,e,o),s}),t }({ instrumentationKey:"<KEY>", disableAjaxTracking: true }); window.appInsights=appInsights; appInsights.trackPageView(); ```<issue_comment>username_1: I would recommend checking out [Kaggle API](https://github.com/Kaggle/kaggle-api) instead of using your own code. As per latest version, an example command to download dataset is `kaggle datasets download -d zillow/zecon` Upvotes: 3 <issue_comment>username_2: Just to make things easy for the next person, I combined the *fantastic* [answer from <NAME>ner](https://stackoverflow.com/a/50876207/898057) with a little bit of code that takes the raw `csv` info and puts it into a `Pandas DataFrame`, assuming that `row 0` has the column names. I used it to download the Pima Diabetes dataset from Kaggle, and it worked swimmingly. 
I'm sure there are more elegant ways to do this, but it worked well enough for a class I was teaching, is easily interpretable, and lets you get to analysis with minimal fuss. ``` import pandas as pd import requests import csv payload = { '__RequestVerificationToken': '', 'username': 'username', 'password': '<PASSWORD>', 'rememberme': 'false' } loginURL = 'https://www.kaggle.com/account/login' dataURL = "https://www.kaggle.com/uciml/pima-indians-diabetes-database/downloads/diabetes.csv" with requests.Session() as c: response = c.get(loginURL).text AFToken = response[response.index('antiForgeryToken')+19:response.index('isAnonymous: ')-12] #print("AntiForgeryToken={}".format(AFToken)) payload['__RequestVerificationToken']=AFToken c.post(loginURL + "?isModal=true&returnUrl=/", data=payload) download = c.get(dataURL) decoded_content = download.content.decode('utf-8') cr = csv.reader(decoded_content.splitlines(), delimiter=',') my_list = list(cr) #for row in my_list: # print(row) df = pd.DataFrame(my_list) header = df.iloc[0] df = df[1:] diab = df.set_axis(header, axis='columns', inplace=False) # to make sure it worked, uncomment this next line: # diab ``` ` Upvotes: -1 <issue_comment>username_3: Basically, if you want to use the Kaggle **python** API (the solution provided by @minh-triet is for the command line **not** for python) you have to do the following: ``` import kaggle kaggle.api.authenticate() kaggle.api.dataset_download_files('The_name_of_the_dataset', path='the_path_you_want_to_download_the_files_to', unzip=True) ``` I hope this helps. 
Upvotes: 5 <issue_comment>username_4: The Kaggle API key and username are available on your Kaggle profile page, and the dataset download link is available on the dataset details page on Kaggle ``` #Set the environment variables import os os.environ['KAGGLE_USERNAME'] = "xxxx" os.environ['KAGGLE_KEY'] = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" !kaggle competitions download -c dogs-vs-cats-redux-kernels-edition ``` Upvotes: 3 <issue_comment>username_5: Ref <https://github.com/Kaggle/kaggle-api> **Step 1:** Try installing Kaggle ``` pip install kaggle # Windows pip install --user kaggle # **Mac/Linux**. ``` **Step 2:** Update your credentials in `.kaggle/kaggle.json`, so that kaggle can authenticate based on the token you generated on Kaggle. ref: <https://medium.com/@ankushchoubey/how-to-download-dataset-from-kaggle-7f700d7f9198> **Step 3:** Now instead of `kaggle competitions download ..` run `~/.local/bin/kaggle competitions download ..` to avoid a "kaggle: command not found" error Upvotes: -1 <issue_comment>username_6: Before anything: ``` pip install kaggle ``` For the dataset: ``` import os os.environ['KAGGLE_USERNAME'] = "uname" # username from the json file os.environ['KAGGLE_KEY'] = "kaggle_key" # key from the json file !kaggle datasets download -d zynicide/wine-reviews ``` For the competitions: ``` import os os.environ['KAGGLE_USERNAME'] = "uname" # username from the json file os.environ['KAGGLE_KEY'] = "kaggle_key" # key from the json file !kaggle competitions download -c dogs-vs-cats-redux-kernels-edition ``` Some time ago I provided another [similar answer](https://stackoverflow.com/a/55937615/5884955). Upvotes: -1 <issue_comment>username_7: Full version of the Download_Kaggle_Dataset_To_Colab example with explanation, which started working for me under Windows ``` #Step1 #Input: from google.colab import files files.upload() #this will prompt you to upload the kaggle.json. Download from Kaggle>Kaggle API-file.json.
Save to PC to PC folder and choose it here #Output Sample: #kaggle.json #kaggle.json(application/json) - 69 bytes, last modified: 29.06.2021 - 100% done #Saving kaggle.json to kaggle.json #{'kaggle.json': #b'{"username":"sergeysukhov7","key":"23d4d4abdf3bee8ba88e653cec******"}'} #Step2 #Input: !pip install -q kaggle !mkdir -p ~/.kaggle !cp kaggle.json ~/.kaggle/ !ls ~/.kaggle !chmod 600 /root/.kaggle/kaggle.json # set permission #Output: #kaggle.json #Step3 #Input: #Set the enviroment variables import os os.environ['KAGGLE_USERNAME'] = "sergeysukhov7" #manually input My_Kaggle User_Name os.environ['KAGGLE_KEY'] = "23d4d4abdf3bee8ba88e653cec5*****" #manually input My_Kaggle Key #Step4 #!kaggle datasets download -d zillow/zecon #download dataset to default folder content/zecon.zip if I want #find kaggle dataset link (for example) https://www.kaggle.com/willkoehrsen/home-credit-default-risk-feature-tools and choose part_of_the_link - willkoehrsen/home-credit-default-risk-feature-tools #set link_from Kaggle willkoehrsen/home-credit-default-risk-feature-tools #set Colab folder download_to /content/gdrive/My Drive/kaggle/credit/home-credit-default-risk-feature-tools.zip !kaggle datasets download -d willkoehrsen/home-credit-default-risk-feature-tools -p /content/gdrive/My\ Drive/kaggle/credit #Output #Downloading home-credit-default-risk-feature-tools.zip to /content/gdrive/My Drive/kaggle/credit #100% 3.63G/3.63G [01:31<00:00, 27.6MB/s] #100% 3.63G/3.63G [01:31<00:00, 42.7MB/s] ``` Upvotes: 1 <issue_comment>username_8: I have really struggled with the Kaggle API so I use `opendatasets`. It is important to have your `kaggle.json` in the same folder as your notebook. ``` pip install opendatasets import opendatasets as od od.download("https://www.kaggle.com/competitions/tlvmc-parkinsons-freezing-gait-prediction/data","/mypath/goes/here") ``` [Documentation](https://pypi.org/project/opendatasets/) Upvotes: 0
2018/03/20
1,545
6,834
<issue_start>username_0: I'm currently a university student, waiting to choose a great Bachelor thesis. I've been wanting to create a language for a long time and since I think I am able to, I would like to hear opinions on the following matter: I know lots of languages, including C, C++, Python, Erlang, PHP, Javascript, etc. I can pretty much choose the one I want as the base for creating a language. The point is: I've seen a lot of people doing it with Python, which is great, but I am best skilled in PHP. Not plain PHP of course, I'm a big Laravel fan. Apparently, a community driven project called Laravel Zero (<http://laravel-zero.com/>) allows creating great console applications in PHP, which made me wonder... What if I use this as my base? A few key points: I don't mind about speed, I don't mind about optimization. I am sorry for C / C++ fans but I won't choose that as a starting point. If you're into programming languages, I may ask another question: Is it better to create a compiled or interpreted language? Why? As far as I know, creating an interpreted language will always require that "mother" language to be present somehow, since you can't self-host your interpreter unless it's in binary code. Anybody got anything of interest to share with me? I would love to hear opinions and stuff about it. For example, where's the best starting point, what should I look at before entering into these subjects, etc. ANYTHING would be of great help. Thanks<issue_comment>username_1: For the most part the programming language does not matter. If you want to use lexer+parser generators, you'll want to use a language for which those are available. That's the case for most languages that aren't totally obscure or domain-specific (including PHP according to a quick search), but there are certainly significant differences in quality between different generators, so you might want to take a closer look at the quality of the available tools before picking a language.
Of course that's only a consideration if you do want to use lexer and/or parser generators. If you're going to write your lexer and parser yourself, any language will do. If you decide to write a compiler and you want to use LLVM as a back end, it'd be a plus if there are bindings for LLVM for your language. That does not seem to be the case for PHP (a search only brought up [this extension](https://github.com/RickySu/php-llvm_bind), which is used to call functions in LLVM-bitcode, not to generate LLVM-bitcode). On the other hand, you can always generate LLVM-assembly as text and then invoke the LLVM command line tools. And if you're writing a compiler without LLVM or an interpreter, this does not matter anyway. It helps if your language has a map data structure to define the symbol table, but most languages have that. I personally like functional languages for language implementations as immutable maps are a good way to represent symbol tables and algebraic data types are a good way to represent ASTs, but none of that is strictly necessary. Almost any language that you're comfortable with can be used to implement languages without too much trouble. > > Is it better to create a compiled or interpreted language? > > > That depends entirely on your requirements and the properties of your language. Note that "compiled" or "interpreted" aren't really properties of the language, but of the language's currently available implementations. There's the language and then there is its implementation (or implementations). The more "dynamic" features your language has (such as defining new functions or variables at run time, for example), the harder it is to write a compiler, but even without those, writing an interpreter tends to be easier. So it can certainly make sense to start with an interpreter, even if you plan to eventually go with a compiler (or a JIT compiler). 
Most of the front-end and mid-end phases can remain untouched when switching from an interpreter to a compiler anyway. So that's not as much of a waste of existing work as you might think. > > As far as I know, creating an interpreted language, will always require that "mother" language to be present somehow, since you can't self-host your interpreter unless it's in binary code. > > > Right, if you write an interpreter and your host language also only has interpreters, you'll need your interpreter as well as an interpreter for the host language (to run your interpreter) in order to run programs written in your language. Of course, you can always rewrite your interpreter in a language for which compilers exist, which isn't any more work than self-hosting (which is a complete rewrite anyway unless your source language is supposed to be so close to your host language that you can write your interpreter in the intersection of the two languages). Until you create a self-hosting compiler, the same would be true for your compiler though: as long as your compiler is written in PHP, you'll need PHP to compile your language (though not to run the compiled programs). > > For example, where's the best starting point, what should I look before entering into this subjects, etc ANYTHING would be of great help. > > > The [tag wiki for the compiler construction tag](https://stackoverflow.com/tags/compiler-construction/info) has a list of resources about compiler construction. Much of that information is also relevant when building interpreters. Upvotes: 3 [selected_answer]<issue_comment>username_2: Creating a fully compiled programming language can be a lot of work with tons of fiddly details to worry about. And may be limited to a particular processor and OS. For that reason, creating an interpreted language can be an easier task, and it can be written to run on anything. 
But you will really need to write the interpreter in a compiled language, otherwise it would be hopelessly slow. If the interpreter is split into two tasks, a compiler (to some sort of byte-code) and an interpreter, then any language can be used for the compiler. Having a dependency on another language (the 'mother' language as you call it) is not important. Most languages will have some sort of dependency (so CPython is written in C). If creating a compiled language, then you may additionally need tools like assemblers and linkers, which you probably don't want to write yourself. Or you generate output in the form of an existing language (such as C source, which I believe is how C++ started out). What's important is that someone can write programs in your new language, and somehow be able to run that program by whatever means you provide. (I've created a number of languages but of only two real varieties: low-level compiled, and mid-level interpreted. Each is used to write the compiler/interpreter of the other.) Upvotes: 0
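To make the interpreter side of this trade-off concrete, here is a deliberately tiny tree-walking interpreter, sketched in Python. The node shapes and names are invented for illustration (they come from neither answer); the hand-written tuple AST stands in for what a parser front end would produce, which is exactly the part that survives a later switch from interpreter to compiler.

```python
def evaluate(node, env):
    """Recursively evaluate a tuple-based AST node against a variable environment."""
    kind = node[0]
    if kind == "num":                  # ("num", 3) -> literal
        return node[1]
    if kind == "var":                  # ("var", "x") -> environment lookup
        return env[node[1]]
    if kind == "add":                  # ("add", left, right)
        return evaluate(node[1], env) + evaluate(node[2], env)
    if kind == "mul":                  # ("mul", left, right)
        return evaluate(node[1], env) * evaluate(node[2], env)
    raise ValueError(f"unknown node kind: {kind}")

# The AST for (x + 2) * 3, as a parser might emit it:
ast = ("mul", ("add", ("var", "x"), ("num", 2)), ("num", 3))
print(evaluate(ast, {"x": 4}))   # 18
```

Everything above the evaluation step (lexing, parsing, the symbol-table `env`) is shared front-end work; a compiler would replace only the recursive `evaluate` with code generation.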
2018/03/20
326
1,089
<issue_start>username_0: I have been working to remove the dotted lines used to highlight the focused element, but it seems I can't find any solution. It's even visible in the [SAP UI5 Demo](https://sapui5.hana.ondemand.com/#/api/sap.ui.layout.Splitter/methods/Summary) kit. [Image highlighting the issue.](https://i.stack.imgur.com/q7Nkh.png) Could you please suggest a solution? Thanks, Samee<issue_comment>username_1: If you want to remove the dotted line you can override this CSS class: ``` .sapMLIBFocusable:focus { outline: 1px dotted #000000; outline-offset: -1px; } ``` I do not recommend overriding this class directly; rather, add a style class to your element and include that as a selector, like so: ``` .yourclass .sapMLIBFocusable:focus { outline: 2px solid red; outline-offset: -1px; } ``` Upvotes: 1 <issue_comment>username_2: You can override SAP CSS classes like this: ``` html.sap-desktop .sapUiBody :focus { outline: none;} ``` This is an example for the class *.sapUiBody*, but you can use any other SAP class or even one of yours. Upvotes: -1
2018/03/20
502
1,729
<issue_start>username_0: I am trying to automate a Python script on a Google Cloud VM using Crontab. When I run `python Daily_visits.py` my code runs as expected and generates a table in BigQuery. My Crontab job is as follows: `*/2 * * * * /usr/bin/python Daily_visits.py` but it generates no output. I've run `which python` and I get `/usr/bin/python` back, and I have run `chmod +x Daily_visits.sh` on my file. What else am I missing?<issue_comment>username_1: You should use the absolute path of the Daily\_visits.py file. Go to the directory that contains Daily\_visits.py and run the command: `pwd` Append the file name to the output to get the full path, for example: `/var/www/Daily_visits.py` Use that path in the crontab entry: `*/2 * * * * /usr/bin/python /var/www/Daily_visits.py` Upvotes: 3 [selected_answer]<issue_comment>username_2: You should have written something like ``` */2 * * * * /usr/bin/python /path/Daily_visits.py ```
2018/03/20
2,365
8,298
<issue_start>username_0: I'm new to Java and I'm making a Tetris game on Android Studio. I'm trying to do a "Pause/Start" button and it doesn't work. When I press the button PAUSE, the music and the game stops (it works as it should). But when I press Start again, the music continues but the game doesn't resume. I've made a function "pause" with a `sleep()` inside a `while`, and a function "restart" with a boolean that becomes false (this sets the condition of the while) Here is my code : ``` private void pause() { ps=true; while(ps) { try { sleep(4000); } catch (InterruptedException e) { e.printStackTrace(); } } } private void restart() { ps=false; } ``` Here is a part of my `MainActivity`: ``` public void pauseButton (View view) { if (ps == false) { ps=true; GameState.getInstance().setUserAction(UserAction.PAUSE); mediaPlayer.pause(); } else { ps=false; GameState.getInstance().setUserAction(UserAction.START); mediaPlayer.start(); } } ``` And here is what "Useraction.Pause/Start" is doing ``` case PAUSE: pause(); break; case START: restart(); break; ``` how to make the function restart works ? I need to make the "while" stop or maybe a sleep stops in this function and I cannot.. Thanks for your help! 
``` public class GameThread extends Thread{ private Shape shape; private Set remainingBlockSet = new HashSet<>(); private boolean isRunning = true; private long time; private char[] shapeTypes={'O', 'I', 'J', 'L', 'Z', 'S','T'}; private boolean ps = true; GameThread(){ //La première pièce du jeu apparait spawnNewShape(); //Initialisation du temps this.time = System.currentTimeMillis(); } //---------------------------------------------------------------------------------------------- // METHODES DE GESTION DE LA THREAD //---------------------------------------------------------------------------------------------- /\*\* \*Boucle événementielle \*/ @Override public void run() { while (isRunning) { //On bouge la pièce en fonction des demande utilisateur (bouton préssé) processUserAction(); //La pièce tombe processMoveDown(); //On enlève les lignes remplie removeCompletedRows(); //On effectue le rendu graphique RenderManager.getInstance().render(shape, remainingBlockSet); } } /\*\* \* Arret de la boucle événementielle \*/ void stopRunning(){ isRunning = false; } //---------------------------------------------------------------------------------------------- // METHODES DE GESTION DU JEU //---------------------------------------------------------------------------------------------- /\*\* \* Traitement des actions utilisateur \*/ private void processUserAction() { //On recupère la dernière action UserAction action = GameState.getInstance().getUserAction(); switch (action) { case LEFT: moveLeft(); break; case RIGHT: moveRight(); break; case ROTATE: rotate(); break; case FALL: fall(); break; case PAUSE: pause(); break; case RESUME: restart(); break; } //On définit la dernière action à nulle GameState.getInstance().setUserAction(UserAction.NONE); } // met en pause le jeu public void pause() { ps = true; while(ps) { try { sleep(5000); } catch (InterruptedException e) { e.printStackTrace(); } } } // permet de relancer le jeu public void restart() { ps=false; } private void 
fall(){ while(shape.canMoveDown(remainingBlockSet)){ shape.moveDown(); } } private void rotate(){ shape.rotate(remainingBlockSet); } /\*\* \* Décale la pièce d'une case vers la gauche \*/ private void moveLeft(){ shape.moveLeft(remainingBlockSet); } /\*\* \* Décale la pièce d'une case vers la droite \*/ private void moveRight(){ shape.moveRight(remainingBlockSet); } /\*\* \* Décale la pièce d'une case vers le bas \*/ private void processMoveDown(){ //Si le laps de temps n'est pas écoulé on ne fait rien long now = System.currentTimeMillis(); if(now-time < GameConstants.TIME\_LAPS){ return; } //Sinon la pièce descend if(shape.canMoveDown(remainingBlockSet)){ shape.moveDown(); } else{ processShapeCollision(); } //On redéfinit le temps comme le temps actuel time = System.currentTimeMillis(); } private void processShapeCollision(){ remainingBlockSet.addAll(shape.getBlockList()); spawnNewShape(); } /\*\* \* A partir d'un caractère choisis aléatoirement, cette methode permet de générer la pièce qui \* lui est associé. \*/ private void spawnNewShape(){ int x = (GameConstants.COLUMNS/2)-2; int y = 0; char shapeType = chooseRandomShape(); switch (shapeType){ case 'O': this.shape = new OShape(x,y); break; case 'T': this.shape = new TShape(x,y); break; case 'S': this.shape = new SShape(x,y); break; case 'I': this.shape = new IShape(x,y); break; case 'L': this.shape = new LShape(x,y); break; case 'Z': this.shape = new ZShape(x,y); break; case 'J': this.shape = new JShape(x,y); break; default: //imposible break; } } /\*\* \* Cette methode génère aléatoirement un entier entre 0 et 7. Ce dernier permet ensuite de \* séléctioner le caractère de la pièce à faire appraitre. \* @return un caractère correspondant à la prochaine pièce à faire apparaitre. 
\*/ private char chooseRandomShape(){ int x = (int) (Math.random() \* shapeTypes.length); return shapeTypes[x]; } private void removeCompletedRows() { for(int y = GameConstants.ROWS-1; y>0; y--){ List line = getBlockLine(y); if (isLineCompleted(line)){ //Log.i(TAG,"la ligne"+y+"est pleine :"+line); removeLine(line); moveDownAboveBlocks(y); } } } private List getBlockLine(int y) { List line = new ArrayList<>(); for(Block block : remainingBlockSet){ if(y == block.getY()){ line.add(block); } } return line; } private boolean isLineCompleted(List line) { for(int x = 0; x< GameConstants.COLUMNS; x++){ boolean found = false; for(Block block : line){ if (block.getX() == x){ found = true; break; } } if(!found) { return false; } } return true; } private void removeLine(List line){ remainingBlockSet.removeAll(line); } private void moveDownAboveBlocks(int y){ for(Block block : remainingBlockSet){ if (block.getY() ``` }<issue_comment>username_1: You haven't said it explicitly but I assume that UI works in a different thread, so you can press pause/restart button without being blocked by the running 'while' loop Your code does not show either how you define `ps` variable - is it volatile? What we can see these code is NOT synchronized so does not put any memory barrier, in this case (when ps is not volatile) this variable might be 'cached' by the thread which runs the while loop and never re-read from memory again. 
Please make sure that it is volatile - or AtomicBoolean, or put some synchronization in place (which would be good to have anyway here unless all operations in pausing/starting are idempotent) Please refer here for more details: <https://www.concretepage.com/java/thread-communication-using-volatile-in-java> <https://jorosjavajams.wordpress.com/volatile-vs-synchronized/> Upvotes: 1 <issue_comment>username_2: Your problem is probably that you "Consume" the thread that you pause with which is almost certainly the GUI thread (Yes I said "the" gui thread because there is only one, as long as you have it, nothing else can happen on the GUI). The second button can never be pushed because the first is still using the thread. Any time you get any notification from the system, only take a very quick action and return the thread to the system immediately. (The exception is the thread passed to "main" which you can keep forever) the loop also won't do anything to pause your game unless you are doing the entire game on the gui thread. In other words if the game is on your main thread, locking up the gui thread won't stop it. When they press the pause button, don't have the button call pause(), instead have it set ps=true and return. Have your game loop (the thread that is actually running the game) check the ps flag every iteration and block as long as ps is true (The way you are doing it is fine, it just needs to be in other code). Also ps should absolutely be volatile but that's never the problem... Upvotes: 0
2018/03/20
439
1,593
<issue_start>username_0: I am trying to hide/show a `div` on anchor click, but that doesn't work. I have repeater, and in it there are so many posts. There are bound images, descriptions and comments (anchor tag). When I click on a comment, the associated `div` will be displayed. ``` [Show Search](#) Content goes here ``` It works perfectly outside of repeater item template, but when I placed it inside the repeater, then it doesn't work<issue_comment>username_1: I think the cleanest way to implement this is in the `ItemDataBound` event. Markup ``` [Show Search](#) Content goes here ``` Event ``` protected void repeater_ItemDataBound(object sender, RepeaterItemEventArgs e) { if (e.Item.ItemType == ListItemType.Item || e.Item.ItemType == ListItemType.AlternatingItem) { var link = e.Item.FindControl("showSearch") as HtmlControl; link.Attributes["onclick"] = $"$('{e.Item.FindControl("divSearch").ClientID}').toggle('medium'); return false;"; } } ``` Upvotes: 0 <issue_comment>username_2: Inside the template control, you need to use `<%#` instead of `<%=`: ``` [').toggle('medium');return false;">Show Search](#) ``` Upvotes: 0 <issue_comment>username_3: The other 2 answers given are forgetting that there are more that one `divSearch` in a Repeater, so you cannot directly access them in the aspx. For that you need to use FindControl, this can be done inline. You have to find the Panel inside the Container and get that ClientID. ``` <%# Eval("value") %> [').toggle('medium');return false;">Show](#) ``` Upvotes: 2 [selected_answer]
2018/03/20
2,852
6,275
<issue_start>username_0: Here is the query, ``` db.getCollection('_ad.insight').aggregate([ { $match:{ date: { $lte: ISODate('2018-12-31'), $gte: ISODate('2017-01-01') } } }, { $project:{ _id: 0, year: {$year: "$date"}, month: {$month: "$date"}, day: {$dayOfMonth: "$date"}, date: 1, clicks: 1, reach: 1 } }, { $group:{ _id: { month: "$month", year: "$year" }, clicks: {$sum: "$clicks"}, reach: {$sum: "$reach"}, date: {$addToSet: "$date"} } }, { $project:{ _id: 0, month: "$_id.month", year: "$_id.year", clicks: 1, reach: 1, date: 1 } } ]); ``` And the response I am getting, ``` /* 1 */ { "clicks" : 1096, "reach" : 33288, "date" : [ ISODate("2018-01-01T00:00:00.000+05:00"), ISODate("2017-12-31T00:00:00.000+05:00"), ISODate("2017-12-28T00:00:00.000+05:00"), ISODate("2017-12-26T00:00:00.000+05:00"), ISODate("2017-12-24T00:00:00.000+05:00"), ISODate("2017-12-23T00:00:00.000+05:00"), ISODate("2017-12-25T00:00:00.000+05:00"), ISODate("2017-12-29T00:00:00.000+05:00"), ISODate("2017-12-22T00:00:00.000+05:00"), ISODate("2017-12-21T00:00:00.000+05:00"), ISODate("2017-12-30T00:00:00.000+05:00"), ISODate("2017-12-20T00:00:00.000+05:00"), ISODate("2017-12-27T00:00:00.000+05:00") ], "month" : 12, "year" : 2017 }, /* 2 */ { "clicks" : 1629, "reach" : 98113, "date" : [ ISODate("2018-01-05T00:00:00.000+05:00"), ISODate("2018-01-04T00:00:00.000+05:00"), ISODate("2018-01-03T00:00:00.000+05:00"), ISODate("2018-01-07T00:00:00.000+05:00"), ISODate("2018-01-08T00:00:00.000+05:00"), ISODate("2018-01-02T00:00:00.000+05:00"), ISODate("2018-01-06T00:00:00.000+05:00") ], "month" : 1, "year" : 2018 } ``` Sample Collection: Its a flat structure, contains around 400 fields, but i am showing only those which i am using in query. 
``` { "_id" : ObjectId("5akjbrd51f193455adtrf6fc"), "clicks" : 5, "reach" : 10 "date" : ISODate("2018-01-06T00:00:00.000+05:00"), "post_engagement" : 127, "post_reactions" : 1, "post_shares" : 0, "qualificationfailed" : 0, "qualificationfailed_conversion_value" : 0 } ``` Desired Output: ``` /* 1 */ { "clicks" : 1096, "reach" : 33288, "date" : [ ISODate("2018-01-01T00:00:00.000+05:00"),//this shouldn't be here ISODate("2017-12-31T00:00:00.000+05:00"), ISODate("2017-12-28T00:00:00.000+05:00"), ISODate("2017-12-26T00:00:00.000+05:00"), ISODate("2017-12-24T00:00:00.000+05:00"), ISODate("2017-12-23T00:00:00.000+05:00"), ISODate("2017-12-25T00:00:00.000+05:00"), ISODate("2017-12-29T00:00:00.000+05:00"), ISODate("2017-12-22T00:00:00.000+05:00"), ISODate("2017-12-21T00:00:00.000+05:00"), ISODate("2017-12-30T00:00:00.000+05:00"), ISODate("2017-12-20T00:00:00.000+05:00"), ISODate("2017-12-27T00:00:00.000+05:00") ], "month" : 12, "year" : 2017 }, /* 2 */ { "clicks" : 1629, "reach" : 98113, "date" : [ // ISODate("2018-01-01T00:00:00.000+05:00") this should be in this group ISODate("2018-01-05T00:00:00.000+05:00"), ISODate("2018-01-04T00:00:00.000+05:00"), ISODate("2018-01-03T00:00:00.000+05:00"), ISODate("2018-01-07T00:00:00.000+05:00"), ISODate("2018-01-08T00:00:00.000+05:00"), ISODate("2018-01-02T00:00:00.000+05:00"), ISODate("2018-01-06T00:00:00.000+05:00") ], "month" : 1, "year" : 2018 } ``` The issue is, ISODate("2018-01-01T00:00:00.000+05:00") as you can see in output document 1, in date array the above mentioned date is on first index. It shows the "month" : 12 and "year" : 2017 as i am grouping by month and year. So my concern is, ISODate("2018-01-01T00:00:00.000+05:00") should belong to group number 2, that is 2 output document but its showing up in group 1. I don't know what i am doing wrong, as its a simple pipeline. Please help!!<issue_comment>username_1: Take note that `ISODate("2018-01-01T00:00:00.000+05:00")` is in **UTC** + 5. 
This means that this entry has this date 2017-12-31T19:00:00 on UTC time. Mongo is grouping the dates according to **UTC**. You might want to check this post for dealing with different timezones [How to aggregate by year-month-day on a different timezone](https://stackoverflow.com/questions/18852095/how-to-aggregate-by-year-month-day-on-a-different-timezone) Upvotes: 3 [selected_answer]<issue_comment>username_2: As username_1 mentioned, $year is by default using UTC time. You can now mention the timezone when you are converting date to string. Refer to <https://docs.mongodb.com/manual/reference/operator/aggregation/month/index.html> Below will do the job for you. ``` db.getCollection('_ad.insight').aggregate([ { $match:{ date: { $lte: ISODate('2018-12-31T00:00:00.000+05:00'), $gte: ISODate('2017-01-01T00:00:00.000+05:00') } } }, { $project:{ _id: 0, year: {$year: {date: "$date",timezone: "+0500"}}, month: {$month: {date: "$date",timezone: "+0500"}}, day: {$dayOfMonth: {date: "$date",timezone: "+0500"}}, date: 1, clicks: 1, reach: 1 } }, { $group:{ _id: { month: "$month", year: "$year" }, clicks: {$sum: "$clicks"}, reach: {$sum: "$reach"}, date: {$addToSet: "$date"} } }, { $project:{ _id: 0, month: "$_id.month", year: "$_id.year", clicks: 1, reach: 1, date: 1 } } ]); ``` Upvotes: 1
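The offset arithmetic behind the accepted answer can be checked directly with a small Python sketch; the datetime below is the exact one from the question that lands in the "wrong" group.

```python
from datetime import datetime, timezone, timedelta

tz_plus5 = timezone(timedelta(hours=5))

# ISODate("2018-01-01T00:00:00.000+05:00") from the question:
local = datetime(2018, 1, 1, 0, 0, tzinfo=tz_plus5)
utc = local.astimezone(timezone.utc)

print(local.isoformat())       # 2018-01-01T00:00:00+05:00
print(utc.isoformat())         # 2017-12-31T19:00:00+00:00
print((utc.year, utc.month))   # (2017, 12) <- what $year/$month group by, by default
```

The same instant is January 2018 at UTC+05:00 but December 2017 in UTC, which is why the pipeline places it in the first group unless a timezone is passed to the date operators.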
2018/03/20
802
2,790
<issue_start>username_0: How can I set a trap in my program so that, when I push the purchase button and the quantity to be purchased exceeds the amount remaining in my database, it will give an error saying there's not enough stock left? Currently it only goes to a negative value. SQL Server does not support unsigned values, and I do not know if what I want to do is even possible. Here's my code. ``` SqlConnection con = new SqlConnection( @"Data Source=DESKTOP-39SPLT0\SQLEXPRESS;Initial Catalog=posDB;Integrated Security=True"); string Query = "UPDATE tblProducts SET qty = qty - @quantity where pName = @name"; using (SqlCommand cmd = new SqlCommand(Query, con)) { cmd.Parameters.AddWithValue("@quantity", int.Parse(txBurger.Text)); cmd.Parameters.AddWithValue("@name", label1.Text); con.Open(); cmd.ExecuteNonQuery(); con.Close(); } using (SqlCommand cmd = new SqlCommand(Query, con)) { cmd.Parameters.AddWithValue("@quantity", int.Parse(txCheese.Text)); cmd.Parameters.AddWithValue("@name", label5.Text); con.Open(); cmd.ExecuteNonQuery(); con.Close(); } ```<issue_comment>username_1: You could set a `CONSTRAINT` on the value of `qty` on the table. For example:

```
CREATE TABLE test (qty int);
ALTER TABLE test ADD CONSTRAINT PosQty CHECK (qty >= 0);
GO
INSERT INTO test VALUES(3);
GO
UPDATE test SET qty = qty - 2; --Will work (3 - 2 = 1)
GO
UPDATE test SET qty = qty - 3; --Will fail (1 - 3 = -2)
GO
UPDATE test SET qty = qty - 1; --Will work (1 - 1 = 0)
GO
--Clean up
DROP TABLE test;
```

Of course, this may not be suitable, depending on your scenario. You definitely need to ensure you handle errors appropriately. Upvotes: 4 [selected_answer]<issue_comment>username_2: Perhaps you can use a stored proc call and wrap the whole piece of code into a transaction. The following isn't complete code, but you should be able to take it from here. 
```
declare @newQty int

begin tran updateQty

UPDATE tblProducts
SET qty = qty - @quantity
where pName = @name

select @newQty = qty
from tblProducts
where pName = @name

if (@newQty < 0)
Begin
    rollback tran updateQty;
    THROW 51000, 'The quantity is too low', 1;
END
else
BEGIN
    commit tran updateQty
END
```
Upvotes: 0 <issue_comment>username_3: Here is how I would do this: First, change your query:

```
UPDATE tblProducts
SET qty = qty - @quantity
where pName = @name AND qty >= @quantity;

SELECT @@ROWCOUNT;
```

Then, instead of using `ExecuteNonQuery`, use `ExecuteScalar` and check the result: if the number of records modified is `0`, it means that `@quantity` is bigger than the value of `qty`. Also, I would recommend adding a check constraint as shown in Larnu's [answer.](https://stackoverflow.com/a/49387106/3094533) Upvotes: 2
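The rowcount check from the last answer can be sketched end to end. SQLite stands in for SQL Server here, so the dialect differs slightly; `cur.rowcount` plays the role of `@@ROWCOUNT`, and the table and column names are taken from the question.

```python
import sqlite3

con = sqlite3.connect(":memory:")
# CHECK constraint as in the accepted answer, plus the guarded UPDATE below.
con.execute("CREATE TABLE tblProducts (pName TEXT, qty INTEGER CHECK (qty >= 0))")
con.execute("INSERT INTO tblProducts VALUES ('Burger', 3)")

def purchase(name, quantity):
    """Decrement stock; return False (no row touched) when stock is too low."""
    cur = con.execute(
        "UPDATE tblProducts SET qty = qty - ? WHERE pName = ? AND qty >= ?",
        (quantity, name, quantity))
    return cur.rowcount == 1

print(purchase("Burger", 2))   # True  (3 -> 1)
print(purchase("Burger", 3))   # False (only 1 left; the row is left untouched)
```

Because the `qty >= ?` guard is part of the UPDATE itself, the check and the decrement are atomic: no negative value can slip in between reading the stock and writing it.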
2018/03/20
582
1,953
<issue_start>username_0: How can I append the player-id to input field on page load? ``` Get OneSignal player id function getOneSignalToken() { //alert("RegisterId:" + android.getOneSignalRegisteredId()); $('#player-id').val($('#player-id').val() + android.getOneSignalRegisteredId()); }; ```
2018/03/20
628
2,011
<issue_start>username_0: Suppose to have two file, `home1.php` and `home2.php` in different folders. This is folder disposition: ``` root folder1 home1.php folder2 home2.php ``` I need to include `home1.php` in `home2.php` so in home2.php I do: ``` php include('../folder1/home1.php'); ? ``` But it give me `file not found`.Anyone can help me?
2018/03/20
679
2,317
<issue_start>username_0: I have a file with global variables: ``` @Injectable() export class Globals { public baseURL:string; public loginURL:string; public proxyURL:string; public servicesURL:string; constructor(platformLocation: PlatformLocation) { this.baseURL = (platformLocation as any).location.href; this.loginURL = this.baseURL + 'rest/login'; this.proxyURL = this.baseURL + 'rest/proxy'; this.servicesURL = this.baseURL + 'rest/serviceRegistry'; } } ``` At the moment my API-Calls fail because the variables aren't set yet. Is there a way to only inject this service when the constructor is run or do I have to use Observables?
2018/03/20
589
2,147
<issue_start>username_0: Consider the following C function

```
void underTest(){
    static int i = 0;
    i++;
    if (i > 50*1000*1000) {
        /* I want to test this */
    }
}
```

I need to unit-test this function, including the if block. Among the solutions I can foresee, there are: * Do whatever it takes to reach all cases using the expected interface (including calling the function a lot of times in that example), which may be a real hassle * Reading the memory map after test compilation to access `i` from its memory address + either modifying the object code + or changing the source and hoping the recompilation will be deterministic * Forget testing and use peer review instead, which is not a good solution either, given the cost **Note**: because this is code to be embedded in an aircraft, with very high requirements on test representativity, any form of code modification (including a `#define static extern` trick) is unacceptable. Also, modifying the code to remove the issue (e.g. making `i` global) would cost an arm. Does somebody know a trick out of this annoying situation? No matter how dirty it is, as long as the source code is untouched.<issue_comment>username_1: If by *modifying object code* you mean modifying the generated .elf, that will not work since `i` is allocated by the loader at *launch-time*... (surely I misunderstood it, just wanted to clarify it). You will need to find out the address of the `i` variable in the .bss section once loaded and access it directly. You do not say which compiler you are using, but, for example, AFAIK gcc has extensions to specify the address of a variable/symbol at the source code level and at link time; what about yours? Another solution is to place the .bss at a known address through a linker script. Upvotes: 2 [selected_answer]<issue_comment>username_2: Refactor your code to be testable. 
Transform ``` void underTest(){ static int i = 0; i++; if (i > 50*1000*1000) { /* I want to test this */ } } ``` to ``` void alsoUnderTest(){ /* I want to test this */ } void underTest(){ static int i = 0; i++; if (i > 50*1000*1000) { alsoUnderTest(); } } ``` Upvotes: -1
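The refactoring idea above can be sketched in language-neutral terms. Here is a Python illustration (all names are invented for illustration): once the threshold and the triggered action are injectable, the branch becomes reachable from a unit test without 50 million warm-up calls:

```python
CALL_THRESHOLD = 50 * 1000 * 1000

def on_threshold_reached():
    # The logic that was hidden inside the if-block; now directly testable.
    return "alarm"

def make_under_test(threshold=CALL_THRESHOLD, action=on_threshold_reached):
    state = {"i": 0}  # plays the role of the C 'static int i'

    def under_test():
        state["i"] += 1
        if state["i"] > threshold:
            return action()
        return None

    return under_test

# With a small injected threshold, the branch is easy to exercise.
f = make_under_test(threshold=3)
print([f() for _ in range(4)])  # [None, None, None, 'alarm']
```

Of course, this kind of dependency injection is exactly the source modification the question forbids; it is shown only to make the testability trade-off concrete.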
2018/03/20
828
3,172
<issue_start>username_0: I'm developing a Java client/server architecture where a client sends a message to the server using Jackson. The exchanged data is defined by the *Message* class: ``` public class Message { private Header header; //Object that contains only String private Object pdu; public Message() { } // Get & Set [...] } ``` This class can contain any object thanks to the *pdu* field. For example, a *Data* object can be instantiated and added as the message payload. ``` public class Data{ private String name; private String type; public Data() { } // Get & Set [...] } ``` On the server side, when the message is received, I would like to retrieve the nested object (Data). However, the exception "com.fasterxml.jackson.databind.node.ObjectNode cannot be cast to Model.Data" occurs when I try to cast the pdu into a Data object. How can I do that for any object? Here is the server snippet code: ``` Socket socket = serverSocket.accept(); is = new DataInputStream(socket.getInputStream()); os = new DataOutputStream(socket.getOutputStream()); BufferedReader in = new BufferedReader(new InputStreamReader(is)); ObjectMapper mapper = new ObjectMapper(); Message message = mapper.readValue(in.readLine(), Message.class); Data pdu = (Data) message.getPdu(); // Exception here ``` And here is the client snippet code: ``` Message msg = new Message(header, new Data("NAME", "TYPE")); ObjectMapper mapper = new ObjectMapper(); String jsonStr = mapper.writeValueAsString(msg); PrintWriter pw = new PrintWriter(os); pw.println(jsonStr); pw.flush(); ``` **Note**: The message sent by the client and received by the server is formatted as follows: `Message{header=Header{type='TYPE', senderAddr='ADDR', senderName='NAME'}, pdu={"name":"NAME","type":"TYPE"}}`
If `pdu` can store multiple different object types (currently it's declared as `Object`) a JSON field has to be used to tell Jackson what is the actual Java type e.g. by using `@JsonTypeInfo`: ``` @JsonTypeInfo(use = Id.NAME, include = As.PROPERTY, property = "type") @JsonSubTypes({ @JsonSubTypes.Type(value = DataA.class, name = "data-a"), @JsonSubTypes.Type(value = DataB.class, name = "data-b") }) ``` Another approach would be to write a custom serializer/deserializer for `pdu` field as explained [here](https://stackoverflow.com/questions/7161638/how-do-i-use-a-custom-serializer-with-jackson). Upvotes: 2 <issue_comment>username_2: I know this is an old question, but you can get Jackson to do what Karol described under the hood, using the object class as the type source, which saves you from having to enumerate the types. You still have to be sure that any object you use will be serializable, it's not magic. Anyway solution is in the javadoc for @JsonTypeInfo, declare your field as: ``` @JsonTypeInfo(use=JsonTypeInfo.Id.CLASS, property="pduClass") Object getPdu(); ``` This will add a "pduClass" field to the json representation, but won't affect the java model Upvotes: 0
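The discriminator idea behind `@JsonTypeInfo` is library-agnostic. A minimal Python sketch (class and field names are invented for illustration, not Jackson's actual mechanism) shows the round trip: serialization writes a `type` tag into the JSON, and deserialization dispatches on that tag to pick the concrete class:

```python
import json

REGISTRY = {}  # maps discriminator name -> class, like @JsonSubTypes

def register(name):
    def wrap(cls):
        REGISTRY[name] = cls
        cls.TYPE_NAME = name
        return cls
    return wrap

@register("data-a")
class DataA:
    def __init__(self, name):
        self.name = name

@register("data-b")
class DataB:
    def __init__(self, size):
        self.size = size

def dumps(obj):
    payload = dict(vars(obj))
    payload["type"] = obj.TYPE_NAME  # the discriminator @JsonTypeInfo would add
    return json.dumps(payload)

def loads(text):
    payload = json.loads(text)
    cls = REGISTRY[payload.pop("type")]  # dispatch on the discriminator
    return cls(**payload)

msg = dumps(DataA("sensor"))
obj = loads(msg)
print(type(obj).__name__, obj.name)  # DataA sensor
```

Without such a tag, the wire format `{"name":"NAME","type":"TYPE"}` is just an anonymous object, which is exactly why Jackson hands back an `ObjectNode` instead of a `Data`.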
2018/03/20
578
2,014
<issue_start>username_0: Something is wrong with my code, because I get a fatal error when I try to call to method `getCollectionNames()`. `listCollections()` works perfectly. This is my code. ``` require '../vendor/autoload.php'; $client = new MongoDB\client; $database = $client->test; $colections = $database->getCollectionNames(); foreach ($colections as $col) { var_dump($col); } ``` ...and this is the error. > > Fatal error: Uncaught Error: Call to undefined method > MongoDB\Database::getCollectionNames() in > (some route) Stack trace: #0 > {main} thrown in (some route) on > line (some line) > > > Does anyone know what I am doing wrong?
2018/03/20
571
1,790
<issue_start>username_0: Consider the code: ``` #include <atomic> #include <iostream> struct stru { int a{}; int b{}; }; int main() { std::atomic<stru> as; auto s = as.load(); std::cout << s.a << ' ' << s.b << std::endl; } ``` Note that although `stru` has default member initializers, it still qualifies as an aggregate type since C++14. `std::atomic<stru>` has a trivial default constructor. According to the standard, should the members of `as` be initialized to zero? clang 6.0.0 doesn't do this (see [here](https://wandbox.org/permlink/qHGGR66seZOkmH7c)), while gcc 7.2.0 seems to (see [here](https://wandbox.org/permlink/nlKKeEWJyy50Yzn4)).<issue_comment>username_1: Strictly speaking, I think both compilers are right, in that your program exhibits undefined behavior. To quote n4140 (C++14), [[atomics.types.operations.req]](https://timsong-cpp.github.io/cppwp/n4140/atomics.types.operations.req), emphasis mine: > > In the following operation definitions: > > > * an A refers to one of the atomic types. > > > [...] > > > > ``` > A::A() noexcept = default; > > ``` > > *Effects*: leaves the atomic object in an **uninitialized** state. [ Note: These semantics ensure compatibility with C. — end note ] > > > `as` is uninitialized before the load. So the usual spiel about undefined behavior must follow. Upvotes: 3 [selected_answer]<issue_comment>username_2: According to [cppreference](http://en.cppreference.com/w/cpp/atomic/atomic/atomic) the `std::atomic::atomic()` constructor doesn't initialize the object: > > > ``` > atomic() noexcept = default; > > ``` > > 1) The default constructor is trivial: no initialization takes place > other than zero initialization of static and thread-local objects. > std::atomic\_init may be used to complete initialization. > > > Upvotes: 0
2018/03/20
1,484
4,491
<issue_start>username_0: Please help me to redesign the below query for improving the performance - ``` select LT.id, LT.SalesAmount, RT.DiscountAmount, (LT.SalesAmount - isnull(RT.DiscountAmount,0.00)) as FinalAmount from @LeftTable as LT left join @RightTable as RT on RT.id = LT.id where (LT.SalesAmount - isnull(RT.DiscountAmount,0.00)) > 0 ``` Note - Above query is not exactly that query which is creating performance issue but i simplified it to explain it better here. I found that issue is, when we use ISNULL in where clause with right table column i.e., `isnull(RT.DiscountAmount,0.00)`. So, I am looking for alternate to above query where we can remove `isnull` from where clause. Input data - ``` declare @LeftTable table (id int, SalesAmount decimal(10,2)) declare @RightTable table (id int, DiscountAmount decimal(10,2)) insert into @LeftTable (id, SalesAmount) select 1, 10.00 union all select 2, 20.00 union all select 3, 50.00 insert into @RightTable (id, DiscountAmount) select 3, 5.00 union all select 5, 10.00 ``` Output required - ``` id SalesAmount DiscountAmount FinalAmount 1 10.00 NULL 10.00 2 20.00 NULL 20.00 3 50.00 5.00 45.00 ``` Actual query is like - ``` select col1, col2,..... from Table1 T1 inner join Table2 T2 on T2.id = T1.id inner join dbo.functionName(@variable1) f1 on f1.id = T2.id ................... left join (select col1, col2 from table3 T3 inner join dbo.functionName(@variable2) f2 on f2.id = T3.id) T4 ...................... where T2.col1 + isnull(t4.col2, 0.0) > 0 and ................. 
``` I hope I have mentioned all the details here, because that is all the information I have (actually, my colleague is facing this issue and I am trying to help him).<issue_comment>username_1: Here's one easy way: ``` where (RT.DiscountAmount IS NOT NULL AND LT.SalesAmount - RT.DiscountAmount > 0) OR (RT.DiscountAmount IS NULL AND LT.SalesAmount > 0) ``` Upvotes: 0 <issue_comment>username_2: You can try something like this; it eliminates the ISNULL function, but it will introduce a UNION-like behavior. ``` ( (LT.SalesAmount > 0 AND RT.DiscountAmount IS NULL) OR (LT.SalesAmount - RT.DiscountAmount > 0) ) ``` Upvotes: 1 <issue_comment>username_3: You can try something like the following: ``` ;WITH JoinResults AS ( SELECT LT.id, LT.SalesAmount, RT.DiscountAmount, LT.SalesAmount - RT.DiscountAmount as FinalAmount FROM @LeftTable as LT left join @RightTable as RT on RT.id = LT.id ) SELECT * FROM JoinResults AS J WHERE (J.DiscountAmount IS NULL AND J.SalesAmount > 0) OR J.FinalAmount > 0 ``` Upvotes: 0 <issue_comment>username_4: Try ``` declare @LeftTable table (id int primary key, SalesAmount decimal(10,2)); declare @RightTable table (id int primary key, DiscountAmount decimal(10,2)); insert into @LeftTable (id, SalesAmount) values (1, 10.00) , (2, 20.00) , (3, 50.00); insert into @RightTable (id, DiscountAmount) values (3, 5.00) , (2, 25.00) , (5, 10.00); select lt.id, lt.SalesAmount, rt.DiscountAmount, lt.SalesAmount - isnull(rt.DiscountAmount, 0) as net from @LeftTable lt left join @RightTable rt on lt.id = rt.id where (LT.SalesAmount - isnull(RT.DiscountAmount,0.00)) > 0; select lt.id, lt.SalesAmount, rt.DiscountAmount, lt.SalesAmount - rt.DiscountAmount as net from @LeftTable lt join @RightTable rt on lt.id = rt.id where (LT.SalesAmount - RT.DiscountAmount) > 0 union all select lt.id, lt.SalesAmount, null, lt.SalesAmount as net from @LeftTable lt left join @RightTable rt on lt.id = rt.id where LT.SalesAmount > 0 and rt.id is null; ``` Upvotes: 0
<issue_comment>username_5: Replacing the sub-query with a table variable that holds the sub-query's data improves the performance. This is our approach - ``` declare @xyz table (col1 int, col2 int) insert into @xyz (col1, col2) select col1, col2 from table3 T3 inner join dbo.functionName(@variable2) f2 on f2.id = T3.id select col1, col2,..... from Table1 T1 inner join Table2 T2 on T2.id = T1.id inner join dbo.functionName(@variable1) f1 on f1.id = T2.id ................... left join @xyz T4 ...................... where T2.col1 + isnull(t4.col2, 0.0) > 0 and ................. ``` Thank you @Xedni for the comments. Upvotes: 1 [selected_answer]
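As a correctness cross-check of the LEFT JOIN / ISNULL filter (not of SQL Server performance), here is a small sketch against the question's sample data using Python's stdlib sqlite3, where COALESCE stands in for ISNULL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE LeftTable (id INTEGER, SalesAmount REAL);
    CREATE TABLE RightTable (id INTEGER, DiscountAmount REAL);
    INSERT INTO LeftTable VALUES (1, 10.0), (2, 20.0), (3, 50.0);
    INSERT INTO RightTable VALUES (3, 5.0), (5, 10.0);
""")

rows = conn.execute("""
    SELECT LT.id, LT.SalesAmount, RT.DiscountAmount,
           LT.SalesAmount - COALESCE(RT.DiscountAmount, 0.0) AS FinalAmount
    FROM LeftTable LT
    LEFT JOIN RightTable RT ON RT.id = LT.id
    WHERE LT.SalesAmount - COALESCE(RT.DiscountAmount, 0.0) > 0
    ORDER BY LT.id
""").fetchall()

for row in rows:
    print(row)
# (1, 10.0, None, 10.0)
# (2, 20.0, None, 20.0)
# (3, 50.0, 5.0, 45.0)
```

This reproduces exactly the output the question asks for, so any rewrite of the WHERE clause (OR-split, UNION ALL, or the table-variable approach) can be validated against the same three rows.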
2018/03/20
1,270
4,937
<issue_start>username_0: This is my first question in stack overflow so forgive me if I am going against the rules. I want to take user input as string and reverse every nth word of the string. The n value is also entered by the user. If the user inputs invalid value the program should respond the user accordingly. Until now, I am able to take a string input and reverse the whole string. I need help to take nth value as an input and reverse the string with the n value. And the program should run without using "String.reverse" or any other string function. Hopefully, I elaborated every aspect of the problem. Thanks :) Sample I/O should look like: *User Input = "Hello World! Programming is fun"* *User Inputs n value = "2"* *Expected Output = "Hello !dlroW Programming si fun"* *User inputs n-value "2", which means every second word should be reversed.* Following is the program that I wrote till now: ``` import java.util.*; public class StringReverse { public static void main(String[] args) { // TODO Auto-generated method stub Scanner in = new Scanner(System.in); String original, reverse=""; System.out.print("Please enter a sentence to reverse: "); original= in.nextLine(); for (int i=original.length()-1; i>=0; i--) { reverse += original.charAt(i); } System.out.println(reverse); } } ```<issue_comment>username_1: Iterate your `String` and append (`+`) every character in a separated `String` until you reach a whitespace (`' '`). If a whitespace occures you have a word which you can store and count. Reverse the n-th word by iterating the word from behind (`i--` like in your actual code) and add every character to a `String` so you have a reversed word by simply adding the characters. I don't know whether concatenating (`+`) of `String` is allowed in your situation? Sounds like homework so i just give you some inspiration. Happy coding! :) Upvotes: 1 <issue_comment>username_2: You can try with following code. 
``` import java.util.Scanner; public class March21th { private static Scanner sc; public static void main(String[] args) { sc = new Scanner(System.in); System.out.print("Please enter a sentence to reverse: "); String str = sc.nextLine(); System.out.print("Please enter nth value: "); int n = sc.nextInt(); String[] strArr = str.split(" "); int len = strArr.length; String str2 = strArr[n-1]; //System.out.println("str2: "+str2); String strrev2 = strArr[len-n]; //System.out.println("strrev2: "+strrev2); char temp; char[] str2CharArr = str2.toCharArray(); char[] strrev2CharArr = strrev2.toCharArray(); int str2CharArrLen = str2CharArr.length; int strrev2CharArrLen = strrev2CharArr.length; //System.out.println("str2CharArrLen: "+str2CharArrLen); //System.out.println("strrev2CharArrLen: "+strrev2CharArrLen); for(int i=0,j=str2CharArrLen-1;i<j;i++,j--) { temp = str2CharArr[i]; str2CharArr[i] = str2CharArr[j]; str2CharArr[j] = temp; } for(int i=0,j=strrev2CharArrLen-1;i<j;i++,j--) { temp = strrev2CharArr[i]; strrev2CharArr[i] = strrev2CharArr[j]; strrev2CharArr[j] = temp; } strArr[n-1] = new String(str2CharArr); strArr[len-n] = new String(strrev2CharArr); String result = ""; for(int i=0;i<len;i++) { result += strArr[i]; if(i < len-1) { result += " "; } } System.out.println(result); } } ``` input: ``` String str = "Hello World! Programming is fun"; ``` output: ``` Hello !dlroW Programming si fun ``` If you don't want to use the split() function then take help from [this page](https://stackoverflow.com/questions/2939691/i-want-to-split-string-without-using-split-function) Upvotes: 2 [selected_answer]<issue_comment>username_1: I just had some time to implement another version of reversing a sentence with n-th value. Just a good practice. ;) ``` public static void main(String[] args) { sc = new Scanner(System.in); System.out.print("Please enter a sentence to reverse: "); String sentence = sc.nextLine(); System.out.print("Please enter nth value: "); int nthValue = sc.nextInt(); System.out.println(reverseSentence(sentence, nthValue)); } private static String reverseSentence(String sentence, int nthValue) { String reversedSentence = ""; String word = ""; int wordCount = 1; char[] sentenceChar = sentence.toCharArray(); for (int i=0; i < sentenceChar.length; i++) { char letter = sentenceChar[i]; if (i == sentenceChar.length - 1 && letter != ' ') { word += letter; } // letter is whitespace or end of sentence? if ((letter == ' ') || i == sentenceChar.length - 1) { // match n-th value?
if (wordCount % nthValue == 0) { reversedSentence += reverseWord(word) + ' '; } else { reversedSentence += word + ' '; } wordCount++; word = ""; } else { word += letter; } } return reversedSentence; } private static String reverseWord(String word) { String reversedWord = ""; char[] wordChar = word.toCharArray(); for (int i=wordChar.length - 1; i >= 0; i--) { reversedWord += wordChar[i]; } return reversedWord; } ``` Upvotes: 0
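The every-n-th-word scheme from the answers can be condensed to a few lines. A Python sketch (reversing characters manually, to mirror the question's "no reverse function" constraint) reproduces the sample I/O:

```python
def reverse_word(word):
    # Manual reversal -- no reversed()/slicing tricks, mirroring the
    # "no String.reverse or any other string function" constraint.
    out = ""
    for ch in word:
        out = ch + out
    return out

def reverse_every_nth(sentence, n):
    words = sentence.split(" ")
    for idx in range(n - 1, len(words), n):  # every n-th word, 1-based
        words[idx] = reverse_word(words[idx])
    return " ".join(words)

print(reverse_every_nth("Hello World! Programming is fun", 2))
# Hello !dlroW Programming si fun
```

For the five-word sample sentence with n = 2, "every 2nd word" and "word n plus word len-n" (the accepted answer's approach) happen to touch the same two words, which is why both interpretations match the expected output.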
2018/03/20
2,001
7,630
<issue_start>username_0: ``` Animal myAnimal = new Animal(); ``` I have this code above. As far as I know, it will do these things : 1. An animal object will be created on heap memory 2. The reference to that object will be passed to the reference variable "myAnimal" In other words, "myAnimal" variable holds the memory address of "Animal" object on heap. What I don't understand is that 1. How does the memory address value look like? 2. Is that one address or numerous address value? If only one, how can myAnimal have accesses to all object fields of Animal object like myAnimal.name, myAnimal.height,...? Can anyone explain this please? Thanks in advance.<issue_comment>username_1: This is an oversimplification: 1. It's just a number that represents some address location; how big that number is depends on your computer's architecture (32 bit or 64 bit) 2. It is one address value; it is the address location of the place where your object representation in memory **starts**. You can compare it with the address of your house. It has one address. All rooms in your house (fields) have a different location, but you have to enter through your front door (the 'start' location). Room locations are relative to your front door. I admit this example is a bit contrived, but you get the point... Upvotes: 3 <issue_comment>username_2: Addresses are just `long` numbers to denote the memory location of your object. And they depends on the bit's of the machine. It is 64 bit long on 64 bit machine and 32 bit long on 32 bit machine. > > How does the memory address value look like? > > > If you are really enthusiastic to see those big numbers in your console, you can actually explore the [Unsafe API](http://www.docjar.com/docs/api/sun/misc/Unsafe.html) ``` public native long getAddress(long address) ``` > > Fetches a native pointer from a given memory address. If the address is > zero, or does not point into a block obtained from #allocateMemory , the results are undefined. 
> > > If the native pointer is less than 64 bits wide, it is extended as > an unsigned number to a Java long. The pointer may be indexed by any > given byte offset, simply by adding that offset (as a simple integer) to > the long representing the pointer. The number of bytes actually read > from the target address may be determined by consulting #addressSize . > > > --- > > Is that one address or numerous address value? If only one, how can myAnimal have accesses to all object fields of Animal object like myAnimal.name, myAnimal.height,...? > > > Well, it should be **one** at the top level, and that address location may contain the address locations of others (I am not really sure; I have not actually tried running this on my machine). Upvotes: 2 <issue_comment>username_3: When an object is created, Java does not share its `Actual Memory Address` with reference variables; instead, Java creates an `index number` for your `object`, which is passed to the `reference variable` of that object. So, you are able to access your object with the help of that `index number`. Further, your object holds the `references` of its `children` and helps you to access them. Upvotes: 0 <issue_comment>username_4: The Java Virtual Machine Specification [states](http://docs.oracle.com/javase/specs/jvms/se8/html/jvms-2.html#jvms-2.4) > > There are three kinds of reference types: class types, array types, > and interface types. **Their values are references** to dynamically > created class instances, arrays, or class instances or arrays that > implement interfaces, respectively. > > > and [clarifies](https://docs.oracle.com/javase/specs/jvms/se8/html/jvms-2.html#jvms-2.2) > > Values of type *reference* can be thought of **as pointers to objects**. > > > So the variable `myAnimal` in ``` Animal myAnimal = new Animal(); ``` is storing a pointer to the `Animal` object on the heap. You ask > > How does the memory address value look like?
> > > A memory address is typically just a numerical value that's an offset into the process' allocated memory. When a process reads that value, it can directly address that location and read from or write to it (think of it as an offset index in an array). The object itself is more than just its address. In fact, its address can change multiple times over the lifetime of the JVM process as the garbage collector moves objects around. However, the [JVMS does not specify](https://docs.oracle.com/javase/specs/jvms/se8/html/jvms-2.html#jvms-2.7) the internal structure of an object. > > The Java Virtual Machine does not mandate any particular internal > structure for objects. > > > *In some of Oracle’s implementations of the Java Virtual Machine, a > reference to a class instance is a pointer to a handle that is itself > a pair of pointers: one to a table containing the methods of the > object and a pointer to the Class object that represents the type of > the object, and the other to the memory allocated from the heap for > the object data.* > > > This doesn't help greatly, but we can assume that the memory allocated for the object data has to be large enough to contain all the object's fields, and the data needs to be accessible quickly, i.e. in constant time, as opposed to proportionally to the amount of data. The typical solution is to again use offsets. Take this class for example ``` class Animal { private String name; private byte height; } ``` The field `name` is of type `String`, a reference type, which we know essentially just stores a pointer. If we assume our JVM only needs 32 bits to store a pointer, we know we only need 32 bits for this field. The field `height` is of type `byte`, which is specified to only need 8 bits. So each `Animal` object really only needs `32+8` bits, 5 bytes for its data. The JVM will most likely allocate more than that for its internal organization, but let's simplify to just those 5 bytes.
In your example, the JVM will allocate the first 4 bytes for the `name` and the next byte for the `height`. Something like ``` 0 8 16 24 32 40 +--------+--------+--------+--------+--------+ | name | height | ``` You ask > > Is that one address or numerous address value? If only one, how can > myAnimal have accesses to all object fields of Animal object like > myAnimal.name, myAnimal.height,...? > > > `myAnimal` holds only one address, but an expression like ``` myAnimal.height ``` can be thought of as reading 1 byte (because we know `height` is of type byte) from memory at the address determined by adding 4 (because the location of the data for `height` is offset by the 4 bytes needed for `name`) to the value stored in `myAnimal`. Consider `myAnimal` storing a pointer to a memory address of `12`, initialized with a `name` pointing to a String at memory address `1345` and a `height` value of 6. In memory, that could look like ``` myAnimal | v Heap: ...12.......13.......14.......15.......16.......17 Object offsets: ....0 1 2 3 4 5 ....+--------+--------+--------+--------+--------+ | 1345 | 6 | ``` To read `myAnimal.height`, the JVM would calculate `12 + 4 = 16` and read the byte (`6`) at that offset. To assign a new value to `myAnimal.name`, the JVM would calculate `12 + 0` and write 4 bytes representing the new pointer value, overwriting the `1345`. Upvotes: 3 [selected_answer]
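The base-plus-offset arithmetic in this answer can be simulated concretely. Here is a Python sketch using the stdlib struct module, with the same hypothetical values the answer uses (base address 12, a 4-byte `name` pointer at offset 0, a 1-byte `height` at offset 4); this is only a model of the idea, not how any real JVM lays out objects:

```python
import struct

NAME_OFFSET = 0    # 4-byte 'name' pointer lives at base + 0
HEIGHT_OFFSET = 4  # 1-byte 'height' lives at base + 4

heap = bytearray(32)  # a toy heap
base = 12             # the address myAnimal points at

# "Construct" the object: name pointer 1345, height 6.
struct.pack_into("<I", heap, base + NAME_OFFSET, 1345)
heap[base + HEIGHT_OFFSET] = 6

# myAnimal.height == read one byte at base + 4
print(heap[base + HEIGHT_OFFSET])                             # 6
# myAnimal.name == read the 4-byte pointer at base + 0
print(struct.unpack_from("<I", heap, base + NAME_OFFSET)[0])  # 1345
```

Assigning a new `name` is just overwriting the 4 bytes at the base, leaving the `height` byte untouched, exactly as the answer describes.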