2018/03/19
<issue_start>username_0: It says I have invalid syntax at `Sum += 1`. If my code is incorrect, what is a better way to go about counting how many even numbers are in a list?

```
def countEvens(listOfInts):
    '''
    - Returns an integer value representing the number of even numbers
      that exist in listOfInts.
    - Return 0 if listOfInts is not a list type or if no even number
      exists in listOfInts.
    - Note: elements in listOfInts can contain any data type.
    '''
    Sum = 0
    for x in listOfInts:
        if x % 2 == 0:
            return Sum += 1
    if type(listOfInts) != list:
        return 0
```<issue_comment>username_1: In Python you cannot `return` assignments. And `Sum += 1` is an assignment, it assigns `Sum + 1` to `Sum`. In fact the `return` isn't just a SyntaxError, it's also wrong (in a logical sense), so just remove it:

```
Sum = 0
for x in listOfInts:
    if x % 2 == 0:
        Sum += 1
return Sum
```

Alternatively you can use `sum` with a generator:

```
return sum(1 for value in listOfInts if value % 2 == 0)
```

Upvotes: 2 <issue_comment>username_2: The syntax error comes from this line, as you say:

```
return Sum += 1
```

That's because `(Sum += 1)` is not a valid value to return from a function. It's a separate statement. Keeping your code as close as possible, try this:

```
Sum += 1
return Sum
```

or, more simply:

```
return Sum + 1
```

As for a more Pythonic approach,

```
def countEvens(listOfInts):
    return sum( x % 2 == 0 for x in listOfInts )
```

does the entire thing. Upvotes: 0
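Neither answer handles the docstring's remaining requirements (return 0 for non-list input, elements of any data type). A possible combined version might look like the sketch below — the `isinstance` guards and the `count_evens` name are my additions, not part of either answer:

```python
def count_evens(list_of_ints):
    """Count even integers; return 0 for non-list input or when no evens exist."""
    if not isinstance(list_of_ints, list):
        return 0
    # Guard against elements like "abc" that would raise TypeError on x % 2;
    # bool is excluded because True/False are also instances of int.
    return sum(1 for x in list_of_ints
               if isinstance(x, int) and not isinstance(x, bool) and x % 2 == 0)

print(count_evens([1, 2, "a", 4.0, 6]))  # 2
print(count_evens("not a list"))         # 0
```

Whether a float like `4.0` should count as "even" is a judgment call; the sketch above only counts proper ints.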
2018/03/19
<issue_start>username_0: I've got problem with renaming column and migrating changes to database. Migration: ``` protected override void Up(MigrationBuilder migrationBuilder) { migrationBuilder.RenameColumn( name: "int", schema: "Gamgoo.More", table: "Rating", newName: "GivenRating"); } protected override void Down(MigrationBuilder migrationBuilder) { migrationBuilder.RenameColumn( name: "GivenRating", schema: "Gamgoo.More", table: "Rating", newName: "int"); } ``` Commands that I'm using are (from Package Manager Console / Powershell): Add-Migration RatingFix -p Gamgoo.Data.Context -c GamgooContext Update-Database And the error message: ``` Applying migration '20180319172151_RatingFix'. Microsoft.EntityFrameworkCore.Migrations[200402] Applying migration '20180319172151_RatingFix'. fail: Microsoft.EntityFrameworkCore.Database.Command[200102] Failed executing DbCommand (31ms) [Parameters=[], CommandType='Text', CommandTimeout='30'] EXEC sp_rename N'Gamgoo.More.Rating.int', N'GivenRating', N'COLUMN'; System.Data.SqlClient.SqlException (0x80131904): Either the parameter @objname is ambiguous or the claimed @objtype (COLUMN) is wrong. 
at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction) at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction) at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj, Boolean callerHasConnectionLock, Boolean asyncClose) at System.Data.SqlClient.TdsParser.TryRun(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj, Boolean& dataReady) at System.Data.SqlClient.SqlCommand.RunExecuteNonQueryTds(String methodName, Boolean async, Int32 timeout, Boolean asyncWrite) at System.Data.SqlClient.SqlCommand.InternalExecuteNonQuery(TaskCompletionSource`1 completion, Boolean sendToPipe, Int32 timeout, Boolean asyncWrite, String methodName) at System.Data.SqlClient.SqlCommand.ExecuteNonQuery() at Microsoft.EntityFrameworkCore.Storage.Internal.RelationalCommand.Execute(IRelationalConnection connection, DbCommandMethod executeMethod, IReadOnlyDictionary`2 parameterValues) ClientConnectionId:ba25aa03-122d-4c55-9673-4bd3358f2f83 Error Number:15248,State:1,Class:11 Failed executing DbCommand (31ms) [Parameters=[], CommandType='Text', CommandTimeout='30'] EXEC sp_rename N'Gamgoo.More.Rating.int', N'GivenRating', N'COLUMN'; System.Data.SqlClient.SqlException (0x80131904): Either the parameter @objname is ambiguous or the claimed @objtype (COLUMN) is wrong. 
at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction) at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction) at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj, Boolean callerHasConnectionLock, Boolean asyncClose) at System.Data.SqlClient.TdsParser.TryRun(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj, Boolean& dataReady) at System.Data.SqlClient.SqlCommand.RunExecuteNonQueryTds(String methodName, Boolean async, Int32 timeout, Boolean asyncWrite) at System.Data.SqlClient.SqlCommand.InternalExecuteNonQuery(TaskCompletionSource`1 completion, Boolean sendToPipe, Int32 timeout, Boolean asyncWrite, String methodName) at System.Data.SqlClient.SqlCommand.ExecuteNonQuery() at Microsoft.EntityFrameworkCore.Storage.Internal.RelationalCommand.Execute(IRelationalConnection connection, DbCommandMethod executeMethod, IReadOnlyDictionary`2 parameterValues) at Microsoft.EntityFrameworkCore.Storage.Internal.RelationalCommand.ExecuteNonQuery(IRelationalConnection connection, IReadOnlyDictionary`2 parameterValues) at Microsoft.EntityFrameworkCore.Migrations.MigrationCommand.ExecuteNonQuery(IRelationalConnection connection, IReadOnlyDictionary`2 parameterValues) at Microsoft.EntityFrameworkCore.Migrations.Internal.MigrationCommandExecutor.ExecuteNonQuery(IEnumerable`1 migrationCommands, IRelationalConnection connection) at Microsoft.EntityFrameworkCore.Migrations.Internal.Migrator.Migrate(String targetMigration) at Microsoft.EntityFrameworkCore.Design.Internal.MigrationsOperations.UpdateDatabase(String targetMigration, String contextType) at Microsoft.EntityFrameworkCore.Design.OperationExecutor.UpdateDatabase.<>c__DisplayClass0_1.<.ctor>b__0() at 
Microsoft.EntityFrameworkCore.Design.OperationExecutor.OperationBase.Execute(Action action) ClientConnectionId:ba25aa03-122d-4c55-9673-4bd3358f2f83 Error Number:15248,State:1,Class:11 Either the parameter @objname is ambiguous or the claimed @objtype (COLUMN) is wrong. ``` I've checked the EF Core GitHub, forums and Stack Overflow for similar problems, but those answers did not help me. I would like to avoid removing all migrations and updating the database, because I already have quite a lot of data in other tables.<issue_comment>username_1: This may have been fixed by PR [#11161](https://github.com/aspnet/EntityFrameworkCore/pull/11161). You can try [the nightly builds](https://dotnet.myget.org/gallery/aspnetcore-dev). You can work around the issue by rewriting the sp\_rename call:

```
// UNDONE: SQL generated by EF Core is missing schema identifier quotes
//migrationBuilder.RenameColumn(
//    name: "int",
//    schema: "Gamgoo.More",
//    table: "Rating",
//    newName: "GivenRating");
migrationBuilder.Sql(
    "EXEC sp_rename N'[Gamgoo.More].[Rating].[int]', N'GivenRating', N'COLUMN';");
```

Upvotes: 2 [selected_answer]<issue_comment>username_2: I got this same issue while trying to rename a column that I had added to the identity User. After checking the database, I realized that I had created the migration to generate the original column, but never actually updated the database. Earlier, in some frustration, I had deleted the migration that would have generated the column had I actually updated the database. Fairly easy fix: I manually created the original column, and then the current migration with the rename worked as expected. Hopefully my answer may help someone else. Upvotes: 0 <issue_comment>username_3: It can occur if you have renamed the column by hand (manually) in your database, so it cannot find that column. So try to roll back the name in the database (change the column name back to the previous one) and apply the migration again. Upvotes: -1
2018/03/19
<issue_start>username_0: I have a pandas dataframe indexed by date and ID. I would like to:

1. Identify the ID of additions and deletions between dates
2. Add the ID to another dataframe with the date of the addition/deletion

```
date        ID   value
12/31/2010  13  -0.124409
             9   0.555959
             1  -0.705634
             2  -3.123603
             4   0.725009
1/31/2011   13   0.471078
             9   0.276006
             1  -0.468463
            22   1.076821
            11   0.668599
```

Desired output:

```
date        ID  flag
1/31/2011   22  addition
1/31/2011   11  addition
1/31/2011    2  deletion
1/31/2011    4  deletion
```

I have tried [Diff between two dataframes in pandas](https://stackoverflow.com/questions/47131361/diff-between-two-dataframes-in-pandas?rq=1). I cannot get this to work on a grouped dataframe. I am unsure how to loop over each group, and compare to the previous group.<issue_comment>username_1: You can use `duplicated` to find the distinct values:

```
s = df[~df.index.get_level_values(1).duplicated(keep=False)]
pd.DataFrame({'date': ['1/31/2011']*len(s),
              'ID': s.index.get_level_values(1),
              'flag': (s.index.get_level_values(0) == '1/31/2011')}
             ).replace({False: 'deletion', True: 'addition'})
Out[529]:
   ID       date      flag
0   2  1/31/2011  deletion
1   4  1/31/2011  deletion
2  22  1/31/2011  addition
3  11  1/31/2011  addition
```

Upvotes: 1 <issue_comment>username_2: I created a helper function that shifts the first level of a `pandas.MultiIndex`. With this, I can difference it with the original index to determine additions and deletions.

```
def shift_level(idx):
    level = idx.levels[0]
    mapping = dict(zip(level[:-1], level[1:]))
    idx = idx.set_levels(level.map(mapping.get), 0)
    return idx[idx.get_level_values(0).notna()].remove_unused_levels()

idx = df.index
fidx = shift_level(idx)

additions = fidx.difference(idx)
deletions = idx[idx.labels[0] > 0].difference(fidx)

pd.Series('+', additions).append(
    pd.Series('-', deletions)).rename('flag').reset_index()

        date  ID flag
0 2011-01-31   2    +
1 2011-01-31   4    +
2 2011-01-31  11    -
3 2011-01-31  22    -
```

Upvotes: 1 [selected_answer]
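Both answers boil down to a set difference between the IDs present on consecutive dates. Stripped of pandas, the idea can be sketched in plain Python — the data layout and function name here are illustrative, not taken from either answer:

```python
# IDs observed on each date, from the question's example
ids_by_date = {
    "12/31/2010": {13, 9, 1, 2, 4},
    "1/31/2011": {13, 9, 1, 22, 11},
}

def flag_changes(prev_ids, curr_ids, date):
    """IDs only in the current date are additions; IDs only in the previous date are deletions."""
    flags = [(date, i, "addition") for i in sorted(curr_ids - prev_ids)]
    flags += [(date, i, "deletion") for i in sorted(prev_ids - curr_ids)]
    return flags

dates = list(ids_by_date)
changes = flag_changes(ids_by_date[dates[0]], ids_by_date[dates[1]], dates[1])
print(changes)
# [('1/31/2011', 11, 'addition'), ('1/31/2011', 22, 'addition'),
#  ('1/31/2011', 2, 'deletion'), ('1/31/2011', 4, 'deletion')]
```

The pandas answers above do the same differencing, but on index levels instead of plain sets.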
2018/03/19
<issue_start>username_0: I must be missing something really obvious here. I can decode this sample bit of data using online tools like <http://asn1-playground.oss.com/>, but am having trouble with basic usage of Perl's Convert::ASN1. Any idea what I'm missing?

```perl
use strict;
use warnings;
use Convert::ASN1;
use feature 'say';

# example from:
# http://www.oss.com/asn1/resources/asn1-made-simple/introduction.html
my $hex_data = '3018800A4A6F686E20536D697468810A39383736353433323130';
my $bin_data = join '', pack 'H*', $hex_data;

Convert::ASN1::asn_dump($bin_data);
# prints:
# 0000  24: SEQUENCE {
# 0002  10:   [CONTEXT 0]
# 0004    :     4A 6F 68 6E 20 53 6D 69 74 68 __ __ __ __ __ __ <NAME>
# 000E  10:   [CONTEXT 1]
# 0010    :     39 38 37 36 35 34 33 32 31 30 __ __ __ __ __ __ 9876543210
# 001A    : }

my $asn = Convert::ASN1->new;
$asn->prepare(<<ASN1) or die $asn->error;
Contact ::= SEQUENCE {
    name  VisibleString,
    phone NumericString
}
ASN1

my $asn1_node = $asn->find('Contact') or die $asn->error;
my $payload = $asn1_node->decode($bin_data)
    or die "can't decode Contact: " . $asn1_node->error;
# prints:
# can't decode Contact: decode error 80<=>1a 2 4 name
```

Supporting YaFred's answer below, this is where that 80 and 81 are in that encoded string:

```
SEQ length=24   ** l=10  J o h n   S m i t h       ** l=10  9 8 7 6 5 4 3 2 1 0
30  18          80 0A    4A 6F 686E20536D697468    81 0A    39383736353433323130
```<issue_comment>username_1: I'm not sure where you got your hex string from...but if you use the `Convert::ASN1::encode` method, you get a slightly different hex string which can get decoded correctly:

```
my $res = $asn->encode({ name => '<NAME>', phone => 9876543210 });
my $res_hex = unpack 'H*', $res;
print "res_hex after encode : $res_hex\n";
print "original hex_data    : " . lc($hex_data) . "\n";
print "\n";

my $payload = $asn1_node->decode($res) or die $asn1_node->error;

use Data::Dumper;
print Dumper($payload);
```

**output**

```
res_hex after encode : 30181a0a4a6f686e20536d697468120a39383736353433323130
original hex_data    : 3018800a4a6f686e20536d697468810a39383736353433323130

$VAR1 = {
  'name' => '<NAME>',
  'phone' => '9876543210'
};
```

Upvotes: 0 <issue_comment>username_2: Maybe this is as easy as

```
$asn->prepare(<<ASN1) or die $asn->error;
My-Module DEFINITIONS AUTOMATIC TAGS ::= BEGIN
    Contact ::= SEQUENCE {
        name  VisibleString,
        phone NumericString
    }
END
ASN1
```

It's a bit long to explain if you start with ASN.1 ... You are not giving the tagging context (the type Contact should be part of a module). So, the tools are making choices ...

The hex you show is the result of encoding with AUTOMATIC TAGS. The tags of the 2 strings are '80' (Context tag 0 = 1000 0000) and '81' (Context tag 1 = 1000 0001).

@username_1 gets something different because the encoding was performed as EXPLICIT TAGS. The tags of the 2 strings are '1a' (universal tag for VisibleString) and '12' (universal tag for NumericString). Upvotes: 2 [selected_answer]
2018/03/19
<issue_start>username_0: In a *n-depth* `dict` where values are set in the deepest level of a hierarchy: ``` { "name": "root", "value": None, # expected value to be 80 "children": [ { "name": "a", "value": None, # expected value to be 30 "children": [ { "name": "a.1", "value": 10 }, { "name": "a.2", "value": 20 } ] }, { "name": "b", "value": None, # expected value to be 50 "children": [ { "name": "b.1", "value": 25 }, { "name": "b.2", "value": None, # expected value to be 25 "children": [ {"name": "b.2.1", "value": 5}, {"name": "b.2.2", "value": 5}, {"name": "b.2.3", "value": 5}, {"name": "b.2.4", "value": 5}, {"name": "b.2.5", "value": 5} ] } ] } ] } ``` What could be the approach to recursively set each parent `value` based on the result of an operation perfomed with its children `value` (i.e. sum)?<issue_comment>username_1: Given your datastructure and a list of values to update, you can use `next` in recursion: ``` def update(d, targets): return {a:[update(i, targets) for i in b] if isinstance(b, list) else update(b, targets) if isinstance(b, dict) else next(targets) if not b else b for a, b in d.items()} targets = [80, 30, 50, 25] results = update(nlist, iter(targets)) ``` Output: ``` {'children': [{'children': [{'name': 'a.1', 'value': 10}, {'name': 'a.2', 'value': 20}], 'name': 'a', 'value': 30}, {'children': [{'name': 'b.1', 'value': 25}, {'children': [{'name': 'b.2.1', 'value': 5}, {'name': 'b.2.2', 'value': 5}, {'name': 'b.2.3', 'value': 5}, {'name': 'b.2.4', 'value': 5}, {'name': 'b.2.5', 'value': 5}], 'name': 'b.2', 'value': 25}], 'name': 'b', 'value': 50}], 'name': 'root', 'value': 80} ``` Upvotes: 0 <issue_comment>username_2: I finally managed to do it using the iterative [level order traversal pattern (BFS)](https://en.wikipedia.org/wiki/Breadth-first_search), I was missing just a couple of details. 
This approach works because the depth iteration order is guaranteed, so once we get to a node **which has children**, all its sub-level children are already calculated. The solution:

```
def reverseTraversal(obj):
    def parentOperation(node):
        out = 0
        for child in node['children']:
            out = out + child['value']
        return out

    if obj is None:
        return

    queue = []
    stack = []
    queue.append(obj)

    while len(queue) > 0:
        temp = queue.pop(0)
        stack.append(temp)
        if 'children' in temp and len(temp['children']) > 0:
            for child in temp['children']:
                queue.append(child)

    while len(stack) > 0:
        node = stack.pop()
        if 'children' in node and len(node['children']) > 0:
            node['value'] = parentOperation(node)

# obj is the original dict; it is modified in place
reverseTraversal(obj)
print(obj)
```

Results in:

```
{
  "name": "root",
  "value": 80,
  "children": [
    {
      "name": "a",
      "value": 30,
      "children": [
        {"name": "a.1", "value": 10},
        {"name": "a.2", "value": 20}
      ]
    },
    {
      "name": "b",
      "value": 50,
      "children": [
        {"name": "b.1", "value": 25},
        {
          "name": "b.2",
          "value": 25,
          "children": [
            {"name": "b.2.1", "value": 5},
            {"name": "b.2.2", "value": 5},
            {"name": "b.2.3", "value": 5},
            {"name": "b.2.4", "value": 5},
            {"name": "b.2.5", "value": 5}
          ]
        }
      ]
    }
  ]
}
```

Upvotes: 2 [selected_answer]
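The BFS-plus-stack pass in the accepted answer is equivalent to a post-order traversal, which can also be written recursively. A sketch (the function name is mine, not from the answers):

```python
def fill_values(node):
    """Post-order: compute each child's value first, then sum them into the parent."""
    if node.get('children'):
        node['value'] = sum(fill_values(child) for child in node['children'])
    return node['value']

tree = {
    'name': 'a', 'value': None, 'children': [
        {'name': 'a.1', 'value': 10},
        {'name': 'a.2', 'value': 20},
    ]
}
fill_values(tree)
print(tree['value'])  # 30
```

Unlike the iterative version, this recursion is bounded by Python's default recursion limit on very deep trees.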
2018/03/19
<issue_start>username_0: I have this assignment:

a. concatenation: generate a report listing vendor name and location as 'name (location)'
b. the list above contains trailing spaces in both fields; trim the spaces and display results.
c. assign a name for the newly derived virtual column in the database
d. as a reminder, retrieve the number of products offered by vendor 'DLL01'
e. retrieve the number of products offered by each vendor and display the results in two columns by 'vend\_id' and 'num\_products'

I wrote some code but it doesn't run.

```
a. select [vend_name] +'('+[vend_address] + [vend_city]+')' as 'Name (Location)'
   from [dbo].[Vendors]

b. select Ltrim(Rtrim([vend_name] +'('+[vend_address] + [vend_city]+ ')' )as 'Name (Location)'
   from [dbo].[Vendors]

c. select Ltrim(Rtrim([vend_name] +'('+[vend_address] + [vend_city]+')')),' ',' '+ char(7)), char(7),'') as 'vend information'
   from [dbo].[Vendors]

d. select [vend_name] +'('+[vend_address] + [vend_city]+')' as 'Name (Location)'
   from [dbo].[Vendors]

e. select [vend_id]='DLL01', count([vend_id]) as num products
   from [dbo].[Vendors]
   where [vend_id]='DLL01'
```

I think it is just syntax issues. Can you help me?
Thanks!!!
2018/03/19
<issue_start>username_0: I am creating a multi-step form using Bootstrap and jQuery. It has three sections and includes a progress bar to show the user's current status in the form process. The last two pages of the form work fine, however, the first section of the form (the email section) is not allowing the user to click the email input section a second time after the autofocus is removed once the page is loaded. For example, if they accidentally click on another part of the page and the autofocus is gone, they can't click on the field again to type. The other pages have the same code and are working fine with autofocus. I'm not sure why this is happening! Here is a code pen: [Code Pen](https://codepen.io/adurante95/pen/eMgpOg/) ([@adurante95](https://codepen.io/adurante95)) . and then here is some of the code for the email section: ``` Email ``` I'm just not sure why the field is not editable. I tried the contenteditable="true" attribute and that did not work either.<issue_comment>username_1: Your problem is your CSS. The input is refocused fine when using tab button. But label covers all the input field, so you are not able to click it. Remove `display: block;` and `width: 100%;` from the `.form-label-group > label` style. **EDIT** Another way is to focus input when label is clicked using jQuery. ``` $(".form-label-group label").click(function() { $(this).parent().find("input").focus(); }); ``` Upvotes: 0 <issue_comment>username_2: This is because your label has position absolute and your input is display block meaning that the label element is above the input blocking access. 
```
.form-label-group > label {
  position: absolute;
  top: 0;
  display: block;
  width: 100%;
  margin-bottom: 0; /* Override default <label> margin */
  line-height: 1.5;
  color: #495057;
  border: 1px solid transparent;
  border-radius: 0.25rem;
  transition: all 0.1s ease-in-out;
  pointer-events: none;
}
```

but by adding `pointer-events: none;` you're telling the browser to ignore any user input on this element and thus letting you access the input.

> <https://developer.mozilla.org/en-US/docs/Web/CSS/pointer-events>

<https://codepen.io/anon/pen/GxrpWb> working example

Upvotes: 2
2018/03/19
<issue_start>username_0: So this is probably an easy one, but I'm just not doing it right. My goal is to **send the user input from this textbox**: **Into this Div**: ``` ``` --- I'm trying to make it appear in real time, and so far I used this to try and do so, but it doesn't work: ``` document.getElementById("code_output").innerHTML += document.getElementById("formNameInput").value; ``` Why doesn't it show? Does my code need something to trigger the Javascript?<issue_comment>username_1: You're close, but the issue is that you're not using an event handler. The script is executing your code once, as soon as possible (before you have the chance to enter anything into the text input). So, you have to add some sort of event listener so that the copying happens at the appropriate time. Something like below: ```js document.getElementById('formNameInput').addEventListener('keyup', copyToDiv); function copyToDiv() { document.getElementById("code_output").innerHTML = document.getElementById("formNameInput").value; } ``` ```html ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: You need to do that whenever the value of `formNameInput` changes. For that you need an [event](https://developer.mozilla.org/en/docs/Web/API/Event). Your code should look like: ``` document.getElementById("formNameInput").addEventListener('input', function () { document.getElementById("code_output").innerHTML += this.value; }); ``` Upvotes: 0 <issue_comment>username_3: You need to attach an event listener to your input that executes a function any time an `input` event occurs on the field: ```js formNameInput.addEventListener('input', function(e) { code_output.textContent = e.target.value }) ``` ```html ``` Please note that the above code takes advantage of the fact that browsers automatically create a global variable for each element with a unique id attribute value, and this variable has the same name as the value of the id. 
If the concept of events is new to you, this might be a good place to get started: > > <https://developer.mozilla.org/en-US/docs/Learn/JavaScript/Building_blocks/Events> > > > Upvotes: 0 <issue_comment>username_4: ``` function change() { document.getElementById("code_output").innerHTML = document.getElementById("formNameInput").value; } document.getElementById('formNameInput').onkeyup = change ``` maybe this is what you are trying? Upvotes: 0
2018/03/19
<issue_start>username_0: [IDR](https://www.aldeid.com/wiki/IDR-Interactive-Delphi-Reconstructor) is a good tool for decompiling Delphi applications, but how do I know the ID number of Windows messages assigned to message handlers? For example, from IDR decompiling, I see:

```
procedure sub_004D2398(var Msg: TMsg);dynamic;
```

The original source code is:

```
procedure Name_procedure(var Msg: TMsg); message 1028;
```

How do I know the message number 1028 while reverse-engineering the code in IDR?<issue_comment>username_1: Normally to find the message declaration corresponding to a given message number you would just look in any header file (C/C++) or unit (Delphi) that declares message constants. From memory I think in (older?) Delphi versions this is in the **Windows** unit, or possibly **Messages**. In the case of Delphi you will find a bunch of declarations similar to:

```
const
  WM_LBUTTONDOWN = 513;
```

Or if in hex:

```
const
  WM_LBUTTONDOWN = $0201;
```

Just find the declaration for the **WM\_** constant with a value of 1028 (or the hex equivalent, $0404). **However** *you almost certainly will not find one*!

Private Messages
----------------

1028 is greater than 1024, and 1024 is the constant for the "special message": [WM\_USER](https://msdn.microsoft.com/en-us/library/windows/desktop/ms644931(v=vs.85).aspx). This is not intended to be used as a message directly but rather indicates the first message number that an application can use for its own, custom/private messages. i.e. the message with value 1028 has no standard meaning across all Windows applications. Rather it is the 4th "custom" or private message (or possibly 5th - some people start with `WM_USER+1` as documented, others start with `WM_USER`) used by, and meaningful to, only *that application*. In the absence of the original declared constant name for the message, whatever it means and is used for can only be determined by inspecting the code and identifying its use within that code.
Upvotes: 0 <issue_comment>username_2: A given procedure doesn't know whether it is a `message` handler or not, because that information is not stored in the procedure itself where a decompiler can easily access it (it is available in RTTI, though). Every class has its own dispatch table that the compiler generates to hold pointers to all of the class's `dynamic` and `message` methods (and in the case of `message`, also their message IDs). So, when you declare a class procedure as a `message` handler, the compiler inserts an entry for that message ID in that class's dispatch table along with a pointer to the handler. When a UI control receives a message, the message first goes to the control's `WindowProc` (which the app can subclass directly). If the message is not handled, it goes to the control's `WndProc()` method. If the message is still not handled, it goes to the `TObject.Dispatch()` method, which looks up the message ID in the control's dispatch table and calls the associated procedure if one is found. Otherwise, the message goes to the control's `DefaultHandler()` method. So, unless IDR is smart enough to decompile a procedure, determine which class it belongs to, and find and decompile that class's dispatch table or RTTI to determine the message ID belonging to the procedure, then you will have to do this manually while you are analyzing the decompiled output. A class's VMT contains pointers to the class's dispatch table and RTTI (amongst other things). Once you are able to determine the message ID that belongs to a given `message` handler, only then can you research and figure out what kind of message that ID might refer to, as described in username_1's answer. Upvotes: 3
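The lookup username_2 describes — a per-class table mapping message IDs to handlers, with `DefaultHandler()` as the fallback — can be sketched generically. This is an illustration in Python, not Delphi, and the class and method names are invented; the real tables are compiler-generated:

```python
WM_USER = 1024  # private message IDs start here, per the other answer

class Control:
    def __init__(self):
        # analogue of the compiler-generated message dispatch table
        self._dispatch = {WM_USER + 4: self.on_custom}  # 1028, the ID from the question
        self.log = []

    def dispatch(self, msg_id):
        # TObject.Dispatch-style lookup: known ID -> handler, else default
        handler = self._dispatch.get(msg_id, self.default_handler)
        handler(msg_id)

    def on_custom(self, msg_id):
        self.log.append("handled private message %d" % msg_id)

    def default_handler(self, msg_id):
        self.log.append("unhandled message %d" % msg_id)

c = Control()
c.dispatch(1028)
c.dispatch(513)  # WM_LBUTTONDOWN: not in the table, falls through to the default
print(c.log)  # ['handled private message 1028', 'unhandled message 513']
```

Reverse-engineering the message ID then amounts to finding the table entry that points at the handler, which is exactly the VMT/dispatch-table walk described above.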
2018/03/19
<issue_start>username_0: I found solutions telling me to use something like this (in system.web) to make a custom error page:

```
```

and this (in system.webServer):

```
```

I tried both of these methods and none of them seemed to work. I'm not sure if I have to delete things in `system.web` and `system.webServer`, or if I'm supposed to make routes to the controller in which I want the error messages to be displayed, so I am going to show you what I have.

**Web.config:**

```
```

and

```
```

**web.config:**

```
```

and

```
```

**HomeController:**

```
public class HomeController : Controller
{
    ...

    [Route("~/page_not_found")]
    public ActionResult PageNotFound()
    {
        Response.StatusCode = 404;
        return View();
    }

    [Route("~/internal_server_error")]
    public ActionResult InternalServerError()
    {
        Response.StatusCode = 500;
        return View();
    }
}
```

My question is this: "What do I do to get my program to redirect to the error pages using the solution above?"

Side Note: if you need my Route.Config, here it is:

```
public static void RegisterRoutes(RouteCollection routes)
{
    routes.IgnoreRoute("{resource}.axd/{*pathInfo}");
    routes.MapMvcAttributeRoutes();
}
```
2018/03/19
2,077
4,770
<issue_start>username_0: I have the following working example in `Python` which takes a string, uses a dict comprehension and a regular expression on it and finally generates a dataframe from it: ``` import re, pandas as pd junk = """total=7871MB;free=5711MB;used=2159MB;shared=0MB;buffers=304MB;cached=1059MB; free=71MB;total=5751MB;shared=3159MB;used=5MB;buffers=30MB;cached=1059MB; cached=1059MB;total=5751MB;shared=3159MB;used=5MB;buffers=30MB;free=109MB;""" rx = re.compile(r'(?P\w+)=(?P[^;]+)') records = [{m.group('key'): m.group('value') for m in rx.finditer(line)} for line in junk.split("\n")] df = pd.DataFrame(records) print(df) ``` This yields ``` buffers cached free shared total used 0 304MB 1059MB 5711MB 0MB 7871MB 2159MB 1 30MB 1059MB 71MB 3159MB 5751MB 5MB 2 30MB 1059MB 109MB 3159MB 5751MB 5MB ``` --- Now how the ... can I do the same thing in `R` ? I messed around with `lapply` and `regmatches` but to no avail. Additionally, how would I do this with missing values?<issue_comment>username_1: A purrr option: ```r library(purrr) 'total=7871MB;free=5711MB;used=2159MB;shared=0MB;buffers=304MB;cached=1059MB; free=71MB;total=5751MB;shared=3159MB;used=5MB;buffers=30MB;cached=1059MB; cached=1059MB;total=5751MB;shared=3159MB;used=5MB;buffers=30MB;free=109MB;' %>% strsplit('\n') %>% .[[1]] %>% # separate lines into character vector strsplit(';') %>% # separate each line into a list of key-value pairs map(strsplit, '=') %>% # split key-value pairs into length-2 sublists map(transpose) %>% # flip list of key-value pairs to list of keys and values map_dfr(~set_names(.x[[2]], .x[[1]])) # set names of values to keys and simplify to data frame #> # A tibble: 3 x 6 #> total free used shared buffers cached #> #> 1 7871MB 5711MB 2159MB 0MB 304MB 1059MB #> 2 5751MB 71MB 5MB 3159MB 30MB 1059MB #> 3 5751MB 109MB 5MB 3159MB 30MB 1059MB ``` or a more data-frame-centric option: ```r library(tidyverse) # put text in data frame data_frame(text = 
'total=7871MB;free=5711MB;used=2159MB;shared=0MB;buffers=304MB;cached=1059MB; free=71MB;total=5751MB;shared=3159MB;used=5MB;buffers=30MB;cached=1059MB; cached=1059MB;total=5751MB;shared=3159MB;used=5MB;buffers=30MB;free=109MB;') %>% separate_rows(text, sep = '\n') %>% # separate lines into separate rows rowid_to_column('line') %>% # add index for each line to help spreading later separate_rows(text, sep = ';') %>% # separate each line into key-value pairs filter(text != '') %>% # drop extra entries from superfluous semicolons separate(text, c('key', 'value')) %>% # separate keys and values into columns spread(key, value) %>% # reshape to wide form select(-line) # drop line index column #> # A tibble: 3 x 6 #> buffers cached free shared total used #> #> 1 304MB 1059MB 5711MB 0MB 7871MB 2159MB #> 2 30MB 1059MB 71MB 3159MB 5751MB 5MB #> 3 30MB 1059MB 109MB 3159MB 5751MB 5MB ``` If you want to avoid packages, you can hack it through `read.dcf`, which reads Debian Control Format (like R package DESCRIPTION files), which is just key-value pairs. 
DCF uses `:` instead of `=` and `\n` instead of `;`, though, so you'll need to do a little `gsub`ing first:

```r
junk <- 'total=7871MB;free=5711MB;used=2159MB;shared=0MB;buffers=304MB;cached=1059MB;
free=71MB;total=5751MB;shared=3159MB;used=5MB;buffers=30MB;cached=1059MB;
cached=1059MB;total=5751MB;shared=3159MB;used=5MB;buffers=30MB;free=109MB;'

junk <- gsub('=', ':', junk)
junk <- gsub(';', '\n', junk)

mat <- read.dcf(textConnection(junk))
mat
#>      total    free     used     shared   buffers cached
#> [1,] "7871MB" "5711MB" "2159MB" "0MB"    "304MB" "1059MB"
#> [2,] "5751MB" "71MB"   "5MB"    "3159MB" "30MB"  "1059MB"
#> [3,] "5751MB" "109MB"  "5MB"    "3159MB" "30MB"  "1059MB"
```

It returns a matrix, but it's well-formed and easy to convert to a proper data.frame:

```r
df <- as.data.frame(mat, stringsAsFactors = FALSE)
df
#>    total   free   used shared buffers cached
#> 1 7871MB 5711MB 2159MB    0MB   304MB 1059MB
#> 2 5751MB   71MB    5MB 3159MB    30MB 1059MB
#> 3 5751MB  109MB    5MB 3159MB    30MB 1059MB
```

Upvotes: 3 [selected_answer]<issue_comment>username_2: Working example:

```
junk <- "total=7871MB;free=5711MB;used=2159MB;shared=0MB;buffers=304MB;cached=1059MB;
free=71MB;total=5751MB;shared=3159MB;used=5MB;buffers=30MB;cached=1059MB;
cached=1059MB;total=5751MB;shared=3159MB;used=5MB;buffers=30MB;free=109MB;"

names <- unique(strsplit(gsub("[(?\\n=\\d+MB;)]", " ", junk, perl=TRUE), "(\\s+)")[[1]])
dataset <- read.table(text=gsub("[^(\\d+)]", " ", junk, perl=TRUE),
                      header = FALSE, col.names=names)
```

Upvotes: 2
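On the trailing question about missing values: in the original Python version this is handled for you — when a record lacks some keys, `pd.DataFrame` fills the absent cells with `NaN`. A small sketch (the two-record input here is made up for illustration):

```python
import pandas as pd

records = [
    {"total": "7871MB", "free": "5711MB"},
    {"total": "5751MB"},  # no 'free' key on this record
]
df = pd.DataFrame(records)

# the absent key becomes NaN in the resulting frame
assert pd.isna(df.loc[1, "free"])
```

The tidyverse answers above behave the same way: `map_dfr()`/`bind_rows()` and `spread()` fill columns that are missing from a record with `NA`.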
2018/03/19
1,696
3,963
<issue_start>username_0: In R how to delete rows that have missing values for **all** variables? I want to keep the rest of the rows that have records with some missing values. I have tried the code posted here previously and it is not working.
2018/03/19
594
1,409
<issue_start>username_0: I have a data frame with accumulated numbers having missing values:

```
data.frame(a=1:9, b=c(14,17,NA,20,25,29,NA,NA,41))
```

I want to fill those gaps with **integers** as uniformly distributed as possible. For instance:

* If we have 3 spaces between 5 and 10, we can have (6, 8, 9);
* If we have 4 spaces between 4 and 11, we can have (5, 7, 8, 10) or (5, 7, 9, 10).

I know this function doesn't solve it:

```
seq(30, 40, length.out = 2)
# [1] 30 40
```

So the expected output from the data frame could be:

```
a  b
1 14
2 17
3 19
4 20
5 25
6 29
7 33
8 37
9 41
```

How can I proceed for all missing values?<issue_comment>username_1: Note that missing values at the beginning/end of the data frame would need special treatment; for interior gaps we can use `approx`, passing the values as the only coordinate so that the row index is used as the other:

```
df <- data.frame(a=1:9, b=c(14,17,NA,20,25,29,NA,NA,41))
missing.pos <- which(is.na(df$b))
l <- approx(x=df$b, xout = missing.pos, method = "linear")
df$b[missing.pos] <- ceiling(l$y)
df$b
# [1] 14 17 19 20 25 29 33 37 41
```

Upvotes: 2 [selected_answer]<issue_comment>username_2: You could just use `na.interpolation` from **`imputeTS`** or `na.approx` from **`zoo`**. Example:

```
library("imputeTS")
x <- data.frame(a=1:9, b=c(14,17,NA,20,25,29,NA,NA,41))
ceiling(na.interpolation(x))
```

The `ceiling` is only needed because the asker wants integers, not decimals. Upvotes: 0
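The same interior-gap interpolation (linear, then rounded up to keep integers) can be sketched outside R with NumPy — a rough equivalent of the accepted `approx` answer, not part of the original thread:

```python
import numpy as np

# the same vector as in the question, with NaN marking the gaps
b = np.array([14, 17, np.nan, 20, 25, 29, np.nan, np.nan, 41], dtype=float)

idx = np.arange(len(b))
missing = np.isnan(b)
# linear interpolation over the row index at the missing positions,
# then round up to the next integer
b[missing] = np.ceil(np.interp(idx[missing], idx[~missing], b[~missing]))
print(b.astype(int).tolist())  # [14, 17, 19, 20, 25, 29, 33, 37, 41]
```

Like `approx`, `np.interp` only interpolates between known points, so leading/trailing gaps would still need special treatment.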
2018/03/19
2,213
7,507
<issue_start>username_0: Firstly please excuse the amount of code I'm going to dump. **Current Behaviour**: When you click on a li, the data-id is placed into or removed from an array and the table rows with the classes from the array are shown and all other table rows are hidden. The purpose: to filter out rows from the table that don't match the user's selection. **Problem**: If I select the location "Cannes" from ul.location and I then select "Villa" and "Hotel" from ul.propertytype I may end up showing rows in the table that are not necessarily Villas and Hotels in Cannes; I'll get Villas and Hotels in Cannes and any other property row that contains the Villa, Hotel or Cannes classes. **Desired Behaviour**: I want to be able to see Villas and Hotels ONLY in Cannes. Or, for example, if I select "Cannes" and "Eze" from locations and "Villa" and "Hotel" from propertytypes, I only want to see Villas and Hotels in Cannes and Eze. I've toyed with :visible selectors and foreach loops etc. but this is getting a little beyond me. **So here are my Filters** There are 3 other filters as well but to keep things simple I have only included 2; the others are no different in code. Locations (there are over 30 in the live code) ``` * Cannes * Cap d'Antibes & Juan les Pins * Eze ``` Property Types (there are over 30 in the live code) ``` * Villa * Hotel * Hotel Villas ``` **The Table of Property Rows** Super Simplified (over 1200 in the live code) ``` | | | --- | | some stuff | | some stuff | | some stuff | | some stuff | | some stuff | ``` **Example situation and desired behaviour** If I click on the "Cannes" and "Eze" locations (ul.locations) and I click on the "Villa" and "Hotel" property types (ul.propertytype), I should only see tr#1, tr#2 and tr#3. See below the script I'm currently using and hitting a wall with.
``` //For the location filters jQuery('.elegant_filters ul li').on('click', function(e){ jQuery(this).toggleClass('selected'); var filters = []; jQuery('.elegant_filters ul li.selected').each(function(){ var val = jQuery(this).attr('data-id'); filters.push('.'+val); }); console.log(filters); if (jQuery(filters).length < 1) { jQuery('.elegant_list_properties tr.propertyrow').show(); } else { jQuery('.elegant_list_properties tr.propertyrow').hide(); jQuery(filters.join(', ')).show(); } }) ```<issue_comment>username_1: Instead of **data-id** you can use classes. The first class group is related to **location** while the second is related to **others**. Getting selected elements you can create two selector groups: the first in order to fetch all elements having those classes. The second in order to filter elements having also a class belonging to the second group: ```js jQuery('.elegant_filters ul li').on('click', function (e) { jQuery(this).toggleClass('selected'); var filtersLocation = []; var filtersOthers = []; jQuery('.elegant_filters ul.location li.selected').each(function () { var val = this.textContent.toLowerCase().replace(/([\0-\x1f\x7f]|^-?\d)|^-$|[^\0-\x1f\x7f-\uFFFF\w-]/g, "\\$1"); filtersLocation.push('.' + val); }); jQuery('.elegant_filters ul.others li.selected').each(function () { var val = this.textContent.toLowerCase().replace(/([\0-\x1f\x7f]|^-?\d)|^-$|[^\0-\x1f\x7f-\uFFFF\w-]/g, "\\$1"); filtersOthers.push('.' + val); }); jQuery('.elegant_list_properties tr.propertyrow') .hide() .filter(filtersLocation.length > 0 ? filtersLocation.join(', ') : '*') .filter(filtersOthers.length > 0 ? filtersOthers.join(', ') : '*').show(); }) ``` ```css .selected { background-color: yellow; } ``` ```html * Cannes * Cap d'Antibes & <NAME> * Eze * Villa * Hotel * Hotel Villas | | | --- | | ....propertyrow cannes villa.... | | ...propertyrow cannes hotel... | | ...propertyrow eze villa.... | | ....propertyrow london villa... | | ....propertyrow paris hotel... 
| ``` Upvotes: 1 <issue_comment>username_2: You can create two groups (one for `locations` and other for `prototype others`) and search each `data-id` in the `| | |`s classes (in case of `locations`, the text). ```js $('.location li, .propertytype li').on('click', function(e){ $(this).toggleClass("selected"); //toggle the class selected, easy way to 'turn it on/off' showTbInfo(); //go to the function to show | }); function showTbInfo(){ //arrays of the two - let locations = [], properties = []; //loop through - with .selected $('.location, .propertytype').find('.selected').each(function(e){ //check if the - has .location if($(this).parent().hasClass('location')){ //get the first word of tag text, in lower case let text = ($(this).text().indexOf(" ") > 0 ? $(this).text().substring(0, $(this).text().indexOf(" ")) : $(this).text()).toLowerCase(); //add the word to locations array locations.push("." + text); //check if the - has .propertytype }else if($(this).parent().hasClass('propertytype')){ //get the data-id attribute properties.push("." + $(this).data('id')); } }); //if the arrays are empty, show everything if(locations.length <= 0 && properties.length <= 0){ $('.elegant\_list\_properties tr').show(); }else{ //start hiding everything $('.elegant\_list\_properties tr').hide(); //show every location. 
Example: $('.cannes, .eze').show(); if(locations.length > 0){ $(locations.join(", ")).show(); } //hide every shown element that is visible but doesn't have any of the properties in .propertytype if(properties.length > 0){ $('.elegant\_list\_properties tr:visible:not(' + properties.join(", ") + ')').hide(); } } } ``` ```html * Cannes * <NAME>'Antibes & <NAME> * Eze * Villa * Hotel * Hotel Villas | | | --- | | cannes villa | | cannes hotel | | eze villa | | london villa | | paris hotel | ``` --- Edit ---- If you need to check more classes, there's what you can do: Assuming you have `.agentname li`, `.bedrooms li` and `.saleorrent li`, create one array for each: ``` agentname = [], bedrooms = [], saleorrent = []; ``` In the [.each()](https://api.jquery.com/each/) function you will need to add these selectors too: ``` $('.location li.selected, .propertytype li.selected, .agentname li.selected, .bedrooms li.selected, .saleorrent li.selected').each(function(e){ //... ``` Or even: ``` $('.location, .propertytype, .agentname, .bedrooms, .saleorrent').find('.selected').each(function(e){ //... ``` After that, inside the `function(e){ }`, you need to check which array you will fill with the tag information (can use [.hasClass()](https://api.jquery.com/hasclass/) to verify). ``` else if($(this).parent().hasClass('agentname')){ /*fill agentname array*/ } else if($(this).parent().hasClass('bedrooms')){ /*fill bedrooms array*/ } //to all the others ``` In the end, you check if the array is empty and use your function depending in the which `if/else` statement will be true: ``` if(/*array*/.length > 0){ /*do what you need*/ }else{ /*if array is empty*/ } ``` *Remember: if you need to check all the classes, remember to fill the arrays with the names starting by a `.`(dot), and use [.join(", ")](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/join) to return all the class names in jQuery selector.* Upvotes: 1 [selected_answer]
2018/03/19
1,845
7,256
<issue_start>username_0: I've been tasked with hosting our Python API docs for our customers to access. ReadTheDocs.com was recommended by a colleague. However, I've had some challenges with it: 1. The default approach is to give ReadTheDocs full access to our code repo, in which the documentation is just one sub-folder. This is a non-starter and not possible. 2. So my next thought was to make a copy of the Docs folder into a separate repo and allow ReadTheDocs to access it. The problem here is that documentation is auto-generated from our code so this approach leaves large swaths of the documentation incomplete. 3. ReadTheDocs doesn't appear to be able to host the built documentation website (ie. index.html et al) but perhaps I'm mistaken about this? I'm seeking help from others who have encountered a similar Use Case. Did you find a way to get ReadTheDocs to work as you required or did you turn to another approach to host your documentation? If the latter, what approach did you use? We would require versioning (ie. 1.0.1, 1.0.2, etc.) and the export to a PDF file would be ideal. Sincerely, <NAME>.<issue_comment>username_1: For API docs you can use [swagger](https://swagger.io) or [apidoc](http://apidocjs.com). Upvotes: 0 <issue_comment>username_2: A guide to ReadTheDocs ====================== I've used ReadTheDocs for many of my own projects, and it really is a useful platform. As far as I've gathered from your question, you are trying to host HTML files from your repository (GitHub repo?). However, ReadTheDocs isn't made for hosting HTML - it actually builds ReStructuredText or Markdown files with [Sphinx](http://www.sphinx-doc.org/en/master/) (a documentation building system written in Python). Here is a typical scenario for setting up ReadTheDocs to host your documentation: Initialise files ---------------- 1. Firstly, install Sphinx with `pip` - read [this](http://www.sphinx-doc.org/en/stable/tutorial.html) for a guide on how to do so. 2. 
Next, go into the cloned repository on your computer and run `sphinx-quickstart` inside the `docs` folder (**must be an empty folder**). 3. The command should ask you some questions. Choose the following answers: * `Separate source and build directories?`: **n** * `Project name`: a neat public-facing name for your project * `Author name(s)`: Name(s) of the developers who made the API * `Project release`: Current version of your API. The rest can stay as default (press enter to take the default option) 4. Commit the created files to your GitHub repo. 5. Sign up for a ReadTheDocs account and import your repository. By default, it will either build anything it sees in the root directory of your repo or in its `docs` folder (it will automatically determine which). If all is successful, you should be able to open your documentation page and see a demo page. Write and edit docs ------------------- You should now be able to edit your files to create your documentation. RTD's design is based on 'Themes', and most pages use <https://github.com/rtfd/sphinx_rtd_theme>. Theme repos usually give decent installation docs. To edit your page, you need to edit `docs/index.rst`. RST stands for `ReStructuredText` and is similar to Markdown. You can find cheat sheets for it on the internet. Here is what the auto-generated file looks like:

```
.. Test documentation master file, created by
   sphinx-quickstart on Mon Mar 19 18:24:58 2018.
   You can adapt this file completely to your liking, but it should at least
   contain the root `toctree` directive.

Welcome to Test's documentation!
================================

.. toctree::
   :maxdepth: 2
   :caption: Contents:

Indices and tables
==================

* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
```

You can remove the 'Indices and tables' section from the bottom - I'm not entirely sure of its purpose. The `.. toctree::` is a universal menu - you only need to define it in `index.rst` and you can leave it out on other pages.
To create a new page of documentation, create a new `*.rst` file. What you call it will correspond to the `.html` file that it is rendered into. For example, `parameters.rst` will be accessible through `http://mydocs.readthedocs.org/en/latest/parameters.html`. To add the `parameters.rst` page to your menu, it needs to look like this:

```
.. toctree::
   :maxdepth: 2
   :caption: Contents:

   parameters
```

Essentially, you need to add the name of your `.rst` file (without the extension) to the `.. toctree` in your `index.rst` file (and nowhere else). Applying changes ---------------- To apply changes that you've made and publish them to your ReadTheDocs page, you simply need to commit your new `.rst` files to the `master` branch on GitHub, and RTD will automatically build and publish for you. In case you haven't quite understood, RTD **does not take .html files**. You shouldn't commit *any* .html files to GitHub, just .rst files. The .rst files will be built by RTD and published. Versions -------- You can use Git tags to manage versions of your docs. For more detail, see <http://docs.readthedocs.io/en/latest/versions.html> (official ReadTheDocs documentation). Hope this is useful! Upvotes: 2 <issue_comment>username_3: If your project is on GitHub, your requirement can be met using [Github Actions](https://github.com/features/actions) along with a static site generator (SSG) of your choice supporting PDF generation. In its simplest form, create a GH action to generate the static site folder of the branch/release, then push that folder to the corresponding folder in the branch pointed to by GH pages, say *gh-pages*. One of the branches/releases should be pushed to root. [GitHub Pages Deploy Action](https://github.com/marketplace/actions/deploy-to-github-pages) can be helpful. Add a dropdown list of versions to your static website pointing to the matching folder.
Example: * [GitHub action](https://github.com/bcgov/NotifyBC/blob/main/.github/workflows/ghPages.yml) * [dropdown list implemented by Vue component](https://github.com/bcgov/NotifyBC/blob/main/docs/.vuepress/components/versions.vue) * [rendered site powered by GH pages](https://bcgov.github.io/NotifyBC/) Benefits over ReadTheDocs free plan: * No ads * Hosted entirely on GitHub, no 3rd party service nor authorization required. Upvotes: 0 <issue_comment>username_4: My approach was to write a script to copy the documentation source (rst) plus *only* the source code files needed to create the documentation (rather than the whole source tree). These were copied to a separate (private) GitHub repo that is used by Read the Docs to build the documents. **In all cases, we *only* copy files to the GitHub repo that are part of the shipped product, so there should be no IP risk.** * For C++ APIs (using `breathe`) this is just the public header files. * For Python APIs it is either the Python sources or the built wheel files. For the Python wheels, some of these need supporting object files (libs, etc). This meant using `git-lfs`, which is not officially supported by Read the Docs. It was difficult to get it all to build initially (working out all the dependencies by trial and error), and it is still quite fragile, but it works. Upvotes: 0
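To complement the Sphinx walkthrough above: `sphinx-quickstart` also generates a `docs/conf.py`, which is a plain Python file. The sketch below shows roughly its shape — the values are placeholders, and the extension/theme choices are common suggestions rather than something the thread prescribes:

```python
# docs/conf.py -- minimal Sphinx configuration sketch (placeholder values)

project = "MyAPI"          # public-facing project name
author = "Dev Team"
release = "1.0.2"          # version string shown per Read the Docs build

extensions = [
    "sphinx.ext.autodoc",  # pull API docs from docstrings in the code
    "sphinx.ext.napoleon", # understand Google/NumPy docstring styles
]

html_theme = "sphinx_rtd_theme"  # the theme most RTD-hosted sites use
master_doc = "index"             # root document (index.rst)
```

`sphinx.ext.autodoc` is what addresses the "documentation is auto-generated from our code" requirement in the question: Sphinx imports the package at build time and extracts the docstrings.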
2018/03/19
1,676
6,592
<issue_start>username_0: I have a tableview of 10000 cell and each cell contains text and image url to show. I would like to scroll tableview very fast to the end of the tableview, and for each row send the async request to downloading the image, it means 10000 downloading thread request calls, but I need to only download the visible cells images not the all scrolled cells image. I like to optimise only visible cells image request call not all the table.
2018/03/19
1,698
6,664
<issue_start>username_0: we want data place holders in excel. We have input data in json format. How can we use aspose.cell to map the data to placeholder at runtime in the provided xlsx and generate excel output? is there way to define template similarly for pdf which can load data from json? The transformation routine using aspose translates json to xls format and pdf format in few lines of code. i.e. 1. create a workbook 2. mapping data to workbook is there a sample i can refer for aspose place holder in xlsx and pdf?<issue_comment>username_1: For API docs you can use [swagger](https://swagger.io) or [apidoc](http://apidocjs.com). Upvotes: 0 <issue_comment>username_2: A guide to ReadTheDocs ====================== I've used ReadTheDocs for many of my own projects, and it really is a useful platform. As far as I've gathered from your question, you are trying to host HTML files from your repository (GitHub repo?). However, ReadTheDocs isn't made for hosting HTML - it actually builds ReStructuredText or Markdown files with [Sphinx](http://www.sphinx-doc.org/en/master/) (a documentation building system written in Python). Here is a typical scenario for setting up ReadTheDocs to host your documentation: Initialise files ---------------- 1. Firstly, install Sphinx with `pip` - read [this](http://www.sphinx-doc.org/en/stable/tutorial.html) for a guide on how to do so. 2. Next, go into the cloned repository on your computer and run `sphinx-quickstart` inside the `docs` folder (**must be an empty folder**). 3. The command should ask you some questions. Choose the following answers: * `Seperate source and build directories?`: **n** * `Project name`: a neat public-facing name for your project * `Author name(s)`: Name(s) of the developers who made the API * `Project release`: Current version of your APIThe rest can stay as default (press enter to take default option) 4. Commit the created files to your GitHub repo. 5. 
Sign up for a ReadTheDocs account and import your repository. By default, it will either build anything it sees in the root directory of your repo or in its `docs` folder (it will automatically determine which). If all is successful, you should be able to open your documentation page and see a demo page. Write and edit docs ------------------- You should now be able to edit your files to create your documentation. RTD's design is based on 'Themes', and most pages use <https://github.com/rtfd/sphinx_rtd_theme>. Theme repos usually give decent installation docs. To edit your page, you need to edit `docs/index.rst`. RST stands for `ReStructuredText` and is similar to Markdown. You can find cheat sheets for it on the internet. Here is what the auto-generated file looks like: ``` .. Test documentation master file, created by sphinx-quickstart on Mon Mar 19 18:24:58 2018. You can adapt this file completely to your liking, but it should at least contain the root `toctree` directive. Welcome to Test's documentation! ================================ .. toctree:: :maxdepth: 2 :caption: Contents: Indices and tables ================== * :ref:`genindex` * :ref:`modindex` * :ref:`search` ``` You can remove the 'Indices and tables' section from the bottom - I'm not entirely sure of its purpose. The `.. toctree::` is a universal menu - you only need to define it in `index.rst` and you can leave it out on other pages. To create a new page of documentation, create a new `*.rst` file. What you call it will correspond to the `.html` file that it is rendered into. For example, `parameters.rst` will be accessible through `http://mydocs.readthedocs.org/en/latest/parameters.html`. To add the `parameters.rst` page to your menu, the `toctree` in `index.rst` needs to look like this: ``` .. toctree:: :maxdepth: 2 :caption: Contents: parameters ``` Essentially, you need to add the name of your `.rst` file (without the extension) to the `.. toctree` in your `index.rst` file (and nowhere else). 
Applying changes ---------------- To apply changes that you've made and publish them to your ReadTheDocs page, you simply need to commit your new `.rst` files to the `master` branch on GitHub, and RTD will automatically build and publish for you. In case you haven't quite understood, RTD **does not take .html files**. You shouldn't commit *any* .html files to GitHub, just .rst files. The .rst files will be built by RTD and published. Versions -------- You can use Git tags to manage versions of your docs. For more detail, see <http://docs.readthedocs.io/en/latest/versions.html> (official ReadTheDocs documentation). Hope this is useful! Upvotes: 2 <issue_comment>username_3: If your project is on GitHub, your requirement can be met using [Github Actions](https://github.com/features/actions) along with a static site generator (SSG) of your choice supporting PDF generation. In its simplest form, create a GH action to generate the static site folder of the branch/release, then push the folder to the corresponding folder in the branch pointed to by GH pages, say *gh-pages*. One of the branches/releases should be pushed to the root. [GitHub Pages Deploy Action](https://github.com/marketplace/actions/deploy-to-github-pages) can be helpful. Add a dropdown list of versions to your static website pointing to the matching folder. Example: * [GitHub action](https://github.com/bcgov/NotifyBC/blob/main/.github/workflows/ghPages.yml) * [dropdown list implemented by Vue component](https://github.com/bcgov/NotifyBC/blob/main/docs/.vuepress/components/versions.vue) * [rendered site powered by GH pages](https://bcgov.github.io/NotifyBC/) Benefits over the ReadTheDocs free plan: * No ads * Hosted entirely on GitHub, no 3rd party service nor authorization required. Upvotes: 0 <issue_comment>username_4: My approach was to write a script to copy the documentation source (rst) plus *only* the source code files needed to create the documentation (rather than the whole source tree). 
These were copied to a separate (private) GitHub repo that is used by Read the Docs to build the documents. **In all cases, we *only* copy files to the GitHub repo that are part of the shipped product, so there should be no IP risk.** * For C++ APIs (using `breathe`) this is just the public header files. * For Python APIs it is either the Python sources or the built wheel files. For the Python wheels, some of these need supporting object files (libs, etc). This meant using `git-lfs`, which is not officially supported by Read the Docs. It was difficult to get it all to build initially (working out all the dependencies by trial and error), and it is still quite fragile, but it works. Upvotes: 0
2018/03/19
717
3,185
<issue_start>username_0: My question is strictly related to transactional emails, and common best practices like using different email providers for transactional and marketing emails, relevant Subject, From and Reply-To addresses, etc. are assumed. **Would using multiple transactional email providers with a different affinity to the destination domain or even full email address increase my deliverability?** e.g. using let's say mandrill to send to gmail and postmark for everything else or even mandrill for <EMAIL> and postmark for <EMAIL> **Is it worth splitting transactional emails into first-class ones (reset password, verify email, etc) and second-class ones (merely informative) and using different email providers for each?** e.g. using mandrill for password reset and postmark for the welcome (registration successful) email **What else is advisable?**<issue_comment>username_1: Yes, you have the right idea. For transactional emails, it is always recommended to go with a highly reputed service provider who doesn't deal with spam or promotional emails. This is the very first step. If the vendor supports both, then there is a high probability that they will end up damaging the reputation of your domain/IP addresses. Once you have selected a good provider, you can rest assured your emails follow the best delivery practices and are in compliance with guidelines. Now, the second step is to have a separate account for the marketing emails. This can be with the current vendor (only if the vendor provides different envelope and IP addresses for this account) or with a new service provider. Now, the third step is to have a separate domain/sub-domain (preferably a sub-domain, so that you do not look spammy to the world) for marketing emails, e.g. if you are using example.com for your transactional account, then use mailer-example.com for your marketing account. 
Another important note: having a separate account doesn't mean you are now free to send any type of promo/marketing email. Send relevant, customer-engaging emails; otherwise, despite all these measures, you will end up losing the reputation of your sender domain and delivery IP addresses. Upvotes: 1 <issue_comment>username_2: All of the major transactional email providers offer similar deliverability abilities; they (obviously) take that stuff very seriously. I don't think splitting between them is worth it, but there are some things you can do to increase your delivery rates and inbox placement regardless of ESP: 1. By default you share a delivery IP with other ESP customers; if you pay more you can get a dedicated IP, which will improve things over the long term (assuming your data is good) 2. Reduce bounces -- using [an email verification service](https://kickbox.com) when you collect an email to make sure it is a real, active email address will reduce bounce rates and improve the deliverability of your emails. I work for [Kickbox](https://kickbox.com), which offers exactly this. 3. Reducing spam complaints -- make sure users know the emails are coming from your web app and you make it clear how to turn the emails off. Hope that helps. Upvotes: 0
2018/03/19
904
3,699
<issue_start>username_0: We are running a WebApp with 3 instances. We would like to restart these instances at night individually. I'm trying to find a PowerShell or REST API solution to work along with a cron job, but I'm only coming up with solutions to restart the entire WebApp. We are aware of the manual process to restart them (screenshot link below) but we would like to automate the process. [Screenshot link.jpg](https://i.stack.imgur.com/NsZ7U.png)<issue_comment>username_1: If you're looking at Windows Azure PowerShell Cmdlets, the command you want to use is Reset-AzureRoleInstance ([http://msdn.microsoft.com/en-us/library/jj152835.aspx](https://learn.microsoft.com/en-us/powershell/module/Azure/Reset-AzureRoleInstance?view=azuresmps-4.0.0)) Upvotes: 1 <issue_comment>username_2: Yes, you could use Azure PowerShell to do this; please check this [answer](https://stackoverflow.com/questions/43810248/powershell-for-an-advanced-application-restart-on-an-azure-web-app). According to your description, I suggest you first find each instance's process in your web app by using the `Get-AzureRmResource` command. Then you could use `Remove-AzureRmResource` to stop these processes. Then, when you access the Azure web application, Azure will automatically create new instance processes to run your application. 
For more details, you can refer to the PowerShell code below: ``` Login-AzureRmAccount Select-AzureRmSubscription -SubscriptionId '{your subscriptionid}' $siteName = "{sitename}" $rgGroup = "{groupname}" $webSiteInstances = @() #This gives you list of instances $webSiteInstances = Get-AzureRmResource -ResourceGroupName $rgGroup -ResourceType Microsoft.Web/sites/instances -ResourceName $siteName -ApiVersion 2015-11-01 $sub = (Get-AzureRmContext).Subscription.SubscriptionId foreach ($instance in $webSiteInstances) { $instanceId = $instance.Name "Going to enumerate all processes on {0} instance" -f $instanceId # This gives you list of processes running # on a particular instance $processList = Get-AzureRmResource ` -ResourceId /subscriptions/$sub/resourceGroups/$rgGroup/providers/Microsoft.Web/sites/$sitename/instances/$instanceId/processes ` -ApiVersion 2015-08-01 foreach ($process in $processList) { if ($process.Properties.Name -eq "w3wp") { $resourceId = "/subscriptions/$sub/resourceGroups/$rgGroup/providers/Microsoft.Web/sites/$sitename/instances/$instanceId/processes/" + $process.Properties.Id $processInfoJson = Get-AzureRmResource -ResourceId $resourceId -ApiVersion 2015-08-01 # is_scm_site is a property which is set # on the worker process for the KUDU $computerName = $processInfoJson.Properties.Environment_variables.COMPUTERNAME if ($processInfoJson.Properties.is_scm_site -ne $true) { $computerName = $processInfoJson.Properties.Environment_variables.COMPUTERNAME "Instance ID" + $instanceId + "is for " + $computerName "Going to stop this process " + $processInfoJson.Name + " with PID " + $processInfoJson.Properties.Id # Remove-AzureRMResource finally STOPS the worker process $result = Remove-AzureRmResource -ResourceId $resourceId -ApiVersion 2015-08-01 -Force if ($result -eq $true) { "Process {0} stopped " -f $processInfoJson.Properties.Id } } } } } ``` Upvotes: 3 [selected_answer]
2018/03/19
1,498
4,008
<issue_start>username_0: I have a class that has member functions with different signatures. Based upon some requirement (to optimize execution time specifically) I need to call one of the above methods at a specific time. I plan to create a structure of the below type: ``` #include <iostream> class A { public: void Show() {std::cout << "Called 0" << std::flush << std::endl;} int Show1() {std::cout << "Called 1" << std::flush << std::endl;} double Show2(char z) {std::cout << "Called 2" << std::flush << std::endl;} float Show3(int op, float x) {std::cout << "Called 3" << std::flush << std::endl;} }; struct details { int type ; /* methods to be called resp : 0 =Show,1=Show1,2=Show2,3=Show3*/ union { void (A::*fn)(); int (A::*fn1)(); double (A::*fn2)(char z); float (A::*fn3)(int op, float x); }fnptr; }; int main() { struct details d1 [4] ; d1[0].type = 0; d1[0].fnptr.fn = &A::Show; A a1; (a1.*(d1[0].fnptr.fn))(); d1[0].type = 1; d1[0].fnptr.fn1 = &A::Show1; (a1.*(d1[0].fnptr.fn1))(); d1[0].type = 1; d1[0].fnptr.fn2 = &A::Show2; (a1.*(d1[0].fnptr.fn2))('a'); d1[0].type = 4; d1[0].fnptr.fn3 = &A::Show3; (a1.*(d1[0].fnptr.fn3))(2,3.14); } MINGW64 /c/work $ c++ try.cpp -std=c++11 MINGW64 /c/work $ ./a.exe Called 0 Called 1 Called 2 Called 3 ``` However, I am not able to initialize the array: ``` struct details d1 [4] = {{0, &A::Show}, {1, &A::Show1}, {2, &A::Show2}, {3, &A::Show3}}; ``` It gives a compilation error: ``` int main() { struct details d1 [4] = {{0, &A::Show}, {1, &A::Show1}, {2, &A::Show2}, {3, &A::Show3}}; } try.cpp: In function 'int main()': try.cpp:26:87: error: cannot convert 'int (A::*)()' to 'void (A::*)()' in initialization struct details d1 [4] = {{0, &A::Show}, {1, &A::Show1}, {2, &A::Show2}, {3, &A::Show3}}; ^ try.cpp:26:87: error: cannot convert 'double (A::*)(char)' to 'void (A::*)()' in initialization try.cpp:26:87: error: cannot convert 'float (A::*)(int, float)' to 'void (A::*)()' in initialization ``` How can I initialize the structure 
properly?<issue_comment>username_1: > > However I am not able to initialize the array : > > > > ``` > struct details d1 [4] = {{0, &A::Show}, {1, &A::Show1}, {2, &A::Show2}, {3, &A::Show3}}; > > ``` > > When a `union` is initialized using that syntax, the value must correspond to its first member. In your case, the first member is `fn`, whose type is `void (A::*)()`. That line is equivalent to: ``` struct details d1 [4] = {{0}, {1}, {2}, {3}}; d1[0].fn = &A::Show; d1[1].fn = &A::Show1; d1[2].fn = &A::Show2; d1[3].fn = &A::Show3; ``` That explains the compiler error. [Documentation from the standard](https://timsong-cpp.github.io/cppwp/n3337/dcl.init.aggr#15): > > When a union is initialized with a brace-enclosed initializer, the braces shall only contain an initializer-clause for the first non-static data member of the union. [ *Example:* > > > > ``` > union u { int a; const char* b; }; > u a = { 1 }; > u b = a; > u c = 1; // error > u d = { 0, "asdf" }; // error > u e = { "asdf" }; // error > > ``` > > — *end example* ] > > > Upvotes: 4 [selected_answer]<issue_comment>username_2: Currently, in C++, when a union is [*aggregate initialized*](http://en.cppreference.com/w/cpp/language/aggregate_initialization), only the first *non-static data member* can be initialized. You cannot work around that. The good news is that, from [**C++20**](http://en.cppreference.com/w/cpp/language/aggregate_initialization#Designated_initializers), you'll be able to [select an active member during *aggregate initialization*](http://en.cppreference.com/w/cpp/language/aggregate_initialization#Designated_initializers). So, this syntax should work in a few years' time. :-) ``` details d1 [4] = {{0, {.fn = &A::Show}}, {1, {.fn1 = &A::Show1}}, {2, { .fn2 = &A::Show2}}, {3, { .fn3 = &A::Show3}}}; ``` Upvotes: 2
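Until C++20 designated initializers are available, one hedged pre-C++20 workaround (a sketch only: the `make_details` overloads and the `last` bookkeeping field are illustrative additions, not part of the original question) is to build each array entry with a small overloaded factory that writes the intended union member after default-initialization, then dispatch on the `type` tag:

```cpp
#include <cassert>

class A {
public:
    void   Show()            { last = 0; }
    int    Show1()           { last = 1; return 1; }
    double Show2(char)       { last = 2; return 2.0; }
    float  Show3(int, float) { last = 3; return 3.0f; }
    int last = -1; // records which method ran; added for this demo only
};

struct details {
    int type;
    union {
        void   (A::*fn)();
        int    (A::*fn1)();
        double (A::*fn2)(char);
        float  (A::*fn3)(int, float);
    } fnptr;
};

// Overloaded helpers: default-initialize, then assign the member we want.
// This sidesteps the rule that brace-init can only target the first union member.
inline details make_details(int t, void (A::*f)())            { details d{}; d.type = t; d.fnptr.fn  = f; return d; }
inline details make_details(int t, int (A::*f)())             { details d{}; d.type = t; d.fnptr.fn1 = f; return d; }
inline details make_details(int t, double (A::*f)(char))      { details d{}; d.type = t; d.fnptr.fn2 = f; return d; }
inline details make_details(int t, float (A::*f)(int, float)) { details d{}; d.type = t; d.fnptr.fn3 = f; return d; }

// Dispatch every entry through the member matching its tag and return the
// sum of the tags that actually ran (0+1+2+3 == 6 when dispatch is correct).
inline int run_demo() {
    const details d1[4] = {
        make_details(0, &A::Show),
        make_details(1, &A::Show1),
        make_details(2, &A::Show2),
        make_details(3, &A::Show3),
    };
    A a;
    int sum = 0;
    for (const details& d : d1) {
        switch (d.type) {
            case 0: (a.*(d.fnptr.fn))();          break;
            case 1: (a.*(d.fnptr.fn1))();         break;
            case 2: (a.*(d.fnptr.fn2))('a');      break;
            case 3: (a.*(d.fnptr.fn3))(2, 3.14f); break;
        }
        sum += a.last;
    }
    return sum;
}
```

Reading the union member that was last written is well-defined, so each `make_details` overload keeps the tag and the active member in sync.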
2018/03/19
659
1,669
<issue_start>username_0: I left a question for finding the column with the lowest value and here is the link [find the column with lowest value in r](https://stackoverflow.com/questions/49203020/find-the-column-with-lowest-value-in-r/49203047#49203047) This perfectly works for me. However, I have a similar but different problem. If I want to find the column with 2nd lowest, 3rd lowest value ,, and nth lowest value. How to do it?<issue_comment>username_1: Here is an example of getting the column numbers of the three lowest columns in each row. ``` set.seed(1234) M = matrix(sample(20,20), ncol=4) M [,1] [,2] [,3] [,4] [1,] 3 10 7 9 [2,] 12 1 5 17 [3,] 11 4 20 16 [4,] 18 8 15 19 [5,] 14 6 2 13 t(apply(M, 1, function(x) head(order(x),3))) [,1] [,2] [,3] [1,] 1 3 4 [2,] 2 3 1 [3,] 2 1 4 [4,] 2 3 1 [5,] 3 2 4 ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: A tidyverse solution: ``` library(dplyr) library(tidyr) df <- as.data.frame(matrix(sample(20,20), ncol = 4)) df %>% tidyr::gather(column, value) %>% arrange(value) %>% filter(row_number() == 2) %>% pull(column) ``` Upvotes: 0 <issue_comment>username_3: A solution which allows for multiple possible columns having the $k$-th lowest value: Use `which` with `arr.ind=T` ``` set.seed(1234) M = matrix(sample(50, 100, replace=T), ncol=4) ## columns with the 6th lowest value, e.g. 6 (two instances in cols 1 & 4) which(M == unique(sort(M))[6], arr.ind = T) ``` Gives: ``` > which(M == unique(sort(M))[6], arr.ind = T) row col [1,] 1 1 [2,] 20 4 ``` Upvotes: 0
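For readers coming from other languages, the order-then-take-`k` idea in the accepted answer can be mirrored in plain Python (an illustrative analogue, not R code):

```python
def k_lowest_columns(row, k):
    """Return the 1-based column indices (to match R) of the k smallest values."""
    order = sorted(range(len(row)), key=lambda j: row[j])
    return [j + 1 for j in order[:k]]

# First row of the example matrix M above is 3 10 7 9:
cols = k_lowest_columns([3, 10, 7, 9], 3)  # matches the R output row: 1 3 4
```

Applying this to every row reproduces `t(apply(M, 1, function(x) head(order(x), 3)))` from the answer.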
2018/03/19
264
831
<issue_start>username_0: How can I change `Nokogiri::XML('813').children` to return `813` (because it only returns `8`) Thanks !<issue_comment>username_1: Is it possible to wrap your XML like so? `813` The result of: `Nokogiri::XML('813').children` will then be: `813` Upvotes: 0 <issue_comment>username_2: [Nokogiri::XML](http://www.nokogiri.org/tutorials/parsing_an_html_xml_document.html) parses an XML *document*. A valid XML document can have [*only one* root element](https://en.wikipedia.org/wiki/XML#Well-formedness_and_error-handling) so Nokogiri tries to make your document valid by ignoring the second (invalid) element. You need to first make your document valid by wrapping it in a root element, then retrieve the children of that root element. ``` Nokogiri::XML("#{myxml}").root.children ``` Upvotes: 2
2018/03/19
628
1,898
<issue_start>username_0: I am trying to connect mongodb and node.js for a project I'm working on. I have set up a database as follows: [click here to view the status of the database](https://i.stack.imgur.com/PxZZR.png) and I have a file called index.js in D:\node.JS. Here's the content of the file: ``` const MongoClient = require('mongodb').MongoClient; var url = "mongodb://localhost:27017/test"; MongoClient.connect(url, function(err, db){ if (err) throw err; console.log("Database created!"); db.close(); }); ``` And this is the output I'm getting: ``` D:\node.JS>node index.js D:\node.JS\index.js:6 mongoClient.connect(url,(err,databse) ^ SyntaxError: missing ) after argument list at createScript (vm.js:80:10) at Object.runInThisContext (vm.js:139:10) at Module._compile (module.js:616:28) at Object.Module._extensions..js (module.js:663:10) at Module.load (module.js:565:32) at tryModuleLoad (module.js:505:12) at Function.Module._load (module.js:497:3) at Function.Module.runMain (module.js:693:10) at startup (bootstrap_node.js:188:16) at bootstrap_node.js:609:3 ``` Where can I have gone wrong?<issue_comment>username_1: Is it possible to wrap your XML like so? `813` The result of: `Nokogiri::XML('813').children` will then be: `813` Upvotes: 0 <issue_comment>username_2: [Nokogiri::XML](http://www.nokogiri.org/tutorials/parsing_an_html_xml_document.html) parses an XML *document*. A valid XML document can have [*only one* root element](https://en.wikipedia.org/wiki/XML#Well-formedness_and_error-handling) so Nokogiri tries to make your document valid by ignoring the second (invalid) element. You need to first make your document valid by wrapping it in a root element, then retrieve the children of that root element. ``` Nokogiri::XML("#{myxml}").root.children ``` Upvotes: 2
2018/03/19
1,457
2,722
<issue_start>username_0: I have a data of 131 columns. The first column is my Y. I have 130 Xs. I want to have 130 linear regressions which are lm(y ~ x1), lm(y ~ x2), lm(y ~ x3 ) ....lm(y ~x130). Then get the p-value of every of these fit. How can I make it faster? for loop or apply?<issue_comment>username_1: If your data looks something like this (only larger) ``` > library(dplyr) > tbl <- data.frame( + A = rnorm(10), + B = rnorm(10), + C = rnorm(10) + ) %>% mutate( + y = 2 * A + rnorm(10, .1) + ) > tbl A B C y 1 -1.3430281 0.06457155 -0.31477796 -3.54276780 2 -0.8045598 0.55160502 -0.04486946 -0.17595827 3 0.6432380 -0.38036302 0.30313165 2.71317260 4 0.9282322 0.92453929 1.52828109 1.41677569 5 -0.2104841 -0.31510189 -1.32938820 -0.02714028 6 -1.8264372 0.92910256 0.16072524 -5.09970701 7 0.9568248 0.42829255 -0.28423084 1.58072449 8 -1.2061661 -1.10672961 0.69626390 -3.19605711 9 0.6173230 2.74964116 0.67350556 1.78849532 10 -1.1575590 -0.01747244 -0.10611764 -3.09733526 ``` you can use `tidyr` to make it into a form that is easier to work with ``` > tidy_tbl <- tbl %>% tidyr::gather(var, x, -y) > head(tidy_tbl) y var x 1 -3.54276780 A -1.3430281 2 -0.17595827 A -0.8045598 3 2.71317260 A 0.6432380 4 1.41677569 A 0.9282322 5 -0.02714028 A -0.2104841 6 -5.09970701 A -1.8264372 ``` Then, you can use `broom` to fit a model per `var` group ``` > library(broom) > fitted <- tidy_tbl %>% + group_by(var) %>% + do(model = lm(y ~ x, data = .)) > fitted Source: local data frame [3 x 2] Groups: # A tibble: 3 x 2 var model \* 1 A 2 B 3 C ``` You can use `tidy` to move the fitted models from nested lists in the data frame to summaries of them: ``` > fitted %>% tidy(model) # A tibble: 6 x 6 # Groups: var [3] var term estimate std.error statistic p.value 1 A (Intercept) 0.0744 0.305 0.244 0.814 2 A x 2.46 0.288 8.54 0.0000271 3 B (Intercept) -1.05 0.945 -1.11 0.298 4 B x 0.750 0.891 0.842 0.424 5 C (Intercept) -0.842 0.920 -0.915 0.387 6 C x 0.610 1.26 0.485 0.641 ``` Upvotes: 
1 <issue_comment>username_2: Using base R only this can be done with a series of `*apply` instructions. First, I will make up some data since you have posted none. ``` set.seed(7637) # Make the results reproducible n <- 100 dat <- as.data.frame(replicate(11, rnorm(n))) names(dat) <- c("Y", paste0("X", 1:10)) ``` Now, for the regressions. ``` lm_list <- lapply(dat[-1], function(x) lm(Y ~ x, dat)) lm_smry <- lapply(lm_list, summary) lm_pval <- sapply(lm_smry, function(x) x$coefficients[, "Pr(>|t|)"]) ``` Upvotes: 3 [selected_answer]
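As a language-neutral illustration of what each `lm(Y ~ x)` call computes per column, here is the closed form for simple regression written in plain Python (a sketch only; it stops at the slope's t statistic, since turning that into a p-value additionally needs a t-distribution CDF, e.g. from SciPy):

```python
import math

def slope_t_stat(y, x):
    """Fit y ~ x by least squares; return (slope, t statistic of the slope)."""
    n = len(y)
    xbar, ybar = sum(x) / n, sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    b = sxy / sxx                            # slope
    a = ybar - b * xbar                      # intercept
    sse = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    se_b = math.sqrt(sse / (n - 2) / sxx)    # standard error of the slope
    return b, b / se_b

# One predictor at a time, mirroring lapply(dat[-1], ...) above:
x1 = [0, 1, 2, 3, 4]
y = [0.1, 2.0, 4.1, 5.9, 8.1]  # roughly y = 2 * x1
slope, t = slope_t_stat(y, x1)
```

Looping this over the 130 predictor columns gives the same slope/t pairs that `summary(lm(...))$coefficients` reports.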
2018/03/19
1,843
4,053
<issue_start>username_0: I have a problem using the any function for strings in a if/else statement. Note that the print ("A") in the function is just an example. I need to perform a series of operation if the column contains certain values. Randomly generated data ``` level=c("Strongly Agree", "Agree", "Neither agree or disagree","Disagree", "Strongly disagree",NA) df <- data.frame(pre_1=as.character(sample(c("Yes","No", NA), 30, replace = T)), pre_2=as.character(sample(level, 30, replace = T)), post_1=as.character(sample(level, 30, replace = T)), post_2=as.character(sample(c("<90%", "0-80%", ">90", NA), 30, replace = T)), stringsAsFactors=T) ``` Select the part of the dataframe needed ("post\_") and printing a statement based on the values of specific columns. In this case, i need to print "A" for the columns that contain specific rows values: `"Strongly Agree", "Agree", "Neither agree or disagree","Disagree", "Strongly disagree"` ``` select(df, starts_with("post_")) %>% length() %>% seq(1,.,1) %>% for (i in .){ if (any(c("Neither agree or disagree") == (select(df, starts_with("post_"))[i]))){ print ("A") } else {print ("B")} } ``` This gives the error ``` Error in if (any(c("Neither agree or disagree") == (select(df, starts_with("post_"))[i]))) { : missing value where TRUE/FALSE needed ``` Note that if i run the code here under it correctly works ``` if (any(c("Neither agree or disagree","Agree") == df[3])){print ("A")} else { print ("B")} ``` Any help appreciated<issue_comment>username_1: If your data looks something like this (only larger) ``` > library(dplyr) > tbl <- data.frame( + A = rnorm(10), + B = rnorm(10), + C = rnorm(10) + ) %>% mutate( + y = 2 * A + rnorm(10, .1) + ) > tbl A B C y 1 -1.3430281 0.06457155 -0.31477796 -3.54276780 2 -0.8045598 0.55160502 -0.04486946 -0.17595827 3 0.6432380 -0.38036302 0.30313165 2.71317260 4 0.9282322 0.92453929 1.52828109 1.41677569 5 -0.2104841 -0.31510189 -1.32938820 -0.02714028 6 -1.8264372 0.92910256 0.16072524 
-5.09970701 7 0.9568248 0.42829255 -0.28423084 1.58072449 8 -1.2061661 -1.10672961 0.69626390 -3.19605711 9 0.6173230 2.74964116 0.67350556 1.78849532 10 -1.1575590 -0.01747244 -0.10611764 -3.09733526 ``` you can use `tidyr` to make it into a form that is easier to work with ``` > tidy_tbl <- tbl %>% tidyr::gather(var, x, -y) > head(tidy_tbl) y var x 1 -3.54276780 A -1.3430281 2 -0.17595827 A -0.8045598 3 2.71317260 A 0.6432380 4 1.41677569 A 0.9282322 5 -0.02714028 A -0.2104841 6 -5.09970701 A -1.8264372 ``` Then, you can use `broom` to fit a model per `var` group ``` > library(broom) > fitted <- tidy_tbl %>% + group_by(var) %>% + do(model = lm(y ~ x, data = .)) > fitted Source: local data frame [3 x 2] Groups: # A tibble: 3 x 2 var model \* 1 A 2 B 3 C ``` You can use `tidy` to move the fitted models from nested lists in the data frame to summaries of them: ``` > fitted %>% tidy(model) # A tibble: 6 x 6 # Groups: var [3] var term estimate std.error statistic p.value 1 A (Intercept) 0.0744 0.305 0.244 0.814 2 A x 2.46 0.288 8.54 0.0000271 3 B (Intercept) -1.05 0.945 -1.11 0.298 4 B x 0.750 0.891 0.842 0.424 5 C (Intercept) -0.842 0.920 -0.915 0.387 6 C x 0.610 1.26 0.485 0.641 ``` Upvotes: 1 <issue_comment>username_2: Using base R only this can be done with a series of `*apply` instructions. First, I will make up some data since you have posted none. ``` set.seed(7637) # Make the results reproducible n <- 100 dat <- as.data.frame(replicate(11, rnorm(n))) names(dat) <- c("Y", paste0("X", 1:10)) ``` Now, for the regressions. ``` lm_list <- lapply(dat[-1], function(x) lm(Y ~ x, dat)) lm_smry <- lapply(lm_list, summary) lm_pval <- sapply(lm_smry, function(x) x$coefficients[, "Pr(>|t|)"]) ``` Upvotes: 3 [selected_answer]
2018/03/19
961
4,464
<issue_start>username_0: This feels like it should be a simple thing, but I'm still pretty new to SpringBoot, and the whole Servlet ecosystem, so it's not readily apparent. I would love an interface similar to HandlerInterceptor that allows me to modify the request and response object once I'm done in a controller. Even better would be to decorate mapping annotation, so I can specify which controllers need the operation. The problem I'm solving right now, though I anticipate expanding this in the future, is that I have an encrypted header coming into my application that I would like to decrypt for use in the controller and then encrypt again on the way out. EDIT: For clarity. I have a rest controller, something like: ``` @RestController public class PojoService { @GetMapping(value = "/path/to/resource") public ResponseEntity getLocationData( @RequestHeader(value = "EncryptedHeader", required = false) String ecryptedHeaderValue) { DecryptionObject decryptedHeader = new DecryptionObject(pageHeaderValue); SomePojo result = getResult(); return decryptedHeader.decorateResponseWithEncryptedHeader(result); } } ``` I would love to not have the DecryptionObject on every mapping, but rather, before I even get to the mapping, I decrypt the header via some filter or hook and then re-encrypt the header on the way out. Then my code would look something like: ``` @RestController public class PojoService { @GetMapping(value = "/path/to/resource", decryptHeader="EncryptedHeader") public ResponseEntity getLocationData( @RequestHeader(value = "EncryptedHeader", required = false) String decryptedHeaderValue) { SomePojo result = getResult(); return result; } } ``` I found that the HandlerInterceptor doesn't work because I cannot modify the request or response in the interceptor. Hope that clarifies the issue.<issue_comment>username_1: You can still use HandlerInterceptor. 
Create your class implementing HandlerInterceptor, and then register it using another class which implements [WebMvcConfigurer](https://docs.spring.io/spring/docs/current/javadoc-api/org/springframework/web/servlet/config/annotation/WebMvcConfigurer.html#addInterceptors-org.springframework.web.servlet.config.annotation.InterceptorRegistry-). ``` @EnableWebMvc @Configuration @ComponentScan public class MyWebConfig implements WebMvcConfigurer { public void addInterceptors(InterceptorRegistry registry) { registry.addInterceptor(new [...]); //Apply to all controllers registry.addInterceptor(new [...]).addPathPatterns("path1","path2"); //Apply to specific paths to restrict to some controllers. } } ``` You could also do it using a Filter - create your Filter class and register it by declaring a `@Bean` of type FilterRegistrationBean - this also allows you to restrict to some paths. UPDATE: You could do this with request attributes, which can be set by interceptors (`request.setAttribute("decryptedHeaderValue", ...)`). Or, if you're specific about using headers, a filter would be more suitable for your purpose. Create a new wrapped request type that wraps the incoming request and does whatever you want, and pass this wrapper to the next filter in the chain. ``` public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain) { [...] HttpServletRequestWrapper decryptedRequest = new HttpServletRequestWrapper((HttpServletRequest) request) { public String getHeader(String name) { if (name.equals("DecryptedHeader")) { String encrypted = super.getHeader("EncryptedHeader"); String decrypted = decrypt(encrypted); return decrypted; } return super.getHeader(name); //Default behavior } }; chain.doFilter(decryptedRequest, response); //Pass on the custom request down } ``` Then any class down the line (other filters, controllers etc) can just call `request.getHeader("DecryptedHeader")` to retrieve the decrypted header. This is just one of many similar approaches. 
You can restrict the paths for which this filter executes when registering it. For the response, there is a similar class, HttpServletResponseWrapper, which you can use for customization. Upvotes: 3 [selected_answer]<issue_comment>username_2: We can do this by adding an attribute in the `interceptor`: `httpServletRequest.setAttribute(..., ...);` Upvotes: 1
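Expanding on the `HttpServletResponseWrapper` hint above, the decorator idea for re-encrypting a header on the way out can be sketched like this (note: `SimpleResponse` and `EncryptingResponse` are made-up stand-ins so the sketch is self-contained; a real filter would extend `javax.servlet.http.HttpServletResponseWrapper`, and Base64 here only stands in for a real cipher):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import java.util.HashMap;
import java.util.Map;

// Minimal stand-in for the response object.
class SimpleResponse {
    final Map<String, String> headers = new HashMap<>();
    public void setHeader(String name, String value) { headers.put(name, value); }
}

// Decorator that transparently "encrypts" one protected header on the way out.
class EncryptingResponse extends SimpleResponse {
    private final SimpleResponse delegate;
    private final String protectedHeader;

    EncryptingResponse(SimpleResponse delegate, String protectedHeader) {
        this.delegate = delegate;
        this.protectedHeader = protectedHeader;
    }

    @Override
    public void setHeader(String name, String value) {
        if (name.equals(protectedHeader)) {
            // Base64 as a placeholder for the real encryption routine.
            value = Base64.getEncoder()
                          .encodeToString(value.getBytes(StandardCharsets.UTF_8));
        }
        delegate.setHeader(name, value);
    }
}
```

The filter would wrap the real response in such a decorator before calling `chain.doFilter`, so the controller keeps writing plain values while the client only ever sees the encrypted header.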
2018/03/19
543
1,946
<issue_start>username_0: I'm working with `Vuetify` and `Stylus` on this snippet of HTML ``` Language ``` Is there a `CSS`/`Stylus` way to edit `input-group__details` based on what the status of `input[readonly]` is? Something like: ``` if (.input-group > input has readonly) .input-group__details height: 0px else .input-group__details height: 5px ``` Basically, how do I change a class based on the sibling's child attribute?<issue_comment>username_1: Well, it's not possible with the provided markup, but if you're allowed to change some markup you can get this... try to make the `.input-group__details` the next sibling of `input`. Also, you don't need to assign a value to readonly... just use `readonly` ```css input[readonly]+.input-group__details { color: red; } ``` ```html Language Welcome Language Welcome ``` Upvotes: 1 <issue_comment>username_2: You can bind the class. ``` Language ``` Now in your vue script: ``` data: { trueFalse: false, }, methods: { someClassName() { //condition of your input field //if the condition is true, set 'trueFalse' to true, else to false this.trueFalse = true } } ``` Finally, in your CSS: ``` .className { //add your style with !important } ``` Upvotes: 0 <issue_comment>username_3: Unfortunately as of now, this cannot be achieved in CSS, and as all CSS preprocessors need to generate CSS, it also cannot be done with any pre- or post-processing whatsoever. You will either have to change your HTML structure (make sure the targeted element comes after the readonly input, and they share the parent element), or resort to Javascript. If you have enough time, you can also wait for **[selectors level 4](http://developer.mozilla.org/en-US/docs/Web/CSS/:has)** to arrive, which would solve your problem with this `.input-group__input:has(input[readonly]) + .input-group__details { ... }` Upvotes: 3 [selected_answer]
2018/03/19
1,517
5,100
<issue_start>username_0: I am working on using opencv to get the total number of people in a video stream. The problem is that my code only captures the number of people in frame without taking into account all of the frames in the stream. I thought about extracting all the faces detected from a video or webcam and then comparing them. My question here is how do I get the exact count of people by comparing the faces which are extracted? Or is there any other approach to get the total count? This is the function which detects faces and gives gender and count(but its only for that frame ) ``` def start_webcam(model_gender, window_size, window_name='live', update_time=50): cv2.namedWindow(window_name, WINDOW_NORMAL) if window_size: width, height = window_size cv2.resizeWindow(window_name, width, height) video_feed = cv2.VideoCapture(0) video_feed.set(3, width) video_feed.set(4, height) read_value, webcam_image = video_feed.read() delay = 0 init = True while read_value: read_value, webcam_image = video_feed.read() webcam_image=cv2.flip(webcam_image,1,0) faces = face_cascade.detectMultiScale(webcam_image) for normalized_face, (x, y, w, h) in find_faces(webcam_image): if init or delay == 0: init = False gender_prediction = model_gender.predict(normalized_face) if (gender_prediction[0] == 0): cv2.rectangle(webcam_image, (x,y), (x+w, y+h), (0,0,255), 2) cv2.putText(webcam_image, 'female', (x,y-10), cv2.FONT_HERSHEY_SIMPLEX, 1, (0,0,255), 2) else: cv2.rectangle(webcam_image, (x,y), (x+w, y+h), (255,0,0), 2) cv2.putText(webcam_image, 'male', (x,y-10), cv2.FONT_HERSHEY_SIMPLEX, 1.5, (255,0,0), 2) delay += 1 delay %= 20 cv2.putText(webcam_image, "Number of faces detected: " + str(len(faces)), (0,webcam_image.shape[0] -10), cv2.FONT_HERSHEY_TRIPLEX, 0.7, (255,255,255), 1) cv2.imshow(window_name, webcam_image) key = cv2.waitKey(update_time) if key == ESC: break cv2.destroyWindow(window_name) ```<issue_comment>username_1: Try hashing all of the faces for each frame. 
Then store each hash in a set and get its size to find the number of faces in the video feed. Upvotes: 0 <issue_comment>username_2: If I understood your question, in a nutshell your problem can be divided into the following pieces: 0 - detection: for each frame, detect zero or more faces. The output of this step is a sequence of "events". Each event is a face and the coordinates of the region where the face was detected in the image: ``` evts = {{face0, (x0,y0,w0,h0)}, {face1, (x1,y1,w1,h1)}, ..., {faceN, (xN,yN,wN,hN)}} ``` for N + 1 detected faces. 1 - identification: the objective of this step is to provide an ID for each event (face/region) detected in the previous step. So, for each face in evts, either: I. The face is a "new face", so a new ID is generated and assigned to it, or II. The face is the same face detected in one of the previous frames, so you should assign the same previous ID to it. The output of this step is a collection of assigned IDs: ``` ids = {id0, id1, id2, ..., idM} ``` 2 - count: repeat steps 0 and 1 up to the last frame. The size of the ids collection is the count of different faces in the video stream. **The real problem** The real problem is: how do you determine whether an event (a face in this case) in frame X is the "same" face in frame Y? Yes, this is the key problem. In your case, you should use a mix of approaches: * perform FACE RECOGNITION (face recognition is a different thing than face detection). Luckily, in recent years there have been a lot of improvements in this field and you can more or less easily use [openface](https://github.com/cmusatyalab/openface "openface") or similar APIs in your code to achieve your needs. Don't waste your time trying the [Viola's based algorithms](https://docs.opencv.org/2.4/modules/contrib/doc/facerec/facerec_api.html#FaceRecognizer%20:%20public%20Algorithm) for face recognition (they were introduced in 2001 and may be not so accurate for practical needs today).
* take into consideration the spatial and temporal locality principle and maximize the likelihood of finding the same face in a neighbouring region in successive frames. Given the issues of pose changes, lighting and occlusion, using the position of a previously detected face to identify the current one can be more robust than the face recognition algorithm. This depends on your video and scenes. There are several operational problems in implementing a robust solution for this problem: * Same faces in different poses * Faces at different scales * Occlusion (always a problem) * Realtime requirements And all sorts of CV-related challenges. So, be ready to handle false positive/negative rates. Tips: * Try your solution against several different videos to avoid overfitting. * If the faces in the videos are moving, the [Kalman Estimator](https://docs.opencv.org/trunk/dd/d6a/classcv_1_1KalmanFilter.html) can be useful. I hope I have understood your question. Upvotes: 2
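The detection → identification → count pipeline described above can be sketched with spatial locality alone (no real face recognition): assign each detected box the ID of the previously tracked box it overlaps most, and mint a new ID otherwise. This is a minimal illustration, not the answer's openface-based approach; the class and threshold are made up for the example.

```python
def iou(a, b):
    # a, b: (x, y, w, h) boxes; intersection-over-union overlap in [0, 1]
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    x1, y1 = max(ax, bx), max(ay, by)
    x2, y2 = min(ax + aw, bx + bw), min(ay + ah, by + bh)
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

class FaceCounter:
    """Identification step from the answer, using only spatial locality."""

    def __init__(self, threshold=0.3):
        self.threshold = threshold  # minimum overlap to call it "the same face"
        self.next_id = 0
        self.tracked = {}           # face id -> last known box (kept so faces can reappear)

    def update(self, boxes):
        """Process one frame's detections; return {face_id: box} for this frame."""
        assigned = {}
        for box in boxes:
            # best-overlapping previously seen box (spatial locality principle)
            best_id, best_box = max(self.tracked.items(),
                                    key=lambda kv: iou(kv[1], box),
                                    default=(None, None))
            if best_id is not None and iou(best_box, box) >= self.threshold:
                face_id = best_id          # same face as a previous frame
            else:
                face_id = self.next_id     # a new face gets a new ID
                self.next_id += 1
            assigned[face_id] = box
        self.tracked.update(assigned)
        return assigned

    @property
    def total(self):
        # count step: number of distinct IDs ever assigned
        return self.next_id
```

In practice this drifts on occlusion and fast motion, which is exactly why the answer recommends mixing it with actual face recognition.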
2018/03/19
473
1,439
<issue_start>username_0: How can I have my SVG image on the same line as my text in an inline-block container? In the example below, I want my picture kept at the right of my text, on the same line, but the browser automatically breaks the line. ```css .container{ display: inline-block; background:orange; } ``` ```html text![](https://res.cloudinary.com/aquis/image/upload/v1521481254/site-2018/megaphone-picto.svg) ```<issue_comment>username_1: Remove `display: inline-block` from your `.container` class and set the `inline-block` on your image. Since it is an `svg`, you will also want to give the `svg` an explicit height and width. It's recommended to wrap the `img` in a `span` tag, instead of setting it to the `img` html element, which would set the same small size on all image items on your site. I gave the span the classname of `svg-image`, but you could name it anything else. ```css .container{ background:orange; } .svg-image { display: inline-block; height:20px; width: 20px; } ``` ```html text![](https://res.cloudinary.com/aquis/image/upload/v1521481254/site-2018/megaphone-picto.svg) ``` Upvotes: 1 <issue_comment>username_2: Simply specify a width and/or a height to the image: ```css .container{ display: inline-block; background:orange; } ``` ```html text![](https://res.cloudinary.com/aquis/image/upload/v1521481254/site-2018/megaphone-picto.svg) ``` Upvotes: 3 [selected_answer]
2018/03/19
577
2,078
<issue_start>username_0: I'm fairly new to php oop and I'm starting to get the hang of things. Though I have extended procedural experience, I've only recently started to build my own MVC framework, mainly for learning purposes. I got to the stage where I know how to route url's to controllers and models and render views based on what's requested. What I struggle with is a conceptual thing: My application has users, which have to log in in order to use it. If you're not logged in, nothing happens. These users are objects and have methods like login(), logout(), auth() etc. At the same time, I would like to have a "module" in my application where certain users can manage users, like root for example. A page where I can see all users, add new ones, remove existing ones, reset passwords etc. ... While I know how to build the latter, I'm confused about where the login() and logout() actions should go. Are these part of the UsersController or should I have a separate class for them, despite technically dealing with the same object type? Thank you!
2018/03/19
1,082
3,929
<issue_start>username_0: I'm no back-end developer. So perspective is always appreciated. I have written a script which requests from an API and creates this huge JSON file I want to save in firebase, how can I accomplish this? And would it be possible to filter this json with python for example; when I add region=eu in the url this returns the objects which have Europe as region or do I absolutely need to request the entire json file and parse in my code (java android) ?<issue_comment>username_1: Since there are a few parts to your question: You can save JSON to Firebase and the data will be mapped to child locations: > > Using PUT, we can write a string, number, boolean, array or any JSON object to our Firebase database...When a JSON object is saved to the database, the object properties are automatically mapped to child locations in a nested fashion. > > > * <https://firebase.google.com/docs/database/rest/save-data> And for your next question: > > And would it be possible to filter this json with python for example; when I add region=eu in the url this returns the objects which have Europe as region > > > Looks like you should be able to jimmy something together with Firebase's filters, `startAt` and `endAt`: > > We can combine startAt and endAt to limit both ends of our query. > > > * <https://firebase.google.com/docs/database/rest/retrieve-data#section-rest-filtering> For your example you might do something like this: ``` curl 'https://yourfirebase.firebaseio.com/yourendpoint.json?orderBy="$REGION_NAME"&startAt="EU"&endAt="EU"&print=pretty' ``` > > ...or do I absolutely need to request the entire json file and parse in my code (java android) ? > > > The facts that JSON objects are stored hierarchically in Firebase and that you can filter based on those object values makes me think you do not, in fact, have to request the entire JSON file. However, I don't have personal experience with this particular aspect of Firebase, so give it a shot! 
--- As [@username_2](https://stackoverflow.com/users/2895249/username_2) mentions in the comments, you can also use the `equalTo` query (<https://firebase.google.com/docs/reference/js/firebase.database.Query#equalTo>): `curl 'https://yourfirebase.firebaseio.com/yourendpoint.json?orderBy="$REGION_NAME"&equalTo="EU"&print=pretty'` Upvotes: 2 [selected_answer]<issue_comment>username_2: It really depends on how you are structuring your JSON. It's generally recommended to make your JSON tree as shallow as possible since all children are loaded when you have a matching query. **FIREBASE DATA**: ``` { "-id1": { "region": "eu" // bear in mind queries are case sensitive "title": "Foo" "nested": { "city": "berlin" } }, "-id2": { "region": "other" "title": "Bar" "nested": { "city": "berlin" } }, "-id3": { "region": "eu" "title": "Baz" "nested": { "city": "paris" } } } ``` Querying with (using the Android API) `.orderByChild("region").equalTo("eu")` would return `"-id1"` and `"-id3"` with `.orderByChild("nested/city").equalTo("berlin")` would return `"-id1"` and `"-id2"` > > The REST API Returns Unsorted Results: JSON interpreters do not enforce any ordering on the result set. While orderBy can be used in combination with startAt, endAt, limitToFirst, or limitToLast to return a subset of the data, the returned results will not be sorted. Therefore, it may be necessary to manually sort the results if ordering is important. > > > --- If you're using a more complex structure I recommend watching this video <https://www.youtube.com/watch?v=vKqXSZLLnHA> I'd also recommend using the firebase library for Android <https://firebase.google.com/docs/android/setup> And Firebase-UI, It does a lot for you. <https://firebaseopensource.com/projects/firebase/firebaseui-android/> Upvotes: 0
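The `orderBy`/`equalTo` REST filtering described in the answers can be sketched as a URL builder. This is an illustration only — the base URL and node names below are made up, and Firebase expects the filter values to be JSON-encoded (hence the extra double quotes), per the filtering docs linked above:

```python
from urllib.parse import urlencode

def firebase_filter_url(base, node, child, value):
    """Build a Realtime Database REST query filtering `node` where `child` == `value`."""
    # orderBy/equalTo values must be JSON strings, so wrap them in double quotes
    # before percent-encoding.
    params = urlencode({
        "orderBy": '"%s"' % child,
        "equalTo": '"%s"' % value,
    })
    return "%s/%s.json?%s" % (base.rstrip("/"), node, params)
```

Fetching that URL (e.g. with `curl` or an HTTP client) should then return only the objects whose `region` child equals `"eu"`, instead of the whole JSON file.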
2018/03/19
687
1,936
<issue_start>username_0: I am trying to create a RegExp in Oracle to match a string with the following criteria: length 11 characters. The 2nd, 5th, 8th and 9th characters are letters [A-Z] except S, L, O, I, B and Z. The 1st, 4th, 7th, 10th and 11th characters are numeric [0-9]. The 3rd and 6th will be either a number or a letter.<issue_comment>username_1: You'll want to use the following regex with `REGEXP_LIKE()`, `REGEXP_SUBSTR()`, etc: ``` ^[0-9][AC-HJKMNP-RT-Y][A-Z0-9][0-9][AC-HJKMNP-RT-Y][A-Z0-9][0-9][AC-HJKMNP-RT-Y]{2}[0-9]{2}$ ``` Hope this helps. Upvotes: 3 [selected_answer]<issue_comment>username_2: **Make a fancy Character List** I just make a fancy character list excluding the upper-case letters you cited. This is similar to username_1's answer. Here is my fancy character list: '`[AC-HJKMNPQRT-Y]`'. Oracle's documentation states that the hyphen is special in that it forms a range when in this character list. To make this pattern succinct, I noticed that for the most part, this string follows a digit, letter, letter pattern. Consequently, I placed this in a subexpression grouping which occurs 2 times (quantifier follows).
2018/03/19
786
2,580
<issue_start>username_0: On many modern browsers (for example both Safari and Chrome on MacOS and iOS), it's possible to scroll past the actual html page to explore what lies beyond it (for example, on this very page, there's a white field below the dark footer on the bottom). I've been trying to figure out the general principle behind how to modify these areas. Specifying a background color of the `body` element in CSS makes it possible to choose a unified look for everything that's outside the actual page, but this doesn't work if one wants different things to show up at different places (for example, on this very page, it would be natural for the footer color to extend to the left and to the right only *below* its own border, but not above where the obvious choice would be white). So, how should one approach modifying the content outside of the actual page (that is, content that's not really existing/being shown on certain browsers)?<issue_comment>username_1: You'll want to use the following regex with `REGEXP_LIKE()`, `REGEXP_SUBSTR()`, etc: ``` ^[0-9][AC-HJKMNP-RT-Y][A-Z0-9][0-9][AC-HJKMNP-RT-Y][A-Z0-9][0-9][AC-HJKMNP-RT-Y]{2}[0-9]{2}$ ``` Hope this helps. Upvotes: 3 [selected_answer]<issue_comment>username_2: **Make a fancy Character List** I just make a fancy character list excluding the alphabetical upper case letter you cited. This is similar to username_1's answer. Here is my fancy character list: -'`[AC-HJKMNPQRT-Y]`' -Oracle's documentation states that the hyphen is special in that it forms a range when in this character list. To make this pattern succinct, I noticed that for the most part, this string follows a pattern of digit, alphabet, alphabet pattern. Consequently, I placed this in a subexpression grouping which occurs 2 times (quantifier follows). 
``` SCOTT@db>WITH smple AS ( 2 SELECT 3 '123456789ab' tst 4 FROM 5 dual 6 UNION ALL 7 SELECT 8 '1CC4DD7EE01' 9 FROM 10 dual 11 UNION ALL 12 SELECT 13 '1CB4DD7EE01' 14 FROM 15 dual 16 UNION ALL 17 SELECT 18 '1C44D67EE01' 19 FROM 20 dual 21 ) SELECT 22 smple.tst, 23 regexp_substr(smple.tst,'^(\d[AC-HJKMNPQRT-Y](\d|[AC-HJKMNPQRT-Y])){2}\d[AC-HJKMNPQRT-Y]{2}\d{2}$') matching 24 FROM 25 smple; TST MATCHING ------------------------- 123456789ab 1CC4DD7EE01 1CC4DD7EE01 1CB4DD7EE01 1C44D67EE01 1C44D67EE01 ``` Upvotes: 1
2018/03/19
791
2,231
<issue_start>username_0: I am wondering if it's possible to number format a BigInteger in Java by converting to 2 decimal places and adding a Suffix? E.g 1000 = 1.00k, 1000000000000 = 1.00t I am currently using the following code which is perfect for formatting longs... ``` public static String withSuffix (long count) { if (count < 1000) return "" + count; int exp = (int) (Math.log(count) / Math.log(1000)); return String.format("%.2f %c", count / Math.pow(1000, exp), "kMBTab".charAt(exp-1)); } ``` Is there a way to do something like this, but for BigIntegers? Thanks in advance!
2018/03/19
1,147
2,842
<issue_start>username_0: I am trying to get test token validation working for an Outlook office add-in. Decoded String: VNNAnf36IrkyUVZlihQJNdUUZlYFEfJOeldWBtd3IM= Encoded String: %3Cr%3E%3Ct%20aid%3D%22WA900006056%22%20pid%3D%22bd1fedd2-ff5f-4b8e-ac48-c2b47ee0ce91%22%20oid%3D%223DBFC30C-DBE9-419E-A5FB-1DB48BEDEC1B%22%20did%3D%22xxxxxxx.onmicrosoft.com%22%20et%3D%22Trial%22%20ad%3D%222018-01-12T21%3A58%3A13Z%22%20sd%3D%222018-01-12T00%3A00%3A00Z%22%20te%3D%222018-06-30T02%3A49%3A34Z%22%20test%3D%221%22%20%2F%3E%3Cd%3EVNNAnf36IrkyUVZlihQJNdUUZlYFEfJOeldWBtd3IM%3D%3C%2Fd%3E%3C%2Fr%3E No matter what and how I pass this to <https://verificationservice.officeapps.live.com/ova/verificationagent.svc/rest/verify?token=>{token} it just does not return anything with any value populated. It always return as below. [Response from Service](https://i.stack.imgur.com/aCQ7i.png) I am using the test token from this reference . <https://learn.microsoft.com/en-us/office/dev/store/add-in-license-schema> Is the documentation stale on the string format or am I missing anything? The documentation says 64 bit encoding is not required for Office Add-ins. Can anyone give me a working test token for Outlook Add in.
2018/03/19
531
1,833
<issue_start>username_0: I have the following object structure represented by c# classes: ``` [ { Title: "", Year: 1977, Categories: ["Action", "Adventure", "Fantasy"], Score: 96 } ] ``` This json is serialized to an `IEnumerable` object, where `Categories` is an `IList` object. From that collection of `TitleItem` objects I would like to get a new `IList` of distinct categories. How can this be done?<issue_comment>username_1: ``` collection.SelectMany(x=>x.Categories).Distinct().ToList() ``` Upvotes: 2 [selected_answer]<issue_comment>username_2: You can use [`SelectMany`](https://msdn.microsoft.com/en-us/Library/bb534336(v=vs.110).aspx) to "flatten" the lists of categories and then use [`Distinct`](https://msdn.microsoft.com/en-us/library/bb348436(v=vs.110).aspx): ``` var result = titleItems.SelectMany(item => item.Categories).Distinct().ToList(); ``` Upvotes: 1 <issue_comment>username_3: You can do a `SelectMany` and then a `Distinct` on that `IEnumerable`: ``` IEnumerable items = getItemsFromSomeWhere(); var uniqueTitles = items.SelectMany(i => i.Categories).Distinct().ToList(); ``` Upvotes: 1 <issue_comment>username_4: Here is the query that will return the distinct categories ``` class TitleItem { public string Title; public string Year; public float score; public IList Categories; } var titleItems = new List(); var titleItem1 = new TitleItem(); titleItem1.Categories = new List(); titleItem1.Categories.Add("Action"); titleItem1.Categories.Add("Adventure"); titleItem1.Categories.Add("Fantasy"); titleItem1.Categories.Add("Action"); titleItems.Add(titleItem1); var titleItem2 = new TitleItem(); titleItem2.Categories = new List(); titleItem2.Categories.Add("Action"); titleItems.SelectMany(a => a.Categories).Distinct(); ``` Upvotes: -1
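The flatten-then-dedupe idea behind `SelectMany(...).Distinct()` is language-agnostic; here is the same operation sketched in Python (dicts stand in for the `TitleItem` objects), keeping first-seen order the way LINQ's `Distinct` does:

```python
def distinct_categories(title_items):
    """Flatten each item's Categories list and return the distinct values,
    preserving the order in which they were first seen."""
    seen = set()
    result = []
    for item in title_items:          # SelectMany: iterate every inner list
        for category in item["Categories"]:
            if category not in seen:  # Distinct: keep only the first occurrence
                seen.add(category)
                result.append(category)
    return result
```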
2018/03/19
661
2,525
<issue_start>username_0: I am first time working with both windows service and MSMQ. I am trying to read messages from queue. When I start my windows service, i am receiving only first message and next message not able to read, service is still running. if I restart the service It is reading first message from the queue. Please let me know how to fix this issue. This is my code on start of my service: ``` protected override void OnStart(string[] args) { MessageQueue msMq = null; JobModel j = new JobModel(); msMq = new MessageQueue(queueRequestName); try { if (msMq != null) { msMq.Formatter = new XmlMessageFormatter(new Type[] { typeof(JobModel) }); var message = (JobModel)msMq.BeginReceive(); } } catch (MessageQueueException ee) { Console.Write(ee.ToString()); } catch (Exception eee) { Console.Write(eee.ToString()); } finally { msMq.Close(); } } ```
2018/03/19
435
1,945
<issue_start>username_0: Prior to using the Facebook SDK we used to share via UIActivityViewController. Since Facebook does not allow us to pre-fill information on behalf of the user sharing, our solution was to copy the user's description of the image being shared to `UIPasteboard.general.string`. So the app would switch to Messenger and the user could paste. This worked just fine until we started using the Facebook SDK. Now it seems that `UIPasteboard.general.string` is reset when it opens up Messenger and we can no longer get the image description copied to the clipboard. This is how I'm sharing to Messenger: ``` let sharePhoto = FBSDKSharePhoto() sharePhoto.image = image let content = FBSDKSharePhotoContent() content.photos = [sharePhoto] FBSDKMessageDialog.show(with: content, delegate: delegate) ```<issue_comment>username_1: Yes, I'm stuck on that stuff too after the FB lib update. In my case I worked around it with a web link for the image, as shown below. As a result, you do not need to ask the user to paste anything - you can set the text yourself in the title or description. ``` let content = LinkShareContent(url: url, title: title, description: description, quote: nil, imageURL: imageURL) let dialog = ShareDialog(content: content) dialog.presentingViewController = parentVC dialog.mode = .automatic dialog.completion = { result in ... } try? dialog.show() ``` Upvotes: 2 <issue_comment>username_2: Facebook does not support sharing both an image and text through the share UI. If you want to share an image and a text together, you can create a custom story from your app's settings, and share it as an open graph story. Here's the [documentation](https://developers.facebook.com/docs/sharing/opengraph/ios). Upvotes: 0
2018/03/19
1,011
3,472
<issue_start>username_0: I have a problem with my Flutter Layout. I have a simple container with a Margin right and left of 20.0 Inside this container i have another container. But this container does not fit to the parent container only on the left side. I dont know why this happens. Here is my Code: ``` @override Widget build(BuildContext context) { return new Scaffold( backgroundColor: Colors.white, body: new Container( margin: new EdgeInsets.symmetric(horizontal: 20.0), child: new Container( ) ), ); } ``` [Screenshot of the Problem](https://i.stack.imgur.com/KtN8B.png)<issue_comment>username_1: > > You can use left and right values :) > > > ``` @override Widget build(BuildContext context) { return Scaffold( backgroundColor: Colors.white, body: Container( margin: const EdgeInsets.only(left: 20.0, right: 20.0), child: Container(), ), ); } ``` Upvotes: 7 <issue_comment>username_2: You can try: To the margin of any one edge ``` Container( margin: const EdgeInsets.only(left: 20.0, right: 20.0), child: Container() ) ``` You can try :To the margin of any all edge ``` Container( margin: const EdgeInsets.all(20.0), child: Container() ) ``` If you need the current system padding or view insets in the context of a widget, consider using [MediaQuery.of] to obtain these values rather than using the value from [dart:ui.window], so that you get notified of changes. ``` Container( margin: EdgeInsets.fromWindowPadding(padding, devicePixelRatio), child: Container() ) ``` Upvotes: 5 <issue_comment>username_3: ``` Container( margin: EdgeInsets.all(10) , alignment: Alignment.bottomCenter, decoration: BoxDecoration( gradient: LinearGradient( begin: Alignment.topCenter, end: Alignment.bottomCenter, colors: [ Colors.black.withAlpha(0), Colors.black12, Colors.black45 ], ), ), child: Text( "Foreground Text", style: TextStyle(color: Colors.white, fontSize: 20.0), ), ), ``` Upvotes: 3 <issue_comment>username_4: You can try to set margin in the following ways. 
``` @override Widget build(BuildContext context) { return Scaffold( backgroundColor: Colors.white, body: Container ( // Even margin on all sides margin: EdgeInsets.all(10.0), // Symmetric margin margin: EdgeInsets.symmetric(vertical: 10.0, horizontal: 5.0), // Different margin for each side margin: EdgeInsets.fromLTRB(1.0, 2.0, 3.0, 4.0), // Margin only for left and right sides margin: const EdgeInsets.only(left: 10.0, right: 10.0), // Different margin for each side margin: EdgeInsets.only(left: 5.0, top: 10.0, right: 15.0, bottom: 20.0), child: Child ( ... ), ), ); } ``` Upvotes: 2 <issue_comment>username_5: Padding and Margin side by side. ``` Container( padding: const EdgeInsets.fromLTRB(10, 10, 10, 10), margin: const EdgeInsets.fromLTRB(10, 10, 10, 10), ) ``` Padding and Margin only. ``` Container( padding: const EdgeInsets.only(left: 10.0, right: 10.0), margin: const EdgeInsets.only(left: 10.0, right: 10.0), ) ``` Padding and Margin all sides the same. ``` Container( padding: const EdgeInsets.all(10.0), margin: const EdgeInsets.all(10.0), ) ``` Upvotes: 0
2018/03/19
955
3,225
<issue_start>username_0: I am using this code to read the file and add new data at the end of file. `$fh = fopen($myFile, 'a') or die("can't open file");` The problem is that this code works fine on local server but giving an error of `can't open file` on Amazon EC2 server. Also file i use the parameter `r` instead of `a` it works but i can't understand why isn't the `a` parameter working?
2018/03/19
629
1,436
<issue_start>username_0: I was trying to deploy flask-ask alexa skill using Zappa. I followed the instructions on [Alexa Tutorial: Deploy Flask-Ask Skills to AWS Lambda with Zappa`](https://developer.amazon.com/blogs/post/8e8ad73a-99e9-4c0f-a7b3-60f92287b0bf/new-alexa-tutorial-deploy-flask-ask-skills-to-aws-lambda-with-zappa) But I'm getting a ``` KeyError: pip._vendor.urllib3.contrib. ``` when I run: `zappa deploy dev`<issue_comment>username_1: Seems like a [bug #5079 in pip](https://github.com/pypa/pip/issues/5079). Please verify with `pip --version` that you have `pip` version 9.0.2, then downgrade to 9.0.1: ``` pip install -U pip==9.0.1 ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: I had the same issue. But "upgrading" to pip 9.0.1 worked for me. Now I am facing another issue: After deploying with zappa I always get an HTTP 500 error by testing the alexa skill. I used zappa tail and figured out following error: ``` [1521581897750] File "/private/var/folders/8g/t93g7k9j0rb_18d07m1k8shr0000gn/T/pip-build-SO5htj/pyOpenSSL/OpenSSL/crypto.py", line 740, in _subjectAltNameString [1521581897750] AttributeError: 'module' object has no attribute 'X509V3_EXT_get' [1521581897751] [INFO] 2018-03-20T21:38:17.751Z 005fde13-2c87-11e8-ac0a-8b163e7315d5 172.16.58.3 - - [20/Mar/2018:21:38:17 +0000] "POST / HTTP/1.1" 500 291 "" "Apache-HttpClient/UNAVAILABLE (Java/1.8.0_131)" 0/390.608 ``` Upvotes: 2
2018/03/19
1,589
3,234
<issue_start>username_0: I've been trying to do this for a while and I've tried things I've found on forums but it's still not working and I feel like it's a really obvious error. Here's my code: ``` allnum=[] num=[] for i in range(100): allnum.append(random.randint(1,99)) num.append(allnum[:10]) print (allnum) print(num) allnum= [i for i in allnum if i not in num] print (allnum) ``` and the result is just: `[55, 94, 88, 66, 34, 99, 76, 41, 48, 55, 84, 16, 57, 58, 46, 26, 10, 25, 10, 94, 93, 3, 29, 74, 6, 47, 45, 67, 1, 9, 10, 10, 3, 34, 13, 91, 81, 70, 87, 46, 26, 52, 28, 66, 88, 9, 30, 43, 85, 32, 38, 5, 60, 65, 27, 52, 68, 94, 94, 14, 46, 5, 8, 9, 32, 62, 6, 18, 84, 86, 2, 12, 97, 99, 84, 13, 64, 39, 84, 75, 23, 88, 21, 60, 37, 33, 75, 53, 88, 54, 28, 5, 29, 97, 38, 21, 27, 25, 17, 31]` `[[55, 94, 88, 66, 34, 99, 76, 41, 48, 55]]` `[55, 94, 88, 66, 34, 99, 76, 41, 48, 55, 84, 16, 57, 58, 46, 26, 10, 25, 10, 94, 93, 3, 29, 74, 6, 47, 45, 67, 1, 9, 10, 10, 3, 34, 13, 91, 81, 70, 87, 46, 26, 52, 28, 66, 88, 9, 30, 43, 85, 32, 38, 5, 60, 65, 27, 52, 68, 94, 94, 14, 46, 5, 8, 9, 32, 62, 6, 18, 84, 86, 2, 12, 97, 99, 84, 13, 64, 39, 84, 75, 23, 88, 21, 60, 37, 33, 75, 53, 88, 54, 28, 5, 29, 97, 38, 21, 27, 25, 17, 31]` sorry if it's a really stupid mistake:)<issue_comment>username_1: Just tried this and it worked. ``` first_list = list(range(5,15)) second_list = list(range(10,20)) first_minus_second = [x for x in first_list if x not in second_list] ``` Upvotes: -1 <issue_comment>username_2: You can use set operations here as well to identify the difference between the two lists ``` import numpy as np allnum = [] for i in range(100): allnum.append(np.random.randint(1,99)) # slice allnum to the desired level num = allnum[:10] # this is a good opportunity to use set operations and identify all elements in allnum that # are not in num diff = list(set(allnum).difference(set(num))) ``` as an added bonus, set operations are blazing fast. 
You will however get only unique elements in your allnum list, so if a random number is generated multiple times, it will only appear once in the output. And per the comment +1 you can do all of this using numpy in two lines (note that a value-based set difference is needed here, since `np.delete` removes by *index*, not by value):

```
arr1 = np.random.randint(1, 99, 100)
result = np.setdiff1d(arr1, arr1[:10])
```

Upvotes: 1 <issue_comment>username_3: You used `num.append`, which takes one object as parameter and appends it to the list. So, you appended one element, which is a list containing ten integers (note the two opening and closing square brackets when you print `num`). You wanted to use `extend`, which takes an iterable as argument, and adds each of its elements to the list:

```
num.extend(allnum[:10])
```

or, more simply here, as `num` is empty before:

```
num = allnum[:10]
```

Note also that testing if each element is in the `num` list is quite slow. It's much faster to test if an element belongs to a [set](https://docs.python.org/3.6/tutorial/datastructures.html#sets). So, you could use:

```
import random

allnum = []
for i in range(100):
    allnum.append(random.randint(1, 99))

num = set(allnum[:10])
print(allnum)
print(num)

allnum = [i for i in allnum if i not in num]
print(allnum)
```

Upvotes: 2 [selected_answer]
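To make the `append` vs `extend` distinction from the accepted answer concrete, here is a small self-contained sketch (values are illustrative):

```python
import random

# append adds ONE object: the whole slice becomes a single nested element,
# so a later "x not in num" test compares ints against a list and never matches.
nested = []
nested.append([1, 2, 3])
assert nested == [[1, 2, 3]]

# extend adds each element of the iterable individually.
flat = []
flat.extend([1, 2, 3])
assert flat == [1, 2, 3]

# Applied to the original problem:
allnum = [random.randint(1, 99) for _ in range(100)]
num = set(allnum[:10])                      # set membership tests are O(1)
rest = [i for i in allnum if i not in num]  # drops every occurrence of the first ten values
assert all(i not in num for i in rest)
```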
2018/03/19
869
3,300
<issue_start>username_0: I'm using a function called Get-ADdirectReports which recursively grabs all users who report to a specific manager (includes other managers and their team members if the manager is responsible for multiple teams). I'm trying to return all AD users and see if they recursively report to the defined manager. Code ``` $Manager = Get-ADdirectReports -SamAccountName "ManagerName" | Select SamAccountName $Users = Get-AdUser -Filter * -Properties * | Select SamAccountName Foreach ($User in $Users) { If ($User -Contains $Manager) {Write-Host $User reports to $Manager} Else {$User doesn't report to $Manager} } ```<issue_comment>username_1: Ok, so this is not the most elegant and certainly not the fastest way to do this but it should do what you want. ``` function Get-ADTopLevelManager{ param( $identity ) $result = New-Object System.Collections.ArrayList $manager = Get-ADUser $identity -Properties Manager $result.Add($manager) | Out-Null $managerDN = $manager.Manager while($manager -ne $null){ $manager = $null $manager = Get-AdUser -Filter {DistinguishedName -eq $managerDN} -Properties Manager $managerDN = $manager.Manager if($manager.SamAccountName -eq $result[-1].SamAccountName){ $manager = $null }else{ $result.Add($manager) | Out-Null } } $result } $allUsers = New-Object System.Collections.ArrayList foreach($adUser in Get-AdUser -Filter *){ $temp = New-Object PSCustomObject -Property @{'User' = $adUser.SamAccountName; 'Managers' = Get-ADTopLevelManager $adUser.SamAccountName} $allUsers.Add($temp) | Out-Null } ``` So at this point you have a collection of objects that links any given user to all of their managers. 
So let's say you want to know which users have Manager1 in their chain of command anywhere:

```
$allUsers | Where-Object{ $_.Managers.SamAccountName -contains 'Manager1' }
```

Or if you want to know the immediate manager of user User1:

```
($allUsers | Where-Object{ $_.User -eq 'User1' }).Managers[0]
```

Or if you want to know the top level manager for User1:

```
($allUsers | Where-Object{ $_.User -eq 'User1' }).Managers[-1]
```

Upvotes: 2 [selected_answer]<issue_comment>username_2: Not exactly what you are looking for, but still worth trying I guess

```
function GetADUserManagerRecursive {
    param (
        $Identity
    )
    $UserAD = Get-ADUser $Identity -Properties Manager
    $DirectManager = $UserAD.Manager
    $DirectManager = Get-ADUser -Identity $DirectManager -Properties Manager
    Write-Output $DirectManager.DistinguishedName
    if ( $DirectManager.Manager -ne $UserAD.DistinguishedName ) {
        GetADUserManagerRecursive -Identity $DirectManager
    }
}
```

Once done, you may want to get all managers of a specific user recursively, save the result in a variable and then check whether a given manager is in it:

```
$managers = GetADUserManagerRecursive -Identity user
$managers = $managers | select -unique
(Get-ADUser bossname).SamAccountName -match $managers.SamAccountName
```

PS: Please note you might need to change `$DirectManager.Manager -ne $UserAD.DistinguishedName` to `$DirectManager.Manager -ne $null`. I have seen companies where the topmost boss is listed as his own manager. Upvotes: 0
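Language note: the recursive walk both answers implement can be sketched in Python with a toy manager-to-reports map, which makes the "does X report to M anywhere in the chain" check easy to see (names are made up for illustration, not Active Directory data):

```python
# Toy org tree standing in for Active Directory.
reports = {"boss": ["mgr1", "mgr2"], "mgr1": ["dev1", "dev2"], "mgr2": ["dev3"]}

def all_reports(manager, tree=reports):
    """Return every direct and indirect report of `manager`, depth-first."""
    out = []
    for person in tree.get(manager, []):
        out.append(person)
        out.extend(all_reports(person, tree))
    return out

chain = all_reports("boss")
print(chain)            # ['mgr1', 'dev1', 'dev2', 'mgr2', 'dev3']
print("dev3" in chain)  # True: dev3 reports to boss indirectly
```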
2018/03/19
797
2,988
<issue_start>username_0: I have a set of coordinates X and Y for a line around a race track. I want to offset this line to give me an inner and outer set of track coordinates but i am not sure how to best do this. I want it to look like the track below. Thanks

![enter image description here](https://i.stack.imgur.com/2ISzJ.png)
2018/03/19
889
3,365
<issue_start>username_0: I have an array of strings that are also keys for endpoints in my Firebase Realtime Database. I would like to loop through this array and get the Firebase object for each one of the keys. I get no response with the code below Is there a way to dynamically subscribe to Firebase Observables?

```
sortedArray:any = ["x","y","z"]
```

The method :

```
getItems() {
    this.sortedArray.forEach(el => {
      this.ngDB.object(`items/${el}`).valueChanges()
      .toPromise().then((res) => {
        console.log("Found Item",res);
      })
      .catch(err => {
        console.log("Err listing items", err);
      })
    });
  }
```
2018/03/19
787
2,055
<issue_start>username_0: I have the following array, which contains strings:

```
let data = ["2018-1", "2018-5", "2018-11", "2018-2", "2018-10", "2018-12"];
```

These strings are composed of numbers (year and month). Can you tell me why the following sorting function doesn't work? I need to sort this array from the latest date to the oldest, in this case from **"2018-12"** to **"2018-1"**. I am using lodash in the whole project, so I try to use it here as well.

```
var result = _.sortBy(data, function(i) {
    var x = i.split("-").map(Number);
    return [x[0], x[1]];
});
```

Can you tell me why this code doesn't work and how to fix it? Thanks.<issue_comment>username_1: Unfortunately, `sortBy` doesn't support compound keys, so your array key is converted to a string. A workaround is either to provide two separate keys:

```
var result = _.sortBy(data, [
  x => Number(x.split('-')[0]),
  x => Number(x.split('-')[1]),
]);
```

or synthesize a numeric one:

```
var result = _.sortBy(data, x => {
  x = x.split('-');
  return Number(x[0]) * 1000 + Number(x[1])
});
```

Finally, you can take a risk and try `Date.parse`:

```
var result = _.sortBy(data, Date.parse)
```

which looks neat, but requires some cross-browser testing. Upvotes: 1 <issue_comment>username_2: I added a few more dates as proof.

```js
let data = ["2018-1", "2018-5", "2018-11", "2018-2", "2018-10", "2018-12", "2017-5", "2019-12"];

var result = data.sort((a, b) => {
  var n1 = a.split("-");
  var n2 = b.split("-");
  n1 = parseInt(n1[0]) * 100 + parseInt(n1[1]);
  n2 = parseInt(n2[0]) * 100 + parseInt(n2[1]);
  return n1 - n2;
})

console.log(result);
```

Upvotes: 3 [selected_answer]<issue_comment>username_3: You could chain the split deltas for year and month.

```js
var data = ["2018-1", "2018-5", "2018-11", "2018-2", "2018-10", "2018-12", "2017-5", "2019-12"],
    result = data.sort((a, b) => {
        var aa = a.split("-"),
            bb = b.split("-");
        return aa[0] - bb[0] || aa[1] - bb[1];
    });

console.log(result);
```

Upvotes: 0
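The idea shared by all three answers — compare (year, month) as numbers, never as strings — can be shown compactly in Python, where a sort key may be a tuple (shown for comparison only; it is not the OP's language):

```python
data = ["2018-1", "2018-5", "2018-11", "2018-2", "2018-10", "2018-12"]

# Tuples compare element-wise, so (2018, 2) < (2018, 11) as intended --
# exactly the behavior lodash's stringified array keys fail to provide.
result = sorted(data, key=lambda s: tuple(map(int, s.split("-"))), reverse=True)
print(result)  # ['2018-12', '2018-11', '2018-10', '2018-5', '2018-2', '2018-1']
```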
2018/03/19
1,053
3,893
<issue_start>username_0: How can I get javascript to **immediately** stop script execution and change to another page. Consider this: ``` function onload() { alert('before'); window.location.href = 'http://google.com'; alert('after redirect'); } Hello World! ``` The second alert message always fires.<issue_comment>username_1: Try to return value from the first function and if a user logged in return true. I hope this will help you. ``` function init() { if(checkIfLoggedIn()) { getUserInfo(); // I would expect this to not run if the user was not logged in } } function checkIfLoggedIn() { if(loggedIn() === false) { window.location.href = "/login.html"; } else { return true; } } function getUserInfo() { var username = getUsername(); // throws exception if not logged in var email = getEmail(); // throws exception if not logged in // etc. etc. } ``` Upvotes: 2 <issue_comment>username_2: You are calling `getUserInfo()` with no conditionals around it. So this function will be called when the page loads, as you have placed no restrictions on this function call. Hence, code is still running. Perhaps this is what is intended in the first block of your snippet: ``` if(checkIfLoggedIn()) { getUserInfo(); } ``` This way, `getUserInfo` will only be called if `checkIfLoggedIn` returns `true`. The way you have written it, `getUserInfo` is also running when the page loads. Upvotes: 1 <issue_comment>username_3: If you don't want to litter your code you can break all the rules and use an exception: ```js function init() { checkIfLoggedIn(); getUserInfo(); // I would expect this to not run if the user was not logged in } function checkIfLoggedIn() { if(loggedIn() === false) { window.location.href = "/login.html"; throw new Error('Kill all code running after me'); } } function getUserInfo() { var username = getUsername(); // throws exception if not logged in var email = getEmail(); // throws exception if not logged in // etc. etc. 
} function loggedIn() { return false; } init(); ``` The thrown exception will prevent all of the rest of the code from executing. > > OK. It is ugly and breaks so many best practice rules. But it should work. > > > Or you can use exception handling around the other code: ```js function init() { checkIfLoggedIn(); getUserInfo(); // I would expect this to not run if the user was not logged in } function checkIfLoggedIn() { if(loggedIn() === false) window.location.href = "/login.html"; } function getUserInfo() { try { var username = getUsername(); var email = getEmail(); // etc. etc. } catch(ex) { // Don't do anything here unless you really need to } } function loggedIn() { return false; } init(); ``` Upvotes: 1 <issue_comment>username_4: I came across the answer today while I was looking for something else: Even though `window.location.href` has been set and the window is in the middle of redirecting, code execution continues in most browsers: ``` function onload() { console.log('before'); window.location.href = 'newpage.html'; console.log('after redirect 1'); console.log('after redirect 2'); console.log('after redirect 3'); } ``` `after redirect 1`, `after redirect 2`, & `after redirect 3` still log to the console even though the page is being redirected. However, since `init()` is a top level function, I can `return;` from the function and stop code from continuing to execute.. ``` function init() { if(checkIfLoggedIn() === false) return; // <-- stops following code from executing and allows return to proceed getUserInfo(); } function checkIfLoggedIn() { var loggedIn = loggedIn(); if(loggedIn === false) window.location.href = "/login.html"; return loggedIn; } ... ... ``` Upvotes: 1 [selected_answer]
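The control-flow point here — a redirect alone does not stop execution; only returning or raising does — is language-independent. A small Python sketch of the return-based early exit (function names invented for illustration):

```python
log = []

def check_logged_in(logged_in):
    """Stand-in for the JS checkIfLoggedIn: 'redirect' and report failure."""
    if not logged_in:
        log.append("redirect")
        return False   # signal the caller instead of falling through
    return True

def init(logged_in):
    if not check_logged_in(logged_in):
        return         # caller stops here, so get_user_info never runs
    log.append("get_user_info")

init(False)
init(True)
print(log)  # ['redirect', 'get_user_info']
```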
2018/03/19
1,161
4,325
<issue_start>username_0: I have a combo box field that is associated to a parent list on a new item form. When I customize the form with info path and add a query string web part, the field is not available. If I don't customize it and use the default new item form, it becomes available, but it does not set the value. I have the query parameter set in the query string. This is working fine. I tried javascript solutions out there, but they appear to be for SP 2010 since SP 2013 completely mangles the names of fields. According to <https://knowledge.hubspot.com/articles/kcs_article/forms/can-i-auto-populate-form-fields-through-a-query-string>, setting this dependent field can't be done. I still need someway of setting the field though.
2018/03/19
1,080
3,994
<issue_start>username_0: I create a left nav bar button but I don't know how to store image from url into the button image. Below is my nav bar left button code

```
navigationItem.leftBarButtonItem = UIBarButtonItem(image: <#T##UIImage?#>, style: .plain, target: self, action: #selector(handleButtonPressed))
```

Should I create an UIimage variable and then use it for image param? Any suggestions? Thanks!
2018/03/19
1,061
3,924
<issue_start>username_0: I executed a script simulating 950 VU in 10 seconds, but Jmeter last 23 minutes in finish the test. I guess that Jmeter doesn't finish the test until it completes the number of threads of the test, and that depends on the server response time. Does it work like that or on what the finish time depends? Thank you.
2018/03/19
871
3,682
<issue_start>username_0: I get this error when trying to compare password with `bcrypt` **Error:** `Error: data and hash arguments required` This is my code ``` //compare password bcrypt.compare(req.param('password'), user.encryptedPassword, function(err, valid) { if (err) return next(err); //if password doesn't match if (!valid) { var usernamePasswordMismatchError = [{name: 'usernamePasswordMismatch', message: 'Invalid Password'}] req.session.flash = { err: usernamePasswordMismatchError } res.redirect('/session/new'); return; } //log user in req.session.authenticated = true; req.session.User = user; //redirect user to profile page res.redirect('/user/show/'+ user.id); }); ``` This is my schema setup ``` module.exports = { attributes: { name: { type: 'string', required: true }, email: { type: 'string', email: true, required: true, unique: true }, phone: { type: 'string', required: true }, encryptedPassword: { type: 'string' }, toJSON: function() { var obj = this.toObject(); delete obj.password; delete obj.encryptedPassword; delete obj._csrf; return obj; } }, connection:'mongodb' }; ``` Please what am doing wrong?<issue_comment>username_1: Not sure about the scope of your compare function but it looks like either or both of your input params to bcrypt.compare is either empty or null. (compare function requires both input params to be non-null and non-empty) Try logging your req.param('password') and user.encryptedPassword to further debug your issue. Ref: <https://github.com/kelektiv/node.bcrypt.js> Upvotes: -1 <issue_comment>username_2: When you are calling the bcrypt.compare() function, you need to make sure you supply both the password and the hashed password. So put a print there and see what are you missing. Case 1: req.param('password') is empty for some reason (the field name in your html form may have a typo) Case 2: user.encryptedPassword is empty. 
For this case the reason might be that you don't encrypt your password when creating/updating your user record. In order to make sure you do, open your database and, after creating a new user, check the record in the DB and see if your encryptedPassword is there. Here is an example of how you can encrypt the password at creation/update time. In your User model you may have these functions:

```
beforeUpdate: function (values, next) {
  if (values.password) {
    bcrypt.hash(values.password, 10, function (err, hash) {
      if (err) { return next(err); }
      values.encryptedPassword = hash;
      next();
    });
  }
},
beforeCreate: function (values, next) {
  bcrypt.hash(values.password, 10, function (err, hash) {
    if (err) { return next(err); }
    values.encryptedPassword = hash;
    next();
  });
}
```

Note this line:

```
values.encryptedPassword = hash;
```

Since you don't have a password attribute in the schema, the values.password that is supplied will not be saved anywhere. The other thing in your case may be that you don't set encryptedPassword anywhere. So the other approach is to alter your schema, rename encryptedPassword to simply password, and then replace

```
values.encryptedPassword = hash;
```

with

```
values.password = hash;
```

Upvotes: 1
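The hash-on-create, compare-on-login flow the answer describes is easy to test in isolation. A dependency-free Python sketch of the same pattern (stdlib PBKDF2 stands in for bcrypt here purely to avoid a third-party dependency; real code should keep using bcrypt):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Done once, like beforeCreate/beforeUpdate: returns (salt, derived_key)."""
    salt = salt or os.urandom(16)
    dk = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, dk

def check_password(password, salt, stored):
    """Done at login: both the candidate and the stored hash must be present."""
    return hmac.compare_digest(hash_password(password, salt)[1], stored)

salt, stored = hash_password("s3cret")           # persisted with the user record
assert check_password("s3cret", salt, stored)    # valid login
assert not check_password("wrong", salt, stored)
```

The "data and hash arguments required" error in the question corresponds to calling the compare step with an empty candidate or an empty stored hash.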
2018/03/19
2,087
6,891
<issue_start>username_0: I am following the [Quick Start Guide for Yocto Project](https://www.yoctoproject.org/docs/2.4.2/yocto-project-qs/yocto-project-qs.html) on **Windows Subsystem for Linux**. I cloned all the necessary stuff: ``` sudo apt-get install gawk wget git-core diffstat unzip texinfo gcc-multilib \ build-essential chrpath socat cpio python python3 python3-pip python3-pexpect \ xz-utils debianutils iputils-ping libsdl1.2-dev xterm ``` cloned the repository to a directory in the `/mnt/c/Users//Yocto` directory created a branch as mentioned in the manual ``` cd poky/ && git checkout tags/yocto_2.4.2 -b poky_2.4.2 ``` and executed the file ``` source oe-init-build-env ``` Once I execute `bitbake core-image-sato` I get the following Error: ``` Previous bitbake instance shutting down?, waiting to retry... Previous bitbake instance shutting down?, waiting to retry... NOTE: Retrying server connection... (Traceback (most recent call last): File "/mnt/c/Users//Yocto/poky/bitbake/lib/bb/main.py", line 441, in setup\_bitbake server = bb.server.process.BitBakeServer(lock, sockname, configuration, featureset) File "/mnt/c/Users//Yocto/poky/bitbake/lib/bb/server/process.py", line 385, in \_\_init\_\_ self.sock.bind(os.path.basename(sockname)) PermissionError: [Errno 1] Operation not permitted ) WARNING: /mnt/c/Users/des/Development/Yocto/poky/bitbake/lib/bb/main.py:476: ResourceWarning: unclosed logger.info("Retrying server connection... (%s)" % traceback.format\_exc()) NOTE: Reconnecting to bitbake server... NOTE: Retrying server connection... ERROR: Unable to connect to bitbake server, or start one ``` I stumbled upon some GitHub Issue from the WSL Issue Tracker that this might have something to do with `inotify`. I can't figure out what is wrong here and which socket is the error really about?<issue_comment>username_1: First of all, good for you trying out Yocto on the Windows Subsystem for Linux! You are a brave individual. 
The problem you are running into is probably filesystem related. The socket it is trying to create is `bitbake.sock`, referenced in `lib/bb/main.py`. This is pretty much the first thing bitbake does, so if it cannot bind to that socket the server won't start. I should mention that you probably saw a warning that you were not using a "supported distribution". Still, it is interesting seeing someone try this idea out. For serious development I'd recommend using virtualization until WSL is mature enough to support applications as complex as bitbake. If you are interested in contributing to the Yocto Project and getting bitbake working on any platform check out the [newcomers page](https://wiki.yoctoproject.org/wiki/Newcomers), submit a feature request in the [bugzilla](https://bugzilla.yoctoproject.org/), and join us for the [monthly technical call](https://www.yoctoproject.org/monthly-technical-call/). The project is always interested in using new and exciting technologies, and patches are always welcomed. Upvotes: 1 <issue_comment>username_2: I have also tried this and had the same issue when building from within the /mnt/c/../poky/build location. As of Win ver. 1803 Build #17134.1 I can't seem to get pyinotify and the bitbake scripts to work from the Windows directory. However, bitbake works from WSL when inside the WSL directory structure; when I did this I simply ran `cp -a -r /mnt/c/linux/. ~/`. But this was a mistake, as I had already attempted to compile and had just copied my symbolic links to a new location. 
I have added mc ([Midnight Commander](https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&uact=8&ved=2ahUKEwie6Nj9jtfeAhWPG3wKHf4gBF4QFjAAegQIBxAB&url=https%3A%2F%2Fmidnight-commander.org%2F&usg=AOvVaw3ylNsUqoYeqj5lTBbm43aW)) to my WSL because I am too lazy to type everything every time, and I use mcedit as my default editor because it has syntax highlighting, mouse support, go-to-line, search and more (but the point is: get used to the terminal editor of your choice). My first run completed with errors because of the aforementioned copy-path error, but after double-checking the log and resetting my links I will compile again (more to come). ``` ### Shell environment set up for builds. ### You can now run 'bitbake ' Common targets are: core-image-minimal core-image-sato meta-toolchain meta-ide-support You can also run generated qemu images with a command like 'runqemu qemux86' ~/poky/build $ (wsl-test-yocto-2.5) bitbake core-image-sato Parsing recipes: 100% |##################################################################################################| Time: 0:00:26 Parsing of 814 .bb files complete (0 cached, 814 parsed). 1282 targets, 46 skipped, 0 masked, 0 errors. NOTE: Resolving any missing task queue dependencies Build Configuration: BB\_VERSION = "1.37.0" BUILD\_SYS = "x86\_64-linux" NATIVELSBSTRING = "ubuntu-16.04" TARGET\_SYS = "i586-poky-linux" MACHINE = "qemux86" DISTRO = "poky" DISTRO\_VERSION = "2.5" TUNE\_FEATURES = "m32 i586" TARGET\_FPU = "" meta meta-poky meta-yocto-bsp = "wsl-test-yocto-2.5:da3625c52e1ab8985fba4fc3d133edf92142f182" NOTE: Fetching uninative binary shim from http://downloads.yoctoproject.org/releases/uninative/1.9/x86\_64-nativesdk-libc.tar.bz2;sha256sum=c26622a1f27dbf5b25de986b11584b5c5b2f322d9eb367f705a744f58a5561ec WARNING: The Linux kernel on your build host was not configured to provide process I/O statistics. 
(CONFIG\_TASK\_IO\_ACCOUNTING is not set) Initialising tasks: 100% |###############################################################################################| Time: 0:00:05 Checking sstate mirror object availability: 100% |#######################################################################| Time: 0:01:00 NOTE: Executing SetScene Tasks Currently 6 running tasks (1111 of 2266) 48% |########################################## | 0: nettle-3.4-r0 do\_populate\_lic\_setscene (pid 8617) 100% |####################################################################| 1012K/s 1: mpfr-native-3.1.5-r0 do\_populate\_lic\_setscene (pid 8614) 100% |##############################################################| 970K/s 2: grep-3.1-r0 do\_populate\_lic\_setscene (pid 8637) 100% |######################################################################| 1.54M/s ``` Upvotes: 1 <issue_comment>username_3: I had this same issue; I am guessing you have the path of your bitbake not in the root Linux filesystem. I edited the file main.py inside the bitbake lib directory (line ~436) to change it to sockpath = "/tmp/bitbake.sock" and got past this. The next thing you have to look forward to is the fact that bitbake does not like case-insensitive filesystems. Upvotes: 2
2018/03/19
747
2,503
<issue_start>username_0: I installed 2 VS versions and am trying to connect them to different TFS servers, and it looks like there is no place to do this?? My existing VS12 was working with the TFS1 server, and once I launched VS17 it recognized TFS1, but I could not find a place (either under FILE/SOURCE CONTROL or in Team Explorer Home) to add the TFS2 server. Is there a way to have this setup? Under File/Source Control/Advanced/WSpaces I can delete the TFS1 workspace, but this will be deleted from my PC, not only from VS, right? So I cannot do this. I need to keep my old TFS1 workspace. [![enter image description here](https://i.stack.imgur.com/BPtOa.jpg)](https://i.stack.imgur.com/BPtOa.jpg) [![enter image description here](https://i.stack.imgur.com/hz283.jpg)](https://i.stack.imgur.com/hz283.jpg)<issue_comment>username_1: As a personal recommendation, I would not use 2 versions of VS on the same server. These things rarely work 100% as expected. But ... You can connect to two different TFS servers: [![enter image description here](https://i.stack.imgur.com/zxVYj.png)](https://i.stack.imgur.com/zxVYj.png) [![enter image description here](https://i.stack.imgur.com/iHoBO.png)](https://i.stack.imgur.com/iHoBO.png) Then add your TFS server URL. After that, you will be connected to a different TFS server. The last step is to set up a separate workspace folder for your new TFS server. I recommend that you set up a fully separated workspace for this, to avoid confusion. Something like this: C:/src\_tfs1 -> mapped to tfs1 server C:/src\_tfs2 -> mapped to tfs2 server Upvotes: 1 <issue_comment>username_2: > > If the server was split into two servers, it's very important that > **the Server ID was changed** when setting up the second server. Very weird behavior can happen otherwise. 
> > > You can find your Team Foundation Server 201x **GUID** info in the location below: ``` C:\Program Files\Microsoft Team Foundation Server 1x.0\Application Tier\Web Services\web.config ``` There should be a value like [![enter image description here](https://i.stack.imgur.com/bphd8.png)](https://i.stack.imgur.com/bphd8.png) Then you can compare the GUIDs of the two servers, change one of them and re-register the DB. For more details, take a look at this blog: [How to find Team Foundation Server(TFS) GUID Info](https://blogs.msdn.microsoft.com/dstfs/2014/04/07/how-to-find-team-foundation-servertfs-2012-guid-info/) After this, try to add the TFS2 server to your Visual Studio 2017 again. Upvotes: 2
2018/03/19
809
3,280
<issue_start>username_0: I am adding an MKAnnotationView to an Apple map and handling a tap event on the annotation view. But the my-location marker is intercepting the annotation view's tap. How can I overcome this problem? Hope you understand my problem. Thanks in advance. ``` func mapView(_ mapView: MKMapView, viewFor annotation: MKAnnotation) -> MKAnnotationView? { var annotationView = MKAnnotationView() guard let annotation = annotation as? ClientLocation else{ return nil } if let dequedView = mapView.dequeueReusableAnnotationView(withIdentifier: annotation.identifier){ annotationView = dequedView } else{ annotationView = MKAnnotationView(annotation: annotation, reuseIdentifier: annotation.identifier) } //////////////////////// /////////////////// annotationView.rightCalloutAccessoryView = button annotationView.canShowCallout = true return annotationView } func mapView(_ mapView: MKMapView, annotationView view: MKAnnotationView, calloutAccessoryControlTapped control: UIControl) { } ``` [![enter image description here](https://i.stack.imgur.com/4dpVF.png)](https://i.stack.imgur.com/4dpVF.png)<issue_comment>username_1: you can customize your annotation view using the delegate method `viewFor annotation` from `MKMapViewDelegate`. EDIT: I just tried to reproduce your code; what puts me in doubt is just your class `ClientLocation`. If your `guard let` returns nil, you have to check that statement. What you have to do is check what kind of annotation is coming to your delegate, like this: ``` if annotation.isKind(of: MKUserLocation.self) { // this means we are handling the blue point, i.e. the user location. // then you can set the parameters for this view, like annotationView.canShowCallout = false // or even prevent the user from interacting with it annotationView.isUserInteractionEnabled = false } ``` Your delegate should look like: ``` func mapView(_ mapView: MKMapView, viewFor annotation: MKAnnotation) -> MKAnnotationView?{ var annotationView = MKAnnotationView() guard let annotation = annotation as? 
ClientLocation else{ return nil } if let dequedView = mapView.dequeueReusableAnnotationView(withIdentifier: annotation.identifier){ annotationView = dequedView } else{ annotationView = MKAnnotationView(annotation: annotation, reuseIdentifier: annotation.identifier) } if annotation.isKind(of: MKUserLocation.self) { annotationView.isUserInteractionEnabled = false } //////////////////////// /////////////////// annotationView.rightCalloutAccessoryView = button annotationView.canShowCallout = true return annotationView } ``` Upvotes: 2 [selected_answer]<issue_comment>username_2: On your delegate ``` public func mapView(_ mapView: MKMapView, didAdd views: [MKAnnotationView]) { let userView = mapView.view(for: mapView.userLocation) userView?.isUserInteractionEnabled = false userView?.isEnabled = false userView?.canShowCallout = false } ``` Upvotes: 3 <issue_comment>username_3: It's working for me. Try this ``` override func viewDidLoad() { super.viewDidLoad() mapView.userLocation.title = nil } ``` Upvotes: 2
2018/03/19
673
1,783
<issue_start>username_0: I'm getting the following error when I execute my tests. This was working previously. Not sure what I need to do in order to fix this error. ``` golang.org/x/crypto/ripemd160/ripemd160block.go:12:2: cannot find package "math/bits" in any of: /usr/local/Cellar/[email protected]/1.8.7/libexec/src/math/bits (from $GOROOT) ```<issue_comment>username_1: > > [Go 1.9 Release Notes](https://golang.org/doc/go1.9) > > > [New bit manipulation package](https://golang.org/doc/go1.9#math-bits) > > > Go 1.9 includes a new package, math/bits, with optimized > implementations for manipulating bits. On most architectures, > functions in this package are additionally recognized by the compiler > and treated as intrinsics for additional performance. > > > You need Go version 1.9 or later. --- I have several versions of Go installed from source in my `$HOME` directory: `~/go1.4`, `~/go1.8`, `~/go1.9`, `~/go1.10`, and `~/go` (devel). Copy the `src/math/bits` folder from go1.9 or later to go1.8. From `go1.8/src` run `go1.8 install -v math/bits`. `go1.8`: ``` #!/bin/sh # $HOME/bin/go1.8 export GOARCH=amd64 export GOOS=linux export GOROOT=$HOME/go1.8 export GOBIN=$GOROOT/bin exec $GOBIN/go "$@" ``` Output: ``` $ cd ~/go1.8/src $ go1.8 install -v math/bits math/bits ``` For example, ``` package main import ( "fmt" "math/bits" ) func main() { fmt.Println(bits.UintSize) } ``` Output: ``` $ go1.8 run bits.go 64 ``` Upvotes: 4 <issue_comment>username_2: You need to copy the "math/bits" package into your GOPATH. One way is: 1. download Go 1.10+ 2. make the directory `/src/math` 3. copy (`cp -r ...`) from `/src/math/bits` to `/src/math` 4. try again: `govendor add +external` Or install a newer Go version (1.10+). Upvotes: 0
2018/03/19
1,529
3,785
<issue_start>username_0: I'm a real beginner with PHP, just had my first lesson. We have to create a program that prints Armstrong numbers (up to 10000). The output should look like this. ``` 0 ist eine Armstrong-Zahl: 0^1 = 0 […] 8 ist eine Armstrong-Zahl: 8^1 = 8 9 ist eine Armstrong-Zahl: 9^1 = 9 153 ist eine Armstrong-Zahl: 1^3 + 5^3 + 3^3 = 153 370 ist eine Armstrong-Zahl: 3^3 + 7^3 + 0^3 = 370 371 ist eine Armstrong-Zahl: 3^3 + 7^3 + 1^3 = 371 407 ist eine Armstrong-Zahl: 4^3 + 0^3 + 7^3 = 407 1634 ist eine Armstrong-Zahl: 1^4 + 6^4 + 3^4 + 4^4 = 1634 8208 ist eine Armstrong-Zahl: 8^4 + 2^4 + 0^4 + 8^4 = 8208 9474 ist eine Armstrong-Zahl: 9^4 + 4^4 + 7^4 + 4^4 = 9474 ``` That's what I came up with so far. It was working before but now it shows this error: > > Fatal error: Maximum execution time of 30 seconds exceeded in C:\xampp\htdocs\UE\HUE01\bsp-b\index.php on line 14 > > > Code: ``` <?php /** * Created by PhpStorm. * User: lisap * Date: 08.03.2018 * Time: 18:49 */ $number = 0; while ($number=0) { $digits = str_split($number); $count = count($digits); $result = array_sum(array_map('pow', $digits, array_fill(0, $count, $count))); if (($number == $result)) { echo $number, ' ist eine Armstrong Zahl: '; if ($number < 10) { echo $digits[0].'^1 = '.$number.' '; } if (($number > 9) && ($number < 100)) { echo $digits[0].'^1'.$digits[1].'^2 = '.$number.' '; } if (($number > 99) && ($number < 1000)) { echo $digits[0].'^1'.$digits[1].'^2'.$digits[2].'^3 = '.$number.' '; } if (($number > 999) && ($number < 10000)) { echo $digits[0].'^1'.$digits[1].'^2'.$digits[2].'^3'.$digits[3].'^4 = '.$number.' '; } $number++; } } ?> ``` The output looks something like this right now. 
``` 0 ist eine Armstrong Zahl: 0^1 = 0 1 ist eine Armstrong Zahl: 1^1 = 1 2 ist eine Armstrong Zahl: 2^1 = 2 3 ist eine Armstrong Zahl: 3^1 = 3 4 ist eine Armstrong Zahl: 4^1 = 4 5 ist eine Armstrong Zahl: 5^1 = 5 6 ist eine Armstrong Zahl: 6^1 = 6 7 ist eine Armstrong Zahl: 7^1 = 7 8 ist eine Armstrong Zahl: 8^1 = 8 9 ist eine Armstrong Zahl: 9^1 = 9 Fatal error: Maximum execution time of 30 seconds exceeded in C:\xampp\htdocs\UE\HUE01\bsp-b\index.php on line 14 ```
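For reference, the expected Armstrong numbers below 10000 can be cross-checked outside PHP; here is a short Python sketch (my own, not part of the original thread) that counts a number as an Armstrong number when it equals the sum of its digits each raised to the number of digits:

```python
def armstrong_numbers(limit):
    """Return all Armstrong numbers below `limit`."""
    result = []
    for n in range(limit):
        digits = [int(d) for d in str(n)]
        if n == sum(d ** len(digits) for d in digits):
            result.append(n)
    return result

print(armstrong_numbers(10000))
```

Note that in the PHP above, `$number++` runs only inside the `if`, so the loop stops advancing at the first non-Armstrong number; moving the increment to the end of the loop body (and using a comparison such as `$number < 10000` in the `while` condition) avoids the timeout.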
2018/03/19
845
2,646
<issue_start>username_0: These are the settings. WordPress theme: Shopkeeper. Plugins: W3 Total Cache, WooCommerce, Instagram Widget, Yoast SEO, Media File Renamer, WP Smush, Slider Revolution, My Bag extension, WPBakery Page Builder, WooCommerce Stripe Gateway. I even tried disabling all the plugins. I don't know much about the server environment; I know that it is NGINX. I have never had any kind of HTTP error 500 problem until today (the website has worked for 2 months). Today I asked my hosting provider to change the nameservers, as everyone does in order to use the FREE version of Cloudflare. The configuration is active on Cloudflare, but the website is now not available (I even tried to reset the .htaccess); nothing changed. I want to use Cloudflare in the way that everyone does, but I do not understand why I am having this kind of error (is it related to a mistake made by my hosting provider, or what?). I did not ask to change anything else, but as you can see this is the actual result. So, I don't understand why (following exactly these steps): <https://support.cloudflare.com/hc/en-us/articles/205195708-Step>… Thanks for the answer though. Also this is my first time using stackoverflow.
2018/03/19
795
2,397
<issue_start>username_0: I am using a 64bit Windows OS and Code blocks version 17.12. Whenever I try to debug my code, I get the following message and debug does not start. ``` Active debugger config: GDB/CDB debugger:Default Building to ensure sources are up-to-date Selecting target: Debug Adding source dir: C:\CX\will\ Adding source dir: C:\CX\will\ Adding file: C:\CX\will\bin\Debug\will.exe Changing directory to: C:/CX/will/. Set variable: PATH=.;C:\Program Files (x86)\CodeBlocks\MinGW\bin;C:\Program Files (x86)\CodeBlocks\MinGW;C:\oraclexe\app\oracle\product\10.2.0\server\BIN;C:\RailsInstaller\Git\cmd;C:\RailsInstaller\Ruby2.3.0\bin;C:\ProgramData\Oracle\Java\javapath;C:\Program Files (x86)\Intel\iCLS Client;C:\Program Files\Intel\iCLS Client;C:\Windows\System32;C:\Windows;C:\Windows\System32\wbem;C:\Windows\System32\WindowsPowerShell\v1.0;C:\Program Files (x86)\Intel\Intel(R) Management Engine Components\DAL;C:\Program Files\Intel\Intel(R) Management Engine Components\DAL;C:\Program Files (x86)\Intel\Intel(R) Management Engine Components\IPT;C:\Program Files\Intel\Intel(R) Management Engine Components\IPT;C:\Program Files\Intel\WiFi\bin;C:\Program Files\Common Files\Intel\WirelessCommon;C:\Program Files\PuTTY;C:\Program Files\Git\cmd;C:\Program Files\nodejs;C:\Program Files\Java\jdk1.8.0_161\bin;C:\Users\Vinay A\AppData\Local\Microsoft\WindowsApps;C:\Users\Vinay A\AppData\Local\atom\bin;C:\Program Files\Java\jdk1.8.0_151\bin;C:\Users\Vinay A\AppData\Roaming\npm Starting debugger: C:\Program Files (x86)\CodeBlocks\MinGW\bin\gdb.exe -nx -fullname -quiet -args C:/CX/will/bin/Debug/will.exe failed ``` I have set the path in environment variables as `C:\Program Files (x86)\CodeBlocks\MinGW\bin` I have also set the produce debugging flag `-g` in `CodeBlocks`. The code is part of a console application project. The executable path for the debugger is `C:\Program Files (x86)\CodeBlocks\MinGW\bin\gdb.exe` What am I missing here? Please help. 
Thanks in advance.<issue_comment>username_1: Install an older version of Code::Blocks, since there is an issue with the debugger in Code::Blocks version 17.12. Upvotes: 0 <issue_comment>username_2: Go to `Settings`, then `Debugger...`, then `GDB/CDB debugger`, and press `Reset defaults` [![enter image description here](https://i.stack.imgur.com/Ngg9q.png)](https://i.stack.imgur.com/Ngg9q.png) Upvotes: 2
2018/03/19
756
2,768
<issue_start>username_0: I am generating a random number in jQuery and I want this value not to change per session. That means the value should not be renewed every time the page is refreshed. ``` <p class="remaining-memberships"><span>XX</span></p> ``` This is my current code: ``` var minMember = 10; var maxMember = 50; var randomNumber = randomNumberFromRange(minMember, maxMember); function randomNumberFromRange(min,max) { return Math.floor(Math.random()*(max-min+1)+min); } var yetVisited = localStorage['visited']; var setRandomMember = randomNumber; if (!yetVisited) { // open popup localStorage['visited'] = "yes"; $('.remaining-memberships span').html(randomNumber); } else { } ``` How do I set my if/else code properly so that the value inside $('.remaining-memberships span') will only change if the page is not yet visited, and if it is already visited, the value must be retained? Any help is appreciated. Thanks<issue_comment>username_1: Attempt to get the value/object from local storage; if it is empty, then run your function and store the result in localStorage. I've changed it to store as JSON, which can be parsed upon retrieval, as it will allow you to store a *visit* object with your `randomNumber` and `visited` values (not sure if you wanted the latter). 
``` var minMember = 10; var maxMember = 50; var randomNumber = randomNumberFromRange(minMember, maxMember); function randomNumberFromRange(min,max) { return Math.floor(Math.random()*(max-min+1)+min); } var yetVisited = localStorage['visited']; var setRandomMember = randomNumber; if (!yetVisited) { // open popup localStorage['visited'] = "yes"; localStorage['randomNumber'] = randomNumber; $('.remaining-memberships span').html(randomNumber); } else { $('.remaining-memberships span').html(localStorage['randomNumber']); } ``` Upvotes: 0 <issue_comment>username_3: You can just store the random in localStorage like so: ``` var minMember = 10; var maxMember = 50; var randomNumber = randomNumberFromRange(minMember, maxMember); localStorage['myRandom'] = randomNumber; ``` Upvotes: 2 [selected_answer]
2018/03/19
925
3,317
<issue_start>username_0: Hi I am using learndash Wordpress plugin. I want to get the data related to a user tht how many courses he is enrolled in and how many has he completed. Is there a way to check this? does learndash provide any solution for this or should I query data myself? Any help is appreciated. Thanks in advance. Please ask for any more details if you want.<issue_comment>username_1: You can return anything using wp\_query. Try this: ``` function wpso49370180_get_course_name($courseid) { global $wpdb; $user_id = the_author_meta( 'ID' ); //alt method below $query_course = "SELECT post_title FROM wp_posts WHERE post_type = 'sfwd-courses' AND post_status NOT IN ( 'trash','auto-draft','inherit' ) AND post_author='$user_id' LIMIT 10"; return $wpdb->get_var($query_course); } ``` You will need to either know the user\_id or get it from the post (sfwd-quiz, sfwd-course, sfwd-lesson) -see below. The data you want can be all (\*) You will have to do a meta\_query if you want deeper data that is not in the post\_type tables. ``` /** * Gets the author of the specified post. Can also be used inside the loop * to get the ID of the author of the current post, by not passing a post ID. * Outside the loop you must pass a post ID. * * @param int $post_id ID of post * @return int ID of post author */ function wpso49370180_get_author( $post_id = 0 ){ $post = get_post( $post_id ); return $post->post_author; } ``` Upvotes: 0 <issue_comment>username_2: You can use the following to get all course ID's the current user is currently enrolled to: `learndash_user_get_enrolled_courses(get_current_user_id())` Upvotes: 2 <issue_comment>username_3: I found that all is stored inside the table `learndash_user_activity` and you can query that table to get user stats. 
For example I get the list of the in-progress users for a given course with the following query: ``` public static function get_inprogress_users_for_course( $course_id ) { global $wpdb; if( empty( $course_id ) ) return []; $results = $wpdb->get_results( "SELECT `user_id` FROM `" . $wpdb->prefix . "learndash_user_activity` " ."WHERE `course_id` = '" . intval( $course_id ) . "' " ."AND `activity_type` = 'lesson' " ."GROUP BY `user_id`" ); return $results; } ``` In the same way, you can get all the IDs of the users ever enrolled to a course changing `activity_type` from 'lesson' to 'course' in the query, and if you want to only get enrolled course by a user, you can add the user\_id to the query, like this: ``` public static function get_courses_for_user( $user_id ) { global $wpdb; if( empty( $course_id ) ) return []; $results = $wpdb->get_results( "SELECT * FROM `" . $wpdb->prefix . "learndash_user_activity` " ."WHERE `user_id` = '" . intval( $user_id ) . "' " ."AND `activity_type` = 'course' " ."GROUP BY `course_id`" ); return $results; } ``` I know this is not exactly what you were searching for, but it could still be useful. Upvotes: 3 [selected_answer]
2018/03/19
1,286
4,787
<issue_start>username_0: I have implemented ionic inapp browser in my app. I want to hide the top bar from it. I am trying using the below code but it does not seem to work. [![enter image description here](https://i.stack.imgur.com/nVaND.png)](https://i.stack.imgur.com/nVaND.png) **page.ts code** ``` openWebpage(url: string) { const browser = this.inAppBrowser.create('http://sco7.com/filemanager/sapphire/','_self','toolbar=no'); const options: InAppBrowserOptions = { zoom: 'no', location: 'no', toolbar: 'no' } } ``` I have added `toolbar=no` but still top address bar is visible.<issue_comment>username_1: The code you shared is the code you use to create an inAppBrowser? If so you need to declare your options `const` before the creation of your inAppBrowser: ``` openWebpage(url: string) { const options: InAppBrowserOptions = { zoom: 'no', location: 'no', toolbar: 'no' }; const browser = this.inAppBrowser.create('http://sco7.com/filemanager/sapphire/','_self','toolbar=no'); } ``` Doing so i was able to open and an browser window without the URL bar. Also using `'toolbar=no'` is wrong since toolbar is one of the options property and it needs to be a string, toolbar doesn't need to be part of the string. An alternative is using simply an object with the location property: ``` this.inAppBrowser.create('http://sco7.com/filemanager/sapphire/','_self',{ toolbar: 'no'}); ``` Hope this helps. 
Upvotes: 3 [selected_answer]<issue_comment>username_2: > > Either you can apply [`InAppBrowserOptions`](https://ionicframework.com/docs/native/in-app-browser/#InAppBrowserOptions) in [`in-app-browser`](https://ionicframework.com/docs/native/in-app-browser/) > > > ``` private openBrowser(url: string, title?: string) { let options: InAppBrowserOptions = { toolbarcolor: "#488aff", hideurlbar: "yes", closebuttoncolor: "#fff", navigationbuttoncolor: "#fff" }; const browser = this.iab.create(url, "_blank", options); } ``` > > or you can use the highly customizable [`themeable-browser`](https://ionicframework.com/docs/native/themeable-browser/) which built on top of > [`in-app-browser`](https://ionicframework.com/docs/native/in-app-browser/) > > > ``` // can add options from the original InAppBrowser in a JavaScript object form (not string) // This options object also takes additional parameters introduced by the ThemeableBrowser plugin // This example only shows the additional parameters for ThemeableBrowser // Note that that `image` and `imagePressed` values refer to resources that are stored in your app const options: ThemeableBrowserOptions = { statusbar: { color: '#ffffffff' }, toolbar: { height: 44, color: '#f0f0f0ff' }, title: { color: '#003264ff', showPageTitle: true }, backButton: { image: 'back', imagePressed: 'back_pressed', align: 'left', event: 'backPressed' }, forwardButton: { image: 'forward', imagePressed: 'forward_pressed', align: 'left', event: 'forwardPressed' }, closeButton: { image: 'close', imagePressed: 'close_pressed', align: 'left', event: 'closePressed' }, customButtons: [ { image: 'share', imagePressed: 'share_pressed', align: 'right', event: 'sharePressed' } ], menu: { image: 'menu', imagePressed: 'menu_pressed', title: 'Test', cancel: 'Cancel', align: 'right', items: [ { event: 'helloPressed', label: 'Hello World!' }, { event: 'testPressed', label: 'Test!' 
} ] }, backButtonCanClose: true }; const browser: ThemeableBrowserObject = this.themeableBrowser.create('https://ionic.io', '_blank', options); ``` Upvotes: 0 <issue_comment>username_3: ``` openWebpage(url: string) { const options: InAppBrowserOptions = { zoom: 'no', fullscreen: "yes", hidenavigationbuttons: "no", toolbar:'no', hideurlbar: 'yes', } // Opening a URL and returning an InAppBrowserObject const browser = this.inAppBrowser.create(url, '_blank',{ toolbar: 'no', hideurlbar: 'yes', fullscreen: "yes",location:"no", options}); browser.on('loadstop').subscribe(event => { browser.insertCSS({ code: "toolbar{display: none;" }); }); // Inject scripts, css and more with browser.X } ``` like this way shall appear what you wanna hidden or control! Upvotes: 1
2018/03/19
1,495
4,065
<issue_start>username_0: I have few string with numbers like this; and its around 3000 records. ``` Column ------------ Cell 233567-3455 Cell123-4567 Cell#123-7449 Local 456-0987 1 616 468-7796 1234567-5x2345 234/625-1234 (C)755-7442 5732878-2 5721899-23 6712909-3 7894200-234 2144-57238 5673893/588218 437-4737-5772 ``` How can i find the records like below: ``` Column ------------- 5732878-2 5721899-23 6712909-3 7894200-234 ``` Once I find this, I need to split those into two parts ``` 1st Column. | 2nd column ------------- | 5732878 | 5732872 5721899 | 5721823 6712909 | 6712903 7894200 | 7894234 ``` I tried to fix This using PARINDEX and CHARINDEX But somehow its not working.Please help.<issue_comment>username_1: I don't know your filtering logic to get to your intermediate set, but this should get your expected final result set. I assumed you only want records where the length of the string to the left of the hyphen is greater than the length on the right and also exclude records with more than 1 hyphen. 
``` SELECT LEFT(telephone, CHARINDEX('-', telephone)-1) AS [1stTelephone], STUFF( --get the string before the hyphen LEFT(telephone, CHARINDEX('-', telephone)-1), --get the starting location of chars we are going to replace LEN(LEFT(telephone, CHARINDEX('-', telephone)))-LEN(RIGHT(telephone, CHARINDEX('-', REVERSE(telephone))-1)), --get the length of the section we are replacing LEN(RIGHT(telephone, CHARINDEX('-', REVERSE(telephone))-1)), --replace that section with the string after the hyphen RIGHT(telephone, CHARINDEX('-', REVERSE(telephone))-1) ) AS [2nd telephone] FROM your_table WHERE LEN(LEFT(telephone, CHARINDEX('-', telephone))) > LEN(RIGHT(telephone, CHARINDEX('-', REVERSE(telephone)))) AND len(telephone) - len(REPLACE(telephone, '-', '')) = 1 ``` Upvotes: 2 [selected_answer]<issue_comment>username_2: You could use something like this: DDL --- ``` use tempdb create table TelNo ( Tel varchar(30) ) insert TelNo(Tel) values ('5732878-2'), ('5721899-23'), ('6712909-3'), ('7894200-234'), ('2144-57238'), ('5673893/588218'), ('437-4737-5772') ``` Code ---- ``` select Tel, case when Tel like '%_-[0-9]' then left(Tel, len(Tel)-2) when Tel like '%__-[0-9][0-9]' then left(Tel, len(Tel)-3) when Tel like '%___-[0-9][0-9][0-9]' then left(Tel, len(Tel)-4) else Tel end Tel1, case when Tel like '%_-[0-9]' then left(Tel, len(Tel)-3) + right(Tel, 1) when Tel like '%__-[0-9][0-9]' then left(Tel, len(Tel)-5) + right(Tel, 2) when Tel like '%___-[0-9][0-9][0-9]' then left(Tel, len(Tel)-7) + right(Tel, 3) else NULL end Tel2 from TelNo ``` Upvotes: 0 <issue_comment>username_3: Somewhat dirty method (looks specifically for 7 digits followed by hyphen followed by any number of digits): ``` SELECT BasePhone AS Phone1, LEFT(BasePhone, 7-LEN(OtherPhoneEnd)) + OtherPhoneEnd AS Phone2 FROM ( SELECT LEFT(Telephone, 7) AS BasePhone, SUBSTRING(Telephone,9,7) AS OtherPhoneEnd FROM Telephones WHERE Telephone LIKE '[0-9][0-9][0-9][0-9][0-9][0-9][0-9]-%' ) ``` Upvotes: 0 
<issue_comment>username_4: Based on the information you've given, I assumed that you want the numbers with the hyphen (-) at the 8th position. Try this: ```
create table #TelNo ( Tel varchar(30) )
insert #TelNo(Tel) values
('5732878-2'),
('5721899-23'),
('6712909-3'),
('7894200-234'),
('2144-57238'),
('5673893/588218'),
('437-4737-5772')

select Tel, LEFT(Tel, Len(tel) - len(suffix)) + suffix [SecondTel]
from (
select substring(Tel, 1, 7) [Tel], substring(Tel, 9, 10) [suffix]
from #TelNo
where CHARINDEX('-', Tel) = 8
)a
``` Upvotes: 0
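Since all of the answers rely on the same trick, it may help to state the split rule plainly: keep the part before the hyphen as the first column, and build the second column by overwriting the tail of that part with the digits after the hyphen. A quick Python sketch of that rule (illustrative only, not part of the SQL answers):

```python
def expand_suffix(value):
    """Split e.g. '7894200-234' into ('7894200', '7894234'): the digits
    after the hyphen overwrite the tail of the number before it."""
    base, suffix = value.split("-")
    return base, base[: len(base) - len(suffix)] + suffix

print(expand_suffix("5732878-2"))  # ('5732878', '5732872')
```

This reproduces every pair in the question's expected output, which is a handy way to sanity-check the SQL versions.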
2018/03/19
1,335
4,545
<issue_start>username_0: Trying to use custom font in WKWebView but no luck. ``` let htmlString = "\(Utils.aboutUsText)" webView.loadHTMLString(htmlString, baseURL: nil) ``` I can use HelveticaNeue-Bold and works great but not with the custom font above. ``` let htmlString = "\(Utils.aboutUsText)" webView.loadHTMLString(htmlString, baseURL: nil) ``` I have added the custom fonts properly.See screenshots. Can someone please tell me how can i achieve this or point me in the right direction. [![enter image description here](https://i.stack.imgur.com/3jjjM.png)](https://i.stack.imgur.com/3jjjM.png) [![enter image description here](https://i.stack.imgur.com/8lC0I.png)](https://i.stack.imgur.com/8lC0I.png) [![enter image description here](https://i.stack.imgur.com/QCvYI.png)](https://i.stack.imgur.com/QCvYI.png)<issue_comment>username_1: Reading the answers in [the linked thread](https://stackoverflow.com/q/25785179/6541007) in DonMag's comment: * Using `@font-face` is mandatory * You need multiple `@font-face` declarations to use multiple font files as a single font family * You need to provide `baseURL` to make relative urls like `url(OpenSans-Regular.ttf)` work So, try this: ``` let htmlString = """ @font-face { font-family: 'Open Sans'; font-weight: normal; src: url(OpenSans-Regular.ttf); } @font-face { font-family: 'Open Sans'; font-weight: bold; src: url(OpenSans-Bold.ttf); } @font-face { font-family: 'Open Sans'; font-weight: 900; src: url(OpenSans-ExtraBold.ttf); } @font-face { font-family: 'Open Sans'; font-weight: 200; src: url(OpenSans-Light.ttf); } @font-face { font-family: 'Open Sans'; font-weight: 500; src: url(OpenSans-Semibold.ttf); } (Utils.aboutUsText) """ webView.loadHTMLString(htmlString, baseURL: Bundle.main.bundleURL) //<- ``` --- Or you can use a separate css file if you prefer: ``` let htmlString = """ (Utils.aboutUsText) """ webView.loadHTMLString(htmlString, baseURL: Bundle.main.bundleURL) ``` open-sans.css: ``` @font-face { font-family: 'Open 
Sans'; font-weight: normal; src: url(OpenSans-Regular.ttf); } @font-face { font-family: 'Open Sans'; font-weight: bold; src: url(OpenSans-Bold.ttf); } @font-face { font-family: 'Open Sans'; font-weight: 900; src: url(OpenSans-ExtraBold.ttf); } @font-face { font-family: 'Open Sans'; font-weight: 200; src: url(OpenSans-Light.ttf); } @font-face { font-family: 'Open Sans'; font-weight: 500; src: url(OpenSans-Semibold.ttf); } ``` Upvotes: 6 [selected_answer]<issue_comment>username_2: ``` class StaticContentViewControlle: WKUIDelegate, WKNavigationDelegate { @IBOutlet weak var webViewContainer: UIView! private var webView: WKWebView? var url = "" override func viewDidLoad() { super.viewDidLoad() initialSetup() } func initialSetup() { url = "https://www.apple.com" let myURL = URL(string: url) if let myURL = myURL { let myRequest = URLRequest(url: myURL) webView?.uiDelegate = self webView?.load(myRequest) } webView?.navigationDelegate = self webView?.scrollView.showsVerticalScrollIndicator = false webView?.scrollView.backgroundColor = .clear webView?.isOpaque = false webView?.backgroundColor = .clear } func webView(_ webView: WKWebView, didCommit navigation: WKNavigation!) { print("Start loading") } func webView(_ webView: WKWebView, didFinish navigation: WKNavigation!) { print("End loading") let textSize = 300 let javascript = "document.getElementsByTagName('body')[0].style.webkitTextSizeAdjust= '\(textSize)%'" webView.evaluateJavaScript(javascript) { (response, error) in print() } } func webView(_ webView: WKWebView, decidePolicyFor navigationAction: WKNavigationAction, decisionHandler: @escaping (WKNavigationActionPolicy) -> Void) { if navigationAction.navigationType == .linkActivated { if let url = navigationAction.request.url, UIApplication.shared.canOpenURL(url) { UIApplication.shared.open(url) decisionHandler(.cancel) } else { print("Open it locally") decisionHandler(.allow) } } else { print("not a user click") decisionHandler(.allow) } } } ``` Upvotes: -1
2018/03/19
1,589
5,549
<issue_start>username_0: I was trying to load the assembly so I could connect to TFS from powershell but I'm getting an error when I try to Add-Type. The nuget package was successfully downloaded and the file exists for the assembly I want to add. My code is as follows ``` $sourceCodeDirectory = "C:\testing123"; CleanDirectory -directory $sourceCodeDirectory [System.IO.Directory]::SetCurrentDirectory($sourceCodeDirectory); $cwd = [System.IO.Directory]::GetCurrentDirectory(); Write-Output("CurrentWorkingDirectory: $cwd"); $sourceNugetExe = "https://dist.nuget.org/win-x86-commandline/latest/nuget.exe" $targetNugetExe = "$cwd\nuget.exe" Invoke-WebRequest $sourceNugetExe -OutFile $targetNugetExe Set-Alias nuget $targetNugetExe -Scope Global -Verbose nuget install Microsoft.TeamFoundationServer.Client -version '15.112.1' -OutputDirectory $cwd nuget install Microsoft.TeamFoundationServer.ExtendedClient -version '15.112.1' -OutputDirectory $cwd $uri = New-Object System.Uri -ArgumentList $TFSCollectionUri try { Write-Output 'Loading TFS Assemblies...' $assemblyPath = $cwd + "\Microsoft.TeamFoundationServer.ExtendedClient.15.112.1\lib\net45\Microsoft.TeamFoundation.Client.dll"; Write-Output($assemblyPath); Add-Type -Path $assemblyPath } catch { $_.LoaderExceptions { Write-Error $_.Message } } ``` With the following error ``` Write-Error $_.Message [ERROR] new-object : Could not load file or assembly [ERROR] 'Microsoft.VisualStudio.Services.Common, Version=15.0.0.0, Culture=neutral, [ERROR] PublicKeyToken=<KEY>' or one of its dependencies. The system cannot [ERROR] find the file specified. [ERROR] At C:\Users\user\Documents\Visual Studio 2015\Projects\Build_FormsDesigne [ERROR] r\Build_FormsDesigner\Build_FormsDesigner.ps1:107 char:23 [ERROR] + ... lProvider = new-object Microsoft.TeamFoundation.Client.UICredentialsP ... 
[ERROR] + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ [ERROR] + CategoryInfo : NotSpecified: (:) [New-Object], FileNotFoundExce [ERROR] ption [ERROR] + FullyQualifiedErrorId : System.IO.FileNotFoundException,Microsoft.PowerS [ERROR] hell.Commands.NewObjectCommand [ERROR] [ERROR] Exception calling "GetTeamProjectCollection" with "2" argument(s): "Could not [ERROR] load file or assembly 'Microsoft.VisualStudio.Services.Common, [ERROR] Version=1192.168.127.12, Culture=neutral, PublicKeyToken=<KEY>' or one of [ERROR] its dependencies. The system cannot find the file specified." [ERROR] At C:\Users\user\Documents\Visual Studio 2015\Projects\Build_FormsDesigne [ERROR] r\Build_FormsDesigner\Build_FormsDesigner.ps1:108 char:1 [ERROR] + $collection = [Microsoft.TeamFoundation.Client.TfsTeamProjectCollecti ... [ERROR] + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ [ERROR] + CategoryInfo : NotSpecified: (:) [], MethodInvocationException [ERROR] + FullyQualifiedErrorId : FileNotFoundException [ERROR] [ERROR] Method invocation failed because [System.String] does not contain a method [ERROR] named 'Authenticate'. [ERROR] At C:\Users\user\Documents\Visual Studio 2015\Projects\Build_FormsDesigne [ERROR] r\Build_FormsDesigner\Build_FormsDesigner.ps1:109 char:1 [ERROR] + $collection.Authenticate() [ERROR] + ~~~~~~~~~~~~~~~~~~~~~~~~~~ [ERROR] + CategoryInfo : InvalidOperation: (:) [], RuntimeException [ERROR] + FullyQualifiedErrorId : MethodNotFound [ERROR] ``` Also when it errors out, something still has a lock on the DLL I was trying to load, preventing me from clearing the directory the next time I run the script. 
(I have to close Visual Studio and reopen it in order to remove the file lock.) **EDIT:** Just to clarify, when I NuGet the following package ```
nuget install Microsoft.TeamFoundationServer.Client -version '15.112.1' -OutputDirectory $cwd
``` it pulls all the dependencies with it, so they exist in the folder as you can see below [![enter image description here](https://i.stack.imgur.com/USgFl.png)](https://i.stack.imgur.com/USgFl.png)<issue_comment>username_1: Apparently you are getting this error because there are missing dependencies, as your exception is showing ... Could not load file or assembly Microsoft.VisualStudio.Services.Common From <https://www.nuget.org/packages/Microsoft.TeamFoundationServer.Client/> ```
Dependencies
.NETFramework 4.5
Microsoft.AspNet.WebApi.Client (>= 5.2.2)
Microsoft.TeamFoundation.DistributedTask.Common (= 15.112.1)
Microsoft.VisualStudio.Services.Client (= 15.112.1)
Newtonsoft.Json (>= 8.0.3)
``` Upvotes: 1 <issue_comment>username_2: You need to add `Microsoft.VisualStudio.Services.Common.dll` too: ```
$assemblyPath1 = $cwd + "\Microsoft.VisualStudio.Services.Client.15.112.1\lib\net45\Microsoft.VisualStudio.Services.Common.dll";
Add-Type -Path $assemblyPath1
``` Upvotes: 0 <issue_comment>username_3: I had the same error when using PowerShell ```
Exception calling "GetConfigurationServer" with "1" argument(s): "Could not load file or assembly 'Microsoft.TeamFoundation.Common
``` and the fix was to replace LoadWithPartialName with Add-Type -Path, which worked: ```
#[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.TeamFoundation.Client") | Out-Null
Add-Type -Path "C:\Program Files (x86)\Microsoft Visual Studio\2019\Enterprise\Common7\IDE\CommonExtensions\Microsoft\TeamFoundation\Team Explorer\Microsoft.TeamFoundation.Common.dll"
``` Adjust the path to the dlls. Upvotes: 0
2018/03/19
1,096
4,294
<issue_start>username_0: I was Implementing the chat application, and I am using RecyclerView to deal with sender and receiver messages, all things are working except my Chat activity UI, when sender send the message then received message shown by scrolling down the activity, Its look like sender message is on one screen and receiver message is on another screen that become visible when is scroll down the screen. ``` messageList = findViewById(R.id.messageList); messageList.setHasFixedSize(true); LinearLayoutManager llm = new LinearLayoutManager(this); llm.setOrientation(LinearLayoutManager.VERTICAL); messageList.setLayoutManager(llm); messageList.setAdapter(mAdapter); ``` here `messageList` is the list the contains both of sender and receiver messages. here is adapter code ``` public class MyAdapter extends RecyclerView.Adapter { private final Context context; private final ArrayList messages; private static final int VIEW\_HOLDER\_TYPE\_1 = 1; private static final int VIEW\_HOLDER\_TYPE\_2 = 2; // Provide a reference to the views for each data item // Complex data items may need more than one view per item, and // you provide access to all the views for a data item in a view holder public static class ViewHolder\_Type1 extends RecyclerView.ViewHolder { // each data item is just a string in this case public TextView mymessageTextView, mytimeTextView; public ViewHolder\_Type1(View v) { super(v); this.mymessageTextView = (TextView) v.findViewById(R.id.mymessageTextView); this.mytimeTextView = (TextView) v.findViewById(R.id.mytimeTextView); } } public static class ViewHolder\_Type2 extends RecyclerView.ViewHolder { // each data item is just a string in this case public TextView messageTextView, timeTextView; public ViewHolder\_Type2(View v) { super(v); this.messageTextView = (TextView) v.findViewById(R.id.messageTextView); this.timeTextView = (TextView) v.findViewById(R.id.timeTextView); } } // Provide a suitable constructor (depends on the kind of dataset) public 
MyAdapter(Context context, ArrayList messages) { this.context = context; this.messages = messages; } // Create new views (invoked by the layout manager) @Override public RecyclerView.ViewHolder onCreateViewHolder(ViewGroup parent, int viewType) { View v; switch (viewType) { // create a new view case VIEW\_HOLDER\_TYPE\_1: v = LayoutInflater.from(parent.getContext()).inflate(R.layout.mymessage, parent, false); ViewHolder\_Type1 vh1 = new ViewHolder\_Type1(v); return vh1; case VIEW\_HOLDER\_TYPE\_2: v = LayoutInflater.from(parent.getContext()).inflate(R.layout.message, parent, false); ViewHolder\_Type2 vh2 = new ViewHolder\_Type2(v); return vh2; default: break; } return null; } // Replace the contents of a view (invoked by the layout manager) @Override public void onBindViewHolder(RecyclerView.ViewHolder holder, int position) { // - get element from your dataset at this position // - replace the contents of the view with that element switch (getItemViewType(position)) { case VIEW\_HOLDER\_TYPE\_1: ViewHolder\_Type1 viewholder1 = (ViewHolder\_Type1) holder; TextView mytimeView = (TextView) viewholder1.mytimeTextView; mytimeView.setText(messages.get(position).getTime()); TextView mymsgView = (TextView) viewholder1.mymessageTextView; mymsgView.setText(messages.get(position).getMessage()); break; case VIEW\_HOLDER\_TYPE\_2: ViewHolder\_Type2 viewholder2 = (ViewHolder\_Type2) holder; TextView timeView = (TextView) viewholder2.timeTextView; timeView.setText(messages.get(position).getTime()); TextView msgView = (TextView) viewholder2.messageTextView; msgView.setText(messages.get(position).getMessage()); break; default: break; } } ```<issue_comment>username_1: I believe you need to notify your adapter always when you get new messages , and when you scroll you make the recycler re-draw with new info. So put ``` mAdapter.notifyDataSetChanged(); ``` when you want to load new messages Upvotes: 0 <issue_comment>username_2: I resolve the problem. 
I had created two resource files, one for the sender message and one for the receiver message; the root element of those files had `match_parent` width and height. I just replaced it with `wrap_content` and now all is fine. Upvotes: 2 [selected_answer]
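For anyone hitting the same symptom, here is a hypothetical item layout illustrating the fix (the file and view names are made up; the point is the `wrap_content` root):

```xml
<!-- mymessage.xml (sketch): the root must wrap its content, otherwise
     every bubble claims a full screen and the next message starts off-screen -->
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:orientation="vertical">

    <TextView
        android:id="@+id/mymessageTextView"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content" />

    <TextView
        android:id="@+id/mytimeTextView"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content" />
</LinearLayout>
```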
2018/03/19
423
1,563
<issue_start>username_0: I'm using C++11. I'm wondering if there are any advantages to using `cin.getline()` compared to `gets()`. I need to fill a `char` array. Also, should I use `fgets` or `getline` for files?<issue_comment>username_1: > > I'm wondering if there are any advantages to using cin.getline() compared to gets(). > > > I am assuming you really mean `gets`, not `fgets`. Yes, there definitely is. `gets` is known to be a security problem. `cin.getline()` does not suffer from that problem. It's worth comparing `fgets` and `cin.getline`. The only difference that I see is that `fgets` will include the newline character in the output while `cin.getline` won't. Most of the time, the newline character is ignored by application code. Hence, it is better to use `cin.getline()` or `istream::getline()` in general. If the presence of the newline character in the output is important to you for some reason, you should use `fgets`. Another reason to prefer `istream::getline` is that you can specify a character for the delimiter. If you need to parse a comma-separated values (CSV) file, you can use: ```
std::ifstream fstr("some file name.csv");
fstr.getline(data, data_size, ',');
``` Upvotes: 3 <issue_comment>username_2: Of course. First of all, `gets` doesn't check the length of the input, so if the input is longer than the char array, you get an overflow. On the other hand, `cin.getline` lets you specify the maximum number of characters to read. Anyway, the consensus among C++ programmers is that you should avoid raw arrays anyway. Upvotes: 2
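To make the comparison concrete, a small C++ sketch of the safer pattern: `istream::getline` with a fixed-capacity buffer never writes past the buffer and drops the trailing newline.

```cpp
#include <cassert>
#include <sstream>
#include <string>

// Read one line into a fixed-capacity buffer via istream::getline:
// the stream stops before overflowing the buffer and discards the '\n'.
std::string read_line(std::istream& in, std::size_t cap = 64) {
    std::string buf(cap, '\0');
    in.getline(&buf[0], static_cast<std::streamsize>(cap));
    return std::string(buf.c_str());  // drop the trailing NULs
}
```

If a line is longer than the buffer, the stream's failbit is set and the rest of the line stays in the stream, which is exactly the overflow condition `gets` cannot detect.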
2018/03/19
1,478
5,444
<issue_start>username_0: Using `iText` we can easily change the zoom level for links. There is even a piece of [code](https://developers.itextpdf.com/examples/actions-and-annotations/changing-zoom-factor-link-destination) that does this for the `GoTo` destination type. For convenience, please find it below. ```
PdfReader reader = new PdfReader(src);
PdfDictionary page = reader.getPageN(11);
PdfArray annots = page.getAsArray(PdfName.ANNOTS);
for (int i = 0; i < annots.size(); i++) {
    PdfDictionary annotation = annots.getAsDict(i);
    if (PdfName.LINK.equals(annotation.getAsName(PdfName.SUBTYPE))) {
        PdfArray d = annotation.getAsArray(PdfName.DEST);
        if (d != null && d.size() == 5 && PdfName.XYZ.equals(d.getAsName(1)))
            d.set(4, new PdfNumber(0));
    }
}
``` The code deals only with one of the destination types found in PDF files. I'm interested in changing the zoom in other types of destinations (they are listed in ISO 32000-1 if anyone wondered). Specifically, I'd like to change each destination to the `GoTo` type and specify my own coordinates. I want the left coordinate to be the same as the page height of the page to jump to. To do this, I obviously need the page number. How do I get it? What have I done so far? The instruction `PdfArray d = annotation.getAsArray(PdfName.DEST)` gives us an array whose first (0-based) element is a page reference and not a page number, as <NAME> explains in his *iText in Action*, 2nd edition, p. 202. The array looks like this: `[1931 0 R, /XYZ, 0, 677, 0]`. I cannot find the correct command to get the page number on my own, hence this post.<issue_comment>username_1: According to this: <https://developers.itextpdf.com/fr/node/1750> > > The first example is an array with two elements 8 0 R and /Fit. The > second example is an array with four elements 6 0 R, /XYZ, 0, 806 and > 0. You need the first element. It doesn't give you the page number (because there is no such thing as page numbers), but it gives you a > reference to the /Page object. 
Based on that reference, you can deduce > the page number by looping over the page tree and comparing the object > number of a specific page with the object number in the destination. > > > And then you can go recursively to extract the page number, like this: [Extract page number from PDF file](https://stackoverflow.com/questions/16854706/extract-page-number-from-pdf-file?utm_medium=organic&utm_source=google_rich_qa&utm_campaign=google_rich_qa) Hope you find it helpful. Good luck! Upvotes: 2 <issue_comment>username_2: > > I want the left coordinate to be the same as the page height of the page to jump to. To do this, I obviously need the page number. How do I get it? > > > The assumption that you need the page number is wrong. The `PdfReader` utility methods mostly work based on page numbers, yes, but there is not much in these methods. If you are OK with some low-level data access, therefore, you don't need the page number. The following is your code with additional code to retrieve the cropbox (which defines the left, bottom, right, and top page coordinates), once directly from the object reference you have in your destination and once via the page number. 
``` PdfReader reader = new PdfReader(src); PdfDictionary page = reader.getPageN(11); PdfArray annots = page.getAsArray(PdfName.ANNOTS); for (int i = 0; i < annots.size(); i++) { PdfDictionary annotation = annots.getAsDict(i); if (PdfName.LINK.equals(annotation.getAsName(PdfName.SUBTYPE))) { PdfArray d = annotation.getAsArray(PdfName.DEST); if (d == null) { // in case the link has not a Dest but instead a GoTo action PdfDictionary action = annotation.getAsDict(PdfName.A); if (action != null) d = action.getAsArray(PdfName.D); } if (d != null && d.size() > 0) { System.out.println("Next destination -"); PdfIndirectReference pageReference = d.getAsIndirectObject(0); // Work with target dictionary directly PdfDictionary pageDict = d.getAsDict(0); PdfArray boxArray = pageDict.getAsArray(PdfName.CROPBOX); if (boxArray == null) { boxArray = pageDict.getAsArray(PdfName.MEDIABOX); } Rectangle box = PdfReader.getNormalizedRectangle(boxArray); System.out.printf("* Target page object %s has cropbox %s\n", pageReference, box); // Work via page number for (int pageNr = 1; pageNr <= reader.getNumberOfPages(); pageNr++) { PRIndirectReference pp = reader.getPageOrigRef(pageNr); if (pp.getGeneration() == pageReference.getGeneration() && pp.getNumber() == pageReference.getNumber()) { System.out.printf("* Target page %s has cropbox %s\n", pageNr, reader.getCropBox(pageNr)); break; } } } } } ``` *([ProcessLink](https://github.com/username_2-public/testarea-itext5/blob/master/src/test/java/username_2/testarea/itext5/annotate/ProcessLink.java#L43) test `testDetermineTargetPage`)* --- By the way, a destination can also be a named destination. Thus, if the **Dest** value for some PDFs happens to not be an array but a string, you'll simply have to make a look up in the **Dests** name tree. Upvotes: 3 [selected_answer]
2018/03/19
581
2,363
<issue_start>username_0: I have a series of menu options which are all individual user controls on a Windows Forms application. [![enter image description here](https://i.stack.imgur.com/S5y2c.jpg)](https://i.stack.imgur.com/S5y2c.jpg) How do I refresh the user control so that if, for example, I added a new person to my txt file, when I click the Birthdays button it performs all the functions within the Birthday user control again on the file with the new person added? [![enter image description here](https://i.stack.imgur.com/u8SfN.jpg)](https://i.stack.imgur.com/u8SfN.jpg) What's happening now is when I add a new person to my txt file, the user controls don't refresh, therefore the Data.updatedata() method isn't called and the data is not updated. Is there a particular event or method that I could use in order to refresh the user control when clicked? I have tried using birthdayUserControl1.refresh() in the main form ```
namespace Project
{
    public partial class ChildrenUi : Form
    {
        public ChildrenUi()
        {
            InitializeComponent();
            homeUserControl1.BringToFront();
        }

        private void button2_Click(object sender, EventArgs e)
        {
            birthdaysUserControl1.Refresh();
            birthdaysUserControl1.BringToFront();
        }
    }
}
``` I have only just started learning about WinForms and came across data binding using XAML/XML files on similar questions regarding refreshing user controls; however, I don't know much about XAML/XML and I would imagine I'd have to redesign a good portion of my project to facilitate that. I'm using a text file.<issue_comment>username_1: To watch the contents of your text file, you can use the `System.IO.FileSystemWatcher` class. The `Changed` event informs your application whenever the content of the watched file is changed. Upvotes: 0 <issue_comment>username_2: Refreshing the whole birthdaysUserControl1 won't refresh the inner ListBox data source; you need to refresh it manually. 
``` private void button2_Click(object sender, EventArgs e) { birthdaysUserControl1.RefreshList(); } ``` And inside birthdaysUserControl1: ``` public void RefreshList() { listbox1.DataSource=null; listbox1.DataSource=UpcominBdays; } ``` Upvotes: 2 [selected_answer]
2018/03/19
435
1,520
<issue_start>username_0: I have a csv file with one column and 300,000 individual text lines, which I would like to convert into a list of lists, so that I get a list of 300,000 lists, with every sentence readable as a string. When I open the csv as a DataFrame and convert it into a series, every sentence is split into letters. ```
sentence = pd.read_csv("myfile.csv", encoding='utf-8')
sentence = pd.Series([sentence])
sentence = sentence.tolist()

This gives:
[[('W', 'h', 'a', 't', ' ', 'i', 's', ' ', 't', 'h', 'e', ' ', 's', 't', 'e', 'p'
``` Instead, what I would like is for example when I would print(sentence), it would show: ```
[['What is the step by step approach for building a house?'],['The first step is securing an adequate plot.'] etc....]
``` Is there a simple way to do this?
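As a side note, the list-of-lists shape described in the question does not require pandas at all. A minimal sketch with the standard csv module (assuming one sentence per row in a one-column file with no header):

```python
import csv

def sentences_as_lists(path):
    """Read a one-column CSV and return [[sentence], [sentence], ...]."""
    with open(path, newline="", encoding="utf-8") as f:
        return [[row[0]] for row in csv.reader(f) if row]
```

With pandas already loaded, `[[s] for s in pd.read_csv(path, header=None)[0]]` yields the same shape; a per-letter split like the one in the question usually means a string ended up being iterated character by character somewhere along the way.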
2018/03/19
1,533
3,846
<issue_start>username_0: I have this vector in a data frame of times in the format of hours:minutes that I want converted to categorical times of day: ```
time <- c("15:03", "08:01", "11:59", "23:47", "14:20")
df$time <- format(strptime(df$time, tz = "" , format = "%H: %M"), format = "%H: %M")
df <- data.frame(time)
``` I suppose I would consider 5-11 the morning, 11-16 the afternoon, 16-19 evening, and anything beyond that and up until 5 night. The original data is in time format as hours:minutes with strptime(). I found some similar problems on the forum but I couldn't seem to tweak the code to work on my data.<issue_comment>username_1: I think this gets it done, I'm not sure how to get cut to accept duplicate labels, but maybe someone else will. The key was to use `chron::times()` to create a chronological object instead of a datetime object. ```
time <- c("15:03", "08:01", "11:59", "23:47", "14:20")
timep <- as.POSIXct(time, format = "%H:%M") %>% format("%H:%M:%S")

cut(chron::times(timep) , breaks = (1/24) * c(0,5,11,16,19,24),
    labels = c("night", "morning", "afternoon", "evening", "night1"))
# [1] afternoon morning afternoon night1 afternoon
# Levels: night morning afternoon evening night1
``` update: ------- ```
tod <- cut(chron::times(timep) , breaks = (1/24) * c(0,5,11,16,19,24))
c("night","morning","afternoon","evening","night")[as.numeric(tod)]
# "afternoon" "morning" "afternoon" "night" "afternoon"
``` Upvotes: 1 <issue_comment>username_2: ```
time <- as.POSIXct(strptime(c("15:03", "08:01", "11:59", "23:47", "14:20"),"%H:%M"),"UTC")
x=as.POSIXct(strptime(c("000000","050000","110000","160000","190000","235959"), "%H%M%S"),"UTC") labs=c("night","morning","afternoon","evening","night") labs[findInterval(time,x)] [1] "afternoon" "morning" "afternoon" "night" "afternoon" ``` Upvotes: 2 <issue_comment>username_3: Using some `regex` and `ifelse` ``` df$hour <- as.numeric(gsub("\\:.*$", "", df$time)) df$cat <- with(df, ifelse(hour >= 5 & hour<=11, "morning", ifelse(hour>11 & hour<=16, "evening", "night"))) df time hour cat 1 15:03 15 evening 2 08:01 8 morning 3 11:59 11 morning 4 23:47 23 night 5 14:20 14 evening ``` Upvotes: 1 <issue_comment>username_4: This one is similar to @username_2, only using `plyr`'s `mapvalues()` and `lubridate`'s `hour()`: ``` library(lubridate) library(plyr) df$timeofdat<- mapvalues(hour(df$time),from=c(0:23), to=c(rep("night",times=5), rep("morning",times=6),rep("afternoon",times=5),rep("night", times=8))) ``` Upvotes: 0 <issue_comment>username_5: I was able to use an `ifelse` statement to make the categories. I changed the `strptime` to `as.POSIXct` and only kept the hour to make the groups. In the df there are 3 columns representing the original time, just the hour, and then the group. You can change it to be a factor with `as.factor` if the category needs to be a factor. ``` time <- c("15:03", "08:01", "11:59", "23:47", "14:20") time2 <- format(as.POSIXct(time, tz = "" , format = "%H: %M"), "%H") df <- data.frame(time, time2 = as.numeric(time2)) df$time_category <- ifelse(df$time2 >= 05 & df$time2 <= 11, "Morning", ifelse(df$time2 > 11 & df$time2 <= 16, "Afternoon", ifelse(df$time2 > 16 & df$time2 <= 19, "Evening", "Night"))) ``` Upvotes: 0
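The bucketing itself is language-agnostic. As a cross-check of the cut points used in the answers above (my reading of the question's boundaries, shown in plain Python rather than R):

```python
def time_of_day(hhmm):
    """Map an 'HH:MM' string to a category using the question's cut points:
    [5, 11) morning, [11, 16) afternoon, [16, 19) evening, otherwise night."""
    h, m = map(int, hhmm.split(":"))
    hour = h + m / 60
    if 5 <= hour < 11:
        return "morning"
    if 11 <= hour < 16:
        return "afternoon"
    if 16 <= hour < 19:
        return "evening"
    return "night"

print([time_of_day(t) for t in ["15:03", "08:01", "11:59", "23:47", "14:20"]])
# ['afternoon', 'morning', 'afternoon', 'night', 'afternoon']
```

This matches the output of the `findInterval` and `case_when` answers for the sample vector.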
2018/03/19
951
3,664
<issue_start>username_0: I am currently trying to create my first **REST service with Spring**, and I want it to return an **XML result (based on JAXB)** from a very basic rest controller: ```
import org.springframework.web.bind.annotation.*;

import java.util.ArrayList;
import java.util.List;

@RestController
public class CVIController {

    @RequestMapping(value = "/resume")
    public @ResponseBody Integer getResume() {
        return 5;
    }
}
``` with a spring-servlet.xml that is pretty empty: ```
``` And a Tomcat server under JDK8, so I can be sure that all modules are loaded by default on the JVM... I got the following exception thrown: ```
org.springframework.web.util.NestedServletException: Request processing failed; nested exception is java.lang.IllegalArgumentException: No converter found for return value of type: class java.lang.Integer
``` I don't want to use Jackson because I know that JAXB can handle this. But after juggling with some parts of my code, I still can't find the answer to why it is thrown. **Edit 1:** I tried to create a dummy class with getters as the answers said ```
public class Dummy {
    private int value;

    public int getValue() {
        return value;
    }

    public void setValue(int value) {
        this.value = value;
    }
}

@RestController
public class CVIController {

    @RequestMapping(value = "/resume")
    public ResponseEntity getResume() {
        Dummy dummy = new Dummy();
        dummy.setValue(5);
        return new ResponseEntity(dummy, HttpStatus.OK);
    }
}
``` But still got the exact same result: ```
org.springframework.web.util.NestedServletException: Request processing failed; nested exception is java.lang.IllegalArgumentException: No converter found for return value of type: class fr.urouen.model.Dummy
```<issue_comment>username_1: The simplest solution is to return a String and remove the @ResponseBody annotation. A rest controller, by definition, returns a string, so that's the way to go for this specific example. 
If you are after a more comprehensive solution, Spring by default uses Jackson to do the conversion from object to JSON. The @ResponseBody does that automatically. I'm pretty sure that Integer won't work because it's not a POJO with getters and setters that can be converted to JSON, i.e. { 5 } is not a valid JSON object. If you had a class like this: ```
public class ReturnObject {
    private int value;
    ... getters/setters
}
``` and then did this: ```
@RestController
@ResponseBody
public ReturnObject getValue() {
    ReturnObject ret = new ReturnObject();
    ret.setValue(5);
    return ret;
}
``` You would then get this: ```
{ "value" : 5 }
``` You will also need to add Jackson to your pom: ```
com.fasterxml.jackson.core
jackson-databind
2.9.4
``` Upvotes: 1 <issue_comment>username_2: ResponseEntity is used to return some content from a rest controller. Have you tried with `ResponseEntity`? ```
@RestController
public class CVIController {

    @RequestMapping(value = "/resume")
    public @ResponseBody ResponseEntity getResume() {
        return ResponseEntity.ok(5);
    }
}
``` or you can write ```
@RestController
public class CVIController {

    @RequestMapping(value = "/resume")
    public @ResponseBody ResponseEntity getResume() {
        return new ResponseEntity(5, HttpStatus.OK);
    }
}
``` more examples you can get from [here](https://www.logicbig.com/tutorials/spring-framework/spring-web-mvc/request-response-entity.html) Upvotes: 0 <issue_comment>username_3: I met this problem too; try downloading jackson-databind.jar, jackson-core.jar, and jackson-annotations.jar and put them in your WEB-INF/lib directory Upvotes: 0
2018/03/19
1,221
5,519
<issue_start>username_0: I got basic client-server chat application. Server side seems to work, when I connect with it via telnet, it receives the message and sends it back to all connected clients. I can't achieve the same using my own client tho. So from the beginning, `Server` class ``` public class Server { private Properties properties; private ServerSocket serverSocket; private Set clientConnections; public Server() throws IOException { clientConnections = new HashSet<>(); serverSocket = new ServerSocket(9999); while(true){ Socket clientSocket = serverSocket.accept(); ClientConnection clientConnection = new ClientConnection(clientSocket, this); clientConnections.add(clientConnection); clientConnection.start(); } } public Set getClientConnections() { return clientConnections; } } ``` On every connection is new `ClientConnection` created that at the beginning, sends "Hello from server" to new client (working if connects via telnet) and then, listens for all incoming messages and broadcast them to all connected clients, again - working if telnet is a client. 
``` public class ClientConnection extends Thread { private final Socket clientSocket; private final Server server; private OutputStream outputStream; private InputStream inputStream; public ClientConnection(Socket clientSocket, Server server) { this.clientSocket = clientSocket; this.server = server; } @Override public void run(){ try { handleClient(); } catch (IOException e) { e.printStackTrace(); } } private void handleClient() throws IOException { outputStream = clientSocket.getOutputStream(); inputStream = clientSocket.getInputStream(); outputStream.write("Hello from server".getBytes()); System.out.println("New client connected"); BufferedReader bufferedReader = new BufferedReader(new InputStreamReader(inputStream)); String incomingMessage; while((incomingMessage = bufferedReader.readLine()) != null) { for(ClientConnection connection : server.getClientConnections()) { connection.getOutputStream().write(incomingMessage.getBytes()); } System.out.println(incomingMessage); } clientSocket.close(); } public OutputStream getOutputStream() { return outputStream; } } ``` And then, I got Client application with `ServerConnection` class ``` public class ServerConnection{ private Socket socket; private OutputStream outputStream; private InputStream inputStream; private BufferedReader bufferedReader; private String host; private int port; public ServerConnection(String host, int port) { this.host = host; this.port = port; } public void connect() throws IOException { socket = new Socket(host, port); outputStream = socket.getOutputStream(); inputStream = socket.getInputStream(); outputStream.write("Hello from client".getBytes()); bufferedReader = new BufferedReader(new InputStreamReader(inputStream)); String incommingMessage; while((incommingMessage = bufferedReader.readLine()) != null) { System.out.println(incommingMessage); } } } ``` And it actually is registered by the server side (prints "New client connected"), but it isn't receiving "Hello from client" and the client 
isn't receiving any messages from the server.
2018/03/19
963
3,478
<issue_start>username_0: I have a service that contains a method that gets a list of employees: ``` export class EmployeeService { private employeesUrl = 'http://localhost:portnum/api/employees'; getEmployees(): Observable { return this.http.get(this.employeesUrl).catch(this.errorHandler); } } ``` In my Component, I have this snippet which fetches the list of employees from my service: ``` employees = []; constructor(private fb: FormBuilder, private _employeeService: EmployeeService) { this.transactionForm = fb.group ({ 'employee': [null, Validators.required] }); } ngOnInit() { this._employeeService.getEmployees().subscribe(data => this.employees = data, error => this.errorMsg = error); } ``` and in my html code, I have a dropdownlist that binds to list of employees: ``` {{'Employee' | translate}}\* {{'SelectEmployee' | translate}} {{employee}} {{'RequiredField' | translate}} ``` However, instead of having a dropdownlist showing all the employees returned, I get this error: ``` ERROR Error: Error trying to diff '[object Object]'. Only arrays and iterables are allowed at DefaultIterableDiffer.webpackJsonp../node_modules/@angular/core/esm5/core.js.DefaultIterableDiffer.diff (core.js:7495) at NgForOf.webpackJsonp../node_modules/@angular/common/esm5/common.js.NgForOf.ngDoCheck (common.js:2583) at checkAndUpdateDirectiveInline (core.js:12368) at checkAndUpdateNodeInline (core.js:13889) at checkAndUpdateNode (core.js:13832) at debugCheckAndUpdateNode (core.js:14725) at debugCheckDirectivesFn (core.js:14666) at Object.eval [as updateDirectives] (TransactionPortalComponent.html:23) at Object.debugUpdateDirectives [as updateDirectives] (core.js:14651) at checkAndUpdateView (core.js:13798) ``` The object is an array, so I'm not sure why it says only arrays and iterables are allowed. What am I doing wrong above? 
EDIT: The response from the api should look like this:

```
{
  _employeeList : [
    { _employeeKey : "employee1" },
    { _employeeKey : "employee2" }
  ],
  _errorDetails : {
    _message : "successful",
    _status : 200
  }
}
```

But I am only getting back an empty [] array. Could it be related to my interface?

```
export interface IEmployee {
    employee: string;
}
```<issue_comment>username_1: In your constructor you have injected your service as _employeeService, but in your ngOnInit() method you are calling this._transactionService.getEmployees(). It should be this._employeeService.getEmployees(). Upvotes: 0 <issue_comment>username_2: I think the way you are doing the loop is wrong, since you want to have an `<option>` tag for each employee. Also, since your array might be empty, you can first check if the array contains data or not. Maybe you can consider using an `ng-container`. With this response coming from your api, you have to make the following changes:

**service** The api is returning a JSON object and not an array

```
getEmployees(): Observable {
    return this.http.get(this.employeesUrl).catch(this.errorHandler);
}
```

**component** Get the array

```
ngOnInit() {
    this._employeeService.getEmployees().subscribe(data => this.employees = data._employeeList,
        error => this.errorMsg = error);
}
```

**template** Since you want to display not the object itself but its value, you can do the following:

```html
{{employee._employeeKey}}
```

Upvotes: 2 [selected_answer]
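On the interface question at the end: given the response shape shown in the EDIT, the interface needs to describe the wrapper object rather than a bare employee. A minimal sketch (field names are taken from the JSON above; the interface names other than `IEmployee` are assumptions):

```typescript
// Sketch of interfaces matching the API response shown in the EDIT.
// Field names come from the question's JSON; wrapper names are assumptions.
interface IEmployee {
    _employeeKey: string;
}

interface IErrorDetails {
    _message: string;
    _status: number;
}

interface IEmployeeResponse {
    _employeeList: IEmployee[];
    _errorDetails: IErrorDetails;
}

// A sample value in the shape the question describes:
const sample: IEmployeeResponse = {
    _employeeList: [{ _employeeKey: "employee1" }, { _employeeKey: "employee2" }],
    _errorDetails: { _message: "successful", _status: 200 },
};

console.log(sample._employeeList.map(e => e._employeeKey).join(","));
```

With interfaces like these, unwrapping `data._employeeList` as in the accepted answer is well typed.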
2018/03/19
941
2,687
<issue_start>username_0: I am using bootstrap 4 and it is great but Is there any way I can add certain columns another class like resizable and maybe add min-width or max-width and they will be resizable within these limits. That would be so great. I’m not referring to any existing library specifically. I am just looking for a solution to this problem. An example code would be: ``` ``` Thank you very much<issue_comment>username_1: > > <https://split.js.org/> > > > You can use split js you'd give your elements an id and set default sizes, because you're using bootstrap you already have a dependency on Jquery so that shouldn't be an issue. Github link is: <https://github.com/nathancahill/split> ```js const GUTTER_SIZE = 30; const gutterStyle = dimension => ({ 'flex-basis': `${GUTTER_SIZE}px`, }); const elementStyle = (dimension, size) => ({ 'flex-basis': `calc(${size}% - ${GUTTER_SIZE}px)`, }) Split(['#one', '#two'], { sizes: [500, 100], minSize: 200, elementStyle, gutterStyle }); ``` ```css div { border: 2px solid black; background: #ccc; height: 170px; } .flex { display: flex; flex-direction: row; } ``` ```html ``` Upvotes: 4 <issue_comment>username_2: Split.js just worked perfect for me, If you are using flex (bootstrap 4) you also must add elementstyle and gutterstyle attributes. 
This is how I did it. HTML (reconstructed: three panes matching the ids used in the Split() call below):

```html
<div id="one" class="split split-horizontal">A</div>
<div id="two" class="split split-horizontal">B</div>
<div id="three" class="split split-horizontal">C</div>
```

CSS:

```
.split {-webkit-box-sizing: border-box;-moz-box-sizing: border-box;box-sizing: border-box;overflow-y: auto;overflow-x: hidden;}
.gutter {background-color: transparent;background-repeat: no-repeat;background-position: 50%;}
.gutter.gutter-horizontal {cursor: col-resize;background-image: url('data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAUAAAAeCAYAAADkftS9AAAAIklEQVQoU2M4c+bMfxAGAgYYmwGrIIiDjrELjpo5aiZeMwF+yNnOs5KSvgAAAABJRU5ErkJggg=='); }
.gutter.gutter-vertical {cursor: row-resize;background-image: url('data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAB4AAAAFAQMAAABo7865AAAABlBMVEVHcEzMzMzyAv2sAAAAAXRSTlMAQObYZgAAABBJREFUeF5jOAMEEAIEEFwAn3kMwcB6I2AAAAAASUVORK5CYII='); }
.split.split-horizontal, .gutter.gutter-horizontal { height: 100%;float: left;}
```

JS library:

```
<script src="split.min.js"></script>
```

Code:

```
var splitobj = Split(["#one","#two","#three"], {
    elementStyle: function (dimension, size, gutterSize) {
        $(window).trigger('resize'); // Optional
        return {'flex-basis': 'calc(' + size + '% - ' + gutterSize + 'px)'}
    },
    gutterStyle: function (dimension, gutterSize) {
        return {'flex-basis': gutterSize + 'px'}
    },
    sizes: [20,60,20],
    minSize: 150,
    gutterSize: 6,
    cursor: 'col-resize'
});
```

Upvotes: 4 [selected_answer]
2018/03/19
576
1,931
<issue_start>username_0: Say I had this loop: ``` count = 0 for i in 0...9 { count += 1 } ``` and I want to delay it. Delay function: ``` // Delay function func delay(_ delay:Double, closure:@escaping ()->()) { DispatchQueue.main.asyncAfter( deadline: DispatchTime.now() + Double(Int64(delay * Double(NSEC_PER_SEC))) / Double(NSEC_PER_SEC), execute: closure) } ``` . This means if I want to increase `count` by 1 every second, I would do: ``` count = 0 for i in 0...9 { delay(1) { count += 1 } } ``` but this doesn't work as it only delays code in brackets. How do I delay the actual loop? I would like the delay to stop from iterating until the time has passed, and then the loop/code can repeat again.<issue_comment>username_1: Your current code doesn't work because you are doing the increment asynchronously. This means that the for loop will still run at its normal speed. To achieve what you want, you can use a timer like this: ``` var count = 0 let timer = Timer.scheduledTimer(withTimeInterval: 1, repeats: true){ _ in count += 1 print(count) } ``` If you want it to stop after 5 times, do this: ``` var count = 0 var timer = Timer.scheduledTimer(withTimeInterval: 1, repeats: true){ t in count += 1 print(count) if count >= 5 { t.invalidate() } } ``` Upvotes: 5 [selected_answer]<issue_comment>username_2: As @Paulw11 and @username_1 have suggested, you can use a `Timer` to do this. However, if the code does need to be asynchronous for some reason, you can reimplement the loop asynchronously by making it recursive: ``` func loop(times: Int) { var i = 0 func nextIteration() { if i < times { print("i is \(i)") i += 1 DispatchQueue.main.asyncAfter(deadline: .now() + .seconds(1)) { nextIteration() } } } nextIteration() } ``` Upvotes: 2
2018/03/19
375
1,539
<issue_start>username_0: We have the requirement to synchronize the data in a table across multiple MySQL databases. One of the databases would be the source, and all others need to have the synchronized data for one of the tables. We have multiple databases used in a microservice architecture, and they all need to have a local copy of a specific table in their database, not the entire database itself, hence a read replica or Multi-AZ configuration is not the solution.

Database: MySQL hosted on AWS RDS

Is there any managed service by AWS or another vendor that can be used to accomplish this? Or do we have to write a custom script to do that?<issue_comment>username_1: It's simple MySQL replication, but you have to replicate the whole database. So create a MySQL database in AWS, enable Multi-AZ replication and activate the automatic snapshots. Multi-AZ replication is synchronous; when you use a "Read replica" it's asynchronous. So if you have very important data you should enable Multi-AZ replication.

<https://aws.amazon.com/rds/details/multi-az/?nc1=h_ls>

Upvotes: 2 <issue_comment>username_2: I think you have to either isolate the table in one database and replicate that database, or write a custom script. If I were writing a custom script I would look at the binlog functionality. Here are some helpful links:

<https://dev.mysql.com/doc/refman/5.7/en/mysqlbinlog.html>

<https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_LogAccess.Concepts.MySQL.html> (at the bottom).

Upvotes: 2 [selected_answer]
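If the custom-script route is taken, the simplest (if crude) form is a scheduled full copy of the one table, dumped from the source and loaded into the target. A sketch, where every host, database and table name is a placeholder and the command is printed rather than executed:

```shell
#!/bin/sh
# Sketch of a periodic one-table copy (not real replication: there is no
# incremental sync, the table contents are rewritten on every run).
# All names below are placeholders.
SRC_HOST="source-db.example.com"
DST_HOST="replica-db.example.com"
DB="appdb"
TABLE="shared_table"

# --single-transaction takes a consistent InnoDB snapshot without locking.
SYNC_CMD="mysqldump -h $SRC_HOST --single-transaction $DB $TABLE | mysql -h $DST_HOST $DB"

# Printed rather than executed so the sketch is safe to run as-is;
# a cron entry would invoke the real command with credentials supplied.
echo "$SYNC_CMD"
```

This trades freshness for simplicity; the binlog approach from the accepted answer is what you would reach for when the copy must stay close to real time.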
2018/03/19
812
3,044
<issue_start>username_0: I am trying to get the value and href link of the closest anchor tag based on a nearby button. As you can see, this is a list of anchor tags and buttons, so I want to target that specific anchor tag when clicking on a button. I am logging the name and href, but they come back empty and undefined. What am I doing wrong?

```js
$(document).ready(function() {
    $('.input_fields_wrap').on("click",".addlink-edit-icon",function (e) {
        e.preventDefault();
        var target = $('.show_field').closest('a');
        var edit_name = target.text();
        var edit_address = target.attr('href');
        console.log('name', edit_name );
        console.log('address', edit_address);
    });
});
```

```html
<ul class="input_fields_wrap">
  <li><a href="aaaaaaaaaaaa">aaaaaaaa</a> <button class="addlink-edit-icon">edit</button></li>
  <li><a href="bbbb">bbbbbbbbbb</a> <button class="addlink-edit-icon">edit</button></li>
  <li><a href="ccc">cccccccccc</a> <button class="addlink-edit-icon">edit</button></li>
</ul>
```<issue_comment>username_1: The function `closest` finds the closest ancestor/parent element using a specific selector rather than a sibling.

***In that markup the link that you want to select is not an ancestor/parent of the buttons.***

**Further, this line must be modified:**

```
var target = $('.show_field').closest('a');
```

**To this:**

```
 +---- This is the current clicked button.
 |
 |               +--- Gets the desired element 'a'
 |               |
 v               v
var target = $(this).closest('li').find('a');
                 ^
                 |
                 +---- Finds the closest ancestor 'li'.
```

An alternative is to find the closest element `li` and then find an element `a`.

```js
$(document).ready(function() {
  $('.input_fields_wrap').on("click", ".addlink-edit-icon", function(e) {
    e.preventDefault();
    var target = $(this).closest('li').find('a');
    var edit_name = target.text();
    var edit_address = target.attr('href');
    console.log('name', edit_name);
    console.log('address', edit_address);
  });
});
```

```html
<ul class="input_fields_wrap">
  <li><a href="aaaaaaaaaaaa">aaaaaaaa</a> <button class="addlink-edit-icon">edit</button></li>
  <li><a href="bbbb">bbbbbbbbbb</a> <button class="addlink-edit-icon">edit</button></li>
  <li><a href="ccc">cccccccccc</a> <button class="addlink-edit-icon">edit</button></li>
</ul>
```

Upvotes: 2 [selected_answer]<issue_comment>username_2: You need to traverse up the DOM to the list item first, with `.closest('li')` and then down to the link using `.find('a')`.
Essentially, just change:

```
var target = $('.show_field').closest('a');
```

to

```
var target = $(this).closest('li').find('a');
```

```js
$(document).ready(function() {
    $('.input_fields_wrap').on("click",".addlink-edit-icon",function (e) {
        e.preventDefault();
        var target = $(this).closest('li').find('a');
        var edit_name = target.text();
        var edit_address = target.attr('href');
        console.log('name', edit_name );
        console.log('address', edit_address);
    });
});
```

```html
<ul class="input_fields_wrap">
  <li><a href="aaaaaaaaaaaa">aaaaaaaa</a> <button class="addlink-edit-icon">edit</button></li>
  <li><a href="bbbb">bbbbbbbbbb</a> <button class="addlink-edit-icon">edit</button></li>
  <li><a href="ccc">cccccccccc</a> <button class="addlink-edit-icon">edit</button></li>
</ul>
```

Upvotes: 2
2018/03/19
1,581
6,684
<issue_start>username_0: this is the code: ``` public class MainActivity extends AppCompatActivity { ListView listView; ProgressBar progressBar; String HTTP_JSON_URL = "http://10.0.2.2/positivity/all_subjects.php"; EditText editText; List SubjectArrayList = new ArrayList(); ArrayAdapter arrayAdapter; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); listView = (ListView) findViewById(R.id.listview1); progressBar = (ProgressBar) findViewById(R.id.progressBar); editText = (EditText) findViewById(R.id.edittext1); // Calling Method to Parese JSON data into listView. new GetHttpResponse(MainActivity.this).execute(); // Calling EditText addTextChangedListener method which controls the EditText type sequence. editText.addTextChangedListener(new TextWatcher() { @Override public void beforeTextChanged(CharSequence charSequence, int i, int i1, int i2) { //Updating Array Adapter ListView after typing inside EditText. MainActivity.this.arrayAdapter.getFilter().filter(charSequence); } @Override public void onTextChanged(CharSequence charSequence, int i, int i1, int i2) { } @Override public void afterTextChanged(Editable editable) { } }); // Adding On item click listener on ListView. listView.setOnItemClickListener(new AdapterView.OnItemClickListener() { @Override public void onItemClick(AdapterView parent, View view, int position, long id) { // TODO Auto-generated method stub String Item = parent.getItemAtPosition(position).toString(); // Showing ListView click item using Toast message on screen. Toast.makeText(MainActivity.this, Item, Toast.LENGTH_LONG).show(); } }); } // Creating GetHttpResponse message to parse JSON. public class GetHttpResponse extends AsyncTask { // Creating context. public Context context; // Creating string to hold Http response result. String ResultHolder; // Creating constructor . 
    public GetHttpResponse(Context context) {
        this.context = context;
    }

    @Override
    protected void onPreExecute() {
        super.onPreExecute();
    }

    @Override
    protected Void doInBackground(Void... arg0) {
        // Sending the Http URL into HttpServicesClass to parse JSON.
        HttpServicesClass httpServiceObject = new HttpServicesClass(HTTP_JSON_URL);
        try {
            httpServiceObject.ExecutePostRequest();
            // If the server response code = 200 then JSON parsing start.
            if (httpServiceObject.getResponseCode() == 200) {
                // Adding Http response into ResultHolder string.
                ResultHolder = httpServiceObject.getResponse();
                // If there is response present into ResultHolder.
                if (ResultHolder != null) {
                    // Creating JSONArray and set it to null.
                    JSONArray jsonArray = null;
                    try {
                        // Adding ResultHolder into JSONArray.
                        jsonArray = new JSONArray(ResultHolder);
                        // Creating JSONObject.
                        JSONObject jsonObject;
                        // Starting for loop at the end of jsonArray length.
                        for (int i = 0; i < jsonArray.length(); i++) {
                            // Adding JSON array object.
                            jsonObject = jsonArray.getJSONObject(i);
                            // Adding the JSON parse object into SubjectArrayList.
                            SubjectArrayList.add(jsonObject.getString("subject_Name").toString());
                        }
                    } catch (JSONException e) {
                        // TODO Auto-generated catch block
                        e.printStackTrace();
                    }
                }
            } else {
                // If something goes wrong then showing the error message on screen.
                Toast.makeText(context, httpServiceObject.getErrorMessage(), Toast.LENGTH_SHORT).show();
            }
        } catch (Exception e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }
        return null;
    }

    // This block will execute after done all background processing.
    @Override
    protected void onPostExecute(Void result) {
        // Hiding the progress bar after done loading JSON.
        progressBar.setVisibility(View.GONE);
        // Showing the ListView after done loading JSON.
        listView.setVisibility(View.VISIBLE);
        // Setting up the SubjectArrayList into Array Adapter.
        arrayAdapter = new ArrayAdapter(MainActivity.this, android.R.layout.simple_list_item_2, android.R.id.text1, SubjectArrayList);
        // Passing the Array Adapter into ListView.
        listView.setAdapter(arrayAdapter);
    }
  }
}
```
2018/03/19
990
3,106
<issue_start>username_0: I have a complex, large pandas dataframe with one column, X that can contain either one list or a list of lists. I'm curious if the solution can apply to any content though, so I give a mock example with one element of X being a string as well: ``` df1 = pd.DataFrame({ 'A': [1, 1, 3], 'B': ['a', 'e', 'f'], 'X': ['something', ['hello'], [['something'],['hello']]]} ) ``` I want to get the subset of that dataframe, df2, for which column X contains the substring "hello", when whatever is in there is read as a string. ``` >>> df2 A B X 0 1 e [hello] 1 3 f [[something], [hello]] ``` I have tried extensive combinations of str() and .str.contains, apply, map, .find(), list comprehensions, and nothing seems to work without getting into loops (related questions [here](https://stackoverflow.com/questions/44126089/pandas-isin-for-list-of-values-in-each-row-of-a-column/44126264#44126264) and [here](https://stackoverflow.com/questions/27300070/use-pandas-string-method-contains-on-a-series-containing-lists-of-strings). What am I missing?<issue_comment>username_1: You can use np.ravel() to flatten nested list and use in operator ``` df1[df1['X'].apply(lambda x: 'hello' in np.ravel(x))] A B X 1 1 e [hello] 2 3 f [[something], [hello]] ``` Upvotes: 3 <issue_comment>username_2: Adding `astype` before `str.contains` ``` df1[df1.X.astype(str).str.contains('hello')] Out[538]: A B X 1 1 e [hello] 2 3 f [[something], [hello]] ``` Upvotes: 4 [selected_answer]<issue_comment>username_3: Borrowing from @wim <https://stackoverflow.com/a/49247980/2336654> The most general solution would be to allow for arbitrarily nested lists. Also, We can focus on the string elements being equal rather than containing. 
``` # This import is for Python 3 # for Python 2 use `from collections import Iterable` from collections.abc import Iterable def flatten(collection): for x in collection: if isinstance(x, Iterable) and not isinstance(x, str): yield from flatten(x) else: yield x df1[df1.X.map(lambda x: any('hello' == s for s in flatten(x)))] A B X 1 1 e [hello] 2 3 f [[something], [hello]] ``` So now if we complicate it ``` df1 = pd.DataFrame({ 'A': [1, 1, 3, 7, 7], 'B': ['a', 'e', 'f', 's', 's'], 'X': [ 'something', ['hello'], [['something'],['hello']], ['hello world'], [[[[[['hello']]]]]] ]} ) df1 A B X 0 1 a something 1 1 e [hello] 2 3 f [[something], [hello]] 3 7 s [hello world] 4 7 s [[[[[['hello']]]]]] ``` Our filter does not grab `hello world` and does grab the very nested `hello` ``` df1[df1.X.map(lambda x: any('hello' == s for s in flatten(x)))] A B X 1 1 e [hello] 2 3 f [[something], [hello]] 4 7 s [[[[[['hello']]]]]] ``` Upvotes: 2
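For a quick check, the accepted `astype` route can be exercised against the question's own frame; a minimal sketch:

```python
import pandas as pd

# The question's frame: X holds a plain string, a list, or a list of lists.
df1 = pd.DataFrame({
    'A': [1, 1, 3],
    'B': ['a', 'e', 'f'],
    'X': ['something', ['hello'], [['something'], ['hello']]],
})

# Render every cell as its string repr, then substring-match on it.
# Note this matches substrings, so a cell like ['hello world'] would also
# pass; use the flatten-and-compare approach above for exact equality.
mask = df1['X'].astype(str).str.contains('hello')
df2 = df1[mask]
print(df2['B'].tolist())  # ['e', 'f']
```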
2018/03/19
1,370
4,040
<issue_start>username_0: I'm trying to perform multiple inserts from one SQL query. To break down what I'm trying to do, here is the DB structure:

```
links:
  - id // uuid_generate_v4()
  - hash

permissions:
  - id // uuid_generate_v4()
  - name

photos:
  - id // uuid_generate_v4()
  - url

link_permissions:
  - link_id
  - permission_id

link_photo:
  - link_id
  - photo_id
```

Now whenever I insert a link I need to also insert its permissions and photos. These are the SQL queries I've attempted so far:

```
WITH link as (
  INSERT INTO links(hash) VALUES ('my-random-hash') RETURNING *
)
INSERT INTO link_photo(link_id, photo_id)
VALUES ((select link.id from link), '095ccacf-ebc1-4991-8ab0-cac13dac02b7'),
INSERT INTO link_permission(link_id, permission_id)
VALUES ((select link.id from link), '506f3302-fe9f-4982-8439-d6781f646d01')
```

and

```
WITH link as (
  INSERT INTO links(hash) VALUES ('my-random-hash') RETURNING *
)
(INSERT INTO link_photo(link_id, photo_id)
VALUES ((select link.id from link), '095ccacf-ebc1-4991-8ab0-cac13dac02b7')),
INSERT INTO link_permission(link_id, permission_id)
VALUES ((select link.id from link), '506f3302-fe9f-4982-8439-d6781f646d01')
```

How would I write this query?<issue_comment>username_1: Postgresql's "with query" only works with one statement. You can do it in "one run" by using an [anonymous code block](https://www.postgresql.org/docs/9.1/static/sql-do.html), but it will probably be a better idea to create a proper plpgsql insert function. Upvotes: 1 <issue_comment>username_2: Wouldn't your first attempt work (I haven't tried myself) if you replaced "RETURNING *" with "RETURNING id" and selected that? See [the answer to this question](https://dba.stackexchange.com/questions/139950/insert-into-multiple-tables-with-one-query) for the full query Upvotes: 0 <issue_comment>username_3: > *Now whenever I insert a link I need to also insert its permissions and photos.*

There are several ways you can accomplish this.
You can use `returning id into link_id` to place the link id into the variable `link_id` and reference that in later statements. Since this all has to happen in concert, you can write a function to handle it. This ensures it always happens together, you don't have to try and cram it all into one statement, and it makes it easier to use variables. You can also add a transaction and error handling should any part fail. ``` create function add_link( photo_id uuid, permission_id uuid) returns uuid as $func$ declare link_id uuid; begin insert into links(id, hash) values (default, 'probably have the function generate this?') returning id into link_id; insert into link_photo(link_id, photo_id) values (link_id, photo_id); insert into link_permission(link_id, permission_id) values (link_id, permission_id); return link_id; end $func$ language plpgsql; ``` ``` test=# select * from add_link(uuid_generate_v4(), uuid_generate_v4()); add_link -------------------------------------- 1c18b24c-c7ac-4169-8449-1c18c9087f8b (1 row) ``` Upvotes: 1 <issue_comment>username_4: Just put the the second insert into another CTE: ``` WITH link as ( INSERT INTO links(hash) VALUES ('my-random-hash') RETURNING * ) , lp as ( INSERT INTO link_photo(link_id, photo_id) VALUES ((select link.id from link), '095ccacf-ebc1-4991-8ab0-cac13dac02b7') ) INSERT INTO link_permission(link_id, permission_id) VALUES ((select link.id from link), '506f3302-fe9f-4982-8439-d6781f646d01'); ``` Alternatively, don't use `values`, that makes the query a bit easier to read (I think) ``` WITH link as ( INSERT INTO links(hash) VALUES ('my-random-hash') RETURNING * ) , lp as ( INSERT INTO link_photo(link_id, photo_id) select id, '095ccacf-ebc1-4991-8ab0-cac13dac02b7' from link ) INSERT INTO link_permission(link_id, permission_id) select id, '506f3302-fe9f-4982-8439-d6781f646d01' from link; ``` Upvotes: 3 [selected_answer]
2018/03/19
703
2,357
<issue_start>username_0: I am currently trying to test how to completely cover my Angular web page with an image background. I've looked into numerous examples online on how to get this to happen. This is what I've come up with in my .html and .css code: ``` ``` --- ``` html, body{ height: 100%; /*font-family: Arial, Helvetica, sans-serif;*/ width: 100%; margin: 0px; padding: 0px; border: 0px; min-height: 375px; } .mainPage { -webkit-background-size: cover; -moz-background-size: cover; -o-background-size: cover; background: url(/assets/images/background.png) no-repeat center center fixed; background-size: cover; } main { height: 100%; width: 100%; } ``` I'm under the impression that this is how you are supposed to define your CSS if you want an image to cover the background of the screen. However, when I run this with npm start and traverse to localhost:4200, I am greeted with the following output: [Background](https://i.stack.imgur.com/EdAZF.jpg) It appears that instead of the image covering the entire web page, it's only covering whatever is defined within it's HTML tag (e.g. the Angular logo image). Is there something wrong with my html or could it be something deeply rooted in my angular project elsewhere? **SOLUTION:** As per codechick's response, I removed the height/width attributes from the html and main CSS tag definitions. Instead, I added: ``` .mainPage { -webkit-background-size: cover; -moz-background-size: cover; -o-background-size: cover; background: url(/assets/Wood.png) no-repeat center center fixed; background-size: cover; height: 100vh; } ```<issue_comment>username_1: Well, if you want to cover the web page with a background image, you will need to set a width as well as a height. Also, you might have some syntax wrong in your CSS. ``` .mainPage { background-image: url("/assets/images/background.png"); background-size: cover; background-repeat: no-repeat; width: 100%; height: 100vh; background-attachment: fixed; } ``` Try this... Hope it works! 
Upvotes: -1 <issue_comment>username_2: Well, first you don't need the `body, html {height:100%}`; then remove the height for `main` as well. Use `height: 100vh` for `.mainPage` or the image container. Check my code here: [codepen](https://codepen.io/anon/pen/NYdNbe) Upvotes: 1 [selected_answer]
2018/03/19
946
2,572
<issue_start>username_0: I have a table `MY_DATES (START_DATE DATE, END_DATE DATE)` with data like:

```
START_DATE END_DATE
---------------------------
18-DEC-17  07-JAN-18
27-JAN-18  06-FEB-18
08-MAR-18  18-MAR-18
```

I need to generate dates for all the date ranges in my table in a single column using SQL, like below:

```
DATES
----------
18-DEC-17
19-DEC-17
20-DEC-17
.
.
.
06-JAN-18
07-JAN-18
27-JAN-18
28-JAN-18
29-JAN-18
.
.
.
05-FEB-18
06-FEB-18
08-MAR-18
09-MAR-18
10-MAR-18
.
.
.
18-MAR-18
```

I am using Oracle 11g. I'd appreciate any help in this regard.<issue_comment>username_1: Try to do this:

```
with calendar as (
    select :startdate + rownum - 1 as day
    from dual
    connect by rownum < :enddate - :startdate
)
select rownum as "S.No",
       to_date(day,'dd_mm_yyyy') as "Cal_Dt",
       to_char(day,'day') as "DayName"
from calendar
```

where you have to replace :startdate and :enddate with meaningful values for your particular case. Upvotes: 1 <issue_comment>username_2: Try this.

```
WITH t (sdt, ldt) AS (SELECT MIN (START_DATE), MAX (END_DATE) FROM MY_DATES)
SELECT *
  FROM ( SELECT sdt + LEVEL - 1 AS dates
           FROM t
         CONNECT BY LEVEL <= ldt - sdt + 1) c
 WHERE EXISTS
          (SELECT 1
             FROM MY_DATES d
            WHERE c.dates BETWEEN START_DATE AND END_DATE);
```

[**Demo**](http://sqlfiddle.com/#!4/106681/3) Upvotes: 3 [selected_answer]<issue_comment>username_3: In this use case, AVG fees are generated and distributed across all the end-of-month dates in a date range.
``` CREATE table #ProductSales (ProjectID Int, ProjectName varchar(100), TotalBillableFees Money, StartDate Date, EndDate Date, DataDate Date) Insert into #ProductSales Values (373104,'Product Sales - Flex Creation Test',40000.00,'2019-04-01','2020-06-01','2019-08-01'), (375111,'Product Sales - SMART',40000.00,'2019-04-01','2019-09-01','2019-08-01') ;WITH Dates AS ( SELECT ProjectiD ,Convert(decimal(10,2),TotalBillableFees/IIF(DATEDIFF(MONTH,StartDate,EndDate)=0,1,DATEDIFF(MONTH,StartDate,EndDate))) AS BillableFeesPerMonths,EndDate ,[Date] = CONVERT(DATETIME,EOMONTH(StartDate)) FROM #ProductSales UNION ALL SELECT ProjectiD,BillableFeesPerMonths,EndDate, [Date] = DATEADD(MONTH, 1, [Date]) FROM Dates WHERE Date < EOMONTH(EndDate) ) SELECT ProjectID,BillableFeesPerMonths, CAST([Date] as Date) Date FROM Dates OPTION (MAXRECURSION 45) ``` Upvotes: 0
2018/03/19
263
992
<issue_start>username_0: I have been through similar topics here and went through documentation, but still have a problem with the following block of code: ```js var distanceToPump = 50; var mpg = 25; var fuelLeft = 2; var zeroFuel = (distanceToPump, mpg, fuelLeft) => { return ( mpg * fuelLeft ) == distanceToPump ? true : false; } console.log(zeroFuel()) ``` The function returns false, while the ternary operator returns true. Where is the mistake? UPD. My question is where does this difference comes from, where is the mistake in the function zeroFuel?<issue_comment>username_1: When you call `zeroFuel` you aren't passing any arguments. The local variables defined as the parameters, therefore, get the value `undefined`. `(undefined * undefined) == undefined` is false. The variables with the same names in the wider scope are never used. Upvotes: 1 <issue_comment>username_2: Just tested in console and `zeroFuel(distanceToPump, mpg, fuelLeft)` returns true. Upvotes: -1
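As the answers explain, the parameters shadow the outer variables, so calling `zeroFuel()` with no arguments makes them `undefined`. The same shadowing behaviour can be sketched in Python (used here only as an illustrative aside, not the question's language); the difference is that Python fails loudly on missing arguments instead of silently comparing undefined values:

```python
distance_to_pump = 50
mpg = 25
fuel_left = 2

def zero_fuel(distance_to_pump, mpg, fuel_left):
    # The parameters shadow the module-level names above; the globals
    # are never read inside this function.
    return mpg * fuel_left == distance_to_pump

print(zero_fuel(distance_to_pump, mpg, fuel_left))  # True: arguments passed explicitly

try:
    zero_fuel()  # unlike JavaScript, missing arguments raise immediately
except TypeError as exc:
    print("TypeError:", exc)
```

The fix in the original JavaScript is the same idea: pass the outer variables as arguments, i.e. `zeroFuel(distanceToPump, mpg, fuelLeft)`.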
2018/03/19
912
2,565
<issue_start>username_0: I am trying to set up stunnel with Redis on Windows. After installing stunnel and Redis, I have the following configuration for server and client mode: Client Machine Configuration: ``` [redis-stunnel] client = yes cert = stunnel.pem accept = 127.0.0.1:6379 connect = 172.30.12.28:6390 verifyChain = yes CAfile = stunnel.pem checkHost = 172.30.12.28:6390 OCSPaia = yes ``` Server Machine Configuration: ``` [redis-stunnel-server] accept = 6380 connect = 6379 cert = stunnel.pem ``` The configuration loads successfully, but when I try to connect from the client machine to port 6379, I get the following error: ``` Service [redis-stunnel] connected remote server from 172.30.12.120:65484 2018.03.19 21:03:41 LOG4[229]: CERT: No matching host name found 2018.03.19 21:03:41 LOG4[229]: Rejected by CERT at depth=0: C=IN, ST=KARNATAKA, L=BANGALORE, O=AHC, OU=healthcare, CN=172.30.12.120 2018.03.19 21:03:41 LOG3[229]: SSL_connect: 14090086: error:14090086:SSL routines:ssl3_get_server_certificate:certificate verify failed 2018.03.19 21:03:41 LOG5[229]: Connection reset: 0 byte(s) sent to TLS, 0 byte(s) sent to socket ``` What is wrong here?<issue_comment>username_1: `checkHost =` checks that the CN in the subject of the peer certificate corresponds to the specified host. According to the log, the CN in the certificate is `172.30.12.120`. So you should change the client config to `checkHost = 172.30.12.120`. This might make it work, although I'm not sure your TLS configuration is very well thought out. It looks like you are not doing client-side authentication, so you can remove `cert` from the client config. Also, if you have the server certificate on the client machine, you could use the "certificate pinning technique": remove the `checkHost` option and replace `verifyChain` with `verifyPeer = yes`. And `stunnel.pem` on the client machine must be the same as `stunnel.pem` on the server. 
[Stunnel documentation contains some simple examples for this.](https://www.stunnel.org/auth.html) Upvotes: 0 <issue_comment>username_2: You should replace `checkHost = 172.30.12.28:6390` with `checkIP = 172.30.12.28`. `checkHost` option checks if the specified DNS host name or wildcard matches any of those provided in peer certificate's SAN or CN fields. To check against an IP address one has to use `checkIP` option. Under the hood **stunnel** uses **OpenSSL**'s `X509_check_host` and `X509_check_ip_asc` functions. Here's their [description](https://www.openssl.org/docs/man1.1.0/man3/X509_check_host.html). Upvotes: 1
2018/03/19
875
2,855
<issue_start>username_0: I'm getting an error when trying to run a simple dashboard using Dash. I'm using Spyder with Python 3.4. I've `pip installed` `dash, dash_core_components, dash_html_compenents`.. My code: ``` import dash import dash_core_components as dcc import dash_html_components as html app = dash.Dash() app.layout = html.Div(children=[ html.H1(children='Hello Dash'), html.Div(children=''' Dash: A web application framework for Python. '''), dcc.Graph( id='example-graph', figure={ 'data': [ {'x': [1, 2, 3], 'y': [4, 1, 2], 'type': 'bar', 'name': 'SF'}, {'x': [1, 2, 3], 'y': [2, 4, 5], 'type': 'bar', 'name': u'Montréal'}, ], 'layout': { 'title': 'Dash Data Visualization' } } ) ]) if __name__ == '__main__': app.run_server(debug=True) ``` This was taken right from [Dash/Plotly website tutorial](https://dash.plot.ly/getting-started) I get the following error: ``` * Running on http://127.0.0.1:8050/ (Press CTRL+C to quit) * Restarting with stat C:\Users\mwolfe\AppData\Local\Continuum\anaconda3\envs\py34\lib\site-packages\IPython\core\interactiveshell.py:2889: UserWarning: To exit: use 'exit', 'quit', or Ctrl-D. An exception has occurred, use %tb to see the full traceback. SystemExit: 1 ``` When I go to `http://127.0.0.1:8050/` to try to view the example dashboard It won't load. I've tried [this](https://stackoverflow.com/questions/10888045/simple-ipython-example-raises-exception-on-sys-exit/20292104#20292104) in order to fix the issue, but haven't been able to get that to work.<issue_comment>username_1: Setting `debug=False` does solve the problem, but not with Jupyter notebook and Spyder. The code goes into limbo when executed with notebook/spyder. **Change your code to `debug=False` and execute it in PyQt console** that comes with Anaconda Navigator. It works. Upvotes: 2 <issue_comment>username_2: Update on username_1's answer: now setting debug=False works inside Jupyter Notebook as well. I tested Matt's code in there and the URL works fine. 
However, I think the problem is mainly with the use\_reloader. You have to set that to False in Jupyter Notebook (according to Plotly documentation). So, this also works in notebook for me: ``` import dash import dash_core_components as dcc import dash_html_components as html from dash.dependencies import Input, Output app = dash.Dash() app.layout = html.Div( html.H1(children="Hello000") ) if __name__ == '__main__': app.run_server(debug=True, use_reloader=False) # <---- Here ``` Also, make sure any previous app you have initiated is stopped already (ctrl + c or simply use the square button in notebook (or press 'i' twice on your keyboard) for stopping the execution). Upvotes: 3
2018/03/19
1,480
5,824
<issue_start>username_0: I'm trying to write a macro, which writes accessors for a class automatically for all slots. I came up with this ``` (defmacro defacc (class) (loop for name in (mapcar #'slot-definition-name (class-slots (class-of (make-instance `,class)))) do `(defun ,name (,class) (slot-value ,class ',name)))) ``` But it does not define any function. Without the loop, it works for a single function and single slot name. I don't see where the problem is. I also tried `(class-of (make-instance 'class))` and `(class-of ',class)`. Still no function appears. P.S. I did not forget to actually write this macro with my classes.<issue_comment>username_1: Assuming you have the class definitions you want to add automatic slot-readers for then the easy way to do this is by using a customised `defclass` macro, for instance this: ``` (defmacro defclass/auto-reader (class supers slots &rest class-options) ;; This is exactly like DEFCLASS, except it will automagically add a ;; reader method for each slot, named after the slot. `(defclass ,class ,supers ,(mapcar #'(lambda (slot-definition) ;; Rewrite the slot definitions as needed (etypecase slot-definition ;; A slot definition can either be ... (symbol ;; ... a symbol naming the slot, in which case we ;; need to create a listy slot definition with a ;; reader specified ... `(,slot-definition :reader ,slot-definition)) (list ;; ... or a list, whose first element is the name ;; of the slot and whose remaining elements ;; define various options. In this case we just ;; append the specification of the reader method ;; we need to the list (this might be slightly ;; fragile: I don't know what happens if you end ;; up with something like ;; (x ... :reader x ... :reader x) ;; but this is not likely a problem in practice). 
(append slot-definition `(:reader ,(first slot-definition)))))) slots) ,@class-options)) ``` This will automatically define readers, and will also make those readers work robustly (for instance they will work if you have two classes which define the same slot name). --- Note that it's not clear to me that a separate `defacc` macro can work. In particular consider trying to compile a file containing something like this: ``` (defclass foo (...) (s1 s2 ...)) ... (defacc foo ...) ``` Here `defacc` needs to be able to find out `foo`'s slots *at compile time*, and I am fairly sure that it can't reliably do this: the [spec](http://www.lispworks.com/documentation/HyperSpec/Body/m_defcla.htm) tells you that `defclass` needs to make some information about the class available at compile-time (for instance that it is a class) but I am not at all sure it needs to make things like slot definitions available. However, the MOP is outside the specification in any case and it may be that MOPpy implementations (all of them I guess) do make this information available. Upvotes: 3 [selected_answer]<issue_comment>username_2: `Defclass` already has the slot options to define accessors, readers, and writers: ``` (defclass foo () ((bar :accessor foo-bar))) ``` This defines, along with the class, a method `foo-bar` specialized on `foo` that returns the value of the `bar` slot, as well as a method `setf foo-bar` to write to that slot. There is also a `:reader` and a `:writer` option to define them separately. A lot of people have written their own wrappers around `defclass` to make this even more convenient (e. g. automatically using a naming convention). However, the drawback is that the locally used wrapper is most likely not familiar to others, and the added cognitive load is often not seen as worth the minor convenience improvement. Upvotes: 2 <issue_comment>username_3: **defining individual functions** Let's say you have two forms defining functions `foo` and `bar`. 
``` CL-USER 68 > (defun foo (baz) baz) FOO CL-USER 69 > (defun bar (baz) baz) BAR ``` **defining more functions in one Lisp form** Now you want to use these two forms in one Lisp form. They typical way is to use an operator, which provides a body of forms. `progn` is such an operator. Its subforms will be executed one by one and the last results are returned. ``` CL-USER 70 > (progn (defun foo (baz) baz) (defun bar (baz) baz)) BAR ``` **generating a PROGN form** Now imagine that you have Lisp code as data, here as a list of Lisp forms. ``` CL-USER 71 > '((defun foo (baz) baz) (defun bar (baz) baz)) ((DEFUN FOO (BAZ) BAZ) (DEFUN BAR (BAZ) BAZ)) ``` To create a valid `progn` form, you just need to cons `progn` to the front of the list: ``` CL-USER 72 > (cons 'progn '((defun foo (baz) baz) (defun bar (baz) baz))) (PROGN (DEFUN FOO (BAZ) BAZ) (DEFUN BAR (BAZ) BAZ)) ``` **a macro generating a PROGN form** That's what the macro needs to generate: a valid `progn` form. Example for a macro generating a `progn` form: ``` CL-USER 74 > (defmacro baz (sym) (cons 'progn `((defun foo (,sym) ,sym) (defun bar (,sym) ,sym)))) BAZ CL-USER 75 > (pprint (macroexpand-1 '(baz fourtytwo))) (PROGN (DEFUN FOO (FOURTYTWO) FOURTYTWO) (DEFUN BAR (FOURTYTWO) FOURTYTWO)) ``` Upvotes: 3
2018/03/19
418
1,412
<issue_start>username_0: I'm on a Mac (High Sierra 10.13.3). I switched from NPM to yarn a few weeks back. Recently, yarn has been giving me issues when attempting to global install various tools and libraries (webpack, etc) saying that "an incompatible module "node" was found. Expected version 6.11.0". I've tried using NVM, but I only get back a response that says "This is not the package you are looking for". Now, my NPM commands return the following error: "Cannot find module './lib/utils/unsupported.js' ... " I've also already tried simply reinstalling NPM, Node, and yarn.<issue_comment>username_1: Updating your node.js to the newest version could resolve your issue. And if not try running this: ``` yarn add prettier --ignore-engines ``` Upvotes: 0 <issue_comment>username_2: Here are few important links that would help you: Uninstall yarn: [How Do I Uninstall Yarn](https://stackoverflow.com/questions/42334978/how-do-i-uninstall-yarn) Uninstall `node` and `npm`: [How do I completely uninstall Node.js, and reinstall from beginning (Mac OS X)](https://stackoverflow.com/questions/11177954/how-do-i-completely-uninstall-node-js-and-reinstall-from-beginning-mac-os-x) I would recommend you to use `node` and `npm` with the help of `NVM`. you can refer link below for same: <https://www.liquidweb.com/kb/how-to-install-node-js-via-nvm-node-version-manager-on-ubuntu-14-04-lts/> Upvotes: 2
2018/03/19
295
940
<issue_start>username_0: What are the advantages of saving file in .pkl format over .txt or .csv format in Python?
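As a hedged illustration of the trade-off the question asks about: pickle is a binary format that round-trips native Python objects, preserving types and nested structures exactly, whereas CSV/text formats store strings only, so type information has to be reconstructed on read. A minimal stdlib-only comparison:

```python
import csv
import io
import pickle

record = {"id": 7, "score": 3.14, "tags": ("a", "b")}

# pickle: binary, preserves Python types and nested structures exactly.
restored = pickle.loads(pickle.dumps(record))
print(restored == record, type(restored["tags"]))  # True <class 'tuple'>

# CSV: text only; every value comes back as a string.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=record)
writer.writeheader()
writer.writerow(record)
buf.seek(0)
row = next(csv.DictReader(buf))
print(row)  # {'id': '7', 'score': '3.14', 'tags': "('a', 'b')"}
```

The flip side is that .txt/.csv are human-readable and portable across languages, while pickle is Python-specific and should never be loaded from untrusted sources.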
2018/03/19
1,508
5,749
<issue_start>username_0: I am trying to build a small app in C# to retrieve suggested meeting times from the Microsoft Graph API. After authenticating, I call `graphClient.HttpProvider.SendAsync(t);` to hopefully get suggested meeting times. However, stepping through with breakpoints everything seems to go fine until that call and then the FindMeetingTimes request content is empty/null. Calling with: `eventsService.RunAsync();` ``` internal async Task RunAsync() { try { // Create request object var findMeetingTimeRequest = new FindMeetingTimeRequestModel { Attendees = new List { new AttendeeBase { EmailAddress = new EmailAddress {Address = "<EMAIL>" }, Type = AttendeeType.Required } }, LocationConstraint = new LocationConstraint { IsRequired = true, SuggestLocation = false, Locations = new List { new LocationItemModel{ DisplayName = "A116", Address = null, Coordinates = null } } }, TimeConstraint = new TimeConstraintModel { TimeSlots = new List { new TimeSlotModel { Start = new DateTimeValueModel { Date = "2018-03-23", Time = "08:00:00", TimeZone = "Central Standard Time" }, End = new DateTimeValueModel { Date = "2018-03-23", Time = "09:00:00", TimeZone = "Central Standard Time" } } } }, MeetingDuration = new Duration("PT1H"), MaxCandidates = 99, IsOrganizerOptional = false, ReturnSuggestionHints = false }; GraphServiceClient graphClient = SDKHelper.GetAuthenticatedClient(); var t = graphClient.Me.FindMeetingTimes(findMeetingTimeRequest.Attendees, findMeetingTimeRequest.LocationConstraint, findMeetingTimeRequest.TimeConstraint, findMeetingTimeRequest.MeetingDuration, findMeetingTimeRequest.MaxCandidates, findMeetingTimeRequest.IsOrganizerOptional).Request().GetHttpRequestMessage(); await graphClient.AuthenticationProvider.AuthenticateRequestAsync(t); var response = await graphClient.HttpProvider.SendAsync(t); var jsonString = await response.Content.ReadAsStringAsync(); Console.WriteLine(jsonString); return; }catch(Exception ex) { Console.WriteLine(ex.Message); 
return; } } ``` I'm sort of at a loss as to what to try next. I have looked through examples and there's only a handful out there so far that use GraphServiceClient/SDKHelper to authenticate. Could this be part of the problem? I get both exceptions during `await graphClient.HttpProvider.SendAsync(t);`: ``` Exception thrown: 'Microsoft.Graph.ServiceException' in Microsoft.Graph.Core.dll Exception thrown: 'System.NullReferenceException' in System.Web.dll ``` --- Update: Using both the reference in Michael's comment below and the original code with an empty argument list for FindMeetingTimes(), I'm getting a credentials exception: `"Code: ErrorAccessDenied\r\nMessage: Access is denied. Check credentials and try again.\r\n\r\nInner error\r\n"` Calling with `await eventsService.EventFindMeetingsTimes(graphClient);` ``` public async System.Threading.Tasks.Task EventFindMeetingsTimes(GraphServiceClient graphClient) { try { User me = await graphClient.Me.Request().GetAsync(); // Get the first three users in the org as attendees unless user is the organizer. var orgUsers = await graphClient.Users.Request().GetAsync(); List attendees = new List(); Attendee attendee = new Attendee(); attendee.EmailAddress = new EmailAddress(); attendee.EmailAddress.Address = "<EMAIL>"; attendees.Add(attendee); // Create a duration with an ISO8601 duration. Duration durationFromISO8601 = new Duration("PT1H"); MeetingTimeSuggestionsResult resultsFromISO8601 = await graphClient.Me.FindMeetingTimes(attendees, null, null, durationFromISO8601, 2, true, false, 10.0).Request().PostAsync(); List suggestionsFromISO8601 = new List(resultsFromISO8601.MeetingTimeSuggestions); } catch (Exception e) { Console.WriteLine("Something happened, check out a trace. Error code: {0}", e.Message); } } ``` The account I'm using to log in works when I test with the GraphExplorer. Is it possible the credentials/token aren't being passed down through the web form and into the graph client? 
--- SOLUTION: [Graph Docs example](https://github.com/microsoftgraph/msgraph-sdk-dotnet/blob/dev/tests/Microsoft.Graph.Test/Requests/Functional/EventTests.cs#L88) <-- Provided by Michael helped set up the formatting correctly. [Find meeting times problem #559](https://github.com/microsoftgraph/microsoft-graph-docs/issues/559) <-- Tipped off by Marc that the permissions in the end needed updated and eventually solved my updated problem.<issue_comment>username_1: Without more detail, it is difficult to determine what is going wrong here. That said, you should start with simplifying this code. That will at least reduce the number of moving parts: ``` var result = await graphClient.Me.FindMeetingTimes() .Request() .PostAsync(); if (!string.IsNullOrWhiteSpace(result.EmptySuggestionsReason)) { Console.WriteLine(result.EmptySuggestionsReason); } else { foreach (var item in result.MeetingTimeSuggestions) { Console.WriteLine($"Suggestion: {item.SuggestionReason}"); } } ``` If this fails, be sure to capture the entire exception and update your question. Upvotes: 1 <issue_comment>username_2: You forgot to set the HttpMethod before `SendAsync(t)`. It is using GET instead of POST. ``` t.Method = System.Net.Http.HttpMethod.Post; ``` With that said, I agree with Mark. Use the built in functionality of the client library: <https://github.com/microsoftgraph/msgraph-sdk-dotnet/blob/dev/tests/Microsoft.Graph.Test/Requests/Functional/EventTests.cs#L88> Upvotes: 3 [selected_answer]
2018/03/19
333
1,290
<issue_start>username_0: I want to display a Tableau dashboard in a web form in a visual studio project that I have hosted on AWS. I have used C# to code. I don't even know where to get started with this!
2018/03/19
385
1,447
<issue_start>username_0: My Android Studio 3.0.1 is not showing attributes of any widgets (e.g. TextView, EditText, Button, etc.). I have seen the post here [same problem](https://stackoverflow.com/questions/41637499/android-studio-not-showing-properties) but File -> Invalidate Caches / Restart didn't solve my problem. Can someone please tell me what the solution is?
2018/03/19
1,757
4,934
<issue_start>username_0: I have several character vectors of genes containing names of the species in which they're found, and I made an UpSetR plot to show the number of species in common across genes. Now I'd like to do the opposite: plotting the number of genes in common across species, yet I don't know how to do it. Example of what I have: ``` gene1 <- c("Panda", "Dog", "Chicken") gene2 <- c("Human", "Panda", "Dog") gene3 <- c("Human", "Panda", "Chicken") ...#About 20+ genes with 100+ species each ``` Example of what I would like to have as a result: ``` Panda <- c("gene1", "gene2", "gene3") Dog <- c("gene1", "gene2") Human <- c("gene2", "gene3") Chicken <- c("gene1", "gene3") ... ``` I know it is conceptually easy, yet logistically more complicated. Can anyone give me a clue? Thank you!<issue_comment>username_1: You can try this. ``` gene <-unique(c(gene1,gene2,gene3)) TF <-data.frame(Species = gene) TF$gene1 <- gene%in%gene1 TF$gene2 <- gene%in%gene2 TF$gene3 <- gene%in%gene3 > TF Species gene1 gene2 gene3 1 Panda TRUE TRUE TRUE 2 Dog TRUE TRUE FALSE 3 Chicken TRUE FALSE TRUE 4 Human FALSE TRUE TRUE ``` Upvotes: 1 <issue_comment>username_2: First of all I think for most purposes it's better to store `gene` vectors in a list, as in ``` genes <- list(gene1 = gene1, gene2 = gene2, gene3 = gene3) ``` Then one base R approach would be ``` genes.v <- unlist(genes) names(genes.v) <- rep(names(genes), times = lengths(genes)) species <- lapply(unique(genes.v), function(g) names(genes.v)[g == genes.v]) names(species) <- unique(genes.v) species # $Panda # [1] "gene1" "gene2" "gene3" # # $Dog # [1] "gene1" "gene2" # # $Chicken # [1] "gene1" "gene3" # # $Human # [1] "gene2" "gene3" ``` `genes.v` is a named vector of all the species with the genes being their names. However, when two species have the same, e.g., `gene1`, then those names are `gene11` and `gene12`. That's what I fix in the second line. 
Then in the third line I go over all the species and create the resulting list, except that in the fourth line I add species names. Upvotes: 2 <issue_comment>username_3: You can use `unstack` from base R: ``` unstack(stack(mget(ls(pattern="gene"))),ind~values) $Chicken [1] "gene1" "gene3" $Dog [1] "gene1" "gene2" $Human [1] "gene2" "gene3" $Panda [1] "gene1" "gene2" "gene3" ``` You can end up listing this to the environment by `list2env` function Breakdown: ``` l = mget(ls(pattern="gene"))#get all the genes in a list m = unstack(stack(l),ind~values)# Stack them, then unstack with the required formula m $Chicken [1] "gene1" "gene3" $Dog [1] "gene1" "gene2" $Human [1] "gene2" "gene3" $Panda [1] "gene1" "gene2" "gene3" list2env(m,.GlobalEnv) Dog [1] "gene1" "gene2" ``` Upvotes: 4 [selected_answer]<issue_comment>username_4: Put the data in a list, to begin with. That makes it easier to work with. ``` genes <- list( gene1 = c("Panda", "Dog", "Chicken"), gene2 = c("Human", "Panda", "Dog"), gene3 = c("Human", "Panda", "Chicken") ) ``` Then we can get the species names from there. ``` species <- unique(unlist(genes)) ``` With this data ``` > species [1] "Panda" "Dog" "Chicken" "Human" ``` For each of these, we want to check if the name is contained in a gene. That is a job for `Map` (or its cousin `lapply`, but I like `Map`): ``` get_genes_for_species <- function(s) { contained <- unlist(Map(function(gene) s %in% gene, genes)) names(genes)[contained] } genes_per_species <- Map(get_genes_for_species, species) ``` Now you have a list of lists, one list per species, containing the genes found in that species. ``` > genes_per_species $Panda [1] "gene1" "gene2" "gene3" $Dog [1] "gene1" "gene2" $Chicken [1] "gene1" "gene3" $Human [1] "gene2" "gene3" ``` Upvotes: 2 <issue_comment>username_5: Here's a variation that embraces the tidyverse and puts the result in a neat dataframe. The trick is to concatenate results with `str_c` and `summarise`. 
``` tibble(gene1 = gene1, gene2 = gene2, gene3 = gene3) %>% gather(gene_name, gene_type) %>% group_by(gene_type) %>% summarise(genes = str_c(gene_name, collapse = ", ")) # A tibble: 4 x 2 gene_type genes 1 Chicken gene1, gene3 2 Dog gene1, gene2 3 Human gene2, gene3 4 Panda gene1, gene2, gene3 ``` I agree with Julius (above) that best way to store gene vectors is with a list. A named list would be even better, as: ``` my_gene_list <- set_names(list(gene1, gene2, gene3), str_c("gene", 1:3) ) ``` This would neatly produce the same result... ``` my_gene_list %>% as_tibble() %>% gather(gene_name, gene_type) %>% group_by(gene_type) %>% summarise(genes = str_c(gene_name, collapse = ", ")) # A tibble: 4 x 2 gene_type genes 1 Chicken gene1, gene3 2 Dog gene1, gene2 3 Human gene2, gene3 4 Panda gene1, gene2, gene3 ``` Upvotes: 1
2018/03/19
447
1,398
<issue_start>username_0: I am trying to display content depending on page id, however when I add nothing to the url, so just index.php I get an error saying that `$p` is not defined, how can I give this var a default value that's outside the switch case? ``` php $p = $_GET['p']; switch ($p) { case 1: $content1 = new login(); $content = $content1->displayLogin(); break; case 2: echo "ID is 2"; break; case 3: echo "ID is 3"; break; default: $content1 = new dbconnection(); $content = $content1->displayTable(); } ```<issue_comment>username_1: Replace: ``` $p = $_GET['p']; ``` With: ``` $p = !empty($_GET['p']) ? $_GET['p'] : default_id_value_here; ``` Upvotes: 2 <issue_comment>username_2: I understand that you want to make `$p` have a default value if `$_GET['p']` is not defined. You can do it like this: ``` $p = isset($_GET['p']) ? $_GET['p'] : 'defaultValue'; ``` or, if you're on PHP 7: ``` $p = $_GET['p'] ?? 'defaultValue'; ``` Upvotes: 3 [selected_answer]<issue_comment>username_3: You can do the following: ``` $p = $_GET['p'] ?? $defaultValue; // e.g. 1 or 3, etc. ``` Read more about it at this question: [PHP syntax question: What does the question mark and colon mean?](https://stackoverflow.com/questions/1276909/php-syntax-question-what-does-the-question-mark-and-colon-mean) Upvotes: 1
2018/03/19
2,507
7,048
<issue_start>username_0: I have a set of paired data, and I'm using ggplot2.boxplot (of the easyGgplot2 package) with added (jittered) individual data points: ``` ggplot2.boxplot(data=INdata,xName='condition',yName='vicarious_pain',groupName='condition',showLegend=FALSE, position="dodge", addDot=TRUE,dotSize=3,dotPosition=c("jitter", "jitter"),jitter=0.2, ylim=c(0,100), backgroundColor="white",xtitle="",ytitle="Pain intenstity",mainTitle="Pain intensity", brewerPalette="Paired") ``` INdata: ``` ID,condition,pain 1,Treatment,4.5 3,Treatment,12.5 4,Treatment,16 5,Treatment,61.75 6,Treatment,23.25 7,Treatment,5.75 8,Treatment,5.75 9,Treatment,5.75 10,Treatment,44.5 11,Treatment,7.25 12,Treatment,40.75 13,Treatment,17.25 14,Treatment,2.75 15,Treatment,15.5 16,Treatment,15 17,Treatment,25.75 18,Treatment,17 19,Treatment,26.5 20,Treatment,27 21,Treatment,37.75 22,Treatment,26.5 23,Treatment,15.5 25,Treatment,1.25 26,Treatment,5.75 27,Treatment,25 29,Treatment,7.5 1,No Treatment,34.5 3,No Treatment,46.5 4,No Treatment,34.5 5,No Treatment,34 6,No Treatment,65 7,No Treatment,35.5 8,No Treatment,48.5 9,No Treatment,35.5 10,No Treatment,54.5 11,No Treatment,7 12,No Treatment,39.5 13,No Treatment,23 14,No Treatment,11 15,No Treatment,34 16,No Treatment,15 17,No Treatment,43.5 18,No Treatment,39.5 19,No Treatment,73.5 20,No Treatment,28 21,No Treatment,12 22,No Treatment,30.5 23,No Treatment,33.5 25,No Treatment,20.5 26,No Treatment,14 27,No Treatment,49.5 29,No Treatment,7 ``` The resulting plot looks like this: [![enter image description here](https://i.stack.imgur.com/Bldwy.png)](https://i.stack.imgur.com/Bldwy.png) However, since this is paired data, I want to represent this in the plot - specifically to add lines between paired datapoints. I've tried adding ``` ... + geom_line(aes(group = ID)) ``` ..but I am not able to implement this into the ggplot2.boxplot code. 
Instead, I get this error: > > Error in if (addMean) p <- p + stat\_summary(fun.y = mean, geom = "point", : > argument is not interpretable as logical > In addition: Warning message: > In if (addMean) p <- p + stat\_summary(fun.y = mean, geom = "point", : > the condition has length > 1 and only the first element will be used > > > Grateful for any input on this!<issue_comment>username_1: I do not know the package from which `ggplot2.boxplot` comes from but I will show you how perform the requested operation in `ggplot`. The requested output is a bit problematic for `ggplot` since you want both points and lines connecting them to be jittered by the same amount. One way to perform that is to jitter the points prior making the plot. But the `x` axis is discrete, here is a workaround: ``` b <- runif(nrow(df), -0.1, 0.1) ggplot(df) + geom_boxplot(aes(x = as.numeric(condition), y = pain, group = condition))+ geom_point(aes(x = as.numeric(condition) + b, y = pain)) + geom_line(aes(x = as.numeric(condition) + b, y = pain, group = ID)) + scale_x_continuous(breaks = c(1,2), labels = c("No Treatment", "Treatment"))+ xlab("condition") ``` [![enter image description here](https://i.stack.imgur.com/EUdNt.png)](https://i.stack.imgur.com/EUdNt.png) First I have made a vector to jitter by called `b`, and converted the `x` axis to numeric so I could add `b` to the `x` axis coordinates. Latter I relabeled the x axis. 
I do agree with eipi10's comment that the plot works better without jitter: ``` ggplot(df, aes(condition, pain)) + geom_boxplot(width=0.3, size=1.5, fatten=1.5, colour="grey70") + geom_point(colour="red", size=2, alpha=0.5) + geom_line(aes(group=ID), colour="red", linetype="11") + theme_classic() ``` [![enter image description here](https://i.stack.imgur.com/JZBKt.png)](https://i.stack.imgur.com/JZBKt.png) and the updated plot with jittered points eipi10 style: ``` ggplot(df) + geom_boxplot(aes(x = as.numeric(condition), y = pain, group = condition), width=0.3, size=1.5, fatten=1.5, colour="grey70")+ geom_point(aes(x = as.numeric(condition) + b, y = pain), colour="red", size=2, alpha=0.5) + geom_line(aes(x = as.numeric(condition) + b, y = pain, group = ID), colour="red", linetype="11") + scale_x_continuous(breaks = c(1,2), labels = c("No Treatment", "Treatment"), expand = c(0.2,0.2))+ xlab("condition") + theme_classic() ``` [![enter image description here](https://i.stack.imgur.com/YZLoO.png)](https://i.stack.imgur.com/YZLoO.png) Upvotes: 5 [selected_answer]<issue_comment>username_2: Although I like the oldschool way of plotting with ggplot as shown by @username_1's answer, I wanted to check whether using your ggplot2.boxplot-based code this was also possible. I loaded your data: ``` 'data.frame': 52 obs. of 3 variables: $ ID : int 1 3 4 5 6 7 8 9 10 11 ... $ condition: Factor w/ 2 levels "No Treatment",..: 2 2 2 2 2 2 2 2 2 2 ... $ pain : num 4.5 12.5 16 61.8 23.2 ... ``` And called your code, adding geom\_line at the end as you suggested your self: ``` ggplot2.boxplot(data = INdata,xName = 'condition', yName = 'pain', groupName = 'condition',showLegend = FALSE, position = "dodge", addDot = TRUE, dotSize = 3, dotPosition = c("jitter", "jitter"), jitter = 0, ylim = c(0,100), backgroundColor = "white",xtitle = "",ytitle = "Pain intenstity", mainTitle = "Pain intensity", brewerPalette = "Paired") + geom_line(aes(group = ID)) ``` Note that I set jitter to 0. 
The resulting graph looks like this: ![R plot](https://raw.githubusercontent.com/arjanboltjes/pics/master/Rplot.jpeg) If you don't set jitter to 0, the lines still run from the middle of each boxplot, ignoring the horizontal location of the dots. Not sure why your call gives an error. I thought it might be a factor issue, but I see that my ID variable is not factor class. Upvotes: 1 <issue_comment>username_3: I implemented username_1's jitter solution into the ggplot2.boxplot approach in order to align the dots and lines. Instead of using "addDot", I had to add dots using geom\_point (and lines using geom\_line) afterwards, so I could apply the same jitter vector to both dots and lines. ``` b <- runif(nrow(df), -0.2, 0.2) ggplot2.boxplot(data=df,xName='condition',yName='pain',groupName='condition',showLegend=FALSE, ylim=c(0,100), backgroundColor="white",xtitle="",ytitle="Pain intenstity",mainTitle="Pain intensity", brewerPalette="Paired") + geom_point(aes(x=as.numeric(condition) + b, y=pain),colour="black",size=3, alpha=0.7) + geom_line(aes(x=as.numeric(condition) + b, y=pain, group=ID), colour="grey30", linetype="11", alpha=0.7) ``` [![enter image description here](https://i.stack.imgur.com/dJMGN.png)](https://i.stack.imgur.com/dJMGN.png) Upvotes: 1
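The trick both jittered answers above rely on — drawing one random offset per row and reusing it for every layer — is independent of any particular plotting library. A minimal sketch of the idea in Python (plain lists, no plotting; the function name is illustrative):

```python
import random

def shared_jitter_positions(x_positions, width=0.1, seed=0):
    """Return jittered x values computed once, so every layer that
    reuses them (points, connecting lines) stays aligned."""
    rng = random.Random(seed)  # fixed seed: the same offsets on every call
    offsets = [rng.uniform(-width, width) for _ in x_positions]
    return [x + off for x, off in zip(x_positions, offsets)]

# Two conditions coded 1 and 2, one row per subject per condition.
x = [1, 1, 2, 2]
jittered = shared_jitter_positions(x)

# Each jittered value stays within `width` of its category centre, and a
# second call reproduces it exactly — which is what keeps the dots and
# the line endpoints on top of each other.
assert all(abs(j - orig) <= 0.1 for j, orig in zip(jittered, x))
assert jittered == shared_jitter_positions(x)
```

This mirrors what `b <- runif(nrow(df), -0.1, 0.1)` does in the R answers: the jitter is materialized once, then added to the `x` aesthetic of both `geom_point` and `geom_line`.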
2018/03/19
827
1,942
<issue_start>username_0: I have a dataframe like this: ``` import pandas as pd data = [{'id': 'Jones', 'tf': [(0, 0.5), (1,2.0)]}, {'id': 'Alpha', 'tf': [(1,2.0)]}, {'id': 'Blue', 'tf': [(2,0.1),(1,0.2)]}] df = pd.DataFrame(data) ``` I want to have a dataframe in this form: ``` 'id', 'var', 'value' Jones, 0, 0.5 Jones, 1, 2.0 Alpha, 1, 2.0 Blue, 2, 0.1 Blue, 1, 0.2 ``` I can do it in two steps: i) unnest to a form with id, 0, 1, 2 as columns ``` id ,0 ,1 ,2 Jones,0.5,2.0,NaN Alpha,NaN,2.0,NaN Blue ,NaN,0.2,0.1 ``` ii) melt with id. But there is a problem with step i). My dataset is rather sparse, so **unnesting takes a lot of memory** for NaNs. I'm looking for a pandastic solution that avoids unnesting and is memory efficient.<issue_comment>username_1: Should be fast ``` s=df.tf.str.len() t=pd.DataFrame({'id':df.id.repeat(s),'V':df.tf.sum()}) t[['var','value']]=pd.DataFrame(t.V.tolist()).values t Out[550]: V id var value 0 (0, 0.5) Jones 0.0 0.5 0 (1, 2.0) Jones 1.0 2.0 1 (1, 2.0) Alpha 1.0 2.0 2 (2, 0.1) Blue 2.0 0.1 2 (1, 0.2) Blue 1.0 0.2 ``` Upvotes: 2 [selected_answer]<issue_comment>username_2: This is the loopy way. It will not be fast, but requires minimal memory. I use `.iat` for fast integer-based lookup, so care is required if you have other columns in your dataframe. ``` import pandas as pd data = [{'id': 'Jones', 'tf': [(0, 0.5), (1,2.0)]}, {'id': 'Alpha', 'tf': [(1,2.0)]}, {'id': 'Blue', 'tf': [(2,0.1),(1,0.2)]}] df = pd.DataFrame(data) df = df.join(pd.DataFrame(columns=[0, 1, 2])) for idx, lst in enumerate(df['tf']): for tup in lst: df.iat[idx, tup[0]+2] = tup[1] df = df.drop('tf', 1).melt('id').dropna(subset=['value']) # id variable value # 0 Jones 0 0.5 # 3 Jones 1 2 # 4 Alpha 1 2 # 5 Blue 1 0.2 # 8 Blue 2 0.1 ``` Upvotes: 0
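The point of both answers is that the long format can be built directly from the nested records, so the NaN-padded wide table is never materialized. Stripped of pandas, the same reshaping is a plain nested comprehension (a sketch for illustration, not a replacement for the accepted answer):

```python
data = [{'id': 'Jones', 'tf': [(0, 0.5), (1, 2.0)]},
        {'id': 'Alpha', 'tf': [(1, 2.0)]},
        {'id': 'Blue',  'tf': [(2, 0.1), (1, 0.2)]}]

# One output row per (id, var, value) triple — sparse by construction,
# so memory scales with the number of stored values, not id x var.
long_rows = [(rec['id'], var, value)
             for rec in data
             for var, value in rec['tf']]

assert long_rows == [('Jones', 0, 0.5), ('Jones', 1, 2.0),
                     ('Alpha', 1, 2.0),
                     ('Blue', 2, 0.1), ('Blue', 1, 0.2)]
```

Wrapping `long_rows` in `pd.DataFrame(long_rows, columns=['id', 'var', 'value'])` would then give the requested frame without the wide intermediate.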
2018/03/19
564
1,597
<issue_start>username_0: I have a file like this: ``` A_City,QQQQ B_State,QQQQ C_Country,QQQQ A_Cityt,YYYY B_State,YYYY C_Country,YYYY ``` I want to add one more column at the end of each line in the same file, containing the first letter of each column. ``` A_City,QQQQ,AQ B_State,QQQQ,BQ C_Country,QQQQ,CQ A_Cityt,YYYY,AY B_State,YYYY,BY C_Country,YYYY,CY ``` I would like to do this using sed, but an awk solution would also help.<issue_comment>username_1: `awk` to the rescue! ``` $ awk '{print $0 "," substr($0,1,1) substr($0,length($0))}' file A_City,QQQQ,AQ B_State,QQQQ,BQ C_Country,QQQQ,CQ A_Cityt,YYYY,AY B_State,YYYY,BY C_Country,YYYY,CY ``` or, perhaps ``` $ awk -F, '{print $0 FS substr($1,1,1) substr($2,1,1)}' file ``` Upvotes: 2 [selected_answer]<issue_comment>username_2: When you have only one `,` you can use ``` sed -r 's/^(.).*,(.).*/&,\1\2/' file ``` Upvotes: 0 <issue_comment>username_3: This might work for you (GNU sed): ``` sed -r 's/^|,+/&\n/g;s/$/,\n/;:a;s/\n(.).*,\n.*/&\1/;s/\n//;/\n.*,\n/ba;s/\n//g' file ``` Insert a newline at the start of a line or following one or more `,`'s. Append an additional `,` and a newline to the end of the line. Append a character following a newline followed by zero or more characters followed by a `,` and a final newline and any following characters to its match. Remove the first newline. If there are two or more newlines repeat. Finally remove all newlines. N.B. If the line is initially empty, this will add a `,` to such lines. Empty fields are catered for and will be represented by no first character. Upvotes: 0
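For readers who would rather script this than use sed/awk, the same transformation is a one-liner per row in Python as well (a hedged sketch for comparison, not a replacement for the accepted awk answer; the function name is made up):

```python
def add_initials(line):
    """Append a column made of the first character of each comma-separated field."""
    fields = line.split(',')
    return line + ',' + ''.join(f[0] for f in fields if f)

lines = ["A_City,QQQQ", "B_State,QQQQ", "C_Country,YYYY"]
result = [add_initials(l) for l in lines]
assert result == ["A_City,QQQQ,AQ", "B_State,QQQQ,BQ", "C_Country,YYYY,CY"]
```

Unlike the two-field sed answer, this handles any number of columns, matching the behaviour of the generic GNU sed solution above.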
2018/03/19
551
2,016
<issue_start>username_0: In Elisp, I've encountered different APIs for modeling boolean values. I was under the impression that `t` and `nil` were the idiomatic ways of representing true and false respectively. However, I've also seen `1` and `-1` used to model the same thing. What confuses me is that I have come across APIs that won't work if `nil` is supplied but will work if `-1` is used. Can someone help me understand which is in fact the preferred way? And if the answer is `t` and `nil`, I welcome any theories on why some developers use `1` and `-1` for their APIs...<issue_comment>username_1: Lisp ==== In Lisp (both Emacs Lisp and Common Lisp) the only false value is `nil`. Everything else is true, including 0, -1 &c &c. Emacs ===== Emacs uses negative arguments (including -1) to indicate "turn off this mode". E.g., `C-h f toggle-truncate-lines` > > With prefix argument ARG, truncate long lines if ARG is positive, > otherwise fold them. > > > Upvotes: 3 <issue_comment>username_2: sds and sepp2k have covered the main misconception, but in answer to: > > why some developers use 1 and -1 for their APIs... > > > The reason that Emacs minor modes use positive and negative numbers (or rather non-positive numbers, including zero) to mean "enable" and "disable", rather than using `t` and `nil`, is that the argument is *optional*, and when an optional argument is not supplied its value will be `nil`. Consequently it would not be possible to distinguish between passing an argument of `nil` explicitly, and not passing an argument at all. Historically, passing no argument (i.e. an argument of `nil`) meant that the mode would be *toggled*. These days the mode is only toggled when calling it interactively, and an argument of `nil` means "enable". This change was made so that the likes of `(add-hook 'prog-mode-hook 'some-minor-mode)` -- which will result in `some-minor-mode` being called with no arguments -- is guaranteed to *enable* that mode. Upvotes: 4 [selected_answer]
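The design constraint username_2 describes — an omitted optional argument being indistinguishable from an explicitly passed `nil` — shows up in other languages too. In Python the usual workaround is a private sentinel default rather than a numeric convention. An illustrative sketch of the same API shape (this is not Emacs code; `minor_mode` and its return strings are made up for the example):

```python
_UNSET = object()  # private sentinel: no caller can pass this by accident

def minor_mode(arg=_UNSET):
    """Mimic the Emacs minor-mode calling convention described above:
    no argument toggles, non-positive numbers disable, anything else
    (including an explicit None, the analogue of nil) enables."""
    if arg is _UNSET:
        return "toggle"    # called with no argument at all
    if isinstance(arg, (int, float)) and arg <= 0:
        return "disable"   # -1, 0, ... turn the mode off
    return "enable"        # 1, t-like values, and explicit None enable

assert minor_mode() == "toggle"
assert minor_mode(None) == "enable"   # explicit None is distinguishable here
assert minor_mode(-1) == "disable"
```

The sentinel gives Python what Elisp's optional arguments cannot: a way to tell "not passed" apart from "passed the false value", which is exactly the gap the 1/-1 convention papers over in Emacs.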
2018/03/19
519
1,530
<issue_start>username_0: I need to sort by `source` value size() descending: ``` def source = [(firstString): [3, 2, 1], (secondString): [3, 2, 1, 4], (thirdString): [3]] expected: [(secondString): [3, 2, 1, 4], (firstString): [3, 2, 1], (thirdString): [3]] ``` I've tried sorting like this: ``` source.sort { -it.value.size() } ``` How can I achieve this?<issue_comment>username_1: The following is the working code for your expected result: ``` def source = [ "(firstString)": [3, 2, 1], "(secondString)": [3, 2, 1, 4], "(thirdString)": [3] ] def sortedResult = source.sort { -it.value.size()} println sortedResult ``` Working example here on groovy console : <https://groovyconsole.appspot.com/script/5104124974596096> Upvotes: 2 <issue_comment>username_2: The sort that takes a Closure as an argument does not mutate the original Map. It only returns a new map, so you need to assign it (you can assign it to itself). ``` source = source.sort { -it.value.size() } ``` With [Collections](http://docs.groovy-lang.org/latest/html/groovy-jdk/java/util/Collection.html "Collections"), there is another type of sort that takes a Boolean as well as a Closure. In this case, the Boolean indicates whether you want to mutate the original Collection or just return a new Collection. ``` a = [1,3,2] a.sort (true) { it } assert a == [1,2,3] ``` This doesn't apply to [Map](http://docs.groovy-lang.org/latest/html/groovy-jdk/java/util/Map.html "Map"). So use the assignment syntax above. Upvotes: 3 [selected_answer]
2018/03/19
1,824
6,252
<issue_start>username_0: I have created an API endpoint using the Django python framework that I host externally. I can access my endpoint from a browser (`mydomain.com/endpoint/`) and verify that there is no error. The same is true when I run my test django server locally on my development machine (`localhost:8000/endpoint/`). When I use my localhost as an endpoint, my json data comes through without issue. When I use my production domain, axios gets caught up with a network error, and there is not much context that it gives... from the debug console I get this: ``` Error: Network Error at createError (createError.js:16) at XMLHttpRequest.handleError (xhr.js:87) at XMLHttpRequest.dispatchEvent (event-target.js:172) at XMLHttpRequest.setReadyState (XMLHttpRequest.js:554) at XMLHttpRequest.__didCompleteResponse (XMLHttpRequest.js:387) at XMLHttpRequest.js:493 at RCTDeviceEventEmitter.emit (EventEmitter.js:181) at MessageQueue.__callFunction (MessageQueue.js:353) at MessageQueue.js:118 at MessageQueue.__guardSafe (MessageQueue.js:316) ``` This is my axios call in my react native component: ``` componentDidMount() { axios.get('mydomain.com/get/').then(response => { // localhost:8000/get works this.setState({foo:response.data}); }).catch(error => { console.log(error); }); } ```<issue_comment>username_1: It seems that unencrypted network requests are blocked by default in iOS, i.e. `https` will work, `http` will not. [From the docs](https://facebook.github.io/react-native/docs/network.html): > > By default, iOS will block any request that's not encrypted using SSL. > If you need to fetch from a cleartext URL (one that begins with http) you will first need to add an App Transport Security exception. > > > Upvotes: 6 [selected_answer]<issue_comment>username_2: I was facing the same issue. I looked deeper and my > > endpoint `url` was not correct. > > > By giving `axios` the exact right URL, my API worked like a charm. 
Hope it may help someone Upvotes: 0 <issue_comment>username_3: If you are trying to call localhost on an Android simulator created with AVD, replacing localhost with 10.0.2.2 solved the issue for me. Upvotes: 6 <issue_comment>username_4: 1. change from **localhost** to your **ip(192.168.43.49)** 2. add **http://** <http://192.168.43.49:3000/user/> Upvotes: 5 <issue_comment>username_5: For me, the issue was because my Remote URL was incorrect. If you have the URL in a .env file, please crosscheck the naming and also ensure that it's prefixed with REACT\_APP\_ as react might not be able to find it if named otherwise. In the .env file, something like REACT\_APP\_BACKEND\_API\_URL=<https://appurl/api> can be accessed as const { REACT\_APP\_BACKEND\_API\_URL } = process.env; Upvotes: 2 <issue_comment>username_6: Try "Content-Type": "application/x-www-form-urlencoded", Accept: "application/json" Upvotes: 2 <issue_comment>username_7: Make sure to change `localhost` to `your_ip_address`, which you can find by typing `ipconfig` in Command Prompt. Try adding to your `AndroidManifest.xml` Upvotes: 0 <issue_comment>username_8: **If you do not find your answer in other posts** In my case, I use Rails for the backend and I tried to make requests to `http://localhost:3000` using Axios but every time I got `Network Error` as a response. Then I found out that I need to make a request to `http://10.0.2.2:3000` in the case of the android simulator. For the iOS simulator, it works fine with `http://localhost:3000`. Conclusion ---------- use ``` http://10.0.2.2:3000 ``` instead of ``` http://localhost:3000 ``` update ------ might be worth trying ``` adb reverse tcp:3000 tcp:3000 ``` Upvotes: 5 <issue_comment>username_9: If you are using android then open your command prompt and type ipconfig. Then get your ip address and replace localhost with it. In my case, first I used http://localhost:8080/api/admin/1. Then I changed it to <http://192.168.1.10:8080/api/admin/1>. It worked for me. 
Upvotes: 2 <issue_comment>username_10: The above answers only work if you are using localhost. If your code is hosted on a server and Axios throws a Network Error, you can solve this by adding one line. ``` const config = { method: 'post', url: `${BASE_URL}/login`, headers: { 'Content-Type': 'multipart/form-data' // <----- add this line to your axios headers }, data : formData }; axios(config).then((res)=> console.log(res)) ``` Upvotes: 0 <issue_comment>username_11: I'm using the ***apisauce*** dependency, and adding a header worked for me with React Native Android. **Attach the header to the request like below:** ``` import { create } from 'apisauce'; const api = create({ baseURL: {baseUrl}, headers: { Accept: 'application/json', 'Content-Type': 'application/json' } }); export async function empLogin(data) { try { const response = api.post('Login', data); return await response; } catch (error) { console.log(error); return []; } } ``` Upvotes: 0 <issue_comment>username_12: Before: ``` axios.get("http://localhost:3456/apt") .then( response => { alert(JSON.stringify(response)); .... } ) .catch(function(error) { alert(error.message); console.warn(error.response._response); }); ``` I got the error "Network Error" (failed to connect to localhost). After that, I took some steps to resolve it: the Network Error related to axios was resolved by disabling the system firewall and accessing the server via the system IP address, like ``` axios.get("http://192.168.12.10:3456/apt") .then( response => { alert(JSON.stringify(response)); .... } ) .catch(function(error) { alert(error.message); console.warn(error.response._response); }); ``` Upvotes: 0 <issue_comment>username_13: For me adding "Accept" in headers resolved the problem: ``` Accept: 'application/json' ``` Upvotes: 0 <issue_comment>username_14: If you are having issues connecting to localhost from a react-native app, you can follow these steps: 1- go to cmd and type ipconfig. 2- replace localhost with your IPv4 address. 3- try again.. 
It will succeed. It works for me. Upvotes: 0
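Several of the answers above reduce to one rule: from the Android emulator, `localhost` must be replaced with `10.0.2.2` (the host machine's loopback as seen from inside the AVD), while a physical device needs the development machine's LAN IP. That rewriting can be sketched as a small helper — shown in Python purely for illustration; the function name is made up and this is not part of any answer above:

```python
from urllib.parse import urlsplit, urlunsplit

def rewrite_for_android_emulator(url):
    """Replace a localhost/127.0.0.1 host with 10.0.2.2, the address
    the Android emulator uses to reach the development machine."""
    parts = urlsplit(url)
    host = parts.hostname or ""
    if host in ("localhost", "127.0.0.1"):
        port = f":{parts.port}" if parts.port else ""
        parts = parts._replace(netloc=f"10.0.2.2{port}")  # keep scheme/path/query
    return urlunsplit(parts)

assert rewrite_for_android_emulator("http://localhost:3000/api") == "http://10.0.2.2:3000/api"
assert rewrite_for_android_emulator("https://example.com/api") == "https://example.com/api"
```

Production URLs pass through untouched, which matches the thread's other finding: for a hosted HTTP endpoint the remaining blockers are iOS App Transport Security and Android cleartext policy, not the hostname.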
2018/03/19
795
2,863
<issue_start>username_0: I have the following model: ``` public class User { public Guid Id {get;set;} public string Username {get;set;} public Address Useraddress {get;set;} } public class Address { public string Street {get;set;} public string Zipcode {get;set;} } ``` I want to save the data in `Useraddress` to the same `User` table. So I added an `OwnsOne` configuration to the context builder. ``` class UserEntityTypeConfiguration : IEntityTypeConfiguration<User> { public void Configure(EntityTypeBuilder<User> builder) { builder.HasKey(x => x.Id); builder.OwnsOne(x => x.UserAddress); } } ``` When I run the migrations tool, it all seems to be fine. This is the relevant part from the migrations script that is generated: ``` migrationBuilder.CreateTable( name: "Users", columns: table => new { Id = table.Column<Guid>(nullable: false), Username = table.Column<string>(nullable: false), Useraddress_Street = table.Column<string>(nullable: true), Useraddress_Zipcode = table.Column<string>(nullable: true) }, constraints: table => { table.PrimaryKey("PK_Users", x => x.Id); }); ``` Then when I later on try to add a `User`: ``` await _dbContext.Users.AddAsync(user); await _dbContext.SaveChangesAsync(); ``` I then get the following error: > > The entity of 'User' is sharing the table 'Users' with 'User.Useraddress#Address', but there is no entity of this type with the same key value that has been marked as 'Added' > > > Is there something that I'm doing wrong? PS. I'm using Entity Framework Core 2.0.<issue_comment>username_1: EF Core 2.0 by default creates a primary key as a shadow property for the owned entity, since it supports table splitting; therefore, the value of the `UserAddress` property in the `User` instance cannot be null and must be defined. ``` var user = new User { Id = Guid.NewGuid(), Username = "...", UserAddress = new Address { Street = "...", Zipcode = "..." 
} }; await _dbContext.Users.AddAsync(user); await _dbContext.SaveChangesAsync(); ``` If you want the values of the owned entity to be null, then just define a default instance, i.e.: ``` var user = new User { Id = Guid.NewGuid(), Username = "...", UserAddress = new Address() }; ``` You can read more about owned entity implicit keys here: <https://learn.microsoft.com/en-us/ef/core/modeling/owned-entities#implicit-keys> Upvotes: 5 [selected_answer]<issue_comment>username_2: The presented solution does not work in EF Core 5.0 for me. I've added a constructor to the Address class ``` public Address() { Line1 = string.Empty; } ``` (at least one field must not be null). Now it works fine for new root entities. But I still had to fix existing entities in the database ``` UPDATE RootEntityTable SET Address_Line1 = '' WHERE Id IN (...) ``` Upvotes: 0