| column | type | min | max |
| --- | --- | --- | --- |
| date | string (length) | 10 | 10 |
| nb_tokens | int64 | 60 | 629k |
| text_size | int64 | 234 | 1.02M |
| content | string (length) | 234 | 1.02M |
2018/03/20
1,644
4,501
<issue_start>username_0: I am trying to call an API (localhost) that has JWT using an Ajax, But i am getting and error i have tried the following ``` $.ajax({ url:'http://localhost:50298/api/Validate', Method :'GET', dataType: 'json', beforeSend : function(xhr) { xhr.setRequestHeader("Accept", "application/json"); xhr.setRequestHeader("Content-Type", "application/json"); xhr.setRequestHeader("Authorization", "<KEY>"); }, success: function (result) { console.log(result) }, error: function Failed(result) { console.log(result) }, ``` the respond **is 404 Not Found** and the Request Headers is ``` OPTIONS /api/Validate HTTP/1.1 Host: localhost:50298 Connection: keep-alive Access-Control-Request-Method: GET Origin: null User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/64.0.3282.186 Safari/537.36 Access-Control-Request-Headers: authorization,content-type Accept: */* Accept-Encoding: gzip, deflate, br Accept-Language: en-GB,en-US;q=0.9,en;q=0.8 ``` i have tried the following as well ``` var settings = { "async": true, "crossDomain": true, "url": "http://localhost:50298/api/Validate", "method": "GET", "headers": { "Authorization": "<KEY>", "Cache-Control": "no-cache", } } $.ajax(settings).done(function (response) { console.log(response); }); ``` but still the same error come up 404 not found if i remove the ``` xhr.setRequestHeader("Authorization", "<KEY>"); ``` or ``` "headers": { "Authorization": "<KEY>", "Cache-Control": "no-cache", } ``` I do get the 401 Unauthorized error which it is correct. The API is .NET Core 2 and it has been tested using Post Man and there is no issue with the API. P.S. I am using CORS chrome extension to run my JS any help would be welcomed to solve this issue<issue_comment>username_1: Can you please try it this way? Check if it works? ``` $.ajax({ url: 'http://localhost:50298/api/Validate', headers: { 'Accept':'application/json', 'Content-Type':'application/json', 'Authorization':'<KEY>' }, method: 'GET', dataType: 'json', success: function(data){ console.log('succes: '+data); } }); ``` This works for me every time.Hope it will work for you. Upvotes: 1 <issue_comment>username_2: In my case I have taken **404 Not Found** while calling web API from client that is a method with authentication attribute. After I have implemented server with just JWT without Identity authentication and this error has gone but it giving me **401 error** this time so I have noticed that, if you have using payload knowledge like issuer audience etc. You have to add this contents to exist request header otherwise it couldn't understand who has calling the api and doesn't make permission to use. At the end my client consumer function transformed like below: ``` request.post('https://localhost:44557/api/controller1/method1') .set('Authorization', 'Bearer '+ pgtoken.toString()) .set('Accept', 'application/json') .set('Content-Type', 'application/json') .set('issuer', 'west-world.xxxxx.com') .set('Audience', 'yyyyy.xxxxxx.com') .send({ key: value, key2: value2 // etc... }) .end(function(err, res){ localStorage.setItem("result: ", res.text); } ); ``` Upvotes: 0
2018/03/20
642
2,248
<issue_start>username_0: I'm implementing a search engine and so far I am done with the part for web crawling, storing the results in the index and retrieving results for the search keywords entered by the user. However I would like the search results to be more specific. Let's say I'm searching "Shoe shops in Hyderabad". Is there any NLP library in python that can just process the text and assign higher weights on important words like in this case "shoes" and "Hyderabad". Thanks.
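One common way to do this is to drop stop words and weight the remaining query terms by TF-IDF computed over the crawled corpus. A minimal sketch with scikit-learn is below; the document list is made up for illustration and stands in for the contents of the index:

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical corpus standing in for the crawled index.
documents = [
    "shoe shops and footwear stores in Hyderabad",
    "best restaurants and cafes in Hyderabad",
    "online shoe shopping deals and discounts",
]

# Fit TF-IDF over the corpus; common English stop words ("in", "the", ...) are dropped.
vectorizer = TfidfVectorizer(stop_words="english")
vectorizer.fit(documents)

# Weight the terms of an incoming query by how informative they are in the corpus.
query_vector = vectorizer.transform(["Shoe shops in Hyderabad"])
terms = vectorizer.get_feature_names_out()
for idx in query_vector.nonzero()[1]:
    print(terms[idx], round(float(query_vector[0, idx]), 3))
```

Named entities such as "Hyderabad" can be boosted further with an NER-capable library such as spaCy or NLTK.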
2018/03/20
670
2,176
<issue_start>username_0: Need to remove non-printable characters from rdd. Sample data is below ``` "@TSX•","None" "@MJU•","None" ``` expected output ``` @TSX,None @MJU,None ``` Tried below code but its not working ``` sqlContext.read.option("sep", ","). \ option("encoding", "ISO-8859-1"). \ option("mode", "PERMISSIVE").csv().rdd.map(lambda s: s.replace("\xe2","")) ```
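The `\xe2` replacement fails on its own because the bullet is a multi-byte character and each RDD element is a `Row` rather than a plain string. A minimal sketch of one way to strip non-printable characters is shown below on plain Python data; the commented Spark line is only a hint, with `rdd` assumed from the question's read:

```python
import re

# Keep printable ASCII only; the bullet (U+2022) and similar characters are removed.
NON_PRINTABLE = re.compile(r"[^\x20-\x7E]")

def clean(value):
    return NON_PRINTABLE.sub("", value) if isinstance(value, str) else value

rows = [("@TSX\u2022", "None"), ("@MJU\u2022", "None")]
print([tuple(clean(v) for v in row) for row in rows])
# [('@TSX', 'None'), ('@MJU', 'None')]

# Applied to the question's RDD of Rows (illustrative, untested here):
# cleaned = rdd.map(lambda row: [clean(v) for v in row])
```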
2018/03/20
2,105
8,534
<issue_start>username_0: In Flutter, is there an option to draw a vertical lines between components as in the image. [![enter image description here](https://i.stack.imgur.com/rzquW.png)](https://i.stack.imgur.com/rzquW.png)<issue_comment>username_1: Not as far as I know. However, it is quite simple to create one — if you look at the source for [Flutter's Divider](https://github.com/flutter/flutter/blob/master/packages/flutter/lib/src/material/divider.dart) you'll see that it is simply a `SizedBox` with a single (bottom) border. You could do the same but with dimensions switched. --- **Update** (Oct 4, 2018): a `VerticalDivider` implementation has been [merged in](https://github.com/flutter/flutter/pull/22641) by the Flutter team. Check out the [docs](https://api.flutter.dev/flutter/material/VerticalDivider-class.html) but it's very simple to use — simply put it between two other items in a row. **Note**: If you are using `VerticalDivider` as separator in `Row` widget then wrap `Row` with `IntrinsicHeight` , `Container` or `SizedBox` else `VerticalDivider` will not show up. For `Container` and `SizedBox` widget you need define `height`. Upvotes: 9 [selected_answer]<issue_comment>username_2: ``` import 'package:flutter/material.dart'; class VerticalDivider extends StatelessWidget { @override Widget build(BuildContext context) { return new Container( height: 30.0, width: 1.0, color: Colors.white30, margin: const EdgeInsets.only(left: 10.0, right: 10.0), ); } } ``` Upvotes: 5 <issue_comment>username_3: Try RotatedBox in combination with a divider to get it vertical, RotatedBox is a widget of flutter that automatically rotates it's child based on the quarterTurn property you have to specify. Head over to here for a detailed explanation <https://docs.flutter.io/flutter/widgets/RotatedBox-class.html> Upvotes: 0 <issue_comment>username_4: As @rwynnchristian suggested, this seems to be the simplest solution IMO. Just leaving the code here: ``` import 'package:flutter/material.dart'; class VerticalDivider extends StatelessWidget { @override Widget build(BuildContext context) => RotatedBox( quarterTurns: 1, child: Divider(), ); } ``` Upvotes: 2 <issue_comment>username_5: add this method anywhere. ``` _verticalDivider() => BoxDecoration( border: Border( right: BorderSide( color: Theme.of(context).dividerColor, width: 0.5, ), ), ); ``` now wrap your content in container ``` Container( decoration: _verticalDivider(), child: //your widget code ); ``` Upvotes: 2 <issue_comment>username_6: As of 10 days ago, flutter [has merged](https://github.com/flutter/flutter/pull/22641) a `VerticalDivider` implementation. It will be available in the default channel very soon, but for now you have to switch to the dev channel to use it: `flutter channel dev`. Here is a example of how to use it: ``` IntrinsicHeight( child: new Row( mainAxisAlignment: MainAxisAlignment.spaceEvenly, children: [ Text('Foo'), VerticalDivider(), Text('Bar'), VerticalDivider(), Text('Baz'), ], )) ``` Upvotes: 6 <issue_comment>username_7: Tried with `VerticalDivider()` but cannot get any divider. 
I Solved it with ``` Container(color: Colors.black45, height: 50, width: 2,), ``` Upvotes: 4 <issue_comment>username_8: ### Vertical divider: * **As a direct child:** ```dart VerticalDivider( color: Colors.black, thickness: 2, ) ``` * **In a `Row`:** [![enter image description here](https://i.stack.imgur.com/ZG7l4.png)](https://i.stack.imgur.com/ZG7l4.png) ```dart IntrinsicHeight( child: Row( children: [ Text('Hello'), VerticalDivider( color: Colors.black, thickness: 2, ), Text('World'), ], ), ) ``` --- ### Horizontal divider: * **As a direct child:** ```dart Divider( color: Colors.black, thickness: 2, ) ``` * **In a `Column`:** [![enter image description here](https://i.stack.imgur.com/KbLHF.png)](https://i.stack.imgur.com/KbLHF.png) ```dart IntrinsicWidth( child: Column( children: [ Text('Hello'), Divider( color: Colors.black, thickness: 2, ), Text('World'), ], ), ) ``` Upvotes: 6 <issue_comment>username_9: Try to wrap it inside the `Container` with some height as ``` Container(height: 80, child: VerticalDivider(color: Colors.red)), ``` Upvotes: 5 <issue_comment>username_10: You can use a vertical divider with a thickness of 1. ``` VerticalDivider( thickness: 1, color: Color(0xFFF6F4F4), ), ``` And if you can't see the vertical divider wrap the row with a **IntrinsicHeight** widget. Upvotes: 3 <issue_comment>username_11: **Use Container for divider is easy**, wrap your row in IntrinsicHeight() [![enter image description here](https://i.stack.imgur.com/dJjdJ.png)](https://i.stack.imgur.com/dJjdJ.png) ``` IntrinsicHeight( child: Row( children: [ Text( 'Admissions', style: TextStyle(fontSize: 34), ), Container(width: 1, color: Colors.black), // This is divider Text('another text'), ], ), ``` Upvotes: 1 <issue_comment>username_12: You need to wrap `VerticalDivider()` widget with the `IntrinsicHeight` widget. Otherwise, the vertical divider will not show up. And to gain some padding over the top and bottom you can add indent. ``` IntrinsicHeight( child: Row( mainAxisSize: MainAxisSize.min, children: [ Flexible( child: VerticalDivider( thickness: 0.8, color: Colors.grey, ), ), Flexible( child: Text( "Random Text", style: TextStyle( fontSize: 12, color: AppColor.darkHintTextColor,), ), ), ], ), ) ``` Upvotes: 1 <issue_comment>username_13: Just wrap your Row in IntrinsicHeight widget and you should get the desired result: ``` IntrinsicHeight( child: Row( mainAxisAlignment: MainAxisAlignment.spaceEvenly, children: [ Text('Name'), VerticalDivider(), Text('Contact'), ], )) ``` Upvotes: 3 <issue_comment>username_14: I guess i found a more robust solution when dealing with this problem;[![enter image description here](https://i.stack.imgur.com/8fxNh.png)](https://i.stack.imgur.com/8fxNh.png) ``` IntrinsicHeight( child: Row( mainAxisAlignment: MainAxisAlignment.spaceEvenly, crossAxisAlignment: CrossAxisAlignment.center, children: [ Expanded( child: Container( decoration: BoxDecoration(borderRadius: BoxRadius.circular(),color: Colors.gray), height: 5, margin: CustomPaddings.horizontal(), ), ), Text( "TEST", style: Theme.of(context) .textTheme .subtitle1! 
.copyWith( color: Colors.black, fontWeight: FontWeight.bold), ), Expanded( child: Container( decoration: BoxDecoration(borderRadius: BoxRadius.circular(),color: Colors.gray), height: 5, margin: CustomPaddings.horizontal(), ), ), ], ), ), ``` Upvotes: 1 <issue_comment>username_15: you can use a Container as a divider, with the desired with 'thickness if for horizontal divider' height 'thickness if for vertical divider' ``` Container( color: Colors.blue, width: 7, height: 77, ), ``` This class is relatively **expensive**. Avoid using it where possible. (As mentioned in the documentation) – <NAME> Upvotes: 0
2018/03/20
488
1,547
<issue_start>username_0: Is there an idiomatic way of initialising arrays in Rust. I'm creating an array of random numbers and was wondering if there is a more idiomatic way then just doing a for loop. My current code works fine, but seems more like C than proper Rust: ``` let mut my_array: [u64; 8] = [0; 8]; for i in 0..my_array.len() { my_array[i] = some_function(); } ```<issue_comment>username_1: [Various sized arrays](https://docs.rs/rand/0.7.3/rand/distributions/struct.Standard.html) can be directly randomly generated: ``` use rand; // 0.7.3 fn main() { let my_array: [u64; 8] = rand::random(); println!("{:?}", my_array); } ``` Currently, this only works for arrays of size from 0 to 32 (inclusive). Beyond that, you will want to see related questions: * [How can I initialize an array using a function?](https://stackoverflow.com/q/36925673/155423) * [What is the proper way to initialize a fixed length array?](https://stackoverflow.com/q/31360993/155423) Upvotes: 5 [selected_answer]<issue_comment>username_2: The other solution is nice and short, but does not apply to the case where you need to initialize an array of random numbers in a specific range. So, here's an answer that addresses that case. ``` use rand::{thread_rng, Rng}; fn main() { let a = [(); 8].map(|_| thread_rng().gen_range(0.0..1.0)); println!("The array of random float numbers between 0.0 and 1.0 is: {:?}", a); } ``` I would be happy to know if there's a better (shorter and more efficient) solution than this one. Upvotes: 1
2018/03/20
617
2,347
<issue_start>username_0: I was trying to add some satellite resource files to display different language I followed these two posts in general [stackoverflow post](https://stackoverflow.com/questions/1142802/how-to-use-localization-in-c-sharp) [Localization of a Site in MVC5 Using Resource File](https://www.c-sharpcorner.com/UploadFile/b8e86c/localization-of-a-site-in-mvc5-using-resource-file/) here's what I have done in nutshell (using visual studio 2017, .net framework 4.5) I created a folder named it "Resources" I right clicked this folder and created a file named it "Lang.resx", added a string pair (name: Welcome, value: Hello), then switch it to public and save I right clicked this folder and created another file named it "Lang.es.resx", added a string pair (name: Welcome, value: Hola), then switch it to public and save I opened up web.config and added following line to System.Web ``` ``` I opened up a blank view, write the following to that view ``` @{ System.Threading.Thread.CurrentThread.CurrentCulture = System.Globalization.CultureInfo.GetCultureInfo("es"); System.Threading.Thread.CurrentThread.CurrentUICulture = System.Globalization.CultureInfo.GetCultureInfo("es"); } @MyProject.Resources.Lang.Welcome @Thread.CurrentThread.CurrentCulture.DisplayName @Thread.CurrentThread.CurrentUICulture.DisplayName ``` The output is as following: Hello Spanish Spanish As you can see, the culture is changed but somehow the Lang.es.resx is not read, I have created multiple new project in different hosting VPS, the result is always the same. So there must be something I've done incorrectly, or some steps missing<issue_comment>username_1: ok, after a whole day debugging, I've found out the cause 1. When using resource file, it creates additional folder to the current bin folder, say Language.es.resx, then it creates a folder called es 2. When I upload the project to the ftp server, this language folder didn't get created and the es recource.dll file was not uploaded I think this is it, hopefully this can help anyone facing the same problem Upvotes: 2 [selected_answer]<issue_comment>username_2: In the file properties in Visual Studio, set: Copy to output directory: Copy always If you want for multiple files within a folder, you can always edit your .csproj file ``` Always ``` Upvotes: 0
2018/03/20
1,089
4,041
<issue_start>username_0: I'm have a spreadsheet that takes 31 different tabs with daily data, then summarizes it into a monthly tab, then converts each day's data into a software upload. In order to expedite my process I'm trying to combine every daily upload into another tab at once instead of copy/pasting each day manually. Currently to see the upload for each day I am changing the number in cell B3 to the day I need and it will give me the upload data. EDIT: How I hope this will work is the macro will put "1" in cell B3 on the Upload tab, take the data in A10:I34, paste it over to the first empty cell in column A on the Upload Files tab, then go back to the upload tab, change cell B3 to "2", copy the data in A10:I34, paste it to the next empty cell in column A on the Upload Files tab... repeat until the data from day 31 has been pasted onto the Upload Files tab. 1. Data is in tab called "Upload" 2. The only cell that can change on tab "Upload" is cell "B3" 3. Data range is "A10:I34" 4. Data needs to paste values on tab "Upload Files" 5. "Upload Files" has formatting data in row 1 needed for the software EDIT: 6. The macro needs to looks for the next empty row after each day's data has been pasted 7. Loop needs to stop at day 31 My issue now is that I can't get it to go back to the upload tab and change the date to the next day and then continue with the empty cell loop. It ends up just pasting the data over the original data, or not changing the cell value to the next day. Below is what I have for changing the days. EDIT #3: I tweaked it, it works now. Please take a look and let me know if you think it could be improved. Added For/next. --- ``` Dim Count as integer Dim x as Long Count = 2 For x = 1 to 30 Do While Worksheets("Upload Files").Range("A" & Count).Value <> "" Count = Count +1 Loop Worksheets ("Upload").Range("B3").Value = Worksheets("Upload").Range("B3").Value +1 Worksheets("Upload").Range("A10:I34").Copy Worksheets("Upload Files").Range("A" & Count).PasteSpecial xl PasteValues Next x ``` --- Any suggestions? Previous attempts just simply selected the exact cell where the previous data ended on the "Upload Files" tab, but adding rows in the "Upload" tab means I have to manually recalculate which cells the data will be pasted, which is about as time consuming as doing the whole thing manually. Thanks,<issue_comment>username_1: I added some comments for your understanding inside the code: ``` Sub Test1() Dim Count As Integer Dim lLastRow As Long Dim i As Long Count = 2 ' This would give you the last used row in the Sheet lLastRow = Worksheets("Upload Files").Cells(Worksheets("Upload Files").Rows.Count, 1).End(xlUp).Row For i = 1 To lLastRow ' You don't specify if you are trying to do anything in this section inside de Loop ' but if you just want to cound for the last row, you can remove the loop. Next i If Worksheets("Upload").Range("B3").Value < 32 Then ' Dont need this. 'Worksheets("Upload").Range("B3").Select Worksheets("Upload").Range("B3").Value = Worksheets("Upload").Range("B3").Value + 1 ' Here it would be nice if you specify from what Sheet you are copying this range. I guess is Upload. 
Worksheets("Upload").Range("A10:I34").Copy Worksheets("Upload Files").Range("A" & Count).PasteSpecial xlPasteValues End If End Sub ``` Upvotes: 1 <issue_comment>username_2: I guess you're after something like follows: ``` Option Explicit Sub upload() Dim targetSht As Worksheet Set targetSht = Worksheets("Upload Files") Dim i As Long With Worksheets("Upload") For i = 1 To 31 .Range("B3").Value = i Application.Calculate With .Range("A10:I34") targetSht.Cells(targetSht.Rows.Count, 1).End(xlUp).Offset(1).Resize(.Rows.Count, .Columns.Count).Value = .Value End With Next End With End Sub ``` Upvotes: 0
2018/03/20
708
2,955
<issue_start>username_0: **Question** Is there a way to implement a generic type or something similar in a superclass, which will always refer to the subclass it is in? **Explanation of what i need** I have the superclass ``` public abstract class MySuperclass{ public abstract *magicalTypeReference* getInstance(); } ``` And when i create a subclass of it, i want the \* magicalTypeReference \* to become the type of the subclass it is in. ``` public class FirstSubclass extends MySuperclass{ @Override public FirstSubclass getInstance(){return this;} } public class SecondSubclass extends MySuperclass{ @Override public SecondSubclass getInstance(){return this;} } ``` **Why a normal generic type wont do the job** Something similar can technically be done with generic types. ``` public abstract class MySuperclass{ public abstract E getInstance(); } public class FirstSubclass extends MySuperclass{ @Override public FirstSubclass getInstance(){return this;} } public class SecondSubclass extends MySuperclass{ @Override public SecondSubclass getInstance(){return this;} } ``` But generic types do not guarantee that the type later is forced to be the subclass. Technically you can create a Subclass using another Subclass as the generic type. Like this ``` public class SecondSubclass extends MySuperclass{ @Override public FirstSubclass getInstance(){return this;} } ``` Is there some construct in java that can provide a "type safe" way of implementing this?<issue_comment>username_1: Instead of declaring your `MySuperClass` like ``` public abstract class MySuperClass ``` if you declare like ``` public abstract class MySuperClass> ``` you can make subclasses avoid referencing *each other* in a partial sense. This solution will definitely avoid situations like this ``` public class SecondSubclass extends MySuperclass{ @Override public FirstSubclass getInstance(){return this;} } public class FirstSubclass extends MySuperclass{ @Override public SecondSubclass getInstance(){return this;} } ``` where one class contains the other and vice versa. Upvotes: -1 <issue_comment>username_2: This is not possible in Java. Consider if hypothetically what you want were possible, you could implement `FirstSubclass` like this: ``` public class FirstSubclass extends MySuperclass { @Override public FirstSubclass getInstance() { return new FirstSubclass(); } } ``` Then you could have a subclass of it that doesn't override the `getInstance()` method: ``` public class SubSubclass extends FirstSubclass { } ``` `SubSubclass` also implements the `MySuperclass` interface, due to inheritance. However, `SubSubclass`'s `getInstance()` method is inherited from `FirstSubclass`, which returns a `FirstSubclass`. Therefore, `SubSubclass` does not satisfy the contract of the `MySuperclass` interface since its `getInstance()` method does not return a `SubSubclass`. Upvotes: 0
2018/03/20
452
1,651
<issue_start>username_0: I need this grid layout, view structure with Storyboard. Is there an easier way to set it up, or do I need to calculate size / 4, multiply it by the index, calculate the center X, Y coordinates, and adjust `NSLayoutConstraint` at each rotation? [![enter image description here](https://i.stack.imgur.com/VdaMH.png)](https://i.stack.imgur.com/VdaMH.png)<issue_comment>username_1: You can use a `UIStackView` that defaults to vertical and change its axis to horizontal in the landscape size class, with distribution `Fill Equally`: ``` override func viewWillTransition(to size: CGSize, with coordinator: UIViewControllerTransitionCoordinator) { if UIDevice.current.orientation == .portrait { self.stackV.axis = .vertical } else { self.stackV.axis = .horizontal } } ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: The above answer is perfect, but we can also handle it in the storyboard. To illustrate, I put 3 `UIView`s into a `stack view` with the properties below. [![enter image description here](https://i.stack.imgur.com/qtHqG.png)](https://i.stack.imgur.com/qtHqG.png) When the screen goes from `portrait` to `landscape`, its height size class changes from `regular` to `compact`, so we can switch the `stack view axis` from `vertical` to `horizontal` based on that height change. The **gif** below shows this visually. [![enter image description here](https://i.stack.imgur.com/cHaPW.gif)](https://i.stack.imgur.com/cHaPW.gif) For further information you can visit this [link](https://www.natashatherobot.com/magical-view-rotation-with-stackview/). Upvotes: 0
2018/03/20
939
3,587
<issue_start>username_0: I'm developing a Building Block for Blackboard, and have run into a database related issue. I'm trying to insert four rows into a pgsql table, but only if the table is empty. The query runs as a post-schema update, and is therefore run whenever I re-install the building block. It is vital that I do not simply drop exsisting values and/or replace them (which would be a simple and effective solution otherwise). Below is my existing query, that does the job, but only for ***one*** row. As I mentioned, I'm trying to insert ***four*** rows. I can't simply run the insert multiple times, as after the first run, the table would no longer be empty. Any help will be appriciated. ``` BEGIN; INSERT INTO my_table_name SELECT nextval('my_table_name_SEQ'), 'Some website URL', 'Some image URL', 'Some website name', 'Y', 'Y' WHERE NOT EXISTS ( SELECT * FROM my_table_name ); COMMIT; END; ```<issue_comment>username_1: It is better if you count the rows because it gets the number of input rows. This should work: ``` BEGIN; INSERT INTO my_table_name SELECT nextval('my_table_name_SEQ'), 'Some website URL', 'Some image URL', 'Some website name', 'Y', 'Y' WHERE (SELECT COUNT(*) FROM my_table_name)>0 COMMIT; END; ``` Upvotes: -1 <issue_comment>username_2: Inserts won't overwrite, so I'm not understanding that part of your question. Below are two ways to insert multiple rows; the second example is a single sql statement: create table test (col1 int, col2 varchar(10) ) ; ``` insert into test select 1, 'A' ; insert into test select 2, 'B' ; insert into test (col1, col2) values (3, 'C'), (4, 'D'), (5, 'E') ; select * from test ; 1 "A" 2 "B" 3 "C" 4 "D" 5 "E" ``` Upvotes: -1 <issue_comment>username_3: I managed to fix the issue. In [this](https://stackoverflow.com/questions/24769157/insert-multiple-rows-where-not-exists-postgresql) post, @a\_horse\_with\_no\_name suggest using ***UNION ALL*** to solve a similar issue. Also thanks to @username_1 for suggesting using ***COUNT***, rather than ***EXISTS*** My final query: ``` BEGIN; INSERT INTO my_table (pk1, coll1, coll2, coll3, coll4, coll5) SELECT x.pk1, x.coll1, x.coll2, x.coll3, x.coll4, x.coll5 FROM ( SELECT nextval('my_table_SEQ') as pk1, 'Some website URL' as coll1, 'Some image URL' as coll2, 'Some website name' as coll3, 'Y' as coll4, 'Y' as coll5 UNION SELECT nextval('my_table_SEQ'), 'Some other website URL', 'Some other image URL', 'Some other website name', 'Y', 'N' UNION SELECT nextval('my_table_SEQ'), 'Some other other website URL', 'Some other other image URL', 'Some other other website name', 'Y', 'N' UNION SELECT nextval('my_table_SEQ'), 'Some other other other website URL', 'Some other other other image URL', 'Some other other other website name', 'Y', 'Y' ) as x WHERE (SELECT COUNT(*) FROM my_table) <= 0; COMMIT; END; ``` Upvotes: 1 [selected_answer]
2018/03/20
970
3,544
<issue_start>username_0: I used customTask in Universal Analytics tags via tag Manager to tackle PII. After that, I regularly monitored reports in Google Analytics and found all the hits were cleaned.(i.e. PII redacted) But I still get emails from <EMAIL>, that some of the URLs linked to particular 'remarketing lists' are violating PII. Now there's no way to confirm that PII is actually violated. I dont know where I am supposed to look, in which section (of Adwords account or Analytics account) can I see the URLs being passing PII info to Google? This is what I'm seeing in GA reports: <https://siteurl/u/password-reset/reset?email=ja[> REDACTED EMAIL]l.com&authenticationCode=8127489044212 this is what is sent in the violation email: <https://siteurl/u/password-reset/reset?email=<EMAIL>&authenticationCode=8127489044212> If I have redacted the PII using tag manager, do I still need to do something in the Adwords account ?
2018/03/20
598
2,172
<issue_start>username_0: For a webscraper I will store 5 million+ full urls in a table. I will need to check if the url does not exist in the table very often. What column type and strategy is best for performance? * url with varchar and an index. Or * second char column with an hash ( md5?) and check for that. Or a completely different strategy ?<issue_comment>username_1: Second option. If you need to check if url doesn't exist, then a `unique` constraint on the hash of the URL is optimal method. `md5`, even though dated, is ok for your use case as you have ~5 million rows, it's not likely you'd get a collision. For the column, use `BINARY(16)`. You can store `UNHEX(MD5('your url here'));` into `BINARY(16)`, having fixed index length. `md5` is 128 bits, you don't need human-readable representation so you can save the "raw" binary which lets you cut the storage requirements in half - hence `BINARY(16)` instead of `BINARY(32)`. Upvotes: 2 <issue_comment>username_2: The maximum width of an index is 767 bytes - but IIRC there is no upper limit on the size of a URL. This is rather long even for a URL. OTOH, the longer the data, the more work the DBMS has to do in order to compare 2 values. Hence using a hash is probably not required but will enhance performance. Some mysql engines support an index type of 'hash' which avoids the need to create a new column (but I don't think this is supported on MyISAM and InnoDB). OTOH recent versions of MySQL (and MariaDB, PerconaDB) support "[generated](https://dev.mysql.com/doc/refman/5.7/en/generated-column-index-optimizations.html)" or virtual columns. So you don't need to explicitly set the value for the hash when inserting/updating (but you would need to use `table.hashed=MD5('$yourURL')` in the WHERE clause to allow the DBMS to use the index. Personally, if it were me, I'd be concerned about indexing the same page twice under different URLs - particularly where the query changes: ``` http://www.example.com/?r=32323 ``` and ``` http://www.example.com/?r=51515 ``` might refer to different content or may be the same - I'd also consider keeping a hash of the content. Upvotes: 1
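For reference, the 16-byte digest can also be computed on the application side before inserting or checking; the short sketch below assumes Python purely for illustration, since the question does not name the scraper's language:

```python
import hashlib

url = "http://www.example.com/?r=32323"
# 16 raw bytes, which fit a BINARY(16) column and match UNHEX(MD5(url))
# in MySQL as long as both sides hash the same UTF-8 bytes.
digest = hashlib.md5(url.encode("utf-8")).digest()

print(len(digest))   # 16
print(digest.hex())  # hex form, for comparison with MySQL's MD5()

# An existence check with a parameterized query might look like:
# cursor.execute("SELECT 1 FROM urls WHERE url_hash = %s LIMIT 1", (digest,))
```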
2018/03/20
660
2,523
<issue_start>username_0: I need to add `Codable` to two classes. One is `CLLocationCoordinate2D` and the second one `CLCircularRegion`. I have no issue with `CLLocationCoordinate2D` and it works by doing that: ``` extension CLLocationCoordinate2D: Codable { public enum CodingKeys: String, CodingKey { case latitude case longitude } public func encode(to encoder: Encoder) throws { var container = encoder.container(keyedBy: CodingKeys.self) try container.encode(latitude, forKey: .latitude) try container.encode(longitude, forKey: .longitude) } public init(from decoder: Decoder) throws { let values = try decoder.container(keyedBy: CodingKeys.self) latitude = try values.decode(Double.self, forKey: .latitude) longitude = try values.decode(Double.self, forKey: .longitude) } } ``` But I have a lot of issues trying to do the same thing with `CLCircularRegion`. Since `radius` and `center` are read-only properties I cannot really create them in the same way. The other problem is why I was able to create `public init` in `CLLocationCoordinate2D` and for `Codable` extension in `CLCircularRegion` I'm getting error: [![enter image description here](https://i.stack.imgur.com/Z02Tl.png)](https://i.stack.imgur.com/Z02Tl.png) I see the point of public, but required? And of course `required` will fail beacouse it's extension not a class. So one solution for this issue will be create abstraction class that will contain all fields and build region in initializer I know that, but there is any other way to extend `Codable` to existing class that's not support by Apple yet?<issue_comment>username_1: I moved CLCircularRegion to computed variable, and it automatically works with Codable now: ``` var region: CLCircularRegion { return CLCircularRegion(center: self.coordinates, radius: 10.0, identifier: identifier) } ``` Upvotes: 2 <issue_comment>username_2: Inspired by [Artem answer](https://stackoverflow.com/a/52354550/2064585) I solved my issue by storing in the object latitude and longitude in `Double` format. The `CLLocation` is then returned as computed property reading the two stored values: ``` private var latitude: Double = 0.0 private var longitude: Double = 0.0 var location: CLLocation { return CLLocation(latitude: latitude, longitude: longitude) } ``` In your case with `CLCircularRegion` you will also need to save `radius` and `center` in a similar fashion. Upvotes: 4 [selected_answer]
2018/03/20
378
1,444
<issue_start>username_0: I have a "Custom VPC" with two subnets (A private subnet & a public subnet). In ECS, when I try and create a cluster with "Custom VPC" and both subnets selected. The EC2 instances launched, are by default launched in the private subnets. 1. Why?, Is there a way to change the instance subnet after it is launched? 2. Should ECS cluster only have public subnets of a VPC? and launch an instance on the private subnet through the 'Launch instance' wizard? 3. Also, these instances have a public DNS even when the private subnet 'auto assign public IP' is disabled. Why?
2018/03/20
589
1,920
<issue_start>username_0: I want my macro to read from a list of phone numbers on another sheet, count the rows, then construct an array from A1:An...from the row count. The array will always start at A1. ``` Dim lrow As Variant lrow = Cells(Rows.Count, 1).End(xlUp).Row Dim PhonesArray as Variant PhonesArray = " [A1:A" & lrow & "].Value2 ``` I'm unable to pass the upper boundary (lrow) to PhonesArray. it should run as ``` PhonesArray = [A1:A**40**].Value2 ``` The lrow variable is calculating correctly, but I'm unable to pass it into the array construction. A static range works as expected. Any assistance is greatly appreciated and apologies in advance if the issue has been addressed before. I was unable to find a solution through my search.<issue_comment>username_1: You require `Range("a1:a" & lRow).value2` I believe Upvotes: 3 [selected_answer]<issue_comment>username_2: In general, it's a bad idea to refer to your cells with the `[A1]` type shorthand and it doesn't support being put together in a string like that. Use `Range()` instead and you'll have a few options: * `Range("A1:A" & lRow)` * `Range("A1").Resize(lrow,1)` Given the code you've provided, I'd scrap the `lrow` variable and just use this: ``` Dim PhonesArray As Variant With ThisWorkbook.Worksheets("Sheet1") PhonesArray = Range(.Range("A1"), .Cells(.Rows.Count, 1).End(xlUp)).Value2 End With ``` Upvotes: 3 <issue_comment>username_3: I generally prefer to reference the parent worksheet and define the start and stop of the range with cells. ``` Dim lrow As long, phonesArray as variant with worksheets("your_worksheet's_name") lrow = .cells(.rows.Count, 1).End(xlUp).Row phonesArray = .range(.cells(1, "A"), .cells(lrow, "A")).Value2 debug.print lbound(phonesArray, 1) & " to " & ubound(phonesArray, 1) debug.print lbound(phonesArray, 2) & " to " & ubound(phonesArray, 2) end with ``` Upvotes: 1
2018/03/20
365
1,237
<issue_start>username_0: I would like to combine these two foreach statements together. I've seen a few solutions around here, but nothing really works for me. This is my username list from database. ``` $digits = [1,2,3,4]; $results = $db->table($usernames) ->where('memberID', $mID)->limit(10) ->getAll(); foreach ($results as $result) { echo $result->userName; } ``` I tried this: ``` $combined = array_merge($digits, $results); foreach (array_unique($dogrularVeSiklar) as $single) : { ?> { echo $single.' '; echo $results->userName; }, } ```<issue_comment>username_1: You don't show what `$dogrularVeSiklar` is or where you get it, but as an example; combine into `$key => $value` pairs and `foreach` exposing the key and value: ``` $combined = array_combine($digits, $results); foreach ($combined as $digit => $result) { echo $digit . ' ' . $result; } ``` Upvotes: 1 [selected_answer]<issue_comment>username_2: foreach operates on only one array at a time. The way your array is structured, you can use [array\_combine()](http://php.net/manual/en/function.array-combine.php) function to combine them into an array of key-value pairs then foreach that single array Upvotes: -1
2018/03/20
633
2,573
<issue_start>username_0: I want to bind my custom `TreeViewItem` to `IsExpanded`. The normal way without a custom TreeView Item would look like this. ``` <Setter Property="IsExpanded" Value="{Binding IsExpanded}"></Setter> ``` But I want to bind it to my own TreeViewItem for example named `CoolTreeItemModel`. CoolTreeItemModel could look like this: ``` public class CoolTreeItemModel : XY { public LocalTreeItemModel() { TreeViewItems = new List(); } public List TreeViewItems { get; set; } public SomeType IsValid { get; set; } public bool IsExpanded { get; set; } } ``` How CoolTreeItemModel is bound: ``` ``` How can I bind to the `IsExpanded` property of `CoolTreeItemModel` ? Thanks for your help.<issue_comment>username_1: The "normal way" is applicable in this case. Each `CoolTreeItemModel` will be implicitly wrapped in a `TreeViewItem` container so you should be able to bind to your `IsExpanded` property. You may want to set the `Mode` of the binding to `TwoWay` though: ``` <Setter Property="IsExpanded" Value="{Binding IsExpanded, Mode=TwoWay}"></Setter> ``` Upvotes: 2 [selected_answer]<issue_comment>username_2: Binding requires the use of dependency properties. [Dependency Properties On MSDN](https://learn.microsoft.com/en-us/dotnet/framework/wpf/advanced/dependency-properties-overview) You would need to define a dependency property like so: ``` public ClassName { public static readonly DependencyProperty IsExpandedProperty = DependencyProperty.Register("IsExpanded", typeof(bool), typeof(ClassName)); public bool IsExpanded { get { return (bool)GetValue(IsExpandedProperty); } set { SetValue(IsExpandedProperty, value); } } } ``` From there you can then include the namespace of your class in the top of your xaml and then bind to `IsExpanded` like normal ``` ``` I Would like to raise one concern of my though. When developers new to XAML/WPF start developing their own controls, the distinction between view data and business data gets muddled. If this is your own control to be consumed by others, there is an entire style template that may need to be created and used. All of this binding should be taking place in this style template, following the example set forth by Microsoft. Also a custom control should have no view model or defined data context as it is expected to be provided by the consumer, meaning a lot of your logic is going to be in the code behind, which is okay as a custom control is only view related and shouldn't have any business logic in it. Upvotes: -1
2018/03/20
402
1,244
<issue_start>username_0: I am making a website about pets. I crafted the pictures that were needed but .. When I put the banner , it does not show up. When inspecting the page - it is not there. If I put it right after the body tag , it will show , also when inspected. But it won't be what I want. ``` [Pets](index.html) * [Home](index.html) * [Pets](pets.html) * [Rates](rates.html) * [Contacts](contacts.html) ``` I don't get what the problem is.<issue_comment>username_1: You can't put a div directly in ul. Assuming you want the banner inherit the nav class, you can add a parent div that contain your banner and the list like this : ``` [Pets](index.html) * [Home](index.html) * [Pets](pets.html) * [Rates](rates.html) * [Contacts](contacts.html) ``` otherwise if you are not bothered by the bullets or if you hide them with css you can put the banner between `-` : ``` * [Pets](index.html) * [Home](index.html) * [Pets](pets.html) * [Rates](rates.html) * [Contacts](contacts.html) ``` Upvotes: -1 <issue_comment>username_2: There are two problems with this I can see. ``` * [Home](index.html) * [Pets](pets.html) * [Rates](rates.html) * [Contacts](contacts.html) ![](testbanner.png) ``` Upvotes: 1 [selected_answer]
2018/03/20
787
2,716
<issue_start>username_0: I've looked at a bunch of questions here and read the docs over and over, however this just doesn't seem to want to work no matter what I do. This is supposed to return one thing if X is true and return something else if it's not. It's inside a map function because I need this to be done for multiple things at once. ``` function ContentProcessing(props) { return ( props.content.map(content => { {content.type === "card" ? ( ) : ( )} }) ); } ``` both and return one string ``` However I get the error ./src/App.js Syntax error: /src/App.js: Unexpected token, expected , (79:13) 77 | 78 | props.content.map(content => { > 79 | {content.type === "card" ? ( | ^ 80 | 81 | ) ? ( 82 | ``` I don't get why this isn't working.<issue_comment>username_1: Issues: 1- Use `{}` to put expressions inside jsx (to put map inside div). 2- you are using `{}` means block body of `arrow function`, so you need to use return inside the function body, otherwise by default map returns undefined. 3- You are using `{}` twice, so 2nd `{}` will be treated as object and `content.type` will be treated as key and that key is not valid, thats why you are getting error. 4- Forgot to define the key on elements. Use this: ``` return ( { props.content.map(content => content.type === "card" ? ( ) : ( ) )} ); ``` Upvotes: 4 [selected_answer]<issue_comment>username_2: Your syntax for the ternary operator is wrong. You have `condition ? a ? b`. The correct syntax is `condition ? a : b`. Try ``` function ContentProcessing(props) { return ( {props.content.map(content => content.type === "card" ? () : () )} ); } ``` Upvotes: 1 <issue_comment>username_3: A couple of things are wrong I believe. You didn't add the curly braces in the first div. Inside the map you added two times the curly braces so you either remove one or add a return statement. You also added to "?" (the second one should be ":"). This should work: ``` function ContentProcessing(props) { return ( {props.content.map(content => content.type === "card" ? : )} ); } ``` You can also add if else statements inside the map if you add braces: ``` function ContentProcessing(props) { return ( {props.content.map((content) => { if (content.type === "card") { return (); } return (); })} ); } ``` Upvotes: 2 <issue_comment>username_4: Multiple issues with the code. ``` return ( {props.content.map(content => content.type === "card" ? ( ) : ( ) )} ); ``` Extra brackets removed. Conditional operator syntax was wrong.`expression ? expression : expression` Upvotes: 0
2018/03/20
1,665
5,913
<issue_start>username_0: I am using Puppeteer in a Node.js module. I retrieve an HTML element with an XPath selector and need to extract the text property. Currently I use: ```js // Get the element let ele = await element.$x(`//div[@class="g"][${i}]/div/div/h3/a`); // Get the text property const title = await(await ele[0].getProperty('text')).jsonValue(); ``` Is there any way to do this without being so verbose?<issue_comment>username_1: I prefer to use the `eval()` function so I can use less verbose code: ``` page.eval(() => { let element = document.querySelector('#mySelector') return element.innerText }).then(text => { console.log(text) }) ``` You can also pass an element you previously grabbed like your `ele` var: **Using Promise syntax** ``` page.eval(element => { return element.innerText }, ele).then(text => { // Do whatever you want with text }) ``` **Using async/await syntax** ``` const text = await page.eval(element => element.innerText), ele) // Do whatever you want with text ``` Upvotes: 2 [selected_answer]<issue_comment>username_2: ...or write a tiny helper function. ``` public async GetProperty(element: ElementHandle, property: string): Promise { return await (await element.getProperty(property)).jsonValue(); } ``` use: ``` let inner = await GetProperty(ele, 'innerHTML'); ``` Upvotes: 3 <issue_comment>username_3: I would rather extend ElementHandle for missing methods like: ``` // [email protected] let { ElementHandle } = require( "puppeteer/lib/ExecutionContext" ); // [email protected] if ( ElementHandle === undefined ) { ElementHandle = require( "puppeteer/lib/JSHandle" ).ElementHandle; } /** * Set value on a select element * @param {string} value * @returns {Promise} \*/ ElementHandle.prototype.select = async function( value ) { await this.\_page.evaluateHandle( ( el, value ) => { const event = new Event( "change", { bubbles: true }); event.simulated = true; el.querySelector( `option[value="${ value }"]` ).selected = true; el.dispatchEvent( event ); }, this, value ); }; /\*\* \* Check if element is visible in the DOM \* @returns {Promise} \*\*/ ElementHandle.prototype.isVisible = async function(){ return (await this.boundingBox() !== null); }; /\*\* \* Get element attribute \* @param {string} attr \* @returns {Promise} \*/ ElementHandle.prototype.getAttr = async function( attr ){ const handle = await this.\_page.evaluateHandle( ( el, attr ) => el.getAttribute( attr ), this, attr ); return await handle.jsonValue(); }; /\*\* \* Get element property \* @param {string} prop \* @returns {Promise} \*/ ElementHandle.prototype.getProp = async function( prop ){ const handle = await this.\_page.evaluateHandle( ( el, prop ) => el[ prop ], this, prop ); return await handle.jsonValue(); }; ``` As soon as you import this module once in you code you can play with the handles as follows: ``` const elh = await page.$( `#testTarget` ); console.log( await elh.isVisible() ); console.log( await elh.getAttr( "class" ) ); console.log( await elh.getProp( "innerHTML" ) ); ``` Upvotes: 2 <issue_comment>username_4: My way ``` async function getVisibleHandle(selector, page) { const elements = await page.$$(selector); let hasVisibleElement = false, visibleElement = ''; if (!elements.length) { return [hasVisibleElement, visibleElement]; } let i = 0; for (let element of elements) { const isVisibleHandle = await page.evaluateHandle((e) => { const style = window.getComputedStyle(e); return (style && style.display !== 'none' && style.visibility !== 'hidden' && style.opacity !== '0'); }, element); var visible 
= await isVisibleHandle.jsonValue(); const box = await element.boxModel(); if (visible && box) { hasVisibleElement = true; visibleElement = elements[i]; break; } i++; } return [hasVisibleElement, visibleElement]; } ``` Usage ``` let selector = "a[href='https://example.com/']"; let visibleHandle = await getVisibleHandle(selector, page); if (visibleHandle[1]) { await Promise.all([ visibleHandle[1].click(), page.waitForNavigation() ]); } ``` Upvotes: 0 <issue_comment>username_5: In the accepted answer `page.eval()` is mentioned, however, with puppeteer such a method has never existed and I think what is really meant is in fact [page.evaluate()](https://github.com/GoogleChrome/puppeteer/blob/master/docs/api.md#pageevaluatepagefunction-args). However, Using `page.evaluate()` requires you to split your operation into two parts (one for getting the element, one to select the value). > > Is there any way to do this not as verbose? > > > In such cases, [page.$eval()](https://github.com/GoogleChrome/puppeteer/blob/master/docs/api.md#pageevalselector-pagefunction-args-1) appears to be more appropriate as it allows you to directly pass your selector as argument, thus reducing the number of operations or variable you need to introduce: Now in your particular case, you want to perform the `$eval` not just on the whole page but on an `ElementHandle`, which is possible since [May 9, 2018](https://github.com/GoogleChrome/puppeteer/blob/master/docs/api.md#pageevalselector-pagefunction-args-1) via [elementHandle.$eval()](https://github.com/GoogleChrome/puppeteer/blob/master/docs/api.md#elementhandleevalselector-pagefunction-args-1): > > This method runs document.querySelector within the element and passes it as the first argument to pageFunction. > > > [![enter image description here](https://i.stack.imgur.com/TrwKC.png)](https://i.stack.imgur.com/TrwKC.png) This translates to your example as follows (here using a css selector instead of xpath): ``` await elementHandle.$eval('/div/div/h3/a', el => el.text); ``` Upvotes: 2
2018/03/20
1,657
5,897
<issue_start>username_0: My html page doesn't show the edits that should be made with my css page. I have tried to fix the problem for over an hour now and done everything I can think of. The page is linked with ``` link href="stlyes/css-ba.css" type="text/css" rel="stylesheet" ``` Which i believe is correct. I've checked and rechecked the name and any other things. It is saved in the folder styles as css-ba.css.
2018/03/20
716
2,207
<issue_start>username_0: I am trying out multiclass semantic segmentation in Keras. Right now I'm using the Unet architecture, and have a model similar to this (but deeper): ``` inputs = Input(shape=(512,512,3)) # 128 down1 = Conv2D(32, (3, 3), activation='relu', padding='same')(inputs) down1 = BatchNormalization()(down1) down1 = Dropout(0.1)(down1) down1 = Conv2D(32, (3, 3), padding='same', activation='relu')(down1) down1 = BatchNormalization()(down1) down1_pool = MaxPooling2D((2, 2))(down1) center = Conv2D(64, (3, 3), padding='same', activation='relu')(down1_pool) center = BatchNormalization()(center) center = Dropout(0.1)(center) center = Conv2D(64, (3, 3), padding='same', activation='relu')(center) center = BatchNormalization()(center) # center up1 = concatenate([Conv2DTranspose(32, (2, 2), strides=(2, 2), padding='same')(center), down1], axis=3) up1 = Conv2D(32, (3, 3), padding='same', activation='relu')(up1) up1 = BatchNormalization()(up1) up1 = Dropout(0.1)(up1) up1 = Conv2D(32, (3, 3), padding='same', activation='relu')(up1) up1 = BatchNormalization()(up1) # 128 classify = Conv2D(3, (1, 1), activation='softmax')(up1) model = Model(inputs=inputs, outputs=classify) model.compile(optimizer=Adam(lr=lr), loss='categorical_crossentropy', metrics=[losses.dice_coeff]) ``` My dataset consists of 680k images (512, 512, 3) and 680k corresponding labels. The labels are one-hot encoded and have shape (512, 512, 3), i.e. 3 classes. And then my question(s): Is this the right way to set up my model? Or should I use 'sigmoid' activation and 'binary_crossentropy'?<issue_comment>username_1: If your labels are binary, go with the sigmoid activation; if it is the other way around, i.e. one-hot encoded the way you are implementing it, then softmax should be used as the activation. Upvotes: 1 <issue_comment>username_2: I have the same problem. I didn't find a loss function that made my model converge, so I used 3 separate models, one for each label. With the dice loss function I had good results for each label. Now I am checking ways to unite all 3 models' predictions. In your model softmax is the right activation, and binary and categorical cross entropy are the same since your data is binary. Upvotes: 0
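To make the two answers above concrete, here is a minimal sketch of the two pairings they describe — softmax with categorical_crossentropy for mutually exclusive one-hot masks, and sigmoid with binary_crossentropy for independent per-pixel labels. It is not the poster's full U-Net: the network is trimmed to a single convolution, the standalone keras package is assumed, and the optimizer/metric choices are placeholders.

```python
from keras.models import Model
from keras.layers import Input, Conv2D

inputs = Input(shape=(512, 512, 3))
x = Conv2D(32, (3, 3), padding='same', activation='relu')(inputs)

# Case 1: every pixel belongs to exactly one of the 3 classes (one-hot masks).
softmax_out = Conv2D(3, (1, 1), activation='softmax')(x)
seg_model = Model(inputs, softmax_out)
seg_model.compile(optimizer='adam', loss='categorical_crossentropy',
                  metrics=['accuracy'])

# Case 2: the 3 channels are independent yes/no labels for each pixel.
sigmoid_out = Conv2D(3, (1, 1), activation='sigmoid')(x)
multilabel_model = Model(inputs, sigmoid_out)
multilabel_model.compile(optimizer='adam', loss='binary_crossentropy',
                         metrics=['accuracy'])
```

Since the question's masks are one-hot with exactly one class per pixel, the first pairing matches that setup.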
2018/03/20
2,296
6,760
<issue_start>username_0: This is part of an ongoing series of issues I'm having trying to condense a csv file with multiple rows for each client based on the number of medical services they received. For each service, they have a row. I've included the dataframe at the bottom. I'm trying to calculate how many times a client (identified with an ID\_profile number) got each type of service and add that to a column named for the type of service. So, if a client got 3 Early Intervention Services, I would add the number "3" to the "eisserv" column. Once that is done, I want to combine all the client rows into one. Where I'm getting stuck is populating 3 different columns with data based off one column. I am trying to iterate through the rows using some strings for the function to compare to. The function works, but for reasons I can't understand, all the strings change to "25" as the function works. ``` import pandas as pd df = pd.read_csv('fakeRWclient.csv') df['PrimaryServiceCategory'] = df['PrimaryServiceCategory'].map({'Referral for Health Care/Supportive Services': '33', 'Health Education/Risk reduction': '25', 'Early Intervention Services (Parts A and B)': '11'}) df['ServiceDate'] = pd.to_datetime(df['ServiceDate'], format="%m/%d/%Y") df['id_profile'] = df['id_profile'].apply(str) df['served'] = df['id_profile'] + " " + df['PrimaryServiceCategory'] df['count'] = df['served'].map(df['served'].value_counts()) eis = "11" ref = "33" her = "25" print("Here are the string values") print(eis) print(ref) print(her) df['herrserv']="" df['refserv']="" df['eisserv']="" for index in df.itertuples(): for eis in df['PrimaryServiceCategory']: df['eisserv'] = df['count'] for her in df['PrimaryServiceCategory']: df['herrserv'] = df['count'] for ref in df['PrimaryServiceCategory']: df['refserv'] = df['count'] print("Here are the string values") print(eis) print(ref) print(her) ``` Here is the output: ``` Here are the string values 11 33 25 Here are the string values 25 25 25 id_profile ServiceDate PrimaryServiceCategory served count herrserv \ 0 439 2017-12-05 25 439 25 1 1 1 444654 2017-01-25 25 444654 25 2 2 2 56454 2017-12-05 33 56454 33 1 1 3 56454 2017-01-25 25 56454 25 2 2 4 444654 2017-03-01 25 444654 25 2 2 5 56454 2017-01-01 25 56454 25 2 2 6 12222 2017-01-05 11 12222 11 1 1 7 12222 2017-01-30 25 12222 25 3 3 8 12222 2017-03-01 25 12222 25 3 3 9 12222 2017-03-20 25 12222 25 3 3 refserv eisserv 0 1 1 1 2 2 2 1 1 3 2 2 4 2 2 5 2 2 6 1 1 7 3 3 8 3 3 9 3 3 ``` Why do the string values switch? And is this even the right function to do what I'm hoping to do?<issue_comment>username_1: You can use [`pandas.get_dummies`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.get_dummies.html) after mapping your integers to categories, then merge with your dataframe. You can add a 'count' column summing the 3 category counts afterwords. 
``` df = pd.DataFrame({'id_profile': [439, 444654, 56454, 56454, 444654, 56454, 12222, 12222, 12222, 12222], 'ServiceDate': ['2017-12-05', '2017-01-25', '2017-12-05', '2017-01-25', '2017-03-01', '2017-01-01', '2017-01-05', '2017-01-30', '2017-03-01', '2017-03-20'], 'PrimaryServiceCategory': [25, 25, 33, 25, 25, 25, 11, 25, 25, 25]}) d = {11: 'eis', 33: 'ref', 25: 'her'} df['Service'] = df['PrimaryServiceCategory'].map(d) df = df.set_index('id_profile')\ .join(pd.get_dummies(df.drop('PrimaryServiceCategory', 1), columns=['Service'])\ .groupby(['id_profile']).sum()) # ServiceDate PrimaryServiceCategory Service Service_eis \ # id_profile # 439 2017-12-05 25 her 0 # 12222 2017-01-05 11 eis 1 # 12222 2017-01-30 25 her 1 # 12222 2017-03-01 25 her 1 # 12222 2017-03-20 25 her 1 # 56454 2017-12-05 33 ref 0 # 56454 2017-01-25 25 her 0 # 56454 2017-01-01 25 her 0 # 444654 2017-01-25 25 her 0 # 444654 2017-03-01 25 her 0 # Service_her Service_ref # id_profile # 439 1 0 # 12222 3 0 # 12222 3 0 # 12222 3 0 # 12222 3 0 # 56454 2 1 # 56454 2 1 # 56454 2 1 # 444654 2 0 # 444654 2 0 ``` Upvotes: 2 <issue_comment>username_2: I have made changes to your existing code only. ``` import pandas as pd df = pd.read_csv('fakeRWclient.csv') df['PrimaryServiceCategory'] = df['PrimaryServiceCategory'].map({'Referral for Health Care/Supportive Services': '33', 'Health Education/Risk reduction': '25', 'Early Intervention Services (Parts A and B)': '11'}) df['ServiceDate'] = pd.to_datetime(df['ServiceDate'], format="%m/%d/%Y") df['id_profile'] = df['id_profile'].apply(str) print(df.groupby('id_profile').PrimaryServiceCategory.count()) ``` Above code will give output like this: ``` id_profile 439 1 12222 4 56454 3 444654 2 ``` Upvotes: 1 <issue_comment>username_3: The values of `eis`, `ref` and `her` switch to "25" because you are looping over the variable `PrimaryServiceCategory`, and the last value in that serie is "25". You are using `eis`, `ref` and `her` as the names of the iterator variable, so they change in every loop. I think this is an inefficient way to do it. It's better if you use groupby and transform: ``` df['count'] = df.groupby(['id_profile','PrimaryServiceCategory']).transform('count') ``` Upvotes: 2 [selected_answer]
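As a follow-up sketch that none of the answers spell out: once the per-service counts exist, the one-row-per-client layout with the question's eisserv/herrserv/refserv columns can be produced with a pivot_table. The code below assumes the df from the question, where PrimaryServiceCategory has already been mapped to the strings '11', '25' and '33'; the intermediate column name service_col is made up.

```python
import pandas as pd

# Map the category codes used in the question to the target column names.
col_names = {'11': 'eisserv', '25': 'herrserv', '33': 'refserv'}
df['service_col'] = df['PrimaryServiceCategory'].map(col_names)

# One row per client, one column per service type, values = number of services.
wide = (df.pivot_table(index='id_profile', columns='service_col',
                       values='ServiceDate', aggfunc='count', fill_value=0)
          .reset_index())
print(wide.head())
```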
2018/03/20
1,803
7,062
<issue_start>username_0: I have made a user interface to fetch data from a MySQL table and visualize it. It is running on a bokeh server. My users connect remotely to the server using their browser (firefox). This works perfectly fine: I simply import the table into a pandas dataframe. My users also need to download the table as excel. This means I cannot use the [export\_csv](https://github.com/bokeh/bokeh/tree/master/examples/app/export_csv) example which is pure javascript. I have no experience with JavaScript. **All I want is to transfer a file from the directory where my main.py is to the client side.** The technique I have tried so far is to join a normal `on_click` callback to a button, export the information I need to 'output.xls', then change a parameter from a dummy glyph which in turn runs a Javascript code. I got the idea from [Bokeh widgets call CustomJS and Python callback for single event?](https://stackoverflow.com/questions/44212250/bokeh-widgets-call-customjs-and-python-callback-for-single-event) . Note I haven't set the alpha to 0, so that I can see if the circle is really growing upon clicking the download button. At the bottom of my message you can find my code. You can see I have tried with both XMLHttpRequest and with Fetch directly. In the former case, nothing happens. In the latter case I obtain a file named "mydata.xlsx" as expected, however it contains **only** this raw text: `404: Not Found404: Not Found`. **Code:** ``` p = figure(title='mydata') #download button download_b = Button(label="Download", button_type="success") download_b.on_click(download) #dummy idea from https://stackoverflow.com/questions/44212250/bokeh-widgets-call-customjs-and-python-callback-for-single-event dummy = p.circle([1], [1],name='dummy') JScode_xhr = """ var filename = p.title.text; filename = filename.concat('.xlsx'); alert(filename); var xhr = new XMLHttpRequest(); xhr.open('GET', '/output.xlsx', true); xhr.responseType = 'blob'; xhr.onload = function(e) { if (this.status == 200) { var blob = this.response; alert('seems to work...'); if (navigator.msSaveBlob) { navigator.msSaveBlob(blob, filename); } else { var link = document.createElement("a"); link = document.createElement('a'); link.href = URL.createObjectURL(blob); window.open(link.href, '_blank'); link.download = filename; link.target = "_blank"; link.style.visibility = 'hidden'; link.dispatchEvent(new MouseEvent('click')); URL.revokeObjectURL(url); } } else { alert('Ain't working!'); } }; """ JScode_fetch = """ var filename = p.title.text; filename = filename.concat('.xlsx'); alert(filename); fetch('/output.xlsx').then(response => response.blob()) .then(blob => { alert(filename); //addresses IE if (navigator.msSaveBlob) { navigator.msSaveBlob(blob, filename); } else { var link = document.createElement("a"); link = document.createElement('a') link.href = URL.createObjectURL(blob); window.open(link.href, '_blank'); link.download = filename link.target = "_blank"; link.style.visibility = 'hidden'; link.dispatchEvent(new MouseEvent('click')) URL.revokeObjectURL(url); } return response.text(); }); """ dummy.glyph.js_on_change('size', CustomJS(args=dict(p=p), code=JScode_fetch)) plot_tab = Panel(child=row(download_b,p), title="Plot", closable=True, name=str(self.test)) def download(): writer = pd.ExcelWriter('output.xlsx') data.to_excel(writer,'data') infos.to_excel(writer,'info') dummy = p.select(name='dummy')[0] dummy.glyph.size = dummy.glyph.size +1 ```<issue_comment>username_1: `bokeh serve` creates just a few predefined 
handlers to serve some static files and a WebSocket connection - by default, it doesn't have anything to serve files from the root of the project. Instead of using the one-file format, you can try using the [directory format](https://bokeh.pydata.org/en/latest/docs/user_guide/server.html#directory-format), save your files to the `static` directory and download them from `/static/`. One downside of this approach is that you still have to write that convoluted code just to make your backend create the file before a user downloads it. The best solution would be to go one step further and embed Bokeh Server as a library into your main application. Since you don't have any non-Bokeh code, the simplest way would be to go with Tornado ([an example](https://github.com/bokeh/bokeh/blob/0.12.14/examples/howto/server_embed/tornado_embed.py)). `bokeh.server.server.Server` accepts an `extra_patterns` argument - you can add a handler there to dynamically create Excel files and serve them from, say, `/data/`. After all that, the only thing that you need in your front-end is a single link to the Excel file. Upvotes: 1 <issue_comment>username_2: Trying out username_1's answer, I found what the issue was. The javascript code I named `JScode_fetch` is *almost* correct, however I get a 404 because it is not pointing to the right path. I made my application in the directory format: I changed my .py file to `main.py`, placed it into a folder called `app`, and changed this one line of code in `JScode_fetch`: ``` fetch('/app/static/output.xlsx', {cache: "no-store"}).then(response => response.blob()) [...] ``` You can see the problem was that it was trying to access `localhost:5006/output.xlsx`, instead of `localhost:5006/app/output.xlsx`. As it is in directory format, the right link is now `localhost:5006/app/static/output.xlsx` to account for the `static` directory. I also changed a few lines in the `download` function: ``` def download(): dirpath = os.path.join(os.path.dirname(__file__),'static') writer = pd.ExcelWriter(os.path.join(dirpath,'output.xlsx')) data.to_excel(writer,'data') infos.to_excel(writer,'info') dummy = p.select(name='dummy')[0] dummy.glyph.size = dummy.glyph.size +1 ``` Now it is working flawlessly! edit: I have added `, {cache: "no-store"}` within the `fetch()` call. Otherwise the browser thinks the file is the same if you download a different dataframe's Excel file while reusing the same `output.xlsx` filename. More info [here](https://hacks.mozilla.org/2016/03/referrer-and-cache-control-apis-for-fetch/). Upvotes: 3 [selected_answer]
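For completeness, here is a rough sketch of the embedding approach the first answer mentions: running the Bokeh server as a library and registering an extra Tornado pattern so generated Excel files can be fetched from /data/. It has not been tested against the poster's app; the handler names and the downloads folder are made up.

```python
import os
import tornado.web
from bokeh.application import Application
from bokeh.application.handlers.function import FunctionHandler
from bokeh.server.server import Server

DOWNLOAD_DIR = os.path.join(os.path.dirname(__file__), "downloads")  # assumed folder

def make_doc(doc):
    # Build the usual Bokeh document here: figures, the download button, callbacks.
    pass

# Serve anything written into DOWNLOAD_DIR under /data/<filename>.
extra_patterns = [(r"/data/(.*)", tornado.web.StaticFileHandler,
                   {"path": DOWNLOAD_DIR})]

server = Server({"/app": Application(FunctionHandler(make_doc))},
                extra_patterns=extra_patterns, port=5006)
server.start()

if __name__ == "__main__":
    server.io_loop.start()
```

With this in place, the button callback can write output.xlsx into DOWNLOAD_DIR and the front-end only needs a link (or fetch) to /data/output.xlsx.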
2018/03/20
2,574
7,844
<issue_start>username_0: I have a small test page for Brython related tests and recently added xml.etree.Elementree module there, but it doesn't work for some reason. I've following code (actually there is more stuff but I removed the irrelevant parts): ``` import sys from browser import alert, document as docu from browser import ajax from xml.etree import ElementTree as ET def ajaxReceive(req): alert("Input value: " + docu["numinput"].value) alert('Ajax response: \n %s' % req.text ) if req.status == 200 or req.status == 0: d = docu['messagebox'] d.clear() r = ET.fromstring(req.text) #n = r.findall('./person/name') #a = r.findall('./person/age') #d.text = 'Dude %s is %s old.' % (n,a) else: docu['messagebox'] <= 'error: ' + req.text def ajaxSend(): req = ajax.ajax() url = '/bryt/' x = 1 y = 2 z = docu['numinput'] req.open('POST', url, True) req.bind('complete', ajaxReceive) req.set\_header('content-type', 'application/x-www-form-urlencoded' ) req.send( { 'action': 'calc', 'x': x, 'y': y, 'z': z.value }) docu['ajaxbutton'].bind('click', ajaxSend ) d = docu['messagebox'] d.clear() d.text = 'ready' Ajax run ``` And at server side it only adds up 3 to given number. The problem is XML-formatted Ajax-response that is being received. It comes in in clear XML, but even building a etree root-element by calling .fromstring() funtion, it tracebacks as follows: ``` Error 500 means that Python module pyexpat was not found at url http://example.com/bryt/pyexpat.py brython.js:7171:1 Error 500 means that Python module pyexpat was not found at url http://example.com/bryt/pyexpat/__init__.py brython.js:7171:1 Error 500 means that Python module pyexpat was not found at url http://example.com/js/brython/Lib/site-packages/pyexpat.py brython.js:7171:1 Error 500 means that Python module pyexpat was not found at url http://example.com/js/brython/Lib/site-packages/pyexpat/__init__.py brython.js:7171:1 Error for module xml.parsers.expat brython.js:7242:21 Error: Stack trace: _b_.ImportError.$factory@http://example.com/js/brython/brython.js line 6466 > eval:49:371 import_hooks@http://example.com/js/brython/brython.js:11605:7 $B.$__import__@http://example.com/js/brython/brython.js:7430:33 $B.$import@http://example.com/js/brython/brython.js:7460:43 $module<@http://example.com/js/brython/brython.js line 7242 > eval:14:9 @http://example.com/js/brython/brython.js line 7242 > eval:1:16 run_py@http://example.com/js/brython/brython.js:7242:1 exec_module@http://example.com/js/brython/brython.js:7276:1 cl_method@http://example.com/js/brython/brython.js:4729:43 import_hooks@http://example.com/js/brython/brython.js:11629:5 $B.$__import__@http://example.com/js/brython/brython.js:7430:33 $B.$import@http://example.com/js/brython/brython.js:7473:5 __init__205@http://example.com/js/brython/brython.js line 7242 > eval:5653:25 type.__call__@http://example.com/js/brython/brython.js:4674:20 factory@http://example.com/js/brython/brython.js:4741:47 XML194@http://example.com/js/brython/brython.js line 7242 > eval:5190:41 ajaxReceive3@http://example.com/js/brython/brython.js line 4294 > eval:176:32 @http://example.com/js/brython/brython.js line 7188 > eval:69:24 ajax.$factory/xmlhttp.onreadystatechange@http://example.com/js/brython/brython.js line 7188 > eval:161:13 brython.js:7243:1 16:21:17.002 args Array [ "No module named pyexpat" ] ``` And there has been something similar in the past [brython issue 613](https://github.com/brython-dev/brython/issues/613) where Pierre states that **there is no such thing as pure python pyexpat** (July 
2017). However the [Brython standard distribution](https://brython.info/static_doc/en/stdlib.html) lists Lib/xml/etree and expat.py - does it mean that it's still not available? Brython **Lib/xml/etree/ElementTree.py** lines 1511 and onwards starts with: ``` class XMLParser: def __init__(self, html=0, target=None, encoding=None): try: from xml.parsers import expat except ImportError: try: import pyexpat as expat except ImportError: raise ImportError( "No module named expat; use SimpleXMLTreeBuilder instead" ) ``` In my understanding it should succeed in the first import **from xml.parsers import expat**, but apparently it doesn't, and it then falls back to the pyexpat version that does not exist. So, the question is, has anyone else stumbled into the same problem and/or does anyone have a solution for this? --- Some additional (next day) observations: **Cloning** and **checking out a tag**, then **building** from the git repository does not really work as you would expect (no pun intended). ``` % git clone https://github.com/brython-dev/brython.git brython.git % cd brython.git/scripts brython.git/scripts% python3 ./make_dist.py /usr/bin Traceback (most recent call last): File "./make_dist.py", line 207, in run() File "./make_dist.py", line 88, in run import make_stdlib_list File "brython.git/scripts/make_stdlib_list.py", line 53, in with open(os.path.join(static_doc_folder,lang,'stdlib.html'), 'w', encoding="utf-8") as out: FileNotFoundError: [Errno 2] No such file or directory: 'brython.git/www/static_doc/en/stdlib.html' brython.git% ``` That is caused by missing directories: ``` brython.git/scripts% mkdir -p ../www/static_doc/{en,es,fr} brython.git/scripts% python3 ./make_dist.py ``` Very last build lines were: ``` adding xml.etree adding xml.etree.cElementTree adding xml.parsers adding xml.parsers.expat adding xml.sax ``` So maybe those are included. Once targets are created, they (apparently, not really sure) appear in the **brython.git/setup/data** directory, as release zip files and naked .js files for the live website. So I link that directory into my Apache httpd webroot. But that build did not solve the traceback problem. As a side note, for an old OpenSource fart like myself, I feel very alien in this source tree; this project is done in *Mouse camp* (Microsoft Windows), and even the one rare Makefile I managed to find does not work with GNU Make because of conflicting use of whitespace. Let alone regular INSTALL, README, setup, Makefile etc. files with the expected content. I'm literally reading sources and guessing how all this is supposed to work. But I guess that only tells that Python is truly a cross-platform language. For an "Open Source project", it's funny that its discussion is not for everyone: **Your application to join the [Google group brython](https://groups.google.com/forum/?fromgroups=#!forum/brython) has been refused**<issue_comment>username_1: Well, digging deeper, it appears that **Lib/xml/parsers/expat.py** contains: ``` """Interface to the Expat non-validating XML parser.""" import sys from pyexpat import * # provide pyexpat submodules as xml.parsers.expat submodules sys.modules['xml.parsers.expat.model'] = model sys.modules['xml.parsers.expat.errors'] = errors ``` I tried to comment out that pyexpat part and rebuild the packages; the traceback is now different. So there is no expat, no pyexpat and no ElementTree then. No XML in Ajax responses then. Upvotes: 1 [selected_answer]<issue_comment>username_2: The answer from the Issue is still relevant.
Expat is a C-library which can't be used. You'll need to make do with `window.DOMParser`. The preferred way to use `brython` is not via cloning the repo, but using pip: ``` $ pip install brython ``` and then ``` $ python3 -m brython install ``` in the directory where you are developing your app. This will copy all the necessary javascript files and create an example app from which you can start. Note that this is all described in the "nonexistent" [README.md](https://github.com/brython-dev/brython/blob/master/README.md) file. Upvotes: 1
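A possible replacement for the ElementTree call in the original ajaxReceive, using the browser's own DOMParser through Brython's window object as the last answer suggests. The person/name/age tag names are taken from the commented-out findall calls in the question; everything else is an untested sketch.

```python
from browser import window

def ajaxReceive(req):
    if req.status == 200 or req.status == 0:
        parser = window.DOMParser.new()            # JS constructor via .new()
        dom = parser.parseFromString(req.text, "text/xml")
        names = dom.getElementsByTagName("name")   # a JS NodeList
        ages = dom.getElementsByTagName("age")
        if names.length and ages.length:
            print("Dude %s is %s old." % (names.item(0).textContent,
                                          ages.item(0).textContent))
```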
2018/03/20
287
1,023
<issue_start>username_0: I'm trying to figure out how to convert the following statement to a TryGetValue. Any help would be greatly appreciated. ``` filters.ContainsKey("companyCode") ? filters["companyCode"] : string.Empty; ``` I have code that I inherited that has about 25 of these that build one query statement.<issue_comment>username_1: You have to create an `if` statement where you assign the default in case the filter wasn't found: For C# 7: ``` if (!filters.TryGetValue("companyCode", out string value)) value = string.Empty; ``` And pre-C# 7: ``` string value; if (!filters.TryGetValue("companyCode", out value)) value = string.Empty; ``` Upvotes: 3 <issue_comment>username_2: With C#7 you can use this one-liner: ``` string result = filters.TryGetValue("companyCode", out string code) ? code : string.Empty; ``` If you can't use inline `out` variables you need two lines: ``` string code; string result = filters.TryGetValue("companyCode", out code) ? code : string.Empty; ``` Upvotes: 2
2018/03/20
642
2,243
<issue_start>username_0: I have the following method which works fine, however I was hoping to find ``` ... handleClick = (e) => { this.props.onClick() if(this.props.getAnswer(e)) { console.log('correct') //this.setState({ [this.state.button+ e.target.getAttribute('data-option')]: 'correct' }); if(e.target.getAttribute('data-option') == 1){ this.setState({ button1: 'correct' }) } if(e.target.getAttribute('data-option') == 2){ this.setState({ button2: 'correct' }) } if(e.target.getAttribute('data-option') == 3){ this.setState({ button3: 'correct' }) } if(e.target.getAttribute('data-option') == 4){ this.setState({ button4: 'correct' }) } } } ... ``` would it be possible to do something like this? ``` this.setState({ button[e.target.getAttribute('data-option')]: 'correct' }) ``` obviously this doesn't work but I didn't want to repeat unnecessary if statements. The "data-option" attr returns an integer so I wanted to use that to update the state property dynamically instead of button1, button2, button3 .....<issue_comment>username_1: Yes, and very nearly like you have: ``` this.setState({ ["button" + e.target.getAttribute('data-option')]: 'correct' }); ``` That assumes the value is 1, 2, 3, or 4 (whereas your current code doesn't). If it may not be, we need a guard: ``` var option = e.target.getAttribute('data-option'); if (option >= 1 && option <= 4) { // Coerces to number, NaN won't match this.setState({ ["button" + option]: 'correct' }); } ``` Upvotes: 2 <issue_comment>username_2: ``` this.setState({ ["button"+e.target.getAttribute('data-option')] : 'correct' }) ``` Upvotes: 1 <issue_comment>username_3: You can use ES6 template literal: ``` this.setState({ [`button${e.target.getAttribute('data-option')}`]: 'correct' }); ``` Edit: use brackets for computed property name, without brackets you would get SyntaxError: expected property name, got template literal. See [username_1](https://stackoverflow.com/users/157247/t-j-crowder)'s answer, it's the best one so far. Upvotes: 1
2018/03/20
367
1,386
<issue_start>username_0: I am trying to get an image that is inside the drawable folder by path, so I tried this, but it is not working: ``` String path = "android.resource:///" + BuildConfig.APPLICATION_ID + "/drawable/logo_ataturk"; File imgFile = new File(path); if (imgFile.exists()) { Bitmap myBitmap = BitmapFactory.decodeFile(imgFile.getAbsolutePath()); ImageView imgLogo = findViewById(R.id.imageView_logo); imgLogo.setImageBitmap(myBitmap); } ```<issue_comment>username_1: Resources are not files on the device. They are files on your developer machine, nothing more. You can replace all of this code with: ``` ImageView imgLogo = findViewById(R.id.imageView_logo); imgLogo.setImageResource(R.drawable.logo_ataturk); ``` Upvotes: 1 [selected_answer]<issue_comment>username_2: ``` Drawable drawable = getResources().getDrawable(getResources() .getIdentifier($name_of_the_image, "drawable", getPackageName())); ``` or ``` Bitmap myBitmap = BitmapFactory.decodeResource(getResources(), R.drawable.large_icon); imgLogo.setImageBitmap(myBitmap); ``` Upvotes: 0 <issue_comment>username_3: I found this to be the simplest solution: ``` Uri path = Uri.parse("android.resource://" + BuildConfig.APPLICATION_ID + "/drawable/logo_ataturk"); ImageView imgLogo = findViewById(R.id.imageView_logo); imgLogo.setImageURI(path); ``` Upvotes: 2
2018/03/20
433
1,606
<issue_start>username_0: Access 2016, VBA; I am trying to declare a public variable (User's window login), and it keeps "Forgetting" the variable between the function and the form\_load Module: ``` Option Compare Database Public vUser As String ``` Start Form: ``` Public Function fnUserID() As String Set Wshnetwork = CreateObject("wscript.Network") fnUserID = Wshnetwork.UserName End Function Public Function SetUserID() vUser = fnUserID End Function Private Sub Form_Load() Call SetUserID txtBox.Value = vUser End Sub ``` Now I have this exact code (same var names, etc) working in a second Access db. What am I missing that is making this one not work correctly?
2018/03/20
532
1,892
<issue_start>username_0: I was trying to teach myself verilog programming from "The Verilog HDL" book by <NAME>. In one of the exercises, they asked to generate a clock using structural verilog only (except for the $monitor part of course). I tried the following: ``` module clock(); wor clk; assign clk=0; initial begin $monitor($time,,"clk=%b", clk); #100 $finish; end assign #5 clk = ~clk; endmodule ``` Problem is, it works in iVerilog 0.9.7, but for version 10.0 and above, it does not work. I simply get an undefined value for clk! It does not seem like a bug in iVerilog, otherwise it would probably have been fixed in one of the 10.x releases. Is there any other way to get this working? Also, what is wrong with the current code (if any)?<issue_comment>username_1: Strange code, you are resolving `clk` drives using an or-gate behaviour. The first `assign` is constantly driving `0`. The second `assign` is inverting the resolved value. But what is the initial value of the second `wor` input? Wouldn't that second `assign` produce `X` in the first place (`X` ored with `0` would give you `X`)? Have you tried running it in the simulator, or at least drawing somewhere what hardware you want to get? It's like you're feeding an inverter with `0 or'ed with X`, which will produce `X`. If you want to model a clock you can: 1) convert the first `assign` into `initial begin clk = 0; end` 2) convert the second `assign` to an `always` 3) make `clk` a `reg` type If you want a synthesizable clock generator you would require a source of oscillations, a PLL, etc. Upvotes: 0 <issue_comment>username_2: This is messy code. Usually clock generation is done with regs, as one of the following: ``` reg clk; initial begin clk = 0; forever #5 clk = ~clk; end ``` or ``` always #5 clk = ~clk; initial clk = 0; ``` Upvotes: 2
2018/03/20
567
1,788
<issue_start>username_0: Somehow the form is not sending the main data, which is basically the name, email and number. I'd appreciate any/all help! Here is the PHP I used. ``` php $browser = $_SERVER['HTTP_USER_AGENT']; require_once('geoplugin.class.php'); $geoplugin = new geoPlugin(); //get user's ip address $geoplugin->locate(); if (!empty($_SERVER['HTTP_CLIENT_IP'])) { $ip = $_SERVER['HTTP_CLIENT_IP']; } elseif (!empty($_SERVER['HTTP_X_FORWARDED_FOR'])) { $ip = $_SERVER['HTTP_X_FORWARDED_FOR']; } else { $ip = $_SERVER['REMOTE_ADDR']; } $mess .= "---------------|OFFERS USER EMAIL SUBSCRIPTION|---------------\n"; $mess .= "Full Name : " . $_POST['cn'] . "\n"; $mess .= "Email Address : " . $_POST['em'] . "\n"; $mess .= "Number: " . $_POST['pn'] . "\n"; $mess .= "IP Address : " .$ip. "\n"; $mess .= "--------------------------------------------\n"; $mess .= "City: {$geoplugin->city}\n"; $mess .= "Region: {$geoplugin->region}\n"; $mess .= "Country Name: {$geoplugin->countryName}\n"; $mess .= "Country Code: {$geoplugin->countryCode}\n"; $mess .= "---------------------------------------------\n"; $from = "XOXLERT"; $to = "<EMAIL>"; $hi = mail($to, "OFFERS EMAIL SUBSCRIPTION | ".$ip , $mess); ?> ``` And here is the HTML form element. ``` #### Sign up now And Save BIG For Later Use On your Custom Logo & Website ``` What I don't understand is why I am receiving just the IP (of course those are pre-populated by the geo-plugin script) while the other form data is omitted.<issue_comment>username_1: Set the form method to POST. HTML forms default to the GET method. Upvotes: 1 <issue_comment>username_2: You need to specify the method type. You're trying to access via POST yet the default method is GET. ``` ``` Upvotes: 0
2018/03/20
937
3,567
<issue_start>username_0: I am just getting used to GitHub from the instructions I got as a beginner, and got stuck at the step below. I am wondering how to get the name of the local repo to be able to create remote repo with same name. So far, I have run: a) git init b) git add readme, c)git commit -m "first". In my directory, I see a .git directory, but don't know the name of the local repo.Thank you. 1. Create a remote repository on GitHub that has the same name as your local repository. 2. Add the remote repository (origin) URL to local repository. 3. Push local repostiory to GitHub. 4. Create a local branch, create/add/commit a new file. 5. Merge new local branch commit(s) into local master. 6. Push updated master branch to GitHub.<issue_comment>username_1: You can name your remote repo on GitHub anything you want. People just usually name it the same as the folder they are working in, assuming that is the name of their project. Upvotes: 1 <issue_comment>username_2: EDIT-Credit to @ username_1 in the comments for this. You can create a project/repository on GitHub from the command line, but only if want to install the "hub" utility. Turns out it's actually super useful, but, since your new, I would highly recommend doing it the regular, manual way first before installing bash aliases to extend functionality. Here's the link to the "hub" utility: <https://github.com/github/hub> From the man page (note this is only **AFTER** installing "hub"): ``` git create [NAME] [-p] [-d DESCRIPTION] [-h HOMEPAGE] Create a new public GitHub repository from the current git repository and add remote origin at "<EMAIL>:USER/REPOSITORY.git"; USER is your GitHub username and REPOSITORY is the current working directory name. To explicitly name the new repository, pass in NAME, optionally in ORGANIZATION/NAME form to create under an organization you're a member of. With -p, create a private repository, and with -d and -h set the repository's description and homepage URL, respectively. ``` Normally, through vanilla git, you can't do this. You need to have a project to push to first. However, you can just name the project whatever you want, and then just push to it. The name of the repository doesn't matter (ie, if you upload the "Foo" repo, you can change the folder name and it won't affect the remote repo and vice versa). All that matters is what's inside your repo, not what it's called. **Try the following:** 1. Get the url for the reporistory. [![picture of where to get repo url](https://i.stack.imgur.com/06e4k.png)](https://i.stack.imgur.com/06e4k.png) 2. `git remote add origin "remote repository URL"` That should tell your repo where you are pushing to. 3. `git remote -v` Verifies that it's a valid remote address. 4. `git push origin master` Push the commit you made and make sure it shows up on your repo. Upvotes: 0 <issue_comment>username_3: Ok, I assume you have your local git already created, if not you will have to do this on the terminal in the directory of your project: ``` git init git add . git commit -m "Initial commit" ``` Next in your github account create a new repo: [An image of how to create a repo on GitHub](https://i.stack.imgur.com/iGxVr.png) Then you go to de button that says "clone or download" and copy the link you wish (https or SSH) something like this `https://github.com/YourUsername/YourRepo.git` And by the end on the console you do this: ``` git remote add origin https://github.com/YourUsername/YourRepo.git git push -u origin master ``` Upvotes: 1 [selected_answer]
2018/03/20
971
3,664
<issue_start>username_0: still learning programming and have a question. im trying to download image from url and put it in cells, ive successfully done it with text but not with images. ``` func tableView(_ tableView: UITableView, numberOfRowsInSection section: Int) -> Int { return posts.count } func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell { let cell = tableView.dequeueReusableCell(withIdentifier:"searchCell", for: indexPath) as! CustomTableViewCell cell.titleField?.text = posts[indexPath.row].caption cell.descriptionField?.text = posts[indexPath.row].description cell.tagsField?.text = posts[indexPath.row].tags let photoUrl = posts[indexPath.row].photoUrl cell.SearchImage.image = ... ``` //url is stored in photoUrl ``` return cell } ``` }
2018/03/20
345
1,152
<issue_start>username_0: I have some elements created by v-for. How can I run the function only once by keeping 'for every element creation' as a condition . ``` ```<issue_comment>username_1: It's not so clear what exactly you want: > > > So, here's all possible solutions you want to implement. You can bind the method using [`once`](https://v2.vuejs.org/v2/guide/events.html#Event-Modifiers) modifier: ``` ``` Or, if you want not to change the content then you can use [`v-once`](https://v2.vuejs.org/v2/api/#v-once): ``` {{ neverChanged }} ``` But if you just need to use the function when it was created then call the function inside created property and do not bind the method anywhere else. ``` created() { if(condition) { this.yourMethod() } } ``` Upvotes: 1 <issue_comment>username_2: the easiest way, IMO, would be to make each of those elements a Vue Component & pass the function down as a prop. File One ``` ``` Custom Component ``` {{propValue.value}} export default { props: ['propFunction', 'propValue'], created(){ if (this.propValue.bool === true) { this.propFunction() } } } ``` Upvotes: 2
2018/03/20
872
2,418
<issue_start>username_0: I pulled down the qtpdf repository to check out and play with from here: <http://code.qt.io/cgit/qt-labs/qtpdf.git> As soon as I open the qtpdf.pro file, I get the following general messages: ``` Cannot read C:/.../build-qpdf-Desktop_Qt_5_10_0_MinGW_32bit-Debug/src/lib/pdfium.pri: No such file or directory Cannot read C:/.../build-qpdf-Desktop_Qt_5_10_0_MinGW_32bit-Debug/src/lib/freetype.pri: No such file or directory Project MESSAGE: perl -w C:\Qt\5.10.0\mingw53_32\bin\syncqt.pl -module QtPdf -version 5.9.0 -outdir "C:/.../build-qpdf-Desktop_Qt_5_10_0_MinGW_32bit-Debug" C:/.../qtpdf Project MESSAGE: perl -w C:\Qt\5.10.0\mingw53_32\bin\syncqt.pl -module QtPdfWidgets -version 5.9.0 -outdir "C:/.../build-qpdf-Desktop_Qt_5_10_0_MinGW_32bit-Debug" C:/.../qtpdf Project ERROR: Unknown module(s) in QT: pdfwidgets Project ERROR: Unknown module(s) in QT: pdf ``` Also when running "rebuild all" I get the following error message: ``` 16:15:33: Starting: "C:\Qt\Tools\mingw530_32\bin\mingw32-make.exe" qmake_all "Some of the required modules (!qnx:!uikit:!winphone:!winrt:!win32-g++:!integrity) are not available." "Skipped." ``` I've narrowed this down to !win32-g++ - If I remove that, it won't throw that error, but I don't understand what exactly this error means - I have googled around a lot but couldn't find anything that seemed reminiscent of this particular problem.<issue_comment>username_1: > > # Upstream PDFium has not been ported to various platforms yet. > requires(!qnx:!uikit:!winphone:!winrt:!win32-g++:!integrity) > > > The module is not working on `win32-g++ (mingw32)`. Upvotes: 0 <issue_comment>username_2: I managed to build qtpdf with mingw provided with Qt5.11.2. 1. Comment out "requires(!qnx:!uikit:!winphone:!winrt:!win32-g++:!integrity)" from pdf.pro 2. Apply the patch found here : <https://github.com/Alexpux/MINGW-packages/blob/master/mingw-w64-pdfium-git/pdfium-2729.patch> (link dead in 2021, no backup sorry) 3. Add -luuid after "win32: LIBS\_PRIVATE += -ladvapi32 -lgdi32 -luser32" in lib.pro 4. I don't know why, probably because I am not an expert, but mingw doesn't generate correctly the folder "include". As a workaround I use the one generated under linux. For some reason, mingw generate only QPdfDepends and QPdfWidgetsDepends, none of the needed headers are generated. I didn't dig because I am lazy. Hope it helps ! Upvotes: 1
2018/03/20
665
2,112
<issue_start>username_0: I have three javascript functions. The main function is within script tags in my index.html file. I have two other functions that are in an external file named cities.js. Beneath is the main function found within the script tags: ``` function initMap() { map = new google.maps.Map(document.getElementById('map'), { zoom: 12, center: coords, mapTypeId: 'terrain' } ``` This is for the Google Maps API. The next two functions hold the coordinates for the map. ``` function ldn() { var coords = var coords = {lat: 51.509865, lng: -0.118092}; } function birm() { var coords = {lat: 52.509865, lng: -0.2}; } ``` Essentially, I have a drop down box and when either 'ldn' or 'birm' is clicked, the functions in the external file are executed depending on which one is clicked. I wish to take the coordinates within the functions and include them into the main function for the 'center' variable. I hope this makes sense! I have tried calling the function in center (center: birm()) but it does not work as intended. Simply returning the coords doesn't work either.
2018/03/20
405
1,565
<issue_start>username_0: As I've learned from [Spark documentation](http://sparkjava.com/documentation#gzip) GZIP is done automatically for response if it contains *Content-Encoding* header with value *gzip*. I have the following code: ``` post("/test", (req, res) -> byteArray); ``` What if `byteArray` is already compressed with GZIP? Then Spark will compress it once again and client will receive a garbage. Is there any way to response with GZIP without Spark auto-compression?<issue_comment>username_1: > > Is there any way to response with GZIP without Spark auto-compression? > > > Unfortunately not. You could write your response directly by using the writer on the underlying `HttpServletResponse`. ``` post("/test", (req, res) -> { res.raw().getWriter(); // do something with this writer return ""; ); ``` It's not pretty though. Upvotes: 3 [selected_answer]<issue_comment>username_2: It is not possible to opt out of Spark's automatic GZIP compression. One somewhat grotesque workaround is to wrap the InputStream in a GZIPInputStream and return that from your controller method: ```java return new GZIPInputStream(myInputStream); ``` **HUGE CAVEAT:** This will waste CPU cycles on what is effectively a no-op of unzipping and re-zipping the stream, but given that Spark has no opt-out it can serve as a temporary workaround. I encourage anyone who is running into this issue to voice a feature request in the Spark 3 [feature request thread](https://github.com/perwendel/spark/issues/1105#issuecomment-600239799). Upvotes: 0
2018/03/20
1,694
6,616
<issue_start>username_0: I'm using Google AppEngine Flexible with [python environment](https://cloud.google.com/appengine/docs/flexible/python/). Right now I have two services: default and worker that share the same codebase, configured by `app.yaml` and `worker.yaml`. Now I need to install native C++ library, so I had to switch to [Custom runtime](https://cloud.google.com/appengine/docs/flexible/custom-runtimes/) and added Dockerfile. Here is the Dockerfile generated by `gcloud beta app gen-config --custom` command ``` FROM gcr.io/google-appengine/python LABEL python_version=python3.6 RUN virtualenv --no-download /env -p python3.6 # Set virtualenv environment variables. This is equivalent to running # source /env/bin/activate ENV VIRTUAL_ENV /env ENV PATH /env/bin:$PATH ADD requirements.txt /app/ RUN pip install -r requirements.txt ADD . /app/ CMD exec gunicorn --workers=3 --threads=3 --bind=:$PORT aces.wsgi ``` Previously my `app.yaml` and `worker.yaml` each had it's own `entrypoint:` config that specified the command needed to be run to start the service. So, my question is how can I use two different commands to start the services? **EDIT 1** So far I was able to solve this by rewriting `CMD` line in dockerfile for each deploy of each service. However, I'm not quite satisfied with this solution. `gcloud app deploy` command has `--image-url` flag that allows to set image url from GCR. I haven't researched that yet, but it seems that I can just upload images to GCR and use the urls since don't change that often<issue_comment>username_1: Since the `Dockerfile` name cannot be changed, the only way to not have to modify the Dockerfile would be to store each service in its own, separate directory. Clean separation, each service has its own Dockerfile and/or startup configuration. But this raises a question: how to deal with the code shared by multiple services? Using symlinks (which works great for sharing code across standard env services) doesn't work for the flexible env services, see [Sharing code between flexible environment modules in a GAE project](https://stackoverflow.com/questions/42464203/sharing-code-between-modules-in-a-gae-project/42465117#42465117). I see a few possible approaches, none really ideal, but maybe more appealing than what you currently have: * hard-link each and every shared source code file (since hardlinking directories is not possible). A bit tedious and error-prone, but you only have to do that once per file * package and publish your shared code as an external library, added to the `requirements.txt` file of each service using it * split the shared code in a separate repository and have a copy of that repository in each service using it (maybe as a git submodule if using git?). You just need to ensure at the service deployment time that the shared repository is pulled at the proper version - can be quite reliably done through automation. A bit more complicated if you have uncommited changes in this repo - you'd have to patch the same changes in all services. * have multiple copies of the Dockerfiles with different names which you simply copy over instead of always editing the same file. Symlinking instead of copying *might* work as well, since the symlink doesn't need to be followed outside of the service directory, if it's just replicated as a symlink it'll work. Upvotes: 3 [selected_answer]<issue_comment>username_2: So i had a very similar issue with my Java applications. 
We were looking to migrate from Heroku to GAE and were attempting to simulate the Heroku Procfile with GAE services. Effectively what we did was to create separate directories in our application `src/main/appengine/web` and `src/main/appengine/worker` where each directory conainted the `app.yaml` and Dockerfile specific to the process. Then using the mvn appengine:deploy capabilities, we specified the `-Dapp.stage.dockerDirectory` and `-Dapp.stage.appEngineDirecory` respectively for each service we wanted to deploy. Then using just some parameters we were able to basically script out parallel deployments of each service from the same code base. Not sure if this works in your situation, but it was very useful for us: Here are the two example commands in their entirety: Web Process: `mvn appengine:deploy -Dapp.stage.dockerDirectory=src/main/appengine/web -Dapp.stage.appEngineDirectory=src/main/appengine/web -Dapp.stage.stagingDirectory=target/appengine-web -Dapp.deploy.projectId=${project-id} -Dapp.deploy.version=${project-version}` Worker Process: `mvn appengine:deploy -Dapp.stage.dockerDirectory=src/main/appengine/worker -Dapp.stage.appEngineDirectory=src/main/appengine/worker -Dapp.stage.stagingDirectory=target/appengine-worker -Dapp.deploy.projectId=${project-id} -Dapp.deploy.version=${project-version}` Upvotes: 1 <issue_comment>username_3: Yes, as you mentioned, I think using the *--image-url* flag, is a good option here. Specify a custom runtime. Build the image locally, tag it, and push it to Google Container Registry (GCR) then, deploy your service, specifying a custom service file, and specifying the remote image on GCR using the --image-url option. Here's an example that accomplishes different entrypoints in 2 services that share the same code: ...this is assuming that the "flex" and not "standard" app engine offering is being used. lets say you have a: project called *my-proj* with a default service that is not important and a second service called *queue-processor* which is using much of the same code from the same directory. Create a separate dockerfile for it called *QueueProcessorDockerfile* and a separate app.yaml called *queue-processor-app.yaml* to tell google app engine what i want to happen. `QueueProcessorDockerfile` ``` FROM node:10 # Create app directory WORKDIR /usr/src/app COPY package.json ./ COPY yarn.lock ./ RUN npm install -g yarn RUN yarn # Bundle app source COPY . . CMD [ "yarn", "process-queue" ] ``` \*of course i have a "process-queue" script in my package.json `queue-processor-app.yaml` ``` runtime: custom env: flex ... other stuff... ... ``` 1. build and tag the docker image Check out googles guide here -> <https://cloud.google.com/container-registry/docs/pushing-and-pulling> docker build -t eu.gcr.io/my-proj/queue-processor -f QueueProcessorDockerfile . 2. push it to GCR docker push eu.gcr.io/my-proj/queue-processor 3. deploy the service, specifying which yaml config file google should use, as well as the image url you have pushed gcloud app deploy queue-processor-app.yaml --image-url eu.gcr.io/my-proj/queue-processor Upvotes: 3
2018/03/20
1,244
3,243
<issue_start>username_0: I have data in a csv file such as: ``` value,key A,Name B,Name C,Name 24,Age 25,Age 20,Age M,Gender F,Gender ``` I would like to parse it to produce the following map: ``` Map(Name -> List(A, B, C), Age -> List(24,25,20), Gender -> List(M,F)) ```<issue_comment>username_1: Here is a possibility: ``` import scala.io.Source Source.fromFile("my/path") .getLines() .drop(1) // Drop the header (first line) .map(_.split(",")) // Split by ",": List(Array(A, Name), Array(B, Name), Array(C, Name), ... .groupBy(_(1)) // group by value: Map(Age -> List(Array(24, Age), Array(25, Age), Array(20, Age)), ... .map{ case (key, values) => (key, values.map(_(0))) } // final format: Map(Age -> List(24, 25, 20), ... ``` which gives: ``` Map(Age -> List(24, 25, 20), Name -> List(A, B, C), Gender -> List(M, F)) ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: This code will give the desired output ``` import scala.io.Source Source.fromFile("C:\\src\\data.txt").getLines() .drop(1).map(_.split(",").toList) // gives each list like this -- List(A, Name) .map(x => (x.tail.head -> x.head)).toList // swap key and value places -- (Name,A) .groupBy(_._1) // group by key -- (Age,List((Age,24), (Age,25), (Age,20))) .map(x => x._1 -> x._2.map(v => v._2)).toMap // extracting only values part -- Map(Age -> List(24, 25, 20), Name -> List(A, B, C), Gender -> List(M, F)) ``` Upvotes: 0 <issue_comment>username_3: If you're not willing to iterate multiple times over your dataset, here's a single pass solution: ``` import scala.io.Source val m = mutable.Map[String, List[String]]().withDefaultValue(List.empty) Source.fromFile("my/path") .getLines() .drop(1) .map(_.split(",")) .foreach { case x => m.put(x(1), x(0) :: m(x(1))) } ``` Upvotes: 1 <issue_comment>username_4: Play by Play: ``` scala> val doc = """A,Name | B,Name | C,Name | 24,Age | 25,Age | 20,Age | M,Gender | F,Gender""".stripMargin doc: String = A,Name B,Name C,Name 24,Age 25,Age 20,Age M,Gender F,Gender scala> doc.split("\\n") res0: Array[String] = Array(A,Name, B,Name, C,Name, 24,Age, 25,Age, 20,Age, M,Gender, F,Gender) scala> res0.toList.map{ x => val line = x.split(","); line(1) -> line(0)} res1: List[(String, String)] = List((Name,A), (Name,B), (Name,C), (Age,24), (Age,25), (Age,20), (Gender,M), (Gender,F)) scala> res1.groupBy(e => e._1) res4: scala.collection.immutable.Map[String,List[(String, String)]] = Map(Age -> List((Age,24), (Age,25), (Age,20)), Name -> List((Name,A), (Name,B), (Name,C)), Gender -> List((Gender,M), (Gender,F))) scala> res4.mapValues{x => x.map{case (k,v) => v}} res6: scala.collection.immutable.Map[String,List[String]] = Map(Age -> List(24, 25, 20), Name -> List(A, B, C), Gender -> List(M, F)) ``` Upvotes: 0 <issue_comment>username_5: More functional approach: ``` Source.fromFile("file.csv").getLines().drop(1).foldLeft(Map.empty[String, List[String]]){ (acc, line) ⇒ val value :: key :: Nil = line.split(",").toList acc + (key → (acc.getOrElse(key, List.empty) :+ value)) } ``` This gives: ``` Map(Name -> List(A, B, C), Age -> List(24, 25, 20), Gender -> List(M, F)) ``` Upvotes: 1
2018/03/20
731
2,635
<issue_start>username_0: I am using a query string to dynamically loop through table names. Now I need to add a wildcard to the table name so that it picks up the new tables I get. Example below ``` WHILE @Year_Id <= 2018 BEGIN SET @YearVar = CONVERT(varchar(4), @Year_Id) SET @TABLENAME = '[SWDI].[dbo].[hail-'+@YearVar+']' SET @SQLQUERY = 'SELECT CELL_ID, LAT, LON, SEVPROB, PROB, MAXSIZE, _ZTIME'+ ' from '+@TABLENAME+ ``` So my earlier tables were hail-2001, hail-2002, hail-2003 till 2017. Now I get tables with names hail-201801, hail-201802, and so on. I want to incorporate the extra 01, 02 as a wildcard while calling the table. Thanks a lot for the help. I am new to this.<issue_comment>username_1: As a general solution, you could do something like this: ``` SET @TableName = '[SWDI].[dbo].[hail-'+@YearVar+']'; -- Check if the year table exists IF (OBJECT_ID(@TableName, 'U') IS NULL) BEGIN -- Implement your 'wildcard' logic here SET @NumVar = '01'; SET @TableName = '[SWDI].[dbo].[hail-'+ @YearVar + @NumVar + ']'; END ``` Another solution would be to have the missing numbered tables as views on top of the existing tables, but this might have negative performance effects. A third one is to have yearly views on top of the new numbered tables, with clever constraints on the tables and in the view definition; this can have insignificant overhead. Last, but not least, you should consider building a partitioned view on top of these tables and maintaining that view. You can query the view directly without messing with table names all the time. **Please read Gordon's answer!** In any case, I'd highly suggest being careful with the dynamic queries. You might want to take a look at functions like `PARSENAME` and `QUOTENAME`. Upvotes: 0 <issue_comment>username_2: Uh, no you don't. You clearly don't have a complete understanding of how tables work in a database or in SQL Server. You gain *nothing* by having multiple tables with the exact same columns and types whose names are differentiated by numbers or dates. That is not how SQL works. You lose a lot: foreign key references, query simplicity, maintainability, and more. Instead, include the date column in the data and store everything in one table. If you are concerned about performance, then you can create an index on the date column to get the data that you need. Another method (if the data is large) is to store the data in separate data partitions. [These are an important part of SQL Server functionality](https://learn.microsoft.com/en-us/sql/relational-databases/partitions/partitioned-tables-and-indexes). Upvotes: 2
2018/03/20
882
2,884
<issue_start>username_0: Checking the Flask and Flask-SQLAlchemy docs I am confused. Say I have:

**models.py**:

```
from flask_sqlalchemy import SQLAlchemy

#: Use or not
db = SQLAlchemy()

class User(db.Model):
    __tablename__ = "USERS"
    id = db.Column(db.Integer, primary_key=True)
    username = db.Column(db.String(25), nullable=False)
    password = db.Column(db.String(255), nullable=False)
```

**config.py**:

```
import os

class Config(object):
    DEBUG = True
    SECRET_KEY = os.urandom(12)
    SQLALCHEMY_TRACK_MODIFICATIONS = False
    SQLALCHEMY_DATABASE_URI = "mysql://root:@localhost/example"
```

**app.py**:

```
from config import Config
from models import db
from models import User

app = Flask(__name__)
app.config.from_object(Config)
db = SQLAlchemy(app)

if __name__ == "__main__":
    db.init_app(app)
    with app.app_context():
        db.create_all()
    app.run()
```

**Is it necessary to use** `db = SQLAlchemy(app)` after `app.config.from_object(Config)`, or is `db.init_app(app)` the same? I ask because I found some Flask examples with only `db = SQLAlchemy()` in `models.py` that call `db.init_app(app)` before `app.run()`, and I also found examples where `db = SQLAlchemy(app)` is re-assigned in `app.py` with no `db.init_app(app)`.

I printed both values and get:

```
with only db in models: the problem is: The app creates the tables in my database, but engine=None

with db = SQLAlchemy(app) re-assigned: the problem is: The app doesn't create the tables in my database
```

*What is the correct way to attach the SQLAlchemy database to the Flask app?*<issue_comment>username_1: There is no correct way as it all depends on how you want to instantiate your db using SQLAlchemy and app using Flask. But I'll go over how I use the app.

```
def create_app():
    app = Flask(__name__, static_folder='static', instance_relative_config=True)
    app.config.from_pyfile('app_config.py')
    return app

app = create_app()
db = SQLAlchemy(app)
```

Upvotes: 3 <issue_comment>username_2: The Flask documentation recommends:

```
db.init_app(app)
```

[Flask Factories & Extensions](https://flask.palletsprojects.com/en/1.1.x/patterns/appfactories/#factories-extensions)

Upvotes: 2 <issue_comment>username_3: **From: <https://flask-sqlalchemy.palletsprojects.com/en/2.x/api/>**

There are two usage modes which work very similarly. One is binding the instance to a very specific Flask application:

```
app = Flask(__name__)
db = SQLAlchemy(app)
```

The second possibility is to create the object once and configure the application later to support it:

```
db = SQLAlchemy()

def create_app():
    app = Flask(__name__)
    db.init_app(app)
    return app
```

The difference between the two is that in the first case methods like `create_all()` and `drop_all()` will work all the time but in the second case a `flask.Flask.app_context()` has to exist.

Upvotes: 4
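Pulling the answers together, here is a minimal runnable sketch of the application-factory pattern they describe (single-module layout and the SQLite URI are assumptions for illustration, not from the question):

```python
# Create the extension unbound, then bind it to the app with init_app().
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

db = SQLAlchemy()  # no app yet


class User(db.Model):
    __tablename__ = "USERS"
    id = db.Column(db.Integer, primary_key=True)
    username = db.Column(db.String(25), nullable=False)


def create_app():
    app = Flask(__name__)
    # Assumed config values for the sketch; replace with your Config object.
    app.config["SQLALCHEMY_DATABASE_URI"] = "sqlite:///example.db"
    app.config["SQLALCHEMY_TRACK_MODIFICATIONS"] = False
    db.init_app(app)  # bind the single SQLAlchemy instance to this app
    return app


if __name__ == "__main__":
    app = create_app()
    with app.app_context():   # create_all() needs an app context in this mode
        db.create_all()
    app.run()
```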
2018/03/20
559
1,963
<issue_start>username_0: I have a little problem that has been hard to solve, harder than I believed. I assume there will be a simple answer to this one. So, I have made a timer using javascript, the timer works fine and that's not a problem. However, I am using this to define the timer:

```
document.getElementById("timerz").innerHTML = 0;
```

And then using this to display the timer

```
```

I know that I can't use an element with the same ID in multiple places. However, I've tried this:

```
document.getElementsByClass("timerz").innerHTML = 0;
```

And then

```
```

And it is still not working, how can I solve this? It's very strange.
2018/03/20
490
1,929
<issue_start>username_0: I don't understand this error. I'm trying to use the `mat-datepicker` with `MomentJS` exactly as shown in [the examples](https://material.angular.io/components/datepicker/examples), but I cannot get rid of this error. My component code looks like this: ``` import { Component, Input, OnInit } from '@angular/core'; import { TimeRange, TimeRanges } from "./time-range-selector.constants"; import * as moment from 'moment'; import {FormControl} from "@angular/forms"; @Component({ selector: 'time-range-selector', templateUrl: './time-range-selector.component.html', styleUrls: ['./time-range-selector.component.scss'] }) export class TimeRangeSelectorComponent implements OnInit { private _timeRange: TimeRange; public timeRanges: {} = TimeRanges; public startDate: FormControl = new FormControl(moment([2017, 0, 1])); public endDate: FormControl = new FormControl(moment([2017, 0, 2])); public get selectedTimeRange(): TimeRange { return this._timeRange; } @Input() public set selectedTimeRange(range: TimeRange) { this._timeRange = range; } constructor() { } ngOnInit() { } } ``` and my markup like this: ``` {{ timeRange.label }} ``` The console output says the error occurs at the line beginning `.`<issue_comment>username_1: Your template variables for the elements have the same name as the `[formControl]` bindings, which is causing issues (it looks like Angular favors the template variable over a variable declared in the component) when attempting to create a FormControl object. This fixes your issue: ``` ``` Upvotes: 4 [selected_answer]<issue_comment>username_2: The error is derived from your `[matDatepicker] = "startDate"` and `[matDatepicker] = "endDate"`. Referencing the date picker with the same name of the FormControl is an issue. Update your code to something like the following. ``` ``` Upvotes: 2
2018/03/20
938
2,433
<issue_start>username_0: 1) Why can't I change the nav-link color in CSS? I'm trying to change the color of the MENUTEST links (nav-link) to white, but it's not working. 2) How do I add a white line above every MENUTEST text (just like [u], but above)? What am I doing wrong?

navbar.php

```
[![](img/logo.png)](#)
* [MENUTEST](index.php)
* [MENUTEST](#)
  [SUBMENUTEST](#) [SUBMENUTEST](#) [SUBMENUTEST](#) [SUBMENUTEST](#) [SUBMENUTEST](#)
* [MENUTEST](#)
  [SUBMENUTEST](#) [SUBMENUTEST](#) [SUBMENUTEST](#) [SUBMENUTEST](#) [SUBMENUTEST](#) [SUBMENUTEST](#) [SUBMENUTEST](#) [SUBMENUTEST](#) [SUBMENUTEST](#) [SUBMENUTEST](#) [SUBMENUTEST](#) [SUBMENUTEST](#) [SUBMENUTEST](#)
* [MENUTEST](#)
  [SUBMENUTEST](#) [SUBMENUTEST](#) [SUBMENUTEST](#) [SUBMENUTEST](#) [SUBMENUTEST](#) [SUBMENUTEST](#)
* [MENUTEST](#)
  [SUBMENUTEST](#) [SUBMENUTEST](#) [SUBMENUTEST](#)
* [MENUTEST](#)
  [SUBMENUTEST](#) [SUBMENUTEST](#) [SUBMENUTEST](#) [SUBMENUTEST](#) [SUBMENUTEST](#) [SUBMENUTEST](#)
* [MENUTEST](#)
```

And navbar.css

```
/* MENUTEST */
.nav-link a {
    font-color: white;
}

/* SUBMENUTEST */
.dropdown-menu {
    background: #EDEFF1;
    height: auto;
    width: auto;
    padding-left: 5px;
    border-bottom: 1px solid #CCCCCC;
    border-top: 1px solid #DDDDDD;
}

.dropdown-menu a {
    color: #78828D;
    font-size: 15px;
}
```<issue_comment>username_1: `font-color` is not a CSS property. You'll want to use `color`.

```
.nav-link a {
    // font-color: white;
    color: white;
}
```

Upvotes: 2 <issue_comment>username_2: Try to take it a little further than just the class: try `li, a, .nav-link` and then whatever CSS you want. I can see you are using Bootstrap, so it might be inheriting something.

A trick I always use is to go to the Chrome browser, right-click on the thing I want to change and click on "Inspect". This makes it possible to see what CSS lies behind the item you want to change; you can then try to edit it in the browser and, if it works, copy that code and insert it into your own CSS file.

Hope it helps :)

Upvotes: 1 [selected_answer]<issue_comment>username_3:

```
* [HOME](index.php)
* [PORTFOLIO](two.php)
....

// specific CSS for your menu
div.menuBar li.selected a { color: #FF0000; }

// more general CSS
li.selected a { color: #FF0000; }
```

How about this, have you tried using `li.selected a{color:#FF0000;}`?

Upvotes: 0
2018/03/20
1,337
4,291
<issue_start>username_0: I have muliple buttons and on clicking them I want a sound. My code goes like this ``` button.setOnClickListener(new Button.OnClickListener() { @Override public void onClick(View view) { if(mSound != null && mSound.isPlaying()){ mSound.stop(); mSound.reset(); mSound.release(); mSound = null; } mSound = new MediaPlayer(); mSound = MediaPlayer.create(getApplicationContext(), R.raw.button); mSound.start(); } }); ``` In OnCreate, I have intitalized mSound like this, ``` mSound = new MediaPlayer(); mSound = MediaPlayer.create(this, R.raw.button); ``` I am getting error (1,-19) as well as (0,38). Note: This is not duplicate question. I tried each answer from all the questions that are asked before but nothing worked.<issue_comment>username_1: According to the [docs](https://developer.android.com/reference/android/media/MediaPlayer.html) failure happen due to several reasons, the main ones are > > failure to call release() > > > and it is recommended that you catch your error and recover. > > Some playback control operation may fail due to various reasons, such > as unsupported audio/video format, poorly interleaved audio/video, > resolution too high, streaming timeout, and the like. Thus, error > reporting and recovery is an important concern under these > circumstances. > > > Try this [answer](https://stackoverflow.com/questions/17731527/how-to-implement-a-mediaplayer-restart-on-errors-in-android#20817610) or something better to catch the error. Or try the following code, to release the object in memory. ``` mSound = MediaPlayer.create(getApplicationContext(), R.raw.button); mSound.start(); mSound.setOnCompletionListener(new OnCompletionListener() { public void onCompletion(MediaPlayer mp) { mp.release(); }; }); ``` Upvotes: 4 [selected_answer]<issue_comment>username_2: I was getting same error, while I was playing some audio files repeatedly. I was making a simple English to french phrase book, having 8 buttons for 8 common phrases, the audio files were very small. On closely monitoring the log I found that there is some limitation from the increment in code cache. After playing the audio repeatedly after some time I got this: ``` 2019-12-10 12:37:32.561 29888-29893/com.example.gridlayout I/zygote: Do partial code cache collection, code=22KB, data=30KB 2019-12-10 12:37:32.567 29888-29893/com.example.gridlayout I/zygote: After code cache collection, code=22KB, data=30KB 2019-12-10 12:37:32.567 29888-29893/com.example.gridlayout I/zygote: Increasing code cache capacity to 128KB ``` On repeating further more I got this: ``` 2019-12-10 12:37:46.008 29888-29893/com.example.gridlayout I/zygote: Do partial code cache collection, code=53KB, data=57KB 2019-12-10 12:37:46.009 29888-29893/com.example.gridlayout I/zygote: After code cache collection, code=53KB, data=57KB 2019-12-10 12:37:46.009 29888-29893/com.example.gridlayout I/zygote: Increasing code cache capacity to 256KB ``` And on doing even more, I got: ``` 2019-12-10 12:38:23.596 29888-29905/com.example.gridlayout E/MediaPlayerNative: error (1, -19) 2019-12-10 12:38:23.596 29888-29888/com.example.gridlayout E/MediaPlayer: Error (1,-19) ``` I repeated this by patiently waiting for the short audio to complete and then play again, making sure that the cache isn't increased due to simultaneously playing over and over again. What I inferred is that, to run the audio files which are huge, code cache needs to be increased, it does automatically to some extent and gives out error after a limit(256KB in my case). 
Pretty sure the cache can be increased somehow... I experimented the robustness on my phone and it worked perfectly fine, I guess in our phones, they clear cache automatically after completion. Try running your app on mobile instead of emulator. **My conclusion: *Code cache limits the performance for sound playing. Either increase the code cache limit (I tried shallow searching, couldn't find, but search it properly, I guess you will find it somewhere), or try running it directly on your Mobile.*** Upvotes: 1
2018/03/20
1,837
7,184
<issue_start>username_0: I am using Instance as a lazy / dynamic injector in a TomEE Java application, and I have noticed a memory leak in my application. This is a first for me, so it's actually surprising to see a memory leak warning that has been outlined in the Java EE Library : ``` package javax.enterprise.inject; public interface Instance extends Iterable, Provider { /\*\* \* Destroy the given Contextual Instance. \* This is especially intended for {@link javax.enterprise.context.Dependent} scoped beans \* which might otherwise create mem leaks. \* @param instance \*/ public void destroy(T instance); } ``` Now this is most likely being caused by a clash with `@ApplicationScoped` and the `Instance`. I've provided an example of how the layers are in my classes. Notice the nested `Instance`. This is to provide dynamic injection of tasks. **Outer Class** ``` @ApplicationScoped public class MessageListenerImpl implements MessageListener { @Resource(name="example.mes") private ManagedExecutorService mes; @Inject @Any private Instance> workerInstance; // ... @Override public void onMessage(Message message) { ExampleObject eo = new ExampleObject(); Worker taskWorker = workerInstance.get(); taskWorker.setObject(eo); mes.submit(taskWorker); } // ... } ``` **Inner Class** ``` public class Worker implements Runnable { @Inject @Any private Instance taskInstance; @Setter private T object // ... @Override public void run() { Task t = taskInstance.get(); t.setObject(object); t.doTask(); // Instance destruction, manual cleanup tried here. } // ... } ``` **Interface** ``` public interface Task { void doTask(); void setObject(T obj); } ``` The classes that are leaking without calling `destroy(T instance)` are `ExampleObject`, `Worker`, and the implementation of `Task`. To keep the async design, I have tried passing the instance of `Worker` within it's instance (probably a bad idea, but I tried anyways), calling `destroy(T instance)` and setting `ExampleObject` to `null`. This cleaned up the `Task` implementation and `ExampleObject`, but not `Worker`. Another test I tried was doing a synchronous design within `MessageListenerImpl` (i.e. removing `Worker` and using `Task`) as a fallback effort, calling `destroy(T instance)` to clean up. This STILL left the leak, which leads me to believe it's got to be the clash with `@ApplicationScoped` and the `Instance`. If there is a way to keep the async design while achieving no memory leaks, please let me know. Really appreciate feedback. Thanks!<issue_comment>username_1: Indeed this is a weakness of `Instance`, it may leak. [This article](https://rmannibucau.wordpress.com/2015/03/02/cdi-and-instance-3-pitfalls-you-need-to-know/) has a good explanation. (As underlined in the comment from Siliarus below, *this is not an intrinsic bug of `Instance`, but wrong usage/design.*) Your `Worker` declares no scope, thus it is `@Dependent` scoped. This means it is *created anew for each injection*. `Instance.get()` is essentially an injection, so a new dependent-scoped object is created with each invocation of `get()`. The specification says that dependent-scoped objects are destroyed when their "parent" (meaning the object they are injected into) gets destroyed; but application-scoped beans live as long as the application, keeping all dependent-scoped beans they created alive. This is the memory leak. To mitigate do as written in the linked article: 1. 
Call `workerInstance.destroy(taskWorker)` as soon as you do not need the `taskWorker` anymore, preferably within a `finally` block: ``` @Override public void onMessage(Message message) { ExampleObject eo = new ExampleObject(); Worker taskWorker; try { taskWorker = workerInstance.get(); taskWorker.setObject(eo); mes.submit(taskWorker); } finally { workerInstance.destroy(taskWorker); } } ``` **EDIT:** Some extra thoughts on this option: What happens if, in the course of time, the implementation of the injected bean changes from `@Dependent` to e.g. `@ApplicationScoped`? If the `destroy()` call is not explicitly removed, which is not something an unsuspecting developer will do in a normal refactoring, you will end up destroying a "global" resource. CDI will take care to recreate it, so no functional harm will come to the application. Still a resource intended to be instantiated only once will be constantly destroyed/recreated, which might have non-functional (performance) implications. So, from my point of view, this solution leads to unnecessary coupling between the client and the implementation, and I would rather not go for it. 2. If you are only using the `Instance` for lazy loading, and there is only one instance, you may want to cache it: ``` ... private Worker worker; private Worker getWorker() { if( worker == null ) { // guard against multi-threaded access if environment is relevant - not shown here worker = workerInstance.get(); } return worker; } ... Worker taskWorker = getWorker(); ... ``` 3. Give scope to your `Worker`, so that its parent is no longer responsible for its lifecycle, but the relevant scope. Upvotes: 3 <issue_comment>username_2: So, I found a great implementation ([source](https://www.outjected.com/blog/2011/12/16/cdi-instance-injections-and-session-application-scoped-beans.html)) that satisfied my use-case. Using `BeanManager` allowed me to control the lifecycle of the task bean. I avoided the `Worker` and went with `CompletableFuture` instead (with minor changes to the `Task` interface to allow a returned value from the task). This allowed me to perform cleanup of the task bean and handle any exceptions from the task asynchronously. Rough example shown below. Thanks for the replies, and I hope this helps anyone else struggling with this issue! **Outer Class** ``` @ApplicationScoped public class MessageListenerImpl implements MessageListener { @Resource(name="example.mes") private ManagedExecutorService mes; @Inject private BeanManager bm; // ... @Override public void onMessage(Message message) { CreationalContext ctx = bm.createCreationalContext(null); Bean beans = bm.resolve(bm.getBeans(MyTask.class)); MyTask task = (MyTask) bm.getReference(beans, MyTask.class, ctx); task.setObject("Hello, Task!"); Utilities.doTask(mes, ctx, task); } // ... } ``` **Implemented Task** ``` public class MyTask implements Task { private String obj; // ... @Override public Boolean doTask() { System.out.println(obj); return Boolean.TRUE; } @Override void setObject(String obj) { this.obj = obj; } // ... } ``` **`CompletableFuture` Utility Method** ``` public final class Utilities { private Utilities() { } public static final doTask(ManagedExecutorService mes, CreationalContext ctx, Task task) { CompletableFuture.supplyAsync((Supplier) task::doTask, mes) .exceptionally((e) -> { System.out.println("doTask : FAILURE : " + e.getMessage()); return Boolean.FALSE; }) .thenApplyAsync((b) -> { System.out.println("Releasing Context"); ctx.release(); return b; }); } } ``` Upvotes: 1
2018/03/20
846
2,869
<issue_start>username_0: I almost give up. Can't find any solution on this so I hope you can help me. I have a script that shows/hides divs and it's working like this. If you click one button a div shows and if you press another button it switches to that div. That's working great. But I want to be able to close all divs with the last button clicked. This is my HTML ``` Div 1 Div 2 Content div 1 Content div 2 ``` This script works but has no closing functionality ``` $('.show-div').click(function() { $('.target-div').hide(); $('#div' + $(this).attr('target')).fadeIn(1000); }); ``` And this is the script I want to replace the working script with. I have been trying to change it to work with closing function. I might be totally of but hopefully you guide in the right direction. I get an error that tell me "box.hasClass() isn't a function". ``` $('.show-div').click(function() { var box = $('#div' + $(this).attr('target')); $('.target-div').hide(); if(box.hasCLass('close-div')) { box.removeClass('close-div'); $('.target-div').fadeOut(1000); } else { box.fadeIn(1000); box.addClass('close-div'); } }); ``` *Edit* Id's are updated. This is how the code became. With this code I can click on a button and show a div, click the next one to show another div. If I click the same button again it will close all divs. ``` $('.show-div').click(function() { var box = $('#div' + $(this).attr('target')); if(box.hasClass('close-div')) { $('.target-div').removeClass('close-div'); $('.target-div').fadeOut(1000); } else { $('.target-div').removeClass('close-div'); $('.target-div').hide(); box.fadeIn(1000); box.addClass('close-div'); } }); ```<issue_comment>username_1: You have typo in `if(box.has[CL]ass('close-div')) {` **hasClass** not **hasCLass** Upvotes: 2 <issue_comment>username_2: hasClass does not have a capital L - it's `hasClass` not `hasCLass` not sure if this is just a typo in the question or your real code. Also both your divs in the `hidden-divs` section have the same `id` of `div1`, when they should presumably be `div1` and `div2`. In any event it would be better to specify the full id of the div as the target instead of building it. In addition, you are applying `hide` to all elements of class `target-div` before fading them out, which rather defeats the idea of a fadeout ```html Div 1 Div 2 Content div 1 Content div 2 ``` ```javascript $('.show-div').click(function() { var box = $('#' + $(this).attr('target')); // this makes them invisible, so fadeOut is pointless $('.target-div').hide(); if(box.hasClass('close-div')) { box.removeClass('close-div'); $('.target-div').fadeOut(1000); } else { box.fadeIn(1000); box.addClass('close-div'); } }); ``` Upvotes: 1
2018/03/20
460
1,645
<issue_start>username_0: I am trying to create a Redshift UDF with function parameters as below:

`create or replace function function1(srctimezone VARCHAR,desttimezone VARCHAR,flag = 'nr') returns datetime`

The last parameter would be a default parameter, i.e. if the user does not pass any value, it should take 'nr' by default. But I am not able to create a function with default parameters. Any suggestions on whether Redshift does or does not allow function creation with default parameters? If it does, then what will be the correct syntax for the same?
2018/03/20
300
1,157
<issue_start>username_0: I'm making a simple CRUD single page application using PHP connected to a mysqli database. All parts of the page work except for the "EDIT" function. It returns the warning "count(): Parameter must be an array or an object that implements Countable line 8"

```php
<?php
include('server.php');

if (isset($_GET['edit'])) {
    $id = $_GET['edit'];
    $update = true;
    $record = mysqli_query($db, "SELECT * FROM eBook_MetaData WHERE id=$id");

    if (count($record) == 1 ) {
        $n = mysqli_fetch_array($record);
        $creator = $n['creator'];
        $title = $n['title'];
        $type = $n['type'];
        $identifier = $n['identifier'];
        $date = $n['date'];
        $language = $n['language'];
        $description = $n['description'];
    }
}
?>
```

any help with this would be appreciated.<issue_comment>username_1: Use `mysqli_num_rows()` to check the number of rows Upvotes: 0 <issue_comment>username_2: I have a feeling it's your SQL query too. Try this instead `$record = mysqli_query($db, "SELECT * FROM eBook_MetaData WHERE id='$id'");` Upvotes: -1
2018/03/20
595
1,997
<issue_start>username_0: Hoping someone here can help, as I am currently out of my depth. So I am trying to cannibalize some pre-existing code to do a simple bit of maths for me. Basically I want to check if there is data in two fields, and if so to do the formula and then output the result into a field. ``` if (!empty($insurance["premium"] && $insurance['term'])) { $insurancep = $insurance["premium"] * ($insurance['term'] *12); } echo $insurancep; ``` I am new to php and I cannot understand why this is resulting in a fatal error/not working. Any help would be appreciated. Thanks!<issue_comment>username_1: You probably need to close your first parenthesis after $insurance['premium']. ``` if (!empty($insurance["premium"]) && !empty($insurance['term'])) { $insurancep = $insurance["premium"] * ($insurance['term'] *12); } ``` With the `empty` function, you can't test two variables at once, you have to test them one by one. Upvotes: 1 <issue_comment>username_2: ``` if (!empty($insurance["premium"] && $insurance['term'])) { $insurancep = $insurance["premium"] * ($insurance['term'] *12); } echo $insurancep; ``` is incorrect. <http://php.net/manual/en/function.empty.php> - is the link to `empty`. It should be written like this: ``` if (!empty($insurance["premium"]) && !empty($insurance['term'])) { $insurancep = $insurance["premium"] * ($insurance['term'] *12); } echo $insurancep; ``` off of ADyson's comment `ideally it would check that the values are numeric too, since the code is going to multiply them` ``` if ( !empty($insurance["premium"]) && is_numeric($insurance["premium"]) !empty($insurance['term']) && is_numeric($insurance["term"]) ) { $insurancep = $insurance["premium"] * ($insurance['term'] *12); } ``` if you're passing numbers and strings however, parse var like: `(int)$insurance['term']` to declare the vartype as integer (though I recommened passing the datatype you intend to use) Upvotes: 3 [selected_answer]
2018/03/20
1,067
3,616
<issue_start>username_0: I am struggling with an exercise in R. Bird's functional programming book that asks for an example of a function with type (num -> num) -> num The best I can come up with is a polymorphic type ``` func1 f = f 3 :t func1 func1 :: Num t1 => (t1 -> t2) -> t2 ``` The problem I am having is that I can't specify the return type of f, so the type remains (num -> t2) -> t2. My attempt to force the return type of f is as follows: ``` square x = x * x :t func1 square func1 square :: Num t2 => t2 -> t2 ``` Because of course if I try to find the type of func1 ∘ square it will just be num -> num<issue_comment>username_1: If it is enough to give a function which *can be assigned that type*, then yours is already enough. That is, the following type-checks just fine: ``` func1 :: Num a => (a -> a) -> a func1 f = f 3 ``` If, on the other hand, you want a function which *is inferred to have that type*, then you need to do some trickery. What we want to do here is to specify that the result of `f 3` and the `3` that we fed in have the same type. The standard way to force two terms to have the same type is to use `asTypeOf`, which is implemented this way: ``` asTypeOf :: a -> a -> a asTypeOf x _ = x ``` So let's try: ``` > :t \f -> f 3 `asTypeOf` 3 (Num a, Num t) => (t -> a) -> a ``` Unfortunately for us, this doesn't work, because the `3` in `f 3` and the standalone `3` are inferred to be using *potentially different* instances of `Num`. Still, it is a bit closer than `\f -> f 3` was -- note the new `Num a` constraint on the output that we didn't have before. An obvious next idea is to `let`-bind a variable to `3` and reuse that variable as the argument to both `f` and `asTypeOf`; surely then GHC will get the picture that `f`'s argument and result have the same type, right? ``` > :t \f -> let x = 3 in f x `asTypeOf` x (Num a, Num t) => (t -> a) -> a ``` Drat. Turns out that `let`s do what's called "let generalization"; the `x` will be just as polymorphic as the `3` was, and can be specialized to different types at different use sites. Usually this is a nice feature, but because we're doing an unnatural exercise we need to do unnatural things... Okay, next idea: some lambda calculi do not include a `let`, and when you need one, instead of writing `let a = b in c`, you write `(\a -> c) b`. This is especially interesting for us because Haskell uses a specially-restricted kind of polymorphism that means that inside `c`, the type of `a` is *monomorphic*. So: ``` > :t \f -> (\x -> f x `asTypeOf` x) 3 Num a => (a -> a) -> a ``` And now you complain that `asTypeOf` is cheating, because it uses a type declaration that doesn't match its inferred type, and the whole point of the exercise was to get the right type through inference alone. (If we were okay with using type declarations that don't match the inferred type, we could have stopped at `func1 :: Num a => (a -> a) -> a; func1 f = f 3` from way back at the beginning!) Okay, no problem: there's another standardish way to force the types of two expressions to unify, namely, by putting them in a list together. So: ``` > :t \f -> (\x -> head [f x, x]) 3 Num a => (a -> a) -> a ``` Phew, now we're finally at a place where we could in principle build, from the ground up, all the tools needed to get a term of the right type without any type declarations. 
Upvotes: 3 [selected_answer]<issue_comment>username_2: `func1 f = let x = f x in x` This is a partial function, it technically has the type you want and you should be aware of what they are and how they work in haskell. Upvotes: 0
2018/03/20
291
1,086
<issue_start>username_0: I am using a GridView to show multiple products, and on click I want to go to the details screen. However, I am unable to find a way to identify which product grid view cell was selected/tapped by the user. I think sending some parameter to the handler method should do the trick.

```js
return (
    (
        {item.name}
    )}
/>
);
```<issue_comment>username_1: You can send parameters to your function like you said.

**Example**

```
onPress={ () => this.onPressStone(item.id) }
```

Upvotes: 3 [selected_answer]<issue_comment>username_2: You can pass the item to the onPressStone method and, depending on the item, print whatever you want. If you want to know which one was pressed, just log the item's name, or take the index, for example:

```
return (
    {
        //to know who was pressed:
        console.log('pressedItemName-->', item.name);
        //to know index pressed:
        console.log('pressedItemName-->', item);
        return (
            {this.onPressStone(item)} } style={styles.itemContainer}>
                {item.name}
        )}
    />
);
```

Upvotes: 1
2018/03/20
369
1,046
<issue_start>username_0: I have a data-set as below:

```
A B C
1 1 1
0 1 1
1 0 1
1 0 1
```

I want a stacked bar chart that shows the percentage of 1s and 0s in each column, with the columns side by side in one figure.<issue_comment>username_1: There are several steps you need to take:

* calculate how many zeros and ones there are for each variable
* calculate the relative percentage (you don't define this in your question)
* reflow data from wide to long (use `tidyr::gather`)
* plot using `ggplot`'s [`geom_bar`](http://ggplot2.tidyverse.org/reference/geom_bar.html)

Upvotes: 2 <issue_comment>username_2: First you need to tidy your data

```
library(tidyr)
A = c(1,0,1,1)
B = c(1,1,0,0)
C = c(1,1,1,1)
data = data.frame(A,B,C)
data = gather(data, key = type, value = val)
```

Then compute your statistics

```
library(dplyr)
perc = group_by(data, type) %>% summarise(perc = sum(val)/length(val))
```

Finally, plot them

```
library(ggplot2)
ggplot(perc) + aes(x = type, y = perc) + geom_bar(stat = "identity")
```

Upvotes: 1
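For readers outside R, here is the same compute-percentages-then-stack idea sketched in Python with pandas/matplotlib (the data frame is re-typed from the question; the labels are otherwise assumptions):

```python
# Sketch: share of 0s and 1s per column as a stacked bar chart.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.DataFrame({"A": [1, 0, 1, 1], "B": [1, 1, 0, 0], "C": [1, 1, 1, 1]})

# value_counts(normalize=True) gives the fraction of each value per column.
shares = df.apply(lambda col: col.value_counts(normalize=True)).fillna(0)

# Rows are the values (0/1), columns are A/B/C; transpose so each bar is a column.
shares.T.plot(kind="bar", stacked=True)
plt.ylabel("share of rows")
plt.legend(title="value")
plt.show()
```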
2018/03/20
1,068
3,107
<issue_start>username_0: Hi guys I am looking for a function

```
expand :: String -> [(Char,Int)]
```

that takes a string of characters and numbers like `"a5b3c2"` and changes it into a paired list like `"[('a',5),('b',3),('c',2)]"`, in that same form.

Example:

```
expand "a5b4c2"
[('a',5),('b',4),('c',2)]

expand "d9d3"
[('d',9),('d',3)]
```

I have already made a function that does the opposite of the above; all I am trying to do is figure out how to do its inverse.

Example:

```
flatten :: [(Char, Int)] -> String
flatten [] = []
flatten ((ca,cb):cs) = ca:(show cb ++ flatten cs)
```<issue_comment>username_1: Is it ok like this?

```
import Data.List (intersperse)
import Data.List.Split (splitPlaces, wordsBy)

mystring = "a5b2c3"

>>> map (\[x,y] -> (head x, read y :: Int)) $ splitPlaces (replicate (length mystring `div` 2) 2) $ wordsBy (==',') $ intersperse ',' mystring
[('a',5),('b',2),('c',3)]
```

Simpler, thanks to @4castle:

```
import Data.List (intersperse)
import Data.List.Split (chunksOf, wordsBy)

map (\[x,y] -> (head x, read y :: Int)) $ chunksOf 2 $ wordsBy (==',') $ intersperse ',' mystring
```

Even simpler, still thanks to @4castle:

```
import Data.List.Split (chunksOf)

map (\[x,y] -> (x, read [y] :: Int)) $ chunksOf 2 mystring
```

Upvotes: 2 <issue_comment>username_2: Use a parsing library like `Parsec`. The learning curve is a little steep (I'm not even sure this is a good example), but you can describe parsers like this in very little code.

```
import qualified Text.Parsec as T

parseDigit :: T.Parsec String () Int
parseDigit = fmap (read . pure) T.digit

myParser = T.many ((,) <$> T.letter <*> parseDigit)
```

Then

```
> T.parse myParser "" "a5b4c2"
Right [('a', 5),('b',4),('c',2)]
```

So, your `expand` could be defined as

```
import Data.Either

expand :: String -> [(Char, Int)]
expand s = fromRight [] (T.parse myParser "" s)
```

to return an empty list in the event the parser fails on the input string.

Upvotes: 2 <issue_comment>username_3: Without any complicated imports, assuming alternating chars and (single) digits, you can use a simple recursion:

```
f :: String -> [(Char, Int)]
f (c:d:xs) = (c, read [d]) : f xs
f _ = []
```

Upvotes: 0 <issue_comment>username_4: A relatively simple solution:

```
import Data.Char (digitToInt)

expand :: String -> [(Char, Int)]
expand (x:x':xs) = (x, digitToInt x'):expand xs
expand _ = []
```

Upvotes: 0 <issue_comment>username_5: Again, I prefer list comprehensions and/or tail recursion functions. If a function is not a fold then it returns a list. List comprehensions return lists.

```
let l = "a5b3c2"
[ (a,(read [b] :: Int)) | (a,b) <- zip l (tail l), elem b "1234567890"]
```

[('a',5),('b',3),('c',2)]

I didn't know about the `chunksOf` function when I posted this. `chunksOf` is a very handy function and I'm already using it a lot. Also I think I like the `ord` function over the `read` function. When you have pairs, you know where things are at and can process accordingly.

```
[ (a,(ord b)-48) | (a:b:c) <- chunksOf 2 "a5b4c3"]
```

Upvotes: 0
2018/03/20
932
2,731
<issue_start>username_0: [Screen](https://i.stack.imgur.com/7d2pL.png)

[Code](https://i.stack.imgur.com/cNw8u.png)

In this screen, we have used a Kendo treelist. I need to implement an autocomplete dropdown in the CODE column. How can I do that?
2018/03/20
1,032
3,117
<issue_start>username_0: I am trying to export a datatable from Microsoft Access 2016 via ODBC Export to a MariaDB. I have tried:

1. do a right click on the datatable and choose "Export" --> "ODBC-Database"
2. then choose the preconfigured ODBC User-DSN

Then I get the ODBC-Call Error: "ODBC-Driver[...] Data truncated for column 'TotRev' at row 1 [#1265]"

![Error Message Picture](https://i.stack.imgur.com/6VaFn.png)

I have tried different encodings, as I got other error codes before which were related to that. I would really appreciate a hint for this solution. The used database is MariaDB with utf8-mb4 encoding.
2018/03/20
394
1,194
<issue_start>username_0: I would like to extract the full path of the CSS with a regex. For example I have this:

```
```

What I want to get is: `/assets/myCssFile.css`

What I tried is this: `/(?:href)=("|').*?([\w.]+\.(?:css))\1/gi` but this returns: `href="../assets/myCssFile.css"`

Can someone help me out with the regex? BTW: it is the response text of an Ajax request, which gives me the HTML page as a string.<issue_comment>username_1: `href=\"(.*\.css)"`

This will return `"../assets/myCssFile.css"` in a capture group.

<https://regex101.com/r/a44Axz/1>

All it is doing is saying: I am only interested in text within quotes that immediately follows an "href" and only if it is a ".css" file.

Upvotes: 2 <issue_comment>username_2: Late, but maybe it will help someone. :)

Another regex that matches css paths: `href[ \t]{0,}=[ \t]{0,}"(.{1,}\.css)"`

This also matches cases where there are tabs or spaces between `href` and `=`, or between `=` and the CSS filename.

Also, if you want to match css and javascript filenames in one regex you can use something like this: `href[ \t]{0,}=[ \t]{0,}"(.{1,}\.css)"|src[ \t]{0,}=[ \t]{0,}"(.{1,}\.js)"`

They are combined with the OR operator `|`.

Upvotes: 0
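A quick way to sanity-check the idea outside the browser is a small Python `re` sketch that also strips the leading `./`/`../` so only the path is kept (the sample HTML string below is an assumption, since the original markup was lost from the question):

```python
# Extract only the path part of a stylesheet href, dropping any leading "./" or "../".
import re

html = '<link rel="stylesheet" href="../assets/myCssFile.css">'  # assumed sample

match = re.search(r'href\s*=\s*["\'](?:\.{1,2}/)*([^"\']+\.css)["\']', html, re.IGNORECASE)
if match:
    print("/" + match.group(1))  # -> /assets/myCssFile.css
```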
2018/03/20
562
1,750
<issue_start>username_0: I'm trying to create a Spark Scala UDF in order to transform MongoDB objects of the following shape:

```
Object:
"1": 50.3
"8": 2.4
"117": 1.0
```

into a Spark ML SparseVector. The problem is that in order to create a SparseVector, I need one more input parameter - its size. And in my app I keep the vector sizes in a separate MongoDB collection. So, I defined the following UDF function:

```
val mapToSparseVectorUdf = udf {
  (myMap: Map[String, Double], size: Int) => {
    val vb: VectorBuilder[Double] = new VectorBuilder(length = -1)
    vb.use(myMap.keys.map(key => key.toInt).toArray, myMap.values.toArray, size)
    vb.toSparseVector
  }
}
```

And I was trying to call it like this:

```
df.withColumn("VecColumn", mapToSparseVectorUdf(col("MapColumn"), vecSize)).drop("MapColumn")
```

However, my IDE says "Not applicable" to that udf call. Is there a way to make this kind of UDF that can take an extra parameter?
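A common source of this error is that a registered UDF only accepts `Column` arguments, so a plain integer cannot be passed directly; the usual workarounds are wrapping the constant with `lit(...)` or closing over it when the UDF is built. The thread is Scala, but here is a hedged PySpark sketch of the closure approach (column names, the vector size, and the builder function are assumptions, not from the original code):

```python
# Sketch of the closure approach in PySpark: capture the size when building the UDF,
# so only real columns are passed at call time.
from pyspark.sql import SparkSession
from pyspark.sql.functions import udf
from pyspark.ml.linalg import SparseVector, VectorUDT


def make_map_to_sparse_vector_udf(size):
    # `size` is captured by the closure instead of being passed as a column.
    def to_vector(my_map):
        return SparseVector(size, {int(k): float(v) for k, v in my_map.items()})
    return udf(to_vector, VectorUDT())


spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([({"1": 50.3, "8": 2.4, "117": 1.0},)], ["MapColumn"])

vec_size = 200  # assumed; in the thread this value comes from another collection
map_to_vec = make_map_to_sparse_vector_udf(vec_size)
result = df.withColumn("VecColumn", map_to_vec("MapColumn")).drop("MapColumn")
result.show(truncate=False)
```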
2018/03/20
1,259
4,807
<issue_start>username_0: I had ASP MVC 5 project, to which I've added Web Api. Now I'm trying to install Swashbuckle (from bootstrap) - but it just shows an empty document without any controllers. My controller: ``` [RoutePrefix("api/v1/Test/Check")] public class TestController : ApiController { // GET: api/Test [HttpGet] public IEnumerable Get() { return new string[] { "value1", "value2" }; } // GET: api/Test/5 [HttpGet] public string Get(int id) { return "value"; } // POST: api/Test [HttpPost] public void Post([FromBody]string value) { } // PUT: api/Test/5 [HttpPut] public void Put(int id, [FromBody]string value) { } // DELETE: api/Test/5 [HttpDelete] public void Delete(int id) { } // GET: api/Test [Route("GetMetadata")] [HttpGet] public IEnumerable GetMetadata() { return new string[] { "value2221", "value2" }; } } ``` My Web Api Config: ``` configuration.MapHttpAttributeRoutes(); configuration.Routes.MapHttpRoute("Public API V1", "api/v1/{controller}/{id}", new { id = RouteParameter.Optional }); ``` My `Global.asax.cs`: ``` public class MvcApplication : System.Web.HttpApplication { protected void Application_Start() { AreaRegistration.RegisterAllAreas(); GlobalConfiguration.Configure(WebApiConfig.Register); //WebApiConfig.Register(GlobalConfiguration.Configuration); FilterConfig.RegisterGlobalFilters(GlobalFilters.Filters); RouteConfig.RegisterRoutes(RouteTable.Routes); BundleConfig.RegisterBundles(BundleTable.Bundles); } } ``` `Swashbuckle` configuration is default. The generated swagger document looks like this: ``` { swagger: "2.0", info: { version: "v1", title: "WebServer" }, host: "localhost:55309", schemes: [ "http" ], paths: { }, definitions: { } } ``` I can access my controller on `http://localhost:55309/api/v1/Test` I'm not sure do I need to change anything in the generated `SwaggerConfig.cs` but looking through it and on `swashbuckle` docs it looks like it should work without any modifications<issue_comment>username_1: I have faced the same problem in my web api project and I have solved it this way: 1) First of all I have created the following extension method: ``` public static class SwaggerExtensions { public static HttpConfiguration EnableSwagger(this HttpConfiguration httpConfiguration) { httpConfiguration .EnableSwagger(c => c.SingleApiVersion("v1", "WebApi")) .EnableSwaggerUi(); return httpConfiguration; } } ``` 2) Then inside Startup class: ``` public void Configuration(IAppBuilder app) { var config = new HttpConfiguration(); config.EnableSwagger(); var webApiConfiguration = WebApiConfig.Register(config); //here I commented other startup code } ``` Upvotes: 1 <issue_comment>username_2: Ok, I think I've made it. I had to download `swashbuckle` sources and debug it, but it was worth it. Turns out the problem is not `swashbuckle` itself, but rather `ApiExplorer` that has `.ApiDescriptions` empty for some reason. During my debug I've put these two lines in the end of my `Application_Start()`, and even though these lines don't do anything, magic happened and it started to work: ``` var explorer = GlobalConfiguration.Configuration.Services.GetApiExplorer(); var descriptions = explorer.ApiDescriptions; ``` Then I went futher and found this topic: [ASP.Net Web API Help Page Area returning empty output](https://stackoverflow.com/questions/19413786/asp-net-web-api-help-page-area-returning-empty-output) (see first answer) I was using `Glimpse`, although I actually installed it in an attempt to solve the `swashbuckle` problem! 
(I thought it might help - it does not, but Glimpse felt like a nice tool so I left it there)

As the first answer suggests, I've modified `web.config` with this:

```
```

And removed the two lines from `Application_Start()` as these are not a fix. It started to work!

To be honest I have no idea what is happening there. I clearly remember it not working before I started to use `Glimpse`, so installing Glimpse and fixing a problem in it doesn't feel like a proper fix to the original problem, but I'm really tired of this issue and am ready to close it down like this.

Hope this helps somebody.

P.S. Just a warning to those who try to debug similar problems. My other mistake was that I didn't close IIS Express between my tries. It actually keeps the application running, so the configuration is not re-applied even though you start/stop the app in VS. If you're working with configuration, you need to close IIS Express between your tries.

Upvotes: 3 [selected_answer]
2018/03/20
789
3,162
<issue_start>username_0: I receive (similar to) the following JSON data:

```
{"accountId"=>"some-private-really-long-account-id",
 "stats"=>
  {"score"=>
    {"globalScore"=>
      [{"key"=>"lifetimeScore",
        "value"=>"571",
        "key"=>"someOtherKeyHere",
        "value"=>"someValue"}]}
```

I am not quite sure how I would get the lifetime score. I've tried doing stuff like this:

```
puts data["globalScore"]["lifetimeScore"]["value"]
```

But that doesn't work. (`data` is of course the JSON data received).
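The structure above nests a *list* of key/value pairs under `stats → score → globalScore`, so the lookup has to walk that list rather than index it by name. A small Python sketch of that idea (the data is re-typed by hand and slightly regularised into two entries, since the snippet in the question is truncated):

```python
# Walk the nested structure and pick the entry whose "key" is "lifetimeScore".
data = {
    "accountId": "some-private-really-long-account-id",
    "stats": {
        "score": {
            "globalScore": [
                {"key": "lifetimeScore", "value": "571"},
                {"key": "someOtherKeyHere", "value": "someValue"},
            ]
        }
    },
}

global_score = data["stats"]["score"]["globalScore"]
lifetime = next(e["value"] for e in global_score if e["key"] == "lifetimeScore")
print(lifetime)  # -> 571
```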
2018/03/20
961
3,426
<issue_start>username_0: In my application I am looking for writing a function that deletes the number of check boxes selected from my array. I have written and i can console.log the the number of checkboxes i have selected. But im stuck at how to get index numbers of the selected check boxes and delete those only. And in delete function i have to spilce those number of rows from main array. html - ``` | | | ``` COMPONENT.TS ``` export class ReasonCodesComponent implements OnInit { checkAll(event: any) { if (!this.result) return; this.result.forEach((x: any) => x.state = event.target.checked) } isAllChecked() { if (this.arr) return; return this.result.every((_: any) => _.state); } check(result: any) { result.state == false; result.state! = result.state; } @select(store.displayReasonCodes) displayReasonCodes$: Observable; arr: any[]; state: any; $event: any; displayReasonCodes: any; x: any; \_: any; result: any; isEditable: boolean; constructor(private reasonCodesActions: ReasonCodesActions) { } reasoncodeObject:Object; ngOnInit() { this.displayReasonCodes$.subscribe(data => this.result = data ) } deleterow() { this.result.forEach((x: any) => { if (x.state) { alert(x.state) /// here i am getting true as alert for number of rows selected and when i clcik on delete. let copyObj = this.displayReasonCodes$.subscribe(data1=>this.data=(JSON.parse(JSON.stringify(data1)))); console.log(this.data) ; //data contains the array of objects, from which i have to splice up the number of selected rows from table. }; }) } } ``` I am looking for to delete the rows selected in checkbox and display the rest ones left out.<issue_comment>username_1: As you define your field state with 'any' instead of 'boolean' try this. Just leave out those objects that are checked, return those that are unchecked: ``` this.result = this.result.filter(el => { el.state == undefined || el.state == null || el.state == false; }); ``` Put this code instead of your forEach()-loop in your deleterow()-method. Upvotes: 1 <issue_comment>username_1: Okay, now I see. Having your code on Plunkr as source I did the following, just to show you how it works: In your template I changed ``` ``` to ``` ``` Next I extended your sizes-object only to broaden the variety of the possible output: ``` sizes: any[] = [ { 'size': '0', 'diameter': '16000 km', 'state': false }, { 'size': '1', 'diameter': '32000 km', 'state': false }, { 'size': '23', 'diameter': '3000 km', 'state': false }, { 'size': '22', 'diameter': '700 km', 'state': false }, { 'size': '99', 'diameter': '377000 km', 'state': false } ]; ``` Please note, that I applied the missing field 'state', which makes every action much easier afterwards. Then I added the following 2 methods ``` private checkOccurred(): void { this.getSelectedCheckboxes(); } private getSelectedCheckboxes(): any[] { const result: number[] = []; this.sizes.forEach((item, index) => { if (item.state === true) { result.push(index); } }); console.log(result); return result; } ``` What you see here considering getSelectedCheckboxes() is exactly what you want. A method that delivers a list of the indexes of the selected checkboxes in your array states. Hope this helps. Upvotes: 0
2018/03/20
1,364
5,043
<issue_start>username_0: I'm extremely new to react and javascript so I'm sorry if I'm asking this wrong. This is for a group project which means I most likely don't understand the project 100% * **The older sibling** has this nice prop that I want to use: ``` ``` export default class OlderSibling extends Component { state = { currentCity: city //(that was gathered from a bunch of other steps) }; render() { return( ) } } ``` ``` * **The parent** file doesn't have this prop, but it does have its own stuff. I do not care for these other props though. ``` ``` class Parent extends Component { constructor(props) { super(props) this.state = { flower: 'red' } } render() { return ( ) } } ``` ``` * **The younger sibling** (the one that wants current city) has a bunch of this.state properties that I do not want to share with others but just want the older sibling's stuff (I guess like what younger siblings normally do). ``` ``` export class YoungerSibling extends Component { constructor(props) { super(props); this.state = { title: '', description: [] } } render() { return( ) } } ``` ``` Just in case I wasn't clear, younger sibling just wants older sibling's this.state: currentCity that older Sibling worked so hard to gather. I know I didn't put the code completely, but if you want to critique it anyway, please do! I am still learning and I welcome every bit of feedback! I looked up ways to do this, but they're all about transferring parent to child which is not what I want. I also read that there was Redux that could handle this?? I don't know if my fellow groupmates are interested in that just yet. Thank you for your time in reading this! EDIT: [ SOLVED ] I just want to edit and say thank you to @soupette, @Liam, @[<NAME>otsur], and @Tomasz for helping me to understand react a bit more. I realize that this post was very much a spoon feeding request and you all helped away. Thank you! Also, just in case anybody else ran into this issue, don't forget to call it on Younger Sibling as `this.props.currentCity` .<issue_comment>username_1: You got two choices. 1. use external state management library like redux. 2. [lift the state up](https://reactjs.org/docs/lifting-state-up.html) - which means `Parent` component will hold `currentCity` in the state. There could be 3rd option of using [contex API](https://reactjs.org/docs/context.html#how-to-use-context) but I'm not sure how to do it here. Upvotes: 1 <issue_comment>username_2: You can do something like: ``` class Parent extends Component { state = { flower: 'red' currentCity: '', }; updateCurrentCity = (currentCity) => this.setState({ currentCity }); render() { return ( ); } } ``` then in your OlderSibling you can update the parent's state with a function: ``` export default class OlderSibling extends Component { state = { currentCity: city //(that was gathered from a bunch of other steps) }; componentDidUpdate(prevProps, prevState) { if (prevState.currentCity !== this.state.currentCity) { this.props.updateCurrentCity(this.state.currentCity); } } render() { return( ); } } ``` Upvotes: 2 <issue_comment>username_3: You should create `Parent` component for these two ones and keep state there. Also, you should pass it into two children (`YoungerSibling` and `OlderSibling`) as a `prop` and add inverse data flow for `OlderSibling` so when city is changed in the `OlderSibling` then `ParentSibling` should know about this. 
For example: **Parent.jsx** ``` class Parent extends React.Component { constructor(props) { super(props); this.state = { currentCity: '' } } currentCityChangeHandler = city => { this.setState({currentCity: city}); }; render() { const { currentCity } = this.state; return( ... ... ) } } ``` **OlderSibling.jsx** ``` class OlderSibling extends React.Component { ... // Somewhere when new city is coming from // You should pass it into the callback newCityComingFunction() { const city = 'New York'; // Pass the new value of city into common `Parent` component this.props.onCurrentCityChange(city); } ... } ``` So in this case it will allow you to use this param in both cases and it will be keep updated. In addition, you should use `this.props.currentCity` in the `OlderSibling` instead of using `this.state.currentCity` in this component because it's value is moved into `Parent`'s component state. Hope it will helps. Upvotes: 2 <issue_comment>username_4: As the previous answer suggests lift the state up I also prefer to use it Here's an example [![Edit l5l18v19m9](https://codesandbox.io/static/img/play-codesandbox.svg)](https://codesandbox.io/s/l5l18v19m9) Upvotes: 3 [selected_answer]
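For completeness, a hedged sketch of how the younger sibling reads the lifted value once the parent renders it with `<YoungerSibling currentCity={this.state.currentCity} />` (prop name `currentCity` as used in the question's edit):

```jsx
import React from 'react';

// Hypothetical YoungerSibling excerpt: the shared value arrives as a prop,
// while the component keeps its own private state untouched.
export class YoungerSibling extends React.Component {
  render() {
    return <p>Current city: {this.props.currentCity}</p>;
  }
}
```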
2018/03/20
421
1,308
<issue_start>username_0: I'm trying this: ``` [">]( <?php $fl= $p['n'].) ``` The problem is that the codes inside `$code` are being evaluated and I need that `$code` simply save the codes like a string.<issue_comment>username_1: Using `"$foo"` will evaluate, but `'$foo'` will not. However, this is opens massive security risks so I would probably back up and take a look at what you’re trying to do. Upvotes: 1 <issue_comment>username_2: Use [PHP's nowdoc](http://php.net/manual/en/language.types.string.php#language.types.string.syntax.nowdoc) feature (emphasis mine): > > Nowdocs are to single-quoted strings what heredocs are to double-quoted strings. A nowdoc is specified similarly to a heredoc, but no parsing is done inside a nowdoc. **The construct is ideal for embedding PHP code or other large blocks of text without the need for escaping**. > > > ### Example ```php php $test_var = 1; $str = <<<'EOD' Example of string spanning multiple lines using nowdoc syntax. $test_var EOD; echo $str; </code ``` Outputs ``` Example of string spanning multiple lines using nowdoc syntax. $test_var ``` --- So this ... ``` [" EOD; echo fwrite($file,$code); fclose($file); }?>">]( <?php $fh = fopen($p['n'].'.php', 'w') or die() ``` ... should work for you. Upvotes: 3 [selected_answer]
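A self-contained sketch of the pattern the answer describes — writing literal PHP source to a file through a nowdoc so that nothing inside `$code` is evaluated (the file name here is illustrative):

```php
<?php
// Nowdoc: the body is taken verbatim, so $name below is NOT interpolated.
$code = <<<'EOD'
<?php
echo "Hello, $name";
EOD;

$fh = fopen('generated.php', 'w') or die('cannot open file');
fwrite($fh, $code);
fclose($fh);
```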
2018/03/20
625
2,132
<issue_start>username_0: I'm learning about vectors in `Accelerated C++` (which is C++ 98, not C++11) by <NAME> and <NAME>. In this code... ``` map> xref(istream& in, vector find\_words(const string&) = split) { ... ``` ...what is being defined in the block? `xref` or `find_words`? In my debugger, the call stack goes: `main() > xref() > split()`. `find_words` isn't defined elsewhere. ``` // find all the lines that refer to each word in the input map > xref(istream& in, vector find\_words(const string&) = split) { string line; int line\_number = 0; map> ret; // read the next line while (getline(in, line)) { ++line\_number; // break the input line into words vector words = find\_words(line); // remember that each word occurs on the current line for (vector::const\_iterator it = words.begin(); it != words.end(); ++it) ret[\*it].push\_back(line\_number); } return ret; } ``` Also, `split` looks like this: ``` vector split(const string& s) { ... } ```<issue_comment>username_1: `xref` is the function that is defined, `find_words` is one of its parameters (a function taking a `const string&` as parameter and returning a `vector`) and `split` is the default value for that parameter. Upvotes: 0 <issue_comment>username_2: ``` map> xref( istream& in, vector find\_words(const string&) = split ) { /\* ... \*/ } ``` This defines the function named `xref`. According to [Clockwise Spiral Rule](http://c-faq.com/decl/spiral.anderson.html), `xref` is a function: * taking as arguments: 1. a `istream` reference (`istream& in`) 2. a function: (`vector find\_words(const string&) = split`) + taking as argument: 1. a `string` constant reference(`const string&`) + returning a `vector` + whose default value is `split` (`= split`) * and returning a `map>` Upvotes: 3 [selected_answer]<issue_comment>username_3: `xref` is being defined in the code block and returns a map of a string and a vector of ints. It is taking in parameters of an `istream`by reference, a function that takes a string constant reference and returns a vector that the default value is split. Upvotes: 0
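A compact, C++98-compatible illustration of the same declaration pattern — a function parameter with a default argument (the names below are invented for the example):

```cpp
#include <iostream>
#include <string>
#include <vector>

// Default candidate for the function parameter.
std::vector<std::string> split(const std::string& s) {
    std::vector<std::string> v;
    v.push_back(s);  // trivial stand-in for a real splitter
    return v;
}

// `find_words` is a parameter of function type (it decays to a function
// pointer); `split` is its default value.
void xref_demo(const std::string& line,
               std::vector<std::string> find_words(const std::string&) = split) {
    std::vector<std::string> words = find_words(line);
    for (std::vector<std::string>::size_type i = 0; i != words.size(); ++i) {
        std::cout << words[i] << '\n';
    }
}

int main() {
    xref_demo("hello");  // uses split by default
}
```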
2018/03/20
661
1,584
<issue_start>username_0: I am currently using python and i have a question: if we have a string that is separated by multiple characters for example: ``` l = ['0 , hellp,\t2,\t BB , -\n', ' 1 ,\t \t knock \t , BB,N,- ] ``` and I want show it in this way: ``` 0,hellp,2,BB,- 1,knock,BB,N,- ``` how would the coding be if i wanted to use split? i have already tried this code: ``` l = ['0 , hellp,\t2,\t BB , -\n', ' 1 ,\t \t knock \t , BB,N,- ] replacements=('\t',' ' ,'\n') for r in replacements : l = l.replace(r,' ') words =l.split() print(words) ``` but it didnt work out like how i want<issue_comment>username_1: **Using regex**. You can use `re.split` to plit by multiple chars. **Ex:** ``` import re l = ['0 , hellp,\t2,\t BB , -\n', ' 1 ,\t \t knock \t , BB,N,-' ] for i in l: val = ",".join(re.split(",\t", i.strip())).replace(" ", "") #split by comma(,) & tab(\t) print(re.sub('\s+',"", val)) #replace space ``` **Output:** ``` 0,hellp,2,BB,- 1,knock,BB,N,- ``` Upvotes: 2 [selected_answer]<issue_comment>username_2: You can try this: ``` import re l = ['0 , hellp,\t2,\t BB , -\n', ' 1 ,\t \t knock \t , BB,N,- '] new_l = [','.join(filter(None, re.split('[\s,\t\n]+', i))) for i in l] ``` Output: ``` ['0,hellp,2,BB,-', '1,knock,BB,N,-'] ``` Upvotes: 0 <issue_comment>username_3: Can be expressed fairly simply using `re.subn`. ``` >>> import re >>> l = ['0 , hellp,\t2,\t BB , -\n', ' 1 ,\t \t knock \t , BB,N,- '] >>> for item in l: ... re.subn(r'\s', '', item)[0] ... '0,hellp,2,BB,-' '1,knock,BB,N,-' ``` Upvotes: 0
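For reference, the approach attempted in the question also works once the cleanup is applied to each string rather than to the list object itself — a minimal sketch that simply drops every whitespace character:

```python
l = ['0 , hellp,\t2,\t BB , -\n', ' 1 ,\t \t knock \t , BB,N,- ']

cleaned = [''.join(ch for ch in item if not ch.isspace()) for item in l]
print(cleaned)  # ['0,hellp,2,BB,-', '1,knock,BB,N,-']
```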
2018/03/20
3,336
8,328
<issue_start>username_0: I want to create my first game. Nothing special, just blue rectangle moving when WSAD keys are pressed. The problem is that when I run my game, there are bugs with rectangle(See image below). Bugs appears only during horizontal movement, and not vertical. [![enter image description here](https://i.stack.imgur.com/RVj6P.png)](https://i.stack.imgur.com/RVj6P.png) Which is interesting, when I changed line: ``` renderer = SDL_CreateRenderer(display, -1, SDL_RENDERER_ACCELERATED) ``` to: ``` renderer = SDL_CreateRenderer(display, -1, SDL_RENDERER_SOFTWARE) ``` everything is OK I am using Windows 10, MinGw with CMake(C++14), and SDL 2.0.8, Intel core i5 7th gen, Radeon M7 R465 Im my code `OnRender` function is responsible for rendering, maybe I made something wrong in it?(Function in my code posted at end of question) I am also using `SDL_WINDOW_OPENGL` flag to create my window, but changing it to `SDL_WINDOW_SHOWN` doesn't change anything. ``` #include class Game { private: SDL\_Surface \*display\_surf = nullptr; SDL\_Renderer \*renderer = nullptr; SDL\_Window \*display = nullptr; private: bool running, prW = false, prS = false, prD = false, prA = false; int x, y; int spd\_y, spd\_x; int scr\_w, scr\_h; public: Game(); int OnExecute(); public: bool OnInit(); void OnEvent( SDL\_Event \*event ); void OnLoop(); void OnRender(); void OnCleanup(); }; Game::Game() { running = false; } int Game::OnExecute() { if( !OnInit() ) { return -1; } running = true; SDL\_Event event; while( running ) { while( SDL\_PollEvent( &event ) ) { OnEvent( &event ); } OnLoop(); OnRender(); SDL\_Delay( 1 ); } OnCleanup(); return 0; } bool Game::OnInit() { if( SDL\_Init( SDL\_INIT\_EVERYTHING ) < 0 ) { return false; } SDL\_DisplayMode dspm; if( SDL\_GetDesktopDisplayMode( 0, &dspm ) < 0 ) { return false; } scr\_h = dspm.h; scr\_w = dspm.w; if( ( display = SDL\_CreateWindow( "Game", SDL\_WINDOWPOS\_CENTERED, SDL\_WINDOWPOS\_CENTERED, 1920, 1080, SDL\_WINDOW\_OPENGL ) ) == nullptr ) { return false; } display\_surf = SDL\_GetWindowSurface( display ); if( ( renderer = SDL\_CreateRenderer( display, -1, SDL\_RENDERER\_ACCELERATED ) ) == nullptr ) { return false; } x = 0; y = 0; spd\_x = 0; spd\_y = 0; SDL\_SetWindowFullscreen( display, SDL\_WINDOW\_FULLSCREEN ); return true; } void Game::OnEvent( SDL\_Event \*event ) { if( event->type == SDL\_QUIT ) { running = false; return; } switch( event->type ) { case SDL\_KEYDOWN: switch( event->key.keysym.sym ) { case SDLK\_w: if( prS ) { spd\_y = 0; } else { spd\_y = -5; } prW = true; break; case SDLK\_s: if( prW ) { spd\_y = 0; } else { spd\_y = 5; } prS = true; break; case SDLK\_d: if( prA ) { spd\_x = 0; } else { spd\_x = 5; } prD = true; break; case SDLK\_a: if( prD ) { spd\_x = 0; } else { spd\_x = -5; } prA = true; break; default: return; } break; case SDL\_KEYUP: switch( event->key.keysym.sym ) { case SDLK\_w: if( !prS ) { spd\_y = 0; } else { spd\_y = 5; } prW = false; break; case SDLK\_s: if( !prW ) { spd\_y = 0; } else { spd\_y = -5; } prS = false; break; case SDLK\_a: if( !prD ) { spd\_x = 0; } else { spd\_x = 5; } prA = false; break; case SDLK\_d: if( !prA ) { spd\_x = 0; } else { spd\_x = -5; } prD = false; break; default: return; } default: return; } } void Game::OnLoop() { x += spd\_x; y += spd\_y; if( x < 0 ) { x = 0; } else if( x > scr\_w - 100 ) { x = scr\_w - 100; } if( y < 0 ) { y = 0; } else if( y > scr\_h - 100 ) { y = scr\_h - 100; } } void Game::OnRender() { SDL\_SetRenderDrawColor( renderer, 0, 0, 0, 0x00 ); SDL\_RenderClear( renderer ); 
SDL\_Rect charc; charc.x = x; charc.y = y; charc.w = 100; charc.h = 100; SDL\_SetRenderDrawColor( renderer, 0, 0, 0xff, 0 ); SDL\_RenderFillRect( renderer, &charc ); SDL\_RenderPresent( renderer ); } void Game::OnCleanup() { SDL\_DestroyWindow( display ); SDL\_Quit(); } int main( int argc, char\*\* argv ) { Game game; return game.OnExecute(); } ```<issue_comment>username_1: Looks a lot like [tearing](https://en.wikipedia.org/wiki/Screen_tearing) caused by a high frame-rate & lack of vsync. I can get tear-less drawing by passing `SDL_RENDERER_ACCELERATED | SDL_RENDERER_PRESENTVSYNC` to `flags` on `SDL_CreateRenderer()`: ``` #include #include class Game { private: SDL\_Renderer \*renderer = nullptr; SDL\_Window \*display = nullptr; private: bool running, prW = false, prS = false, prD = false, prA = false; int x, y; int spd\_y, spd\_x; int scr\_w, scr\_h; public: Game(); int OnExecute(); public: bool OnInit(); void OnEvent( SDL\_Event \*event ); void OnLoop(); void OnRender(); void OnCleanup(); }; Game::Game() { running = false; } int Game::OnExecute() { if( !OnInit() ) { return -1; } running = true; SDL\_Event event; Uint32 beg = SDL\_GetTicks(); size\_t frames = 0; while( running ) { while( SDL\_PollEvent( &event ) ) { OnEvent( &event ); } OnLoop(); OnRender(); frames++; Uint32 end = SDL\_GetTicks(); if( end - beg > 1000 ) { std::cout << "Frame time: " << ( end - beg ) / frames << " ms" << std::endl; beg = end; frames = 0; } } OnCleanup(); return 0; } bool Game::OnInit() { if( SDL\_Init( SDL\_INIT\_EVERYTHING ) < 0 ) { return false; } if( ( display = SDL\_CreateWindow( "Game", SDL\_WINDOWPOS\_CENTERED, SDL\_WINDOWPOS\_CENTERED, 1280, 720, 0 ) ) == nullptr ) { return false; } scr\_w = 1280; scr\_h = 720; Uint32 rflags = SDL\_RENDERER\_ACCELERATED; rflags |= SDL\_RENDERER\_PRESENTVSYNC; if( ( renderer = SDL\_CreateRenderer( display, -1, rflags ) ) == nullptr ) { return false; } x = 0; y = 0; spd\_x = 0; spd\_y = 0; return true; } void Game::OnEvent( SDL\_Event \*event ) { if( event->type == SDL\_QUIT ) { running = false; return; } switch( event->type ) { case SDL\_KEYDOWN: switch( event->key.keysym.sym ) { case SDLK\_w: if( prS ) { spd\_y = 0; } else { spd\_y = -5; } prW = true; break; case SDLK\_s: if( prW ) { spd\_y = 0; } else { spd\_y = 5; } prS = true; break; case SDLK\_d: if( prA ) { spd\_x = 0; } else { spd\_x = 5; } prD = true; break; case SDLK\_a: if( prD ) { spd\_x = 0; } else { spd\_x = -5; } prA = true; break; default: return; } break; case SDL\_KEYUP: switch( event->key.keysym.sym ) { case SDLK\_w: if( !prS ) { spd\_y = 0; } else { spd\_y = 5; } prW = false; break; case SDLK\_s: if( !prW ) { spd\_y = 0; } else { spd\_y = -5; } prS = false; break; case SDLK\_a: if( !prD ) { spd\_x = 0; } else { spd\_x = 5; } prA = false; break; case SDLK\_d: if( !prA ) { spd\_x = 0; } else { spd\_x = -5; } prD = false; break; default: return; } default: return; } } void Game::OnLoop() { x += spd\_x; y += spd\_y; if( x < 0 ) { x = 0; } else if( x > scr\_w - 100 ) { x = scr\_w - 100; } if( y < 0 ) { y = 0; } else if( y > scr\_h - 100 ) { y = scr\_h - 100; } } void Game::OnRender() { SDL\_SetRenderDrawColor( renderer, 0, 0, 0, 0x00 ); SDL\_RenderClear( renderer ); SDL\_Rect charc; charc.x = x; charc.y = y; charc.w = 100; charc.h = 100; SDL\_SetRenderDrawColor( renderer, 0, 0, 0xff, 0 ); SDL\_RenderFillRect( renderer, &charc ); SDL\_Delay( 1 ); SDL\_RenderPresent( renderer ); } void Game::OnCleanup() { SDL\_DestroyWindow( display ); SDL\_Quit(); } int main( int argc, char\*\* argv ) { Game game; 
return game.OnExecute(); } ``` If I just pass `SDL_RENDERER_ACCELERATED` I get tearing and a *vastly* higher frame-rate. Make sure your OS isn't configured to disable vsync by default. Upvotes: 3 [selected_answer]<issue_comment>username_2: Many developers seem to get some unwanted behaviours when enabling SDL\_RENDERER\_ACCELERATED flag on SDL 2.0.8. A ticket has been opened in libsdl's bugzilla (<https://bugzilla.libsdl.org/show_bug.cgi?id=4110>). It's about another problem but issue with hardware rendering is mentioned in description. For now, I use software rendering (SDL\_RENDERER\_SOFTWARE) as a fallback. Not really what I wanted to do but now I get the expected result. I'll try PREVENTSYNC... Upvotes: 2
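The essential change, isolated from the full listings above (error handling omitted):

```cpp
// Requesting vsync along with hardware acceleration makes SDL present each
// frame in step with the display, which removes the tearing on the rectangle.
Uint32 rflags = SDL_RENDERER_ACCELERATED | SDL_RENDERER_PRESENTVSYNC;
SDL_Renderer* renderer = SDL_CreateRenderer(display, -1, rflags);
```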
2018/03/20
1,287
4,220
<issue_start>username_0: I am new to stackoverflow and python so please bear with me. I am trying to run an Latent Dirichlet Analysis on a text corpora with the gensim package in python using PyCharm editor. I prepared the corpora in R and exported it to a csv file using this R command: ``` write.csv(testdf, "C://...//test.csv", fileEncoding = "utf-8") ``` Which creates the following csv structure (though with much longer and already preprocessed texts): ``` ,"datetimestamp","id","origin","text" 1,"1960-01-01","id_1","Newspaper1","Test text one" 2,"1960-01-02","id_2","Newspaper1","Another text" 3,"1960-01-03","id_3","Newspaper1","Yet another text" 4,"1960-01-04","id_4","Newspaper2","Four Five Six" 5,"1960-01-05","id_5","Newspaper2","Alpha Bravo Charly" 6,"1960-01-06","id_6","Newspaper2","Singing Dancing Laughing" ``` I then try the following essential python code (based on the [gensim tutorials](https://radimrehurek.com/gensim/tutorial.html)) to perform simple LDA analysis: ``` import gensim from gensim import corpora, models, similarities, parsing import pandas as pd from six import iteritems import os import pyLDAvis.gensim class MyCorpus(object): def __iter__(self): for row in pd.read_csv('//mpifg.local/dfs/home/lu/Meine Daten/Imagined Futures and Greek State Bonds/Topic Modelling/Python/test.csv', index_col=False, header = 0 ,encoding='utf-8')['text']: # assume there's one document per line, tokens separated by whitespace yield dictionary.doc2bow(row.split()) if __name__ == '__main__': dictionary = corpora.Dictionary(row.split() for row in pd.read_csv( '//.../test.csv', index_col=False, encoding='utf-8')['text']) print(dictionary) dictionary.save( '//.../greekdict.dict') # store the dictionary, for future reference ## create an mmCorpus corpora.MmCorpus.serialize('//.../greekcorpus.mm', MyCorpus()) corpus = corpora.MmCorpus('//.../greekcorpus.mm') dictionary = corpora.Dictionary.load('//.../greekdict.dict') corpus = corpora.MmCorpus('//.../greekcorpus.mm') # train model lda = gensim.models.ldamodel.LdaModel(corpus=corpus, id2word=dictionary, num_topics=50, iterations=1000) ``` I get the following error codes and the code exits: > > ...\Python\venv\lib\site-packages\setuptools-28.8.0-py3.6.egg\pkg\_resources\_vendor\pyparsing.py:832: DeprecationWarning: invalid escape sequence \d > > > \...\Python\venv\lib\site-packages\setuptools-28.8.0-py3.6.egg\pkg\_resources\_vendor\pyparsing.py:2736: DeprecationWarning: invalid escape sequence \d > > > \...\Python\venv\lib\site-packages\setuptools-28.8.0-py3.6.egg\pkg\_resources\_vendor\pyparsing.py:2914: DeprecationWarning: invalid escape sequence \g > > > \...\Python\venv\lib\site-packages\pyLDAvis\_prepare.py:387: > DeprecationWarning: > .ix is deprecated. Please use > .loc for label based indexing or > .iloc for positional indexing > > > I cannot find any solution and to be honest neither have any clue where exactly the problem comes from. I spent hours making sure that the encoding of the csv is utf-8 and exported (from R) and imported (in python) correctly. What am I doing wrong or where else could I look at? Cheers!<issue_comment>username_1: `DeprecationWarining` is exactly that - warning about a feature being *deprecated* which is supposed to prompt the user to use some other functionality instead to maintain the compatibility in the future. So in your case I would just watch for the update of libraries that you use. 
Starting with the last warning, it looks like it originates from `pandas` and has been logged against `pyLDAvis` [here](https://github.com/bmabey/pyLDAvis/issues/96). The remaining ones come from the `pyparsing` module, but it does not seem that you are importing it explicitly. Maybe one of the libraries you use has a dependency that relies on some relatively old, deprecated functionality. As a first step toward getting rid of the warnings, I would check whether upgrading those libraries helps. Good luck! Upvotes: 3 [selected_answer]<issue_comment>username_2: ``` import warnings warnings.filterwarnings("ignore") pyLDAvis.enable_notebook() ``` Try using this Upvotes: 0
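If the messages are only noise, they can also be silenced selectively instead of with the blanket filter above:

```python
import warnings

# Hide only DeprecationWarning; other warning categories stay visible.
warnings.filterwarnings("ignore", category=DeprecationWarning)
```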
2018/03/20
1,631
5,924
<issue_start>username_0: We are utilizing Terraform heavily for AWS Cloud provisioning. Our base terraform structure looks like this: ``` ├─ modules ├── x ├── y ├─ environments ├── dev │ ├── main.tf │ ├── output.tf │ └── variables.tf └── uat │ ├── main.tf │ ├── output.tf │ └── variables.tf └── prod ├── main.tf ├── output.tf └── variables.tf ``` As we reached a point where we have many modules and many environments, code duplication becomes a more serious headache now, we would like to get rid of as much of it as possible. Our main concern currently is with the `output.tf` files - every time we extend an existing module or add a new module, we need to set up the environment specific configuration for it (this is expected), but we still have to copy/paste the required parts into `output.tf` to output the results of the provisioning (like IP addresses, AWS ARNs, etc.). Is there a way to get rid of the duplicated `output.tf` files? Could we just define the wanted outputs in the modules themselves and see all defined outputs whenever we run terraform for a specific environment?<issue_comment>username_1: One way to resolve this is to create a `base` environment, and then symlink the common elements, for example: ``` ├─ modules ├── x ├── y ├─ environments ├── base │ ├── output.tf │ └── variables.tf ├── dev │ ├── main.tf │ ├── output.tf -> ../base/output.tf │ └── variables.tf -> ../base/variables.tf ├── uat │ ├── main.tf │ ├── output.tf -> ../base/output.tf │ └── variables.tf -> ../base/variables.tf ├── super_custom │ ├── main.tf │ ├── output.tf # not symlinked │ └── variables.tf # not symlinked └── prod ├── main.tf ├── output.tf -> ../base/output.tf └── variables.tf -> ../base/variables.tf ``` This approach only really works if your `output.tf` and `variables.tf` files are the same for each environment, and although you can have non-symlinked variants (e.g. `super_custom` above), this can become confusing as it's not immediately obvious which environments are custom and which aren't. YMMV. I try to keep the changes between environments limited to a `.tfvars` file per environment. It's worth reading [Charity Major's excellent post on tfstate files](https://charity.wtf/2016/03/30/terraform-vpc-and-why-you-want-a-tfstate-file-per-env/), which set me on this path. Upvotes: 2 <issue_comment>username_1: If your `dev`, `uat` and `prod` environments have the same shape, but different properties you could leverage [workspaces](https://www.terraform.io/docs/state/workspaces.html) to separate your environment state, along with separate `*.tfvars` files to specify the different configurations. This could look like: ``` ├─ modules │ ├── x │ └── y ├── dev.tfvars ├── prod.tfvars ├── uat.tfvars ├── main.tf ├── outputs.tf └── variables.tf ``` You can create a new workspace with: ``` terraform workspace new uat ``` Then deploying changes becomes: ``` terraform workspace select uat terraform apply --var-file=uat.tfvars ``` The workspaces feature ensures that different environments states are managed separately, which is a bonus. This approach only works when the differences between the environments are small enough that it makes sense to encapsulate the logic for that in the individual modules (for example, having a `high_availability` flag which adds some additional redundant infrastructure for `uat` and `prod`). Upvotes: 0 <issue_comment>username_2: We built and open sourced [Terragrunt](https://github.com/gruntwork-io/terragrunt) to solve this very issue. 
One of Terragrunt's features is the ability to download remote Terraform configurations. The idea is that you define the Terraform code for your infrastructure just once, in a single repo, called, for example, `modules`: ``` └── modules ├── app │ └── main.tf ├── mysql │ └── main.tf └── vpc └── main.tf ``` This repo contains typical Terraform code, with one difference: anything in your code that should be different between environments should be exposed as an input variable. For example, the app module might expose the following variables: ``` variable "instance_count" { description = "How many servers to run" } variable "instance_type" { description = "What kind of servers to run (e.g. t2.large)" } ``` In a separate repo, called, for example, live, you define the code for all of your environments, which now consists of just one `.tfvars` file per component (e.g. `app/terraform.tfvars`, `mysql/terraform.tfvars`, etc). This gives you the following file layout: ``` └── live ├── prod │ ├── app │ │ └── terraform.tfvars │ ├── mysql │ │ └── terraform.tfvars │ └── vpc │ └── terraform.tfvars ├── qa │ ├── app │ │ └── terraform.tfvars │ ├── mysql │ │ └── terraform.tfvars │ └── vpc │ └── terraform.tfvars └── stage ├── app │ └── terraform.tfvars ├── mysql │ └── terraform.tfvars └── vpc └── terraform.tfvars ``` Notice how there are no Terraform configurations (`.tf` files) in any of the folders. Instead, each `.tfvars` file specifies a `terraform { ... }` block that specifies from where to download the Terraform code, as well as the environment-specific values for the input variables in that Terraform code. For example, `stage/app/terraform.tfvars` may look like this: ``` terragrunt = { terraform { source = "git::[email protected]:foo/modules.git//app?ref=v0.0.3" } } instance_count = 3 instance_type = "t2.micro" ``` And `prod/app/terraform.tfvars` may look like this: ``` terragrunt = { terraform { source = "git::[email protected]:foo/modules.git//app?ref=v0.0.1" } } instance_count = 10 instance_type = "m2.large" ``` See the [Terragrunt documentation](https://github.com/gruntwork-io/terragrunt) for more info. Upvotes: 3
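A hedged sketch of the idea raised in the question — defining outputs once inside the module and keeping each environment's `output.tf` down to a one-line passthrough per value (resource and attribute names are illustrative):

```hcl
# modules/x/outputs.tf — defined once, next to the module's resources
output "instance_ip" {
  value = "${aws_instance.this.public_ip}"
}

# environments/dev/output.tf — re-export the module output
output "x_instance_ip" {
  value = "${module.x.instance_ip}"
}
```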
2018/03/20
1,616
5,983
<issue_start>username_0: I have implemented this class in workbox 2, now I have upgraded to version 3 but workbox.runtimeCaching.Handler is deprecated. Can someone help me on how to develop it in workbox 3?\* ``` importScripts('workbox-sw.prod.v2.1.2.js'); importScripts('workbox-runtime-caching.prod.v2.0.3.js'); importScripts('workbox-cache-expiration.prod.v2.0.3.js'); const workboxSW = new self.WorkboxSW(); class AlwaysNetworkWithCacheUpdateHandler extends workbox.runtimeCaching.Handler{ setCacheOptions(cacheOptions){ this.cacheOptions = cacheOptions; } handle({event}){ let requestWrapper = new workbox.runtimeCaching.RequestWrapper({ cacheName: this.cacheOptions.cacheName, plugins:[ new workbox.cacheExpiration.CacheExpirationPlugin(this.cacheOptions.expirationOptions) ] }); return ( requestWrapper .fetchAndCache({ request: event.request, waitOnCache: true }) ); } } ```
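For reference, a hedged sketch of how roughly the same behaviour could be expressed with the Workbox 3 API as documented in the v3 migration guide — treat the exact namespaces (`workbox.strategies.networkFirst`, `workbox.expiration.Plugin`) and the route/cache names as assumptions to verify against the docs for your Workbox version; note that `networkFirst` additionally falls back to the cache when the network fails, which the original handler did not do:

```javascript
importScripts('https://storage.googleapis.com/workbox-cdn/releases/3.0.0/workbox-sw.js');

// Close built-in replacement for the custom handler above: go to the
// network, keep the cache updated, and expire old entries.
workbox.routing.registerRoute(
  /\/api\//,                              // illustrative route pattern
  workbox.strategies.networkFirst({
    cacheName: 'my-cache',                // illustrative cache name
    plugins: [
      new workbox.expiration.Plugin({ maxEntries: 50 }),
    ],
  })
);
```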
2018/03/20
2,431
7,795
<issue_start>username_0: I have been working on a script to concatenate multiple csv files into a single, large csv. The csv's contain names of folders and their respective sizes, in a 2-column setup with the format "Size, Projectname" Example of a single csv file: ``` 49747851728,ODIN 32872934580,_WORK 9721820722,LIBRARY 4855839655,BASELIGHT 1035732096,ARCHIVE 907756578,USERS 123685100,ENV 3682821,SHOTGUN 1879186,SALT 361558,SOFTWARE 486,VFX 128,DNA ``` For my current test I have 25 similar files, with different numbers in the first column. I am trying to get this script to do the following: * Read each csv file * For each Project it sees, scan the outputfile if that Project was already printed to the file. If not, print the Projectname * For each file, for each Project, if the Project was found, print the Size to the output csv. However, I need the Projects to all be on textline 1, comma separated, so I can use this outputfile as input for a javascript graph. The Sizes should be added in the column below their projectname. My current script: ``` csv_folder=$(echo "$1" | sed 's/^[ \t]*//;s/\/[ \t]*$//') csv_allfiles="$csv_folder/*.csv" csv_outputfile=$csv_folder.csv echo -n "" > $csv_outputfile for csv_inputfile in $csv_allfiles; do while read line && [[ $line != "" ]]; do projectname=$(echo $line | sed 's/^\([^,]*\),//') projectfound1=$(cat $csv_outputfile | grep -w $projectname) if [[ ! $projectfound1 ]]; then textline=1 sed "${textline}s/$/${projectname}, /" >> $csv_outputfile for csv_foundfile in $csv_allfiles; do textline=$(echo $textline + 1 | bc ) projectfound2=$(cat $csv_foundfile | grep -w $projectname) projectdata=$(echo $projectfound2 | sed 's/\,.*$//') if [[ $projectfound2 ]]; then sed "${textline}s/$/$projectdata, /" >> $csv_outputfile fi done fi done < $csv_inputfile done ``` My current script finds the right information (projectname, projectdata) and if I just 'echo' those variables, it prints the correct data to a file. However, with echo it only prints in a long list per project. I want it to 'jump back' to line 1 and print the new project at the end of the current line, then run the loop to print data at the end of each next line. I was thinking this should be possible with sed or awk. sed should have a way of inserting text to a specific line with ``` sed '{n}s/search/replace/' ``` where {n} is the line to insert to awk should be able to do the same thing with something like ``` awk -v l2="$textline" -v d="$projectdata" 'NR == l2 {print d} {print}' >> $csv_outputfile ``` However, while replacing the sed commands in the script with ``` echo $projectname echo $projectdata ``` spit out the correct information (so I know my variables are filled correctly) the sed and awk commands tend to spit out the entire contents of their current inputcsv; not just the line that I want them to. Pastebin outputs per variant of writing to file * <https://pastebin.com/XwxiAqvT> - sed output * <https://pastebin.com/xfLU6wri> - echo, plain output (single column) * <https://pastebin.com/wP3BhgY8> - echo, detailed output per variable * <https://pastebin.com/5wiuq53n> - desired output As you see, the sed output tends to paste the whole contents of inputcsv, making the loop stop after one iteration. (since it finds the other Projects after one loop) So my question is one of these; * How do I make sed / awk behave the way I want it to; i.e. print only the info in my var to the current textline, instead of the whole input csv. Is sed capable of this, printing just one line of variable? 
Or * Should I output the variables through 'echo' into a temp file, then loop over the temp file to make sed sort the lines the way I want them to? (Bear in mind that more .csv files will be added in the future, I can't just make it loop x times to sort the info) * Is there a way to echo/print text to a specific text line without using sed or awk? Is there a printf option I'm missing? Other thoughts? Any help would be very much appreciated.<issue_comment>username_1: A way to accomplish this transposition is to save the data to an associative array. In the following example, we use a two dimensional array to keep track of our data. Because ordering seems to be important, we create a col array and create a new increment whenever we see a new **projectname** -- this col array ends up being our first index into our data. We also create a row array which we increment whenever we see a new data for the current column. The row number is our second index into data. At the end, we print out all the records. ``` #! /usr/bin/awk -f BEGIN { FS = "," OFS = ", " rows=0 cols=0 head="" split("", data) split("", row) split("", col) } !($2 in col) { # new project if (head == "") head = $2 else head = head OFS $2 i = col[$2] = cols++ row[i] = 0 } { i = col[$2] j = row[i]++ data[i,j] = $1 if (j > rows) rows = j } END { print head for (j=0; j<=rows; ++j) { if ((0,j) in data) x = data[0,j] else x = "" for (i=1; i ``` As a bonus, here is a script to reproduce the detailed output from one of your pastebins. ``` #! /usr/bin/awk -f BEGIN { FS = "," split("", data) # accumulated data for a project split("", line) # keep track of textline for data split("", idx) # index into above to maintain input order sz = 0 } $2 in idx { # have seen this projectname i = idx[$2] x = ORS "textline = " ++line[i] x = x ORS "textdata = " $1 data[i] = data[i] x next } { # new projectname i = sz++ idx[$2] = i x = "textline = 1" x = x ORS "projectname = " $2 x = x ORS "textline = 2" x = x ORS "projectdata = " $1 data[i] = x line[i] = 2 } END { for (i=0; i ``` Upvotes: 1 <issue_comment>username_2: Fill parray with project names and array with values, then print them with bash printf, You can choose column width in printf command (currently 13 characters - %13s) ``` #!/bin/bash declare -i index=0 declare -i pindex=0 while read project; do parray[$pindex]=$project index=0 while read;do array[$pindex,$index]="$REPLY" index+=1 done <<< $(grep -h "$project" *.csv|cut -d, -f1) pindex+=1 done <<< $(cat *.csv|cut -d, -f 2|sort -u) maxi=$index maxp=$pindex for (( pindex=0; $pindex < $maxp ; pindex+=1 ));do STR="%13s $STR" VAL="$VAL ${parray[$pindex]}" done printf "$STR\n" $VAL for (( index=0; $index < $maxi;index+=1 ));do STR=""; VAL="" for (( pindex=0; $pindex < $maxp;pindex+=1 )); do STR="%13s $STR" VAL="$VAL ${array[$pindex,$index]}" done printf "$STR\n" $VAL done ``` Upvotes: 0 <issue_comment>username_3: If you are OK with the output being sorted by name this one-liner might be of use: ``` awk 'BEGIN {FS=",";OFS=","} {print $2,$1}' * | sort | uniq ``` The files have to be in the same directory. If not a list of files replaces the \*. First it exchanges the two fields. Awk will take a list of files and do the concatenation. Then sort the lines and print just the unique lines. This depends on the project size always being the same. The simple one-liner above gives you one line for each project. If you really want to do it all in awk and use awk write the two lines, then the following would be needed. 
There is a second awk at the end that accumulates each column entry in an array then spits it out at the end: ``` awk 'BEGIN {FS=","} {print $2,$1}' *| sort |uniq | awk 'BEGIN {n=0} {p[n]=$1;s[n++]=$2} END {for (i=0;i<n;i++) printf "%s%s", p[i], (i<n-1?",":"\n"); for (i=0;i<n;i++) printf "%s%s", s[i], (i<n-1?",":"\n")}' ``` If you have the rs utility then this can be simplified to ``` awk 'BEGIN {FS=","} {print $2,$1}' *| sort |uniq | rs -C',' -T ``` Upvotes: 0
2018/03/20
1,498
6,554
<issue_start>username_0: I'm performing an update via a method using Hibernate and the EntityManager. This update method is called multiple times (within a loop). It seems like when I execute it the first time, it locks the table and does not free it. When trying to update the table via SQL Developer after having closed the application, I see the table is still locked because the update is hanging. What do you see as a solution to this problem? If you need more information, let me know. **Class** ``` @Repository @Transactional(propagation = REQUIRES_NEW) public class YirInfoRepository { @Autowired EntityManager entityManager; @Transactional(propagation = REQUIRES_NEW) public void setSent(String id) { String query = "UPDATE_QUERY"; Query nativeQuery = entityManager.createNativeQuery(String.format(query, id)); nativeQuery.executeUpdate(); } } ``` --- **UPDATE** After having waited more than one hour, I launched the application again and it worked fine once but now again, it hangs. --- **UPDATE 2 -- I'll give a maximum bounty to whoever helps me solve this** On another place I use an application managed entity manager and it still gives me the same type of errors. ``` public void fillYirInfo() { File inputFile = new File("path"); try (InputStream inputStream = new FileInputStream(inputFile); BufferedReader bufferedReader = new BufferedReader(new InputStreamReader(inputStream))) { bufferedReader.lines().skip(1).limit(20).forEach(line -> { String[] data = line.split(","); String rnr = data[0]; String linked = data[1]; String email = data.length > 2 ? data[2] : ""; String insuredId = insuredPeopleRepository.getInsuredIdFromNationalId(rnr); int modifiedCounter = 0; if (!isNullOrEmpty(insuredId)) { EntityManager entityManager = emf.createEntityManager(); EntityTransaction transaction = entityManager.getTransaction(); Query nativeQuery = entityManager.createNativeQuery( "QUERY" ); transaction.begin(); nativeQuery.executeUpdate(); entityManager.flush(); transaction.commit(); entityManager.close(); } System.out.println(modifiedCounter + " rows modified"); }); } catch (IOException e) { e.printStackTrace(); } } ```<issue_comment>username_1: Try without an update-query: ``` @Repository @Transactional(propagation = REQUIRES_NEW) public class YirInfoRepository { @Autowired EntityManager entityManager; @Transactional(propagation = REQUIRES_NEW) public void setSent(String id) { //guessing your class name and method.. final YirInfo yirInfo = entityManager.find(YirInfo.class, id); yirInfo.setSent(); } } ``` Might not be as fast as a single update query, but it's possible to get it reasonably fast, unless the amount of data is huge. This is the preferred way of using Hibernate/JPA, instead of thinking in terms of single values and SQL queries, you work with entities/objects and (sometimes) HQL/JPQL queries. Upvotes: 2 <issue_comment>username_2: The first thing you have to understand is that for the first example, you are using a native query to update rows in the DB. In this case you are completely skipping Hibernate to do anything for you. In your second example, you have the same thing, you are updating via an update query. You don't need to flush the entity manager as it's only necessary for transferring the pending changes made to your entity objects within that entity manager. Plus I don't know how your example works as you are autowiring the entity manager and not using the `@PersistenceContext` annotation. Make sure you use this one properly because you might have misconfigured the application. 
Also there is no need to manually create the entity manager when using Spring as it looks in the second example. Just use `@PersistenceContext` to get an entity manager in your app. You are also mixing up transaction management. In the first example, it's enough if you put the `@Transactional` annotation to either of your method or to the class. For the other example, you are doing manual transaction management which makes no sense in this case. If you are using Spring, you can simply rely on declarative transaction management. The first thing I'd check here is to integrate [datasource-proxy](https://github.com/ttddyy/datasource-proxy) into your connection management and log out how your statements are executed. With this info, you can make sure that the query is sent to the DB side and the DB is executing it very slowly, or you are having a network issue between your app and db. If you find out that the query is sent properly to the DB, you want to analyze your query, because most probably it's just executed very slowly and needs some optimizations. For this, you can use the Explain plan feature, to find out how your execution plan looks like and then make it faster. Upvotes: 0 <issue_comment>username_3: You are using `@Transactional` annotation. This means you are using Spring Transaction. Then in your UPDATE 2 you are using transaction by yourself and managed by spring (I guess it's another project or class not managed by Spring). In any case what I would do is to try to update your records in single spring transaction and I'd not use `@Transactional` in DAO layer but in service layer. Something like this: Service layer: ``` @Service public class YirInfoService { @Autowired YirInfoRepository dao; @Transactional(propagation = REQUIRES_NEW) public void setSent(List < String > ids) { dao.setSents(ids); } } ``` DAO layer: ``` @Repository public class YirInfoRepository { @Autowired EntityManager entityManager; //Here you can update by using and IN statement or by doing a cycle //Let's suppose a bulk operation public void setSents(List < String > ids) { String query = "UPDATE_QUERY"; for (int i = 0; i < ids.size(); i++) { String id = ids.get(i); Query nativeQuery = entityManager.createNativeQuery(String.format(query, id)); nativeQuery.executeUpdate(); if (i % 20 == 0) { entityManager.flush(); entityManager.clear(); } } } } ``` Upvotes: 2 [selected_answer]
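For reference, injecting a container-managed entity manager the way the second answer recommends looks roughly like this (a hedged sketch — the entity name and the `sent` field are assumptions, not taken from the question):

```java
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import org.springframework.stereotype.Repository;
import org.springframework.transaction.annotation.Transactional;

@Repository
public class YirInfoRepository {

    // Spring injects a transaction-bound proxy; no manual createEntityManager() needed.
    @PersistenceContext
    private EntityManager entityManager;

    @Transactional
    public void setSent(String id) {
        entityManager
            .createQuery("UPDATE YirInfo y SET y.sent = true WHERE y.id = :id")
            .setParameter("id", id)
            .executeUpdate();
    }
}
```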
2018/03/20
287
1,175
<issue_start>username_0: I made a very simple Qt Quick Application (all using QML) and want it to appear as full screen/immersive mode on android. How can I achieve that?<issue_comment>username_1: Android creates an "activity" instance (android "view") before your/any Qt code runs. This view/activity decides whether the "title bar" of android is shown or not, so you will need to modify the parameters with which this view/window is created. I believe this can't be done at runtime of your app (you could make two activities and switch between them though). Your Qt-project needs an AndroidManifest.xml to be deployed with your compiled code. If you didn't add one yourself then the default one gets pulled during deployment, but you can override it with your own file. The parameters needed to go fullscreen can be set in this file. See [About the Full Screen And No Titlebar from manifest](https://stackoverflow.com/questions/5752619/about-the-full-screen-and-no-titlebar-from-manifest) for details. Upvotes: 0 <issue_comment>username_2: You can achieve it with QML in ApplicationWindow: ``` ApplicationWindow { //... visibility: Window.FullScreen } ``` Upvotes: 2
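Putting the second answer's snippet into a complete file, a minimal `main.qml` looks roughly like this (module import versions are illustrative):

```qml
import QtQuick 2.9
import QtQuick.Controls 2.2
import QtQuick.Window 2.2   // provides the Window.FullScreen enum value

ApplicationWindow {
    visible: true
    visibility: Window.FullScreen   // full screen / immersive on Android
}
```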
2018/03/20
527
1,631
<issue_start>username_0: Using Ansible, in my role, I have defined a variable (`defaults/main.yml`): ``` someHosts: "[ \"{{ hosts| join(':' + port + '\", \"') }}:{{ port }}\" ]" ``` which basically given `hosts=["host1", "host2"]` and `port=5050` is meant to turn the above into: ``` ["host1:5050", "host2:5050"] ``` No I am using it in my jinja2 template file (`*.j2`) with `template` module. My template file has the following line: ``` hosts => {{ someHosts }} ``` But now after running it I see the following on the actual machine (with `less`): ``` ['host1:5050', 'host2:5050'] ``` Why on earth and how did it change double quotes to single quotes? EDIT/ANSWER Building on @techraf's answer, this is what I came up with: ``` someHosts: "{{ hosts | zip_longest([], fillvalue=':' + port) | map('join') | list | to_json }}" ```<issue_comment>username_1: > > Why on earth and how did it change double quotes to single quotes? > > > Because you created a list object not a string; and this is how Jinja2 renders lists of string. See for yourself by adding: ``` - debug: var: someHosts|type_debug ``` You get: ``` ok: [localhost] => { "someHosts|type_debug": "list" } ``` --- If your question was however, how to get that list in JSON format, then your template should be: ``` hosts => {{ someHosts | to_json }} ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: If using template: ``` [{% for host in groups['mygroup']%} "{{ host }}:{{port}}"{% if not loop.last %},{% endif %} {% endfor %}] ``` All you have to do is to define or replace mygroup with your group. Upvotes: 1
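A quick way to sanity-check the rendered value — an illustrative play that builds the same list with a different filter chain (`regex_replace` appending the port to each host) and prints it as JSON:

```yaml
- hosts: localhost
  gather_facts: false
  vars:
    hosts: ["host1", "host2"]
    port: "5050"
    someHosts: "{{ hosts | map('regex_replace', '$', ':' + port) | list }}"
  tasks:
    - debug:
        msg: "{{ someHosts | to_json }}"   # -> ["host1:5050", "host2:5050"]
```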
2018/03/20
893
3,298
<issue_start>username_0: I copy some cells from one worksheet manually using the ctrl + c command and want to paste it using a macro I created. I have the following code: ``` Range("A2:W5000").Select Selection.ClearContents Range("A2").Select ActiveSheet.Paste With Selection.Interior .PatternColorIndex = 7 .ThemeColor = xlThemeColorAccent2 .TintAndShade = 0.799981688894314 .PatternTintAndShade = 0 End With Range("A2").Select ``` This macro runs just fine on my machine, but for some reason, I get an error with the `ActiveSheet.Paste` when running the exact same macro on another PC. Any ideas on why this could be happening? Thank you in advance for any suggestions.<issue_comment>username_1: Most probably you are working protected worksheets. Thus, you are getting `1004` error. Try to check whether teh worksheet is protected before doing anything with it: ``` Sub TestMe() If ActiveSheet.ProtectContents Then MsgBox ActiveSheet.Name & " is protected!" Else Range("A2:W5000").Select Selection.ClearContents Range("A2").Select ActiveSheet.Paste With Selection.Interior .PatternColorIndex = 7 .ThemeColor = xlThemeColorAccent2 .TintAndShade = 0.799981688894314 .PatternTintAndShade = 0 End With Range("A2").Select End If End Sub ``` Upvotes: 0 <issue_comment>username_2: I think PEH has the right answer. But I updated your code to reflect ``` Dim ws as Worksheet set ws = ActiveSheet 'Setting the worksheet object and then referencing it for each Range will ensure that the macro doesn't get confused as to which sheet it should be getting the Range from. ws.Range("A2:W5000").ClearContents 'No need to select cells first before clearing them ws.Range("A2").PasteSpecial 'Once again, no need to select before pasting. It will do a normal paste if you do PasteSpecial only, but if you wanted to say paste values only it would look like this .PasteSpecial(xlPasteValues) With ws.Range("A2").Interior .PatternColorIndex = 7 .ThemeColor = xlThemeColorAccent2 .TintAndShade = 0.799981688894314 .PatternTintAndShade = 0 End With ws.Range("A2").Select 'No necessary unless you think that A2 won't be visible when the other user uses this macro. No harm in leaving it in though. ``` Hopefully, this helps you. Good luck! Jason Upvotes: 1 <issue_comment>username_3: The issue is that you start copy **before** you run that macro. But if you use `.ClearContents` in your macro the copy selection gets lost. Therefore `.PasteSpecial` after `.ClearContents` cannot work. you can easily test this with ``` Sub test() Range("A1").Copy Debug.Print Application.CutCopyMode '=1 means something is copied Range("A2").ClearContents 'kills cutcopymode Debug.Print Application.CutCopyMode '=0 means nothing is copied Range("A3").PasteSpecial 'fails because nothing is selected for copy anymore End Sub ``` --- So the solution would be … * not to use `.ClearContents` or any other action that kills the copy selection before `.Paste`. * write a procedure that … 1. `.ClearContents` first and then 2. Copies the desired range (eg. Selection) and finally 3. Pastes Upvotes: 2 [selected_answer]
2018/03/20
779
2,973
<issue_start>username_0: I am working with WPF and have ListViewItems like this: ``` ``` What this XAML code does is adding a stack panel with an image inside into a ListViewItem. I'd like to know how to achieve this programmatically. For example, I initiated a ListViewItem instance and can access its content. But this class doesn't have any fields like subitems.
2018/03/20
1,594
4,345
<issue_start>username_0: I'm having some trouble with a weird code structure and I can't seem to find an answer to the problem after 2 hours of try and error. I have 2 variables and an array structure with some sub arrays (this is important for later on) the system is to big to post the full code so here is the problem part: ``` $counter= 10; $times = 20; //Random 1 / 0 while($times > 0){ $number = rand(0,1); if($number == 1) { $counter++; } else { $counter--; } $times--; } ``` So this is straightforward, problem however is that the $total which is 10 at the start has to end on 10 as well, so there has to be 10 times an `1` and 10 times an `0`. I tried to work around it using: ``` $number = 0; if ($counter > 0) { $number = 1; $counter--; } ``` Which worked and is a simple solution but see the 'addition 1` But this has to be randomized. And each of those positions is in the actual program an array so shuffling the array isn't an option. TLDR: I need to shuffle only 1 part of an array or have to random generate the `0 or 1` with the given code in the first code sample. And to be honest, I have no idea at this moment how I could do this so any help is appreciated. Addition 1: ``` array( array('I' => 1, 'X' => '7'), array('I' => 1, 'X' => '7'), array('I' => 1, 'X' => 'value'), array('I' => 1, 'X' => 'value'), array('I' => 1, 'X' => 'value'), array('I' => 1, 'X' => 'value'), array('I' => 1, 'X' => 'value'), array('I' => 1, 'X' => 'value'), array('I' => 1, 'X' => 'value'), array('I' => 1, 'X' => 'value'), array('I' => 0, 'X' => 'value'), array('I' => 0, 'X' => 'value'), array('I' => 0, 'X' => 'value'), array('I' => 0, 'X' => 'value'), array('I' => 0, 'X' => 'value'), array('I' => 0, 'X' => 'value'), array('I' => 0, 'X' => 'value'), array('I' => 0, 'X' => 'value'), array('I' => 0, 'X' => 'value'), array('I' => 0, 'X' => 'value'), ); ``` each value can occur 2 times (will not always occur 2 times) and if this is the case one of the 2 value has to be 'I' => 1 and 'I' => 0 but the X is filled with a random as well: ``` $x = rand(1, 100) ``` This is the actual structure sorry if I was unclear with the initial post and now my problem is more clear I presume. Addition 2: The expected result: ``` //This is correct 'I' => 1, 'X' => 7 'I' => 0, 'X' => 7 //This is not correct: 'I' => 1, 'X' => 7 'I' => 1, 'X' => 7 //This is not correct 'I' => 0, 'X' => 7 'I' => 0, 'X' => 7 ``` * 10 times an 'I' => 1 * 10 times an 'I' => 0 * If an 'X' occurs 2 times see above code sample:<issue_comment>username_1: ``` $yourSample = [ 0 => 0, 1 => 0, 2 => 0, 3 => 0, 4 => 0, 5 => 0, 6 => 0, 7 => 0, 8 => 0, 9 => 0, 10 => 1, 11 => 1, 12 => 1, 13 => 1, 14 => 1, 15 => 1, 16 => 1, 17 => 1, 18 => 1, 19 => 1 ]; $total = count($yourSample); $newIndexes = []; for ($i = 0; $i < $total; $i++) { $randItem = array_rand($yourSample); $newIndexes[] = $yourSample[$randItem]; unset($yourSample[$randItem]); } //echo ' ``` '; //var_dump($newIndexes); ``` ``` Is it fits your issue, right? 
Update 1: ``` php function getRandomX($sample, $spe) { $rand = mt_rand(0, 100); foreach ($sample as $element) { if ($element['X'] == $rand and $spe['I'] == $element['I']) { $rand = getRandomX($sample, $spe); } } return $rand; } function makeResult() { $sample = array_merge( array_fill(0, 10, ['I' = 1, 'X' => null]), array_fill(0, 10, ['I' => 0, 'X' => null]) ); shuffle($sample); foreach ($sample as &$value) { $value['X'] = getRandomX($sample, $value); } return $sample; } echo ' ``` '; var_export(makeResult()); ``` ``` Upvotes: 2 [selected_answer]<issue_comment>username_2: The simplest solution is to just do a [Fisher-Yates Shuffle](https://en.wikipedia.org/wiki/Fisher%E2%80%93Yates_shuffle). That works like this: 1. Pick the element at a random position in the array to be last. Swap that with the last so that the right thing is now last. 2. Pick the element at a random position anywhere other than last to be last. Swap that with the last so that the right thing is now last. 3. Repeat moving up the array until you have chosen what goes in every position. Upvotes: 2
2018/03/20
711
1,959
<issue_start>username_0: How to write specflow test for jQuery Ui autocomplete. Thanks! ``` var practices = [ { "Data": "1", "value": "abc" }, { "Data": "2", "value": "efg" } ]; $('#autocomplete').autocomplete({ lookup: practices, onSelect: function (suggestion) { //do something } }); ```
2018/03/20
433
1,248
<issue_start>username_0: How do I get the number of dictionaries where value satisfies a given condition? ``` list_of_dict = [ {'seq_no': 10, 'file_path': 'file/path', 'rel_item': None}, {'seq_no': 22, 'file_path': 'file/path', 'rel_item': 0}, {'seq_no': 32, 'file_path': 'file/path', 'rel_item': 0}, {'seq_no': 90, 'file_path': 'file/path', 'rel_item': 0}, {'seq_no': 10, 'file_path': 'file/path', 'rel_item': None}, ] ``` I would like to count the number of dictionaries where key `rel_item=None`.<issue_comment>username_1: Here is one way. ``` c = sum(i['rel_item'] is None for i in list_of_dict) # 2 ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: This will do the job: ``` len([x for x in list_of_dict if x["rel_item"] == None]) ``` Upvotes: 1 <issue_comment>username_3: You can using `map` then ``` sum((map(lambda x : x['rel_item'] is None,list_of_dict))) Out[721]: 2 ``` Upvotes: 0 <issue_comment>username_4: You can [`filter`](https://docs.python.org/3/library/functions.html#filter) the `list_of_dict` based on your criteria ``` >>> len(list(filter(lambda x: x['rel_item'] is None, list_of_dict))) >>> 2 ``` Upvotes: 1
2018/03/20
297
1,168
<issue_start>username_0: I code using Visual Studio Code on Ubuntu, with the Beautify and Perl extensions among others. An annoyance is that this bit of code, `split /\|/` breaks the highlighter.[![syntax is broken between the two occurrences of the code](https://i.stack.imgur.com/sxTmC.png)](https://i.stack.imgur.com/sxTmC.png) I'm a bit lost about where to poke in vscode to fix the highlighting error. This is a large codebase that uses that regex relatively frequently. While refactoring it all to centralize that code might be a good suggestion in theory, for the purpose of the question please assume I cannot change this code right now. I was aiming more at where to poke in the vscode/extension highlighter, or if it is an actual error.<issue_comment>username_1: Even if it didn't break syntax highlighting, leaning toothpicks are hardly readable. Better use something like: ``` split qr{\Q|}, ... ``` Upvotes: 3 <issue_comment>username_2: I found out that the easiest and clearest thing to do that solved the problem was calling the function using parentheses. ``` @result = split(/\|/, $input); ``` displays correctly. Upvotes: 1 [selected_answer]
2018/03/20
1,025
3,957
<issue_start>username_0: I'm making a script with a do-while loop that randomly selects a color between 6 colors. Then it asks the person to guess the color. If they get it wrong it says Incorrect and asks if they want to try again. If they do it selects a new color (This happens and its ok but i wish it wouldnt but thats not the problem) then they guess again. The issue occurs when they guess the color right it still outputs saying that they're incorrect. I can't figure out why it happens and need help. My code is the following: ``` import java.util.Random; import java.util.Scanner; public class MCassignment11 { public static void main(String[] args) { Scanner keyboard = new Scanner(System.in); Random rng = new Random(); String again, guess; do { int colorchoice = rng.nextInt(5); String color; if (colorchoice == 0) color = "RED"; else if (colorchoice == 1) color = "BLUE"; else if (colorchoice == 2) color = "GREEN"; else if (colorchoice == 3) color = "YELLOW"; else if (colorchoice == 4) color = "ORANGE"; else color = "PINK"; System.out.println("Guess my favorite color. There are 6 options."); guess = keyboard.next(); if (guess.equals(colorchoice)) System.out.println("Correct my favorite color is " + color); else System.out.println("Incorrect. Would you like to try again? (y/n)" + colorchoice); again = keyboard.next(); } while (again.equals("y")); } } ```<issue_comment>username_1: You are comparing `String` to `int`: ``` if (guess.equals(colorchoice)) ``` So you should convert one type to the other before. You could convert the string using `Integer.parseInt()` or the int concatenating it to `""`. See following please: ``` public static void main(String[] args) { String a = "1"; int b = 1; System.out.println(a.equals(b)); //Prints false System.out.println(a.equals("" + b)); //Prints true System.out.println(Integer.parseInt(a)==b); //Prints true } ``` Upvotes: 0 <issue_comment>username_2: One issue here is that your comparison is case-sensitive, meaning that if the user entered something like `Red`, `red`, or `rEd` or something like that it'll be considered incorrect. (Technically, `Red` and `RED` aren't equal strings). One option is to use the [toUpperCase() method](https://www.tutorialspoint.com/java/java_string_touppercase.htm) to make sure that the user input will be treated as if they wrote it with capital letters, like this: ``` if (guess.toUpperCase().equals(color)) ``` As @OHGODSPIDERS pointed out in the comments, Java strings also have an [equalsIgnoresCase() method](https://www.tutorialspoint.com/java/lang/string_equalsignorecase.htm) that you could use here too. Either will work. Also, as @username_1 pointed out, you should compare `guess` to `color` rather than to `colorchoice`. One other point: ``` if (guess.equals(colorchoice)) System.out.println("Correct my favorite color is " + color); else System.out.println("Incorrect. Would you like to try again? (y/n)" + colorchoice); again = keyboard.next(); ``` To make sure that you don't prompt the user to keep guessing, you should do the following: ``` if (guess.toUpperCase().equals(color)) { System.out.println("Correct my favorite color is " + color); // Explicitly set "again" to "n" so that we won't loop again again = "n"; } else // Add brackets so that it only prompts the user for input if they had a wrong answer { System.out.println("Incorrect. Would you like to try again? (y/n)" + colorchoice); again = keyboard.next(); } ``` Upvotes: 3 [selected_answer]
2018/03/20
1,304
4,053
<issue_start>username_0: How can i convert a hex string in SQL Server to binary? Better yet, how can i convert a hex string in SQL Server to an integer? The problem is that every existing answer on Stackoverflow assumes SQL Server 2008.

Failed attempts
===============

* **[Convert hex string to binary SQL Server](https://stackoverflow.com/questions/39674205/convert-hex-string-to-binary-sql-server)**

```
SELECT CONVERT(binary(16),'0x01',1)

0x30783031000000000000000000000000
```

* **[SQL Server hex string to varbinary conversion](https://stackoverflow.com/questions/14145904/sql-server-hex-string-to-varbinary-conversion)**

```
select CONVERT(varbinary(max), '0x01', 1);
0x30783031

select CONVERT(varbinary(max), '01', 2);
0x3031
```

* **[Convert integer to hex and hex to integer](https://stackoverflow.com/questions/703019/convert-integer-to-hex-and-hex-to-integer)**

```
-- If the '0x' marker is present:
SELECT CONVERT(INT, CONVERT(VARBINARY, '0x000001', 1))
808464433

-- If the '0x' marker is NOT present:
SELECT CONVERT(INT, CONVERT(VARBINARY, '000001', 2))
808464433
```

Edit
----

...yes 2005 has varbinary. Even 2000 has `varbinary`:

```
SELECT name, xtype FROM systypes WHERE name LIKE '%binary%';
SELECT @@version;

name      xtype
--------- -----
varbinary 165
binary    173

(No column name)
---------------------------------
Microsoft SQL Server 2000 - 8.00.2039 (Intel X86) May 3 2005 23:18:38 Copyright (c) 1988-2003 Microsoft Corporation Standard Edition on Windows NT 5.0 (Build 2195: Service Pack 4)
```

[Even SQL Server 6.5 has varbinary.](https://www.databasejournal.com/features/mssql/article.php/1442371/SQL-Server-65-Datatypes.htm) *([archive](http://archive.is/C9iPK))* Interesting, and typical SO fashion, to try to circumvent the question rather than answer it.
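For what it's worth, on pre-2008 servers (where the binary style argument of `CONVERT` is not available for this) the hex string can be parsed digit by digit in plain T-SQL. A rough sketch, assuming the input has no `0x` prefix:

```sql
DECLARE @hex varchar(16), @i int, @result bigint;
SET @hex = '0000001A';   -- sample input; strip any leading '0x' first
SET @i = 1;
SET @result = 0;

WHILE @i <= LEN(@hex)
BEGIN
    -- each character contributes its 0..15 value; CHARINDEX is 1-based, hence the -1
    SET @result = @result * 16
                + CHARINDEX(SUBSTRING(UPPER(@hex), @i, 1), '0123456789ABCDEF') - 1;
    SET @i = @i + 1;
END

SELECT @result AS int_value,
       CAST(@result AS binary(8)) AS binary_value;
```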
2018/03/20
439
1,447
<issue_start>username_0: I have multiple mongodb documents with some different fields but with one same field name as an identifier. How do I query the different documents using the one identical field? For example, the following documents all have a field for "name" but no other fields in common. ``` { "_id" : 1, "name" : "dave123", "gender" : "male"} { "_id" : 2, "name" : "dave123", "age" : 23} { "_id" : 3, "name" : "an567", "gender" : "male"} { "_id" : 4, "name" : "an567", "age" : 21} ``` If I wanted to query everyone who is a male and is 23, how would I go about that? The following code doesn't return anything because there are no documents with both fields. Rather, there are two separate documents with one matching field each. ``` df.collections.find({$and: [{"gender" : "male"}, {"age" : 23}]}) ```<issue_comment>username_1: If you don't have `gender` and `age` together in the document, you just can't have any result with your query Upvotes: 0 <issue_comment>username_2: Use [`$group`](https://docs.mongodb.com/manual/reference/operator/aggregation/group/) to prepare the data first followed by [`$match`](https://docs.mongodb.com/manual/reference/operator/aggregation/match/) in a aggregate query. Something like ``` db.collections.aggregate([ {"$group":{ "_id":"$name", "gender":{"$push":"$gender"}, "age":{"$push":"$age"} }}, {"$match":{"gender":"male","age":23}} ]) ``` Upvotes: 2 [selected_answer]
2018/03/20
735
2,640
<issue_start>username_0: I am working on a Java project that deals with projecting a grid to the user using the paint() method along with a JFrame and Canvas. I have a class called Runner, which includes my main method. Within Runner, I have the java paint() method. I also have a public variable of type Player within the Runner class. The problem arises when I try to call a public function of the Player instance variable. I get the following run-time error: [Run-time error](https://i.stack.imgur.com/vyg1s.jpg) Can you not call other functions from inside the paint() method? The Java documentation didn't mention that that action would be bad, as far as I can tell with my limited Java understanding. I will include some of my code below. I will do my best to only include pieces of the code that are relevant to the problem to make your job easier. This statement is within my class Runner: ``` //Player public Player player; ``` Here is the code from the paint method: ``` public void paint(Graphics g) { //(Code that draws grid from 2D array to screen) // . . . //Draw player int playerX = 1 + (int)numRows/2; int playerY = 1 + (int)numCols/2; //CAUSE OF ERROR******************* //Sets playerX and playerY player.setPlayerX(playerX); player.setPlayerY(playerY); //********************************* array[playerX][playerY] = 1; } ``` Here is the code from the class Player: ``` public class Player { private int playerX; private int playerY; private int playerHealth; //Constructor Player() { playerHealth = 100; } //Getters // . . . //Setters public void setPlayerX(int x) { playerX = x; } public void setPlayerY(int y) { playerY = y; } } ``` If more code is needed to diagnose the issue please let me know and I will happily provide it. Thank you so much for your time.<issue_comment>username_1: show me > > Runner class > > > You have `object` with `value=null`, in `Runner` class, `line 145`. And you cath `NullPointerException`, becouse you call something on object, that = `null`. May be it help your situation: ``` Object o = null; o.toString(); //throw NullPointerException ``` And fix it: ``` Object o = new Object(); o.toString(); ``` Upvotes: -1 <issue_comment>username_2: As @Gimby pointed out, the problem was likely resulting due to the statement player = new Player() either not being included or not being reached. This was the issue. I moved the line player = new Player() into the Runner constructor, since for some reason I forgot to put it there. Thanks to everyone and especially @Gimby for the assistance. Upvotes: 0
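To make the accepted fix concrete, a minimal sketch of the relevant lines (the `Canvas` superclass is assumed; the question only shows fragments of `Runner`):

```java
import java.awt.Canvas;
import java.awt.Graphics;

public class Runner extends Canvas {

    public Player player;

    public Runner() {
        // create the Player before the canvas can ever be painted,
        // so paint() never dereferences a null field
        player = new Player();
    }

    @Override
    public void paint(Graphics g) {
        player.setPlayerX(1);   // safe now: player was built in the constructor
        player.setPlayerY(1);
        // ... rest of the grid drawing from the question ...
    }
}
```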
2018/03/20
474
1,629
<issue_start>username_0: I am struggling with running python script in shell. I use PyCharm where is everything ok, but I want to run script without running PyCharm. So my project folder is like: * `data/` + `file.txt` * `main/` + `__init__.py` + `script.py` * `tools/` + `__init__.py` + `my_strings.py` I want to run `main/script.py`, which start with `from tools import my_strings` and working directory should be `data/`. My PyCharm config is: * Script path: `/main/script.py` * Working directory: `/data` * Add content roots to PYTHONPATH: YES * Add source roots to PYTHONPATH: YES So I want to run `main/script.py` in shell on Ubuntu. I tried: ``` PYTHONPATH= cd /data python3 ../main/script.py ``` But I just got: `ImportError: No module named 'tools'`<issue_comment>username_1: Check out this post, it's explains the PYTHONPATH variable. [How to use PYTHONPATH](https://stackoverflow.com/questions/19917492/how-to-use-pythonpath) and the documentation the answer points to <https://docs.python.org/2/using/cmdline.html#envvar-PYTHONPATH> When you run from the data directory by default python can't find your tools directory. Also regarding your comment about needing to run from the data directory, you could just use `open('../data/file.txt')` if you decided to run from the main directory instead. Ideally, you should be able to run your script from anywhere though. I find this snippet very useful `os.path.dirname(sys.argv[0])`. It returns the directory in which the script exists. Upvotes: 1 <issue_comment>username_2: I simply forgot to `export $PYTHONPATH` as suggested by username_1. Upvotes: 0
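Spelled out, the accepted fix looks roughly like this (the project path is a placeholder; point it at the directory that contains `main/`, `tools/` and `data/`). Note that the original `PYTHONPATH= cd /data` line only assigns an empty value for that single command, so `python3` never sees the project root:

```sh
export PYTHONPATH=/path/to/project   # directory containing main/, tools/, data/
cd /path/to/project/data
python3 ../main/script.py
```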
2018/03/20
516
1,584
<issue_start>username_0: When trying to add

```
#include <sstream>
```

which is needed for stringstream, I get several errors, the following included:

```
expected ';' at end of input
expected '}' at end of input
macro "str" requires 2 arguments, but only 1 given
```

How can I enable using stringstream?<issue_comment>username_1: This is a bug in the Xilinx SDK. You need to undef a macro named str. Replace

```
#include <sstream>
```

with

```
#undef str
#include <sstream>
```

Credit: This method was proposed by sparks333 and can be found here: <https://forums.xilinx.com/t5/Embedded-Development-Tools/Error-with-Standard-Libaries-in-Zynq/td-p/450032>

Upvotes: 2 <issue_comment>username_2: I just signed up to answer this question. I went through this post some time ago and used the solution proposed, even though I didn't like it very much. It was a mistake. This solution can cause a deadlock of the system after some time, in a random way that is very difficult to debug.

I propose the following solution instead:

1. Create the file "compatible_sstream.h":

```
#pragma push_macro("str")
#undef str
#include <sstream>
#pragma pop_macro("str")
```

2. Replace `#include <sstream>` with `#include "compatible_sstream.h"` in all the other files.

3. Wrap all calls to `std::ostringstream::str` in parentheses, as in the example:

```
std::ostringstream foo()
{
    // ...
}
```

```
void main()
{
    // ...
    std::cout << (foo().str)() << std::endl;
    // ...
}
```

Apologies in advance if I have not followed any of the posting rules correctly.

Upvotes: 0
2018/03/20
328
1,014
<issue_start>username_0: I am trying to make a sum in c# but i keep getting these errors. by pressing the + button i pretty much want the numbers to be summed up in a label. Thanks in advance. here's my code. ``` private void buttonPlus_Click(object sender, EventArgs e) { Int32 a = 10; double b = 20.50; double c = 50.10; Int32 d = 20; labelSum.Text = (a+b+c+d); } ```<issue_comment>username_1: Not sure why you're using `Int32` and not just `int`. Also, try converting the sum to a string: ``` labelSum.Text = (a+b+c+d).ToString(); ``` Upvotes: 2 <issue_comment>username_2: `Text` property is of type `String` while your expression will return a numeric data type not `String` so, the right hand side should be returning the same type what the left hand of expression is having when assigning. Just convert the result of sum to string by calling `ToString()` : ``` labelSum.Text = (a+b+c+d).ToString(); ``` or like: ``` labelSum.Text = Convert.ToString(a+b+c+d); ``` Upvotes: 1
2018/03/20
665
2,143
<issue_start>username_0: I am trying to use bootstrap to align my `h3` headings with my `h5` so that they are in line with each other. **EDIT** I am trying to use the already available bootstrap and avoid changing the css unless absolutely necessary. Here is a screen to demonstrate what i mean. [![enter image description here](https://i.stack.imgur.com/Clixd.png)](https://i.stack.imgur.com/Clixd.png) E.g. First name should line up with address 1, last name with address 2 etc. here is the code I have. ``` ### First Name ### Last Name ### Week. ### Code. ##### Address 1 ##### Address 2 ##### Address 3 ##### Country ```<issue_comment>username_1: set same line-heigt, margins, and padding for h3&h5 in css styles Upvotes: 0 <issue_comment>username_2: That's not what headers are for. (semantically incorrect, search engines won't like this) I suggest to simply use a table for this (it IS tabular data) and use different CSS rules for `td:first-child` and `td:nth-child(2)` in order to get the different font sizes. ```css .x { width: 100%; } .x td { padding: 10px; font-family: sans-serif; font-weight: light; } .x td:first-child { font-size: 24px; width: 60%; } .x td:nth-child(2) { font-size: 16px; width: 40%; } ``` ```html | | | | --- | --- | | First Name | Address 1 | | Last Name | Address 2 | | Week. | Address 3 | | Code. | Country | ``` Upvotes: 3 [selected_answer]<issue_comment>username_3: Better not to mess with the h3/h5 heights, and use multiple `row`s instead. E.g. what if one of the texts becomes so long that it wraps? Then you can say bye-bye to your aligned layout. That will never happen if you use multple `row`s: ``` ### First Name ##### Address 1 ### Last Name ##### Address 2 ### Week. ##### Address 3 ### Code. ##### Country ``` Apart from all this: I agree with the remarks by others that you shouldn't be using h3/h5 in the first place. Headers are intended to define relations between logical units of text, so `#####` is the header of a sub-sub section of text inside a chapter of text that is headed by a `###` tag. Upvotes: 1
2018/03/20
397
1,125
<issue_start>username_0: ive created a lot of character objects in R that i would like to put into a list (storing all their information). the object looks like this and the pattern is "TMC" ``` str(TMCS09g10086933) chr [1:10] "TMCS09g1008699" "TMCS09g1008610 "TMCS09g10086101" "TMCS09g10086104" "TMCS09g100864343" "TMCS09g10086434343" "TMCS09g10086994111" ... ``` i have hundreds of these objects. Could someone tell me how to do this?<issue_comment>username_1: You can use the function `objects` with the argument `pattern` to list them. Then, you can call the function `get` to fetch them. If you do this with an `lapply`, you will get a `list` returned right away. ``` TMClist <- lapply(objects(pattern = "^TMC"), get) ``` Upvotes: 2 <issue_comment>username_2: First you need to find the objects, which you can do with a regex search through the list of the objects in your environment `grep("^TMC", ls(), value = TRUE)`, then you need to get the objects using the character vector of their names. For that you use `mget`. ``` your_list <- mget(grep("^TMC", ls(), value = TRUE)) ``` Upvotes: 2 [selected_answer]
2018/03/20
488
1,754
<issue_start>username_0: How do I clean up resources after doing a Firestore operation, I want to use the "finally" block to close a dialog after saving the record but it complains it is not a function. I been searching for the API reference but all I find is the few examples in the getting started section. my code is something like this: ``` db.collection("posts") .doc(doc.id) .set(post) .then(function(docRef) { //todo }) .catch(function(error) { console.error("Error saving post : ", error); }) /*.finally(function(){ //close pop up })*/ ; ```<issue_comment>username_1: Native Promises in node 6 don't have a finally() method. There is just then() and catch(). ([See this table](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise#Browser_compatibility), node is on the far right.) If you want do do something unconditionally at the end of a promise chain regardless of success or failure, you can duplicate that in both then() and catch() callbacks: ``` doSomeWork() .then(result => { cleanup() }) .catch(error => { cleanup() }) function cleanup() {} ``` Or you can use TypeScript, which has try/catch/finally defined in the language. Upvotes: 4 [selected_answer]<issue_comment>username_2: A then following a then/catch will always get executed so long as: * IF the catch is executed, the code within does not throw an error (if it does, the next catch is executed). ``` db.collection("posts") .doc(doc.id) .set(post) .then(function(docRef) { //any code, throws error or not. }) .catch(function(error) { console.error("Error saving post : ", error); //this code does not throw an error. }).then(function(any){ //will always execute. }); ``` Upvotes: 2
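If the error should still propagate to later handlers after the cleanup (the plain then/catch pattern above can swallow it), the two-argument form of `then` is one way to emulate `finally` on Node versions that lack it. A sketch; `closeDialog` stands in for whatever closes the pop-up:

```javascript
function withCleanup(promise, cleanup) {
  return promise.then(
    result => { cleanup(); return result; },                  // success: clean up, pass the value on
    error  => { cleanup(); return Promise.reject(error); }    // failure: clean up, re-reject
  );
}

// usage with the Firestore call from the question
withCleanup(db.collection("posts").doc(doc.id).set(post), closeDialog)
  .then(() => console.log("post saved"))
  .catch(error => console.error("Error saving post : ", error));
```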
2018/03/20
344
1,159
<issue_start>username_0: I have gone through some similar question on SO but didn't find any workable solutions. I tried using `djang-dbbackup` module with my Django project. I run `$ python manage.py dbbackup` and got this error: ``` CommandConnectorError: Error running: pg_dump last_cosmetics --host=localhost --username=postgres --no-password --clean b'pg_dump: too many command-line arguments (first is "--host=localhost")\r\nTry "pg_dump --help" for more information.\r\n' ``` Also I tried running it on Windows cmd. ``` pg_dump last_cosmetics --host=localhost --username=postgres --no-password --clean ``` But I still get the same error.<issue_comment>username_1: You can check syntax in [here](https://www.postgresql.org/docs/9.3/static/app-pgdump.html). Database name should be last argument after all options. So in your case use: ``` pg_dump --host=localhost --username=postgres --no-password --clean last_cosmetics ``` Upvotes: 4 [selected_answer]<issue_comment>username_2: If you used user as `postgres` in CentOS then ``` su postgres pg_dump -Fc last_cosmetics > /tmp/last_cosmetics.dump ``` It works in Ubuntu too. Upvotes: 0
2018/03/20
761
2,730
<issue_start>username_0: I want to return a value from mutation to action. I this case, I want the last inserted object : In my mutation, work fine : ``` mutations: { insert(state, item) { const guid = Math.floor(Math.random() * 6) + 1; // any sense, just example item.guid = guid; state.data.push(item); return guid; }, }, ``` In my action, work fine for the call, not for the return : ``` actions: { insert ({ commit }, data) { return new Promise((resolve) => { const guid = commit('insert', event); resolve(guid); // resolve undefined }); }, }, ``` There is a way to return the guid ? I need it to emit after with my component... Thanks<issue_comment>username_1: you can access the state data by passing it into the action `insert ({ commit, state }, data) { ...` example: ``` actions: { insert ({ commit, state }, data) { return new Promise((resolve) => { commit('insert', event); const guid = state.data[state.data.length].guid resolve(guid); // resolve undefined }); }, }, ``` Upvotes: 1 <issue_comment>username_2: [Mutations (commits) **don't** return values](https://github.com/vuejs/vuex/blob/v3.0.1/src/store.js#L104). And, as mentioned in comments, the best practice is to leave such GUID *generation* computation to an action and just really *commit* the state in the mutation. That being said, you cand send a callback to the mutation and call it. Just make sure the callback code is simple and synchronous (if not, see below). ```js const store = new Vuex.Store({ strict: true, state: { data: [] }, mutations: { insert(state, {item, callback}) { const guid = Math.floor(Math.random() * 600) + 1; // any sense, just example item.guid = guid; state.data.push(item); callback(guid); }, }, actions: { insert ({ commit }, data) { return new Promise((resolve) => { commit('insert', {item: data, callback: resolve}); }); }, }, }); new Vue({ store, el: '#app', data: { insertedGuid: 'click button below' }, methods: { go: async function() { const guid = await this.$store.dispatch('insert', {name: "Alice"}); this.insertedGuid = guid; } }, computed: { datadata: function() { return this.$store.state.data } }, }) ``` ```html store's data: {{ datadata }} insertedGuid: {{ insertedGuid }} Click to Insert ``` If you have no idea of what the callback could be, I suggest you wrap it as ``` setTimeout(() => callback(guid)); ``` Which would end the mutation right away and send the callback execution later down the queue of the event loop. Upvotes: 3 [selected_answer]
2018/03/20
744
2,590
<issue_start>username_0: This is in oracle. Table EmployeeName:

```
EmployeeNameID|EmployeeID|FirstName|LastName
1|1|ABC|DEF
2|1|ABC|EFG
3|1|ABC|DEF
4|2|XYZ|PQR
5|2|DEF|RST
6|3|XYQ|BRQ
```

I want to find out how many employee records have more than one name. The result should be: First column is the EmployeeId and the 2nd column is the distinct number of names they have. For the first result the ABC|DEF repeats so I just want to count it once.

```
1|2
2|2
3|1
```

I tried to group by but not sure how to work with distinct names requirement.
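A sketch of one possible Oracle query for this: count the distinct first/last-name combinations per employee (the `'|'` separator just keeps concatenated pairs unambiguous):

```sql
SELECT EmployeeID,
       COUNT(DISTINCT FirstName || '|' || LastName) AS name_count
FROM   EmployeeName
GROUP  BY EmployeeID
ORDER  BY EmployeeID;
```

Adding `HAVING COUNT(DISTINCT FirstName || '|' || LastName) > 1` would keep only the employees that really have more than one name.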
2018/03/20
538
1,776
<issue_start>username_0: Consider the following data: ``` Item | Overall | Individual | newColumn A | Fail | Pass | blank A | Fail | Fail | blank B | Fail | Pass | issue B | Fail | Pass | issue C | Pass | Pass | blank ``` I have the logic built out for the first 3 columns already. There are two levels of fails in this data: 1. overall, and 2. individual. If any of the individual fail, the overall fails. Sometimes the overall can fail even though all the individuals are fine. This logic is already built out. I am trying to find a formula for the `newColumn`. If **all** the individuals are a pass for a given item (example item B), but the overall is still a fail, the cell should return the text "issue". It is ok if it returns issue twice, not sure if you can non-dupe that part. I've tried various forms of countifs/and/ors and creating columns that count distinct values but I always find a scenario where it will break the logic.<issue_comment>username_1: Try this: ``` =IF(COUNTIFS($A$2:$A$6,A2,$C$2:$C$6,"Fail"),"blank",IF(B2="Fail","Issue","blank")) ``` As required [![enter image description here](https://i.stack.imgur.com/Tne9n.png)](https://i.stack.imgur.com/Tne9n.png) Upvotes: 3 [selected_answer]<issue_comment>username_2: If you add a new column with the formula: ``` =IF(B2="Fail",IF(COUNTIFS(A:A,A2,C:C,"fail")=0,"issue",""),"") ``` Then this should work on the assumptions: * For each item if one of the overalls are false they are all false * The only two possible values are "Pass" and "Fail" for columns B & C If you require the word blank instead of a blank cell then use: ``` =IF(B2="Fail",IF(COUNTIFS(A:A,A2,C:C,"fail")=0,"issue","blank"),"blank") ``` Upvotes: 0
2018/03/20
1,318
4,678
<issue_start>username_0: I have this piece of a code. I want to add error messages depending on user's locale, but yup throws errors, same if fields are filled in incorrectly [missing "en.login.emailRequiredError" translation] [missing "en.login.passRequiredError" translation] ``` const schema = yup.object().shape({ email: yup .string() .email(i18n.t('login.emailSpellError')) .required(i18n.t('login.emailRequiredError')), password: yup .string() .matches(/^((?=.*\d)(?=.*[a-z])(?=.*[A-Z]).{6,15})$/i, i18n.t('login.passSpellError')) .required(i18n.t('login.passRequiredError')), }); ``` i18n.t('login.passRequiredError') works fine when I put it into a render method for checking it but it does not work with the yup. Any suggestions? Thanks in advance<issue_comment>username_1: In your schema, replace: ``` .email(i18n.t('login.emailSpellError')) ``` with ``` .email('login.emailSpellError') ``` then in your render method: ``` {t(`form.errors.${form.errors.email}`)} ``` This assumes your translation file has an entry like this: ``` "form": { "errors": {"login": {"emailSpellError": "Your email is invalid"}}}} ``` The goal here is to move the t() method into your render method and have all translations happen there. Upvotes: 4 <issue_comment>username_2: **Yup Validation method**, ``` // You define the key mentioned in the translation file, in my example 'Invalid email' and 'Required' let ForgotPasswordSchema = yup.object().shape({ email: yup.string().email('Invalid email').required('Required'), }); ``` **In render method,** ``` // As per your definition isInvalid={(!!errors.email) && this.context.t(!!errors.email)} invalidText={(errors.email) && this.context.t(errors.email)} ``` **Translation File** ``` export const translations = { "cy": { "Required":"Gofynnol", "Invalid email":"Nid yw'r cyfeiriad ebost yn ddilys", } }; ``` Upvotes: 3 <issue_comment>username_3: A solution will be to make a function that returns your validation schema. Then call that function in your component with the result memoized. This way, you are guaranteed that translations for validation messages are computed on the fly. Another advantage here is you translate at the source of the message. 
``` // Translation file { "validation.invalid-email": "Email is invalid", "validation.field-required": "Field is required" } // Validation schema const forgotPasswordSchema = () => { return yup.object().shape({ email: yup .string() .email(i18n.t('validation.invalid-email')) .required(i18n.t('validation.field-required')), }); }; // Your component const FormComponent = () => { const schema = useMemo(() => forgotPasswordSchema(), [i18n.language]); // NB: `[i18n.language]` is optional and `[]` will suffice depending on how you're handling language change return <>...; } ``` Upvotes: 2 <issue_comment>username_4: I've created a few custom hooks for this approach This one to refresh error messages inside schema when is changing app language ``` import { yupResolver } from '@hookform/resolvers/yup'; import { useRouter } from 'next/router'; import { useMemo } from 'react'; const useSchema = (getSchema) => { const { locale } = useRouter(); const resolver = useMemo(getSchema, [locale]); return yupResolver(resolver); }; export default useSchema; ``` And this one to set global in App component localised error messages ``` import { useTranslation } from 'react-i18next'; import { setLocale } from 'yup'; export const useLocalisedYupSchema = () => { const { t } = useTranslation('common'); setLocale({ mixed: { required: t('validation.required') }, string: { min: ({ min }) => t('validation.min', { min }), max: ({ max }) => t('validation.max', { max }) }, }); }; ``` Also usage of schemas inside component with React Hook Form ``` import { getChangePasswordSchema } from 'static/schemas/changePassword'; import useSchema from 'utils/hooks/useSchema'; import { useForm } from 'react-hook-form'; const AccountContentSecurity = () => { ... const resolver = useSchema(getChangePasswordSchema); const { reset, control, handleSubmit } = useForm({ defaultValues: { 'current_password': '', 'new_password': '', 'password_confirmation': '', }, resolver, }); ... ``` and schema ``` import { passwordSchema } from 'static/schemas'; import { object } from 'yup'; export const getChangePasswordSchema = () => object({ 'current_password': <PASSWORD>Schema, 'new_password': <PASSWORD>Schema, 'password_confirmation': <PASSWORD>Schema, }); ``` Upvotes: 0
2018/03/20
2,274
4,234
<issue_start>username_0: I have a data like below : ``` > dplyr::tbl_df(sbp) Country X1980 X1981 X1982 X1983 X1984 X1985 Albania 132.9270 133.0296 133.1459 133.1868 133.2048 133.2577 Algeria 132.4093 132.1710 131.9649 131.7835 131.6161 131.4345 Andorra 140.8585 140.1076 139.3727 138.6457 137.9525 137.3192 ``` I want to get mean of values for each year for all countries and add a row like World to the end of the dataframe, so that I can plot the change of the mean value through years, in that format. I tried using `gather()` so that I have a data with three columns only, like Country-year-value. However I can not think of a way to calculate the mean for the world. ``` Country year sbp Albania X1980 132.9270 Algeria X1980 132.4093 Andorra X1980 140.8585 ``` Can you please advise?<issue_comment>username_1: This is a great use case for `apply`, no transformations from your original format necessary: `1` means to calculate across rows, and we select columns `2:6` `df1$mean <- apply(df1[,2:6], 1, mean)` ``` Country X1980 X1981 X1982 X1983 X1984 X1985 mean 1 Albania 132.9270 133.0296 133.1459 133.1868 133.2048 133.2577 133.0988 2 Algeria 132.4093 132.1710 131.9649 131.7835 131.6161 131.4345 131.9890 3 Andorra 140.8585 140.1076 139.3727 138.6457 137.9525 137.3192 139.3874 ``` You don't really want to add a summary row to your primary table, that's how you might do it in Excel, but in R it's better practice to calculate it separately. To get the means for each year, we can also use apply, this time using `2` in the `apply` function to calculate down columns: ``` apply(df1[,2:6], 2, mean) X1980 X1981 X1982 X1983 X1984 135.3983 135.1027 134.8278 134.5387 134.2578 ``` Upvotes: 2 [selected_answer]<issue_comment>username_2: A possible solution with base R: ``` rbind(mydf, cbind(Country = 'World', as.data.frame.list(colMeans(mydf[,-1])))) ``` which gives: > > > ``` > Country X1980 X1981 X1982 X1983 X1984 X1985 > 1 Albania 132.9270 133.0296 133.1459 133.1868 133.2048 133.2577 > 2 Algeria 132.4093 132.1710 131.9649 131.7835 131.6161 131.4345 > 3 Andorra 140.8585 140.1076 139.3727 138.6457 137.9525 137.3192 > 4 World 135.3983 135.1027 134.8278 134.5387 134.2578 134.0038 > > ``` > > And a `tidyverse` solution: ``` mydf %>% gather(year, sbp, -1) %>% bind_rows(., mydf %>% gather(year, sbp, -1) %>% group_by(year) %>% summarise(Country = 'World', sbp = mean(sbp))) ``` with a long format outcome: > > > ``` > Country year sbp > 1 Albania X1980 132.9270 > 2 Algeria X1980 132.4093 > 3 Andorra X1980 140.8585 > 4 Albania X1981 133.0296 > 5 Algeria X1981 132.1710 > 6 Andorra X1981 140.1076 > 7 Albania X1982 133.1459 > 8 Algeria X1982 131.9649 > 9 Andorra X1982 139.3727 > 10 Albania X1983 133.1868 > 11 Algeria X1983 131.7835 > 12 Andorra X1983 138.6457 > 13 Albania X1984 133.2048 > 14 Algeria X1984 131.6161 > 15 Andorra X1984 137.9525 > 16 Albania X1985 133.2577 > 17 Algeria X1985 131.4345 > 18 Andorra X1985 137.3192 > 19 World X1980 135.3983 > 20 World X1981 135.1027 > 21 World X1982 134.8278 > 22 World X1983 134.5387 > 23 World X1984 134.2578 > 24 World X1985 134.0038 > > ``` > > --- Used data: ``` mydf <- read.table(text="Country X1980 X1981 X1982 X1983 X1984 X1985 Albania 132.9270 133.0296 133.1459 133.1868 133.2048 133.2577 Algeria 132.4093 132.1710 131.9649 131.7835 131.6161 131.4345 Andorra 140.8585 140.1076 139.3727 138.6457 137.9525 137.3192", header=TRUE, stringsAsFactors=FALSE) ``` Upvotes: 1 <issue_comment>username_3: You can easily get the means for each year using ``` world_means <- tbl %>% 
select(-Country) %>% summarise_all(mean) %>% cbind(list(Country="World"), .) ``` It just computes the mean for all columns except `Country` and then binds that with a `Country` we call `"World"`. To add it to your table, simply use `rbind`: ``` rbind(tbl, world_means) ``` Upvotes: 0
2018/03/20
525
1,818
<issue_start>username_0: I want to create two objects of the Ball class. I have tried the following: ``` public class World extends JPanel { JFrame frame = new JFrame("GreenJ"); Actor[] actor = new Actor[100]; int n = 0; public World() throws InterruptedException{ frame.add(this); frame.setSize(1000, 1000); frame.setVisible(true); frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); } public void addObject(Actor a) { actor[n] = a; frame.add(actor[n]); } } public class MyWorld extends World { public MyWorld() throws InterruptedException { addObject(new Ball(frame, 250, 750)); addObject(new Ball(frame, 750, 250)); } } public class Ball extends Actor{ int x; int y; @Override public void paint(Graphics g) { super.paint(g); Graphics2D g2d = (Graphics2D) g; g2d.fillOval(x, y, 50, 50); } public Ball(JFrame frame, int a, int b) throws InterruptedException{ frame.add(this); x = a; y = b; } public void main(String[]Args) { repaint(); } } ``` When I run this code I only get the first 'ball' in my frame. I have tried some other things but without success. Thank you in advance. ElAdriano<issue_comment>username_1: The value of `n` is never changed in your code. So `addObject` will always put the new object in index 0 of your `actor` array. Upvotes: 2 <issue_comment>username_2: Change your `Actor[]` into `ArrayList` of type `Actor` **this would help you forget about where to add the next object or at any index `n`** ``` ArrayList actors = new ArrayList<>(); ``` and change the addObject() method to add the object to the actors array ``` addObject(Actor a){ actors.add(a); } ``` Upvotes: 0
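A minimal sketch of what the two answers describe: either advance the index, or let a list manage it (names as in the question; `revalidate()`/`repaint()` nudge Swing into showing the newly added component):

```java
// option 1: keep the array, but move the index forward after each add
public void addObject(Actor a) {
    actor[n] = a;
    frame.add(actor[n]);
    n++;                        // next object goes into the next slot
    frame.revalidate();
    frame.repaint();
}

// option 2: replace the array with a growing list
private final java.util.List<Actor> actors = new java.util.ArrayList<>();

public void addObject(Actor a) {
    actors.add(a);
    frame.add(a);
    frame.revalidate();
    frame.repaint();
}
```

Only one of the two variants should be kept, of course; they are alternatives, not a pair.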
2018/03/20
572
2,101
<issue_start>username_0: I am trying to invoke a callback via intersection observer. I want the `target` to be `style: "position: fixed"` and move it via `style.top`. I also specified the root element which is an ancestor of the target with `style: "position: relative"`. But when the target and the observer intersects, the callback function won't be triggered. Are there some limitations I missed? Here is what I typed: ``` IO aaa bbb ccc let options = { root: document.getElementById("middle"), rootMargin: '0px', threshold: 0 }; let observer = new IntersectionObserver(entry => { console.log("observer's acting.") }, options); let target = document.getElementById("target"); observer.observe(target); let stepping = 0; let cb = () => { target.style.top = stepping + 'px'; stepping += 4; if (stepping < 300){ setTimeout(cb, 100); } }; window.addEventListener("click", () => { cb(); }) ``` And here is a codepen demo: [codepen demo](https://codepen.io/krave/pen/OvpPBm) You can click anywhere in the page to start moving the `ccc` block.<issue_comment>username_1: Elements with `position: fixed` are positioned relative to the viewport and the viewport moves. So, fixed positioned elements "move" as you scroll. Even though `#target` is a child of `#middle`, I believe the IntersectionObserver, with whatever it uses under the hood to calculate if the `target` is entering/leaving the `root`, never fires the callback because the `target` is outside of the document flow. Here is a related issue. There isn't much out in the interwebs related to this issue: <https://bugs.chromium.org/p/chromium/issues/detail?id=653240> Note: Setting `position: absolute` on the target does indeed fire the callback when entering and leaving the viewport. Upvotes: 3 <issue_comment>username_2: In my case I had my root element (with position:fixed) at the same DOM hierarchy level as the elements I wanted to observe (that were scrollable) and no events were triggered. ``` ``` When I placed the elements **inside** the root the events triggered. ``` ``` Upvotes: -1
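Concretely, the DOM arrangement username_2 describes looks roughly like this (element and class names are illustrative, not taken from the original markup):

```html
<!-- callbacks did not fire: the observed elements sit next to the fixed root -->
<div id="root" style="position: fixed;"></div>
<div class="observed-item">...</div>
<div class="observed-item">...</div>

<!-- callbacks fired: the observed elements live inside the root -->
<div id="root" style="position: fixed;">
  <div class="observed-item">...</div>
  <div class="observed-item">...</div>
</div>
```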
2018/03/20
610
2,028
<issue_start>username_0: Using Laravel 5.5 and Mysql (10.1.19-MariaDB) For a md5 hash I want a binary(16) column. Let's call the colum **url\_hash** When using : ``` $table->binary('url_hash'); ``` it will give me a BLOB column. source : <https://laravel.com/docs/5.5/migrations#creating-columns> I have seen all kind of hacks or plugins around the web for this , but what is the most simple one without any external plugins that could break on the next update? Cheers<issue_comment>username_1: Laravel author [recommends](https://github.com/laravel/framework/issues/1606) to do a `DB:statement` call and run the raw SQL. If you are running migration, you could run this raw SQL after `Schema::create`: ``` DB::statement('ALTER TABLE table_name ADD url_hash binary(16) AFTER some_column'); ``` Depends on use case, you could need to run this raw SQL to drop the column before dropping the table: ``` DB::statement('ALTER TABLE table_name DROP url_hash'); ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: Extend the `MySqlGrammar` class, e.g. in `app/MySqlGrammar.php`: ``` namespace App; use Illuminate\Support\Fluent; class MySqlGrammar extends \Illuminate\Database\Schema\Grammars\MySqlGrammar { protected function typeRealBinary(Fluent $column) { return "binary({$column->length})"; } } ``` Then use a macro to add your own column type: ``` DB::connection()->setSchemaGrammar(new \App\MySqlGrammar()); Blueprint::macro('realBinary', function($column, $length) { return $this->addColumn('realBinary', $column, compact('length')); }); Schema::create('table', function(Blueprint $table) { $table->realBinary('url_hash', 16); }); ``` Upvotes: 3 <issue_comment>username_3: You can just set the *character set* to *binary*. ``` $table->char('url_hash', 16)->charset('binary'); ``` This is actually shown as a real *binary* column type with a length of 16 in MySQL Workbench. There shouldn't be any difference: <https://stackoverflow.com/a/15335682/5412658> Upvotes: 4
2018/03/20
656
2,234
<issue_start>username_0: I wanted to use a control-of-flow statement in a stored procedure that says:

```
begin try
    sql_statement
end try
begin try
    sql statement
end try
begin catch
    print error
end catch
```

But the [documentation](https://learn.microsoft.com/en-us/sql/t-sql/language-elements/try-catch-transact-sql) on try catch says:

> A TRY block must be immediately followed by an associated CATCH block. Including any other statements between the END TRY and BEGIN CATCH statements generates a syntax error.

What would the proper syntax for this be in sql:

```
execute x_sql_statement
if x_sql_statement errors
    execute y_sql_statement
else
    end
```
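A minimal sketch of that pattern in T-SQL: the fallback runs from inside the CATCH block rather than between two TRY blocks (`dbo.x_procedure` and `dbo.y_procedure` are placeholders for the real statements):

```sql
BEGIN TRY
    EXEC dbo.x_procedure;          -- x_sql_statement
END TRY
BEGIN CATCH
    PRINT ERROR_MESSAGE();         -- report why x failed
    EXEC dbo.y_procedure;          -- y_sql_statement runs only when x errored
END CATCH;
```

If `y_sql_statement` needs its own error handling, it can be wrapped in a nested `BEGIN TRY ... END TRY / BEGIN CATCH ... END CATCH` inside the outer CATCH block.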
2018/03/20
598
2,046
<issue_start>username_0: I am trying to bind a whole object using it's x:Name to a property in another object. For example

```
```

How do I go about achieving this?

Update: I tried to implement the suggestions with no luck. The xaml that I'm talking about is as follows:

```
```

Perhaps I should try move the starBehaviour into c# as it's not a visual element?

P.S All I'm trying to do is set starTag to an object so if I can do this without binding, I don't mind if there's a solution.
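For what it's worth, in WPF-style XAML a `Binding` with an `ElementName` and no `Path` hands over the named element itself, and `x:Reference` is the usual fallback when the target isn't part of the visual tree or name scope (as a behaviour often isn't). A hedged sketch; `StarBehaviour` and `StarTag` are placeholder names, and `StarTag` would need to be a dependency (or bindable) property for either form to work:

```xml
<Image x:Name="starTag" Source="star.png" />

<!-- the whole element, via ElementName with no Path -->
<local:StarBehaviour StarTag="{Binding ElementName=starTag}" />

<!-- or, when ElementName cannot resolve from a non-visual object -->
<local:StarBehaviour StarTag="{x:Reference starTag}" />
```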