Dataset schema: date (string, lengths 10 to 10), nb_tokens (int64, 60 to 629k), text_size (int64, 234 to 1.02M), content (string, lengths 234 to 1.02M). Each record below lists date, nb_tokens, text_size, then content.
2018/03/18
713
2,446
<issue_start>username_0: I want to define an object, and use a function to populate an array as one of the members of the object. I have tried using a function both inside and outside the object definition, but neither seems to work. Is this possible? Here is a simplified version of what I am trying to do: ``` // object function test var test_object = { num1: 1, num2: 10, nums: make_array(this.num1, this.num2) }; function make_array(firstnum, lastnum) { myArray = []; for (i=firstnum; i <= lastnum; i++){ myArray.push(i); } return myArray; }; document.getElementById("demo").innerHTML = test_object.nums; ``` It seems that num1 and num2 are `undefined` when they are passed to the function. What am I doing wrong?<issue_comment>username_1: You have to use a [`getter`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Functions/get) here for the desired result, ``` var test_object = { num1: 1, num2: 10, get nums() { return make_array(this.num1, this.num2) } }; ``` And let me tell you why your code is not working as expected: the `this` inside of that object literal would point to the upper context, not the object itself. This would be a better [read](https://stackoverflow.com/questions/3127429/how-does-the-this-keyword-work) for you. Upvotes: 0 <issue_comment>username_2: In this context, the `this` refers to the parent object (the `window`, if the code is running in a browser), not the `test_object` you're in the middle of defining: ``` var test_object = { num1: 1, num2: 10, nums: make_array(this.num1, this.num2) }; ``` If you want the end result to be a plain object, the easiest solution is probably to separate the assignments so you can refer to the object by name: ``` var test_object = { num1: 1, num2: 10 } test_object.nums = make_array(test_object.num1, test_object.num2); ``` (The ES6 `getter` referred to in another answer is also a possibility, though it works slightly differently (and doesn't work in some old browsers): the above would evaluate the array immediately; getters are evaluated when the value is requested. So with a getter, if for example you changed `test_object.num1` before reading `test_object.nums` you'd get a different answer. This may or may not be what you want.) Upvotes: 0 <issue_comment>username_3: The problem is that `myArray` inside your function `make_array` is not declared. Upvotes: -1
2018/03/18
544
1,328
<issue_start>username_0: I am trying to use the [jetbrains-toolbox](https://www.jetbrains.com/toolbox/app/) on NixOS. The download is a single ELF binary. I fixed it using ``` patchelf --set-interpreter /nix/store/2kcrj1ksd2a14bm5sky182fv2xwfhfap-glibc-2.26-131/lib/ld-linux-x86-64.so.2 --set-rpath /nix/store/y76fs08y8wais97jjrcphdw2rcaka1qa-fuse-2.9.7/lib:/nix/store/4csy6xvbrqxkp3mk6ngxp199xkr476lj-glib-2.54.3/lib:/nix/store/r43dk927l97n78rff7hnvsq49f3szkg6-zlib-1.2.11/lib jetbrains-toolbox ``` Now running the binary results in: ``` Cannot open /tmp/.mount_9TUyRi/.DirIcon ``` A bit of debugging gives: ``` $ strace ./jetbrains-toolbox 2>&1|grep mount mkdir("/tmp/.mount_HJCQAO", 0700) = 0 openat(AT_FDCWD, "/tmp/.mount_HJCQAO", O_RDONLY) = 4 openat(AT_FDCWD, "/tmp/.mount_HJCQAO/.DirIcon", O_RDONLY) = -1 ENOENT (No such file or directory) write(1, "Cannot open /tmp/.mount_HJCQAO/."..., 40Cannot open /tmp/.mount_HJCQAO/.DirIcon ``` Any idea what might be wrong here? (On a "normal" OS it runs just fine.)<issue_comment>username_1: This seems to be an instance of an unresolved [AppImage bug #296](https://github.com/AppImage/AppImageKit/issues/296). Upvotes: 1 <issue_comment>username_2: Finally, it's working, here is a [PR](https://github.com/NixOS/nixpkgs/pull/174272) to nixkgs. Upvotes: 0
2018/03/18
1,516
2,757
<issue_start>username_0: The m3u file looks like this: #EXTM3U #EXT-X-VERSION:3 #EXT-X-MEDIA-SEQUENCE:153741 #EXT-X-ALLOW-CACHE:NO #EXT-X-TARGETDURATION:11 #EXTINF:**10.005333**, **/RTS\_1\_009/audio/2018-03-16-H13/audio-2018-03-16-13-58-29.ts** #EXTINF:**9.984000**, **/RTS\_1\_009/audio/2018-03-16-H13/audio-2018-03-16-13-58-39.ts** #EXTINF:**10.005333**, **/RTS\_1\_009/audio/2018-03-16-H13/audio-2018-03-16-13-58-49.ts** #EXTINF:**10.005333**, **/RTS\_1\_009/audio/2018-03-16-H13/audio-2018-03-16-13-58-59.ts** #EXTINF:**10.005333**, **/RTS\_1\_009/audio/2018-03-16-H13/audio-2018-03-16-13-59-09.ts** #EXTINF:**9.984000**, **/RTS\_1\_009/audio/2018-03-16-H13/audio-2018-03-16-13-59-19.ts** I would like to extract pairs in bold. e.g.: **10.005333** **/RTS\_1\_009/audio/2018-03-16-H13/audio-2018-03-16-13-58-29.ts** I managed to solved the problem only partially. The following regex gives me durations (numbers after the `#EXT-INF:`) ``` (?<=^EXTINF:)?(\d+\.\d+)(?=,\r|,\n) ``` But, when I try to add something to that regex at the end, something like `^(.*)` in order to capture anything beginning after the \r or \n, I get nothing. I need to capture anything in the line that follows immediately after the number that follows the #EXTINF:. Can somebody help with that? Update: > > const char \* const pm3u = > { > "#EXTM3U\n" "#EXT-X-VERSION:3\n" > > "#EXT-X-MEDIA-SEQUENCE:153741\n" > "#EXT-X-ALLOW-CACHE:NO\n" > > "#EXT-X-TARGETDURATION:11\n" > "#EXTINF:10.005333,\n" > > "/RTS\_1\_009/audio/2018-03-16-H13/audio-2018-03-16-13-58-29.ts\n" > > "#EXTINF:9.984000,\n" > > "/RTS\_1\_009/audio/2018-03-16-H13/audio-2018-03-16-13-58-39.ts\n" > > "#EXTINF:10.005333,\n" > > "/RTS\_1\_009/audio/2018-03-16-H13/audio-2018-03-16-13-58-49.ts\n" > > "#EXTINF:10.005333,\n" > > "/RTS\_1\_009/audio/2018-03-16-H13/audio-2018-03-16-13-58-59.ts\n" > > "#EXTINF:10.005333,\n" > > "/RTS\_1\_009/audio/2018-03-16-H13/audio-2018-03-16-13-59-09.ts\n" > > "#EXTINF:9.984000,\n" > > "/RTS\_1\_009/audio/2018-03-16-H13/audio-2018-03-16-13-59-19.ts\n" > }; > > > int main() > { > > std::regex regExpression( "(#EXTINF:)(\\d+.\\d+)\*" ); > > std::smatch regExMatch; > > const std::string str( pm3u ); > > > bool b = std::regex\_match( str.begin(), str.end(), regExMatch, regExpression ); > > > return 0; > > } > > ><issue_comment>username_1: This seems to be an instance of an unresolved [AppImage bug #296](https://github.com/AppImage/AppImageKit/issues/296). Upvotes: 1 <issue_comment>username_2: Finally, it's working, here is a [PR](https://github.com/NixOS/nixpkgs/pull/174272) to nixkgs. Upvotes: 0
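One way to capture each duration/path pair is a single multiline pattern. Below is a rough Python sketch, assuming an abbreviated copy of the playlist above; it illustrates the pattern idea only and is not the C++ `std::regex` code from the question.

```python
import re

# Abbreviated sample of the playlist from the question.
playlist = """#EXTINF:10.005333,
/RTS_1_009/audio/2018-03-16-H13/audio-2018-03-16-13-58-29.ts
#EXTINF:9.984000,
/RTS_1_009/audio/2018-03-16-H13/audio-2018-03-16-13-58-39.ts
"""

# Capture the duration after "#EXTINF:" together with the .ts path on the next line.
pairs = re.findall(r"#EXTINF:(\d+\.\d+),\r?\n(\S+\.ts)", playlist)
for duration, path in pairs:
    print(duration, path)
```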
2018/03/18
978
4,347
<issue_start>username_0: I followed several CKQueryOperation examples/narratives on problems to fetch from CloudKit. My table has about 370 rows and 8 columns..at best I can only fetch about 60 rows. resultsLimit parameter does not seem to help.. My queryCompletionBlock is not executing. Sometimes I fetch 5 rows and other time 30+ Response from Cloud is quick just now all rows It's got to be some newbie code mistake! ``` func getData() { let predicate = NSPredicate(value: true) let query = CKQuery(recordType: RemoteFunctions.RemoteRecords.booksDB, predicate: predicate) let cloudContainer = CKContainer.default() let privateDatabase = cloudContainer.privateCloudDatabase let operation = CKQueryOperation(query: query) operation.queuePriority = .veryHigh operation.resultsLimit = 20 operation.recordFetchedBlock = { (record: CKRecord) in self.allRecords.append(record) print(record) } operation.queryCompletionBlock = {[weak self] (cursor: CKQueryCursor?, error: NSError?) in // There is another batch of records to be fetched print("completion block called with \(String(describing: cursor))") if let cursor = cursor { let newOperation = CKQueryOperation(cursor: cursor) newOperation.recordFetchedBlock = operation.recordFetchedBlock newOperation.queryCompletionBlock = operation.queryCompletionBlock newOperation.resultsLimit = 10 privateDatabase.add(newOperation) print("more records") } // There was an error else if let error = error { print("Error:", error) } // No error and no cursor means the operation was successful else { print("Finished with records:") } } as? (CKQueryCursor?, Error?) -> Void ``` // privateDatabase.add(operation) ``` } ```<issue_comment>username_1: Remove the `as? (CKQueryCursor?, Error?) -> Void` near the end. Be careful not to remove the preceeding brace. Remove `newOperation.resultsLimit = 10` in your cursor block. Add `operation = newOperation` immediately above `privateDatabase.add(newOperation)` Uncomment the `privateDatabase.add(operation)` That should help. Large fetches, where the cursor block gets hit more than 3 times can be problematic. If you do the above, you should be ok. Some people like to write/call the cursor block as its own function. That works as well, but it isn't necessary. Upvotes: 0 <issue_comment>username_2: You could try this... ``` func getData(withCursor cursor: CKQueryCursor? = nil) { let cloudContainer = CKContainer.default() let privateDatabase = cloudContainer.privateCloudDatabase let operation: CKQueryOperation if let cursor = cursor { operation = CKQueryOperation(cursor: cursor) } else { let operation_configuration: CKOperationConfiguration = CKOperationConfiguration() operation_configuration.isLongLived = true operation_configuration.qualityOfService = .background let predicate = NSPredicate(value: true) let query = CKQuery(recordType: RemoteFunctions.RemoteRecords.booksDB, predicate: predicate) operation = CKQueryOperation(query: query) operation.queuePriority = .veryHigh operation.configuration = operation_configuration operation.recordFetchedBlock = { (record: CKRecord) in self.allRecords.append(record) print(record) } operation.queryCompletionBlock = {[weak self] (cursor: CKQueryCursor?, error: NSError?) 
in // There is another batch of records to be fetched print("completion block called with \(String(describing: cursor))") if let error = error { print("Error:", error) } else if let cursor = cursor { self.getData(withCursor: cursor) } else { print("Finished with records:") } } } privateDatabase.add(operation) } ``` This is a *recursive* version of your fuction, using the new `CKQueryOperation` API. It seems that your're loosing reference to the operation object when a cursor arrives. Upvotes: 2
2018/03/18
677
2,615
<issue_start>username_0: I have a fade in effect working perfect when my login page loads. I am using a form to collect username and password from the user which is connected to a database. I also have a PHP error that shows when the username and password is incorrect or no data has been inserted. My issue is that when i click submit without entering anything my page will fade in again then shows the error. How can i stop the page fading in when the error messages show? I only really want the page to fade in when the user first enters the website. CSS: ``` body{ animation: fadein 1.5s;} @keyframes fadein { from { opacity: 0; } to { opacity: 1; } } ``` PHP showing error: ``` php echo $error; ? ```<issue_comment>username_1: You could echo a style in addition to the error, which would overwrite the animation. For example: ``` body{ animation:unset !important; } ``` Use the *!important* only if your error is echoed before the css. Upvotes: 1 <issue_comment>username_2: The easiest solution will be to require client-side validation for your form. This will prevent the page from re-loading every time the submit button is pushed and the form is not complete or satisfactory. Add the 'required' attribute to your inputs (at the minimum) or add some JS validation. For example: ``` Submit ``` This is supported in most browsers. <https://caniuse.com/#feat=form-validation> It's a best practice to supplement this with javascript validation, too. You can find additional information on MDN. <https://developer.mozilla.org/en-US/docs/Learn/HTML/Forms/Form_validation> EDIT: To answer your comment: As you're seeing, the only thing that client-side validation will check for is (a) completion, and/or (b) a pattern condition (like a regex or HTML5 condition [input type="email", input type="number", etc.]). Validating a username and password against existing users in a database will fall to PHP validation (or another server-side handler). Without using AJAX, there's not a way to check for correct username/password without reloading the page. Assuming you'd like to skip learning AJAX for this, you might try the following: ``` If (username) and (password) == (username) and (password) {redirect to ('success.html')} else {redirect to ('fail.html')} NOTE: THIS IS AN OUTLINE OF CONCEPT -- NOT THE ACTUAL CODE ``` On 'fail.html' do not include the fade-in. This would mean that the user will go to the login page (which will fade-in) and from there go to either the success page or the fail page, neither of which will include code to fade-in. Upvotes: 2
2018/03/18
558
1,159
<issue_start>username_0: I've got variable containing such strings: ``` zvq:0.001:0.006 hqp:0.006:0.01 pgvqa:0.1:0.01 ``` I'd like to find and echo ``` zvq:0.001:0.006 ``` or ``` hqp:0.006:0.01 ``` or ``` pgvqa:0.1:0.01 ``` by using either `zvq` or `pgvqa` or `hqp` patterns, how should I do this?<issue_comment>username_1: Try this : ``` var='zvq:0.001:0.006 hqp:0.006:0.01 pgvqa:0.1:0.01' ``` then ``` $ grep -Eo 'zvq[^ ]+' <<< "$var" zvq:0.001:0.006 ``` and ``` $ grep -Eo 'pgvqa[^ ]+' <<< "$var" pgvqa:0.1:0.01 ``` and ``` $ grep -Eo 'hqp[^ ]+' <<< "$var" hqp:0.006:0.01 ``` --- If you just want to cut the string based on space, like @janos said in the comments : ``` echo "${var%% *}" ``` using [*bash parameter expansion*](https://wiki.sputnick.fr/index.php/Parameter_expansion) Upvotes: 2 [selected_answer]<issue_comment>username_2: ``` echo $variable | tr ' ' '\n' | grep $pattern ``` or ``` echo $variable | tr ' ' '\n' | grep "^${pattern}:" ``` example : ``` variable="zvq:0.001:0.006 hqp:0.006:0.01 pgvqa:0.1:0.01" pattern="zvq" echo $variable | tr ' ' '\n' | grep $pattern zvq:0.001:0.006 ``` Upvotes: 0
2018/03/18
2,875
12,698
<issue_start>username_0: I have started following TDD in my project. But ever since I started, even after reading some articles, I am confused since the development has slowed down. Whenever I refactor my code, I need to change the existing test cases I have written before because otherwise they will start failing. The following is an example of a class I recently refactored: ```cs public class SalaryManager { public string CalculateSalaryAndSendMessage(int daysWorked, int monthlySalary) { int salary = 0, tempSalary = 0; if (daysWorked < 15) { tempSalary = (monthlySalary / 30) * daysWorked; salary = tempSalary - 0.1 * tempSalary; } else { tempSalary = (monthlySalary / 30) * daysWorked; salary = tempSalary + 0.1 * tempSalary; } string message = string.Empty; if (salary < (monthlySalary / 30)) { message = "Salary cannot be generated. It should be greater than 1 day salary."; } else { message = "Salary generated as per the policy."; } return message; } } ``` But now I am doing lot of things in one method, so to follow the Single Responsibility Principle (SRP), I refactored it to something like below: ```cs public class SalaryManager { private readonly ISalaryCalculator _salaryCalculator; private readonly SalaryMessageFormatter _messageFormatter; public SalaryManager(ISalaryCalculator salaryCalculator, ISalaryMessageFormatter _messageFormatter){ _salaryCalculator = salaryCalculator; _messageFormatter = messageFormatter; } public string CalculateSalaryAndSendMessage(int daysWorked, int monthlySalary) { int salary = _salaryCalculator.CalculateSalary(daysWorked, monthlySalary); string message = _messageFormatter.FormatSalaryCalculationMessage(salary); return message; } } public class SalaryCalculator { public int CalculateSalary(int daysWorked, int monthlySalary) { int salary = 0, tempSalary = 0; if (daysWorked < 15) { tempSalary = (monthlySalary / 30) * daysWorked; salary = tempSalary - 0.1 * tempSalary; } else { tempSalary = (monthlySalary / 30) * daysWorked; salary = tempSalary + 0.1 * tempSalary; } return salary; } } public class SalaryMessageFormatter { public string FormatSalaryCalculationMessage(int salary) { string message = string.Empty; if (salary < (monthlySalary / 30)) { message = "Salary cannot be generated. It should be greater than 1 day salary."; } else { message = "Salary generated as per the policy."; } return message; } } ``` This may not be the greatest of examples. But the main point is that as soon as I did the refactoring, my existing test cases which I wrote for the `SalaryManager` started failing and I had to fix them using mocking. This happens all the time in read time scenarios, and the time of development increases with it. I am not sure if I am doing TDD in the right way. Please help me to understand.<issue_comment>username_1: > > Whenever I refactor my code, I need to change the existing test cases I have written before because they will start failing. > > > That's certainly an indication that something is going wrong. The popular definition of refactoring goes something like [this](https://refactoring.com/) > > REFACTORING is a disciplined technique for restructuring an existing body of code, altering its internal structure without changing its external behavior. > > > Part of the point of having the unit tests, is that the unit tests are evaluating the external behavior of your implementation. A unit test that fails indicates that an implementation change has changed the externally observable behavior in some way. 
In this particular case, it looks like you changed your API - specifically, you removed the default constructor that had been part of the API for creating instances of `SalaryManager`; that's not a "refactoring", it's a backwards breaking change. There's nothing wrong with introducing new collaborators while refactoring, but you should do so in a way that doesn't break the current API contract. ``` public class SalaryManager { public SalaryManager(ISalaryCalculator salaryCalculator, ISalaryMessageFormatter _messageFormatter){ _salaryCalculator = salaryCalculator; _messageFormatter = messageFormatter; } public SalaryManager() { this(new SalaryCalculator(), new SalaryMessageFormatter()) } ``` where `SalaryCalculator` and `SalaryMessageFormatter` should be implementations that produce the same observable behavior that you had originally. Of course, there are occasions where we need to introduce a backwards breaking change. However, "Refactoring" isn't the appropriate tool for that case. In many cases, you can achieve the result you want in several phases: first extending your API with new tests (refactoring to remove duplication with the existing implementation), then removing the tests that evaluate the old API, and finally removing the old API. Upvotes: 2 <issue_comment>username_2: This problem happens when refactoring changes responsibilities of existing units especially by introducing new units or removing existing units. You can do this in TDD style but you need to: 1. do small steps (this rules out changes that extracts both classes simultaneously) 2. refactor (this includes refactoring test code as well!) Starting point -------------- In your case you have (I use more abstract python-like syntax to have less boilerplate, this problem is language independent): ``` class SalaryManager: def CalculateSalaryAndSendMessage(daysWorked, monthlySalary): // code that has two responsibilities calculation and formatting ``` You have test class for it. If you don't have tests you need to create these tests first (here you may find [Working Effectively with Legacy Code](https://rads.stackoverflow.com/amzn/click/com/0131177052) really helpful) or in many cases together with some refactoring to be able to refactor you code even more (refactoring is changing code structure without changing its functionality so you need to have test to be sure you don't change the functionality). ``` class SalaryManagerTest: def test_calculation_1(): // some test for calculation def test_calculation_2(): // another test for calculation def test_formatting_1(): // some test for formatting def test_formatting_2(): // another test for calculation def test_that_checks_both_formatting_and_calculation(): // some test for both ``` Extracting calculation to a class --------------------------------- Now let's you what to extract calculation responsibility to a class. You can do it right away without changing API of the `SalaryManager`. In classical TDD you do it in small steps and run tests after each step, something like this: 1. extract calculation to a function (say `calculateSalary`) of `SalaryManager` 2. create empty `SalaryCalculator` class 3. create instance of `SalaryCalculator` class in `SalaryManager` 4. move `calculateSalary` to `SalaryCalculator` Sometimes (if `SalaryCalculator` is simple and its interactions with `SalaryManager` are simple) you can stop here and do not change tests at all. So tests for calculation will still be part of `SalaryManager`. 
With the increasing of complexity of `SalaryCalculator` it will be hard/impractical to test it via `SalaryManager` so you will need to do the second step - refactor tests as well. Refactor tests -------------- I would do something like this: 1. split `SalaryManagerTest` into `SalaryManagerTest` and `SalaryCalculatorTest` basically by copying the class 2. remove `test_calculation_1` and `test_calculation_1` from `SalaryManagerTest` 3. leave only `test_calculation_1` and `test_calculation_1` in `SalaryCalculatorTest` Now tests in `SalaryCalculatorTest` test functionality for calculation but do it via `SalaryManager`. You need to do two things: 1. make sure you have integration test that checks that calculation happens at all 2. change `SalaryCalculatorTest` so that it does not use `SalaryManager` Integration test ---------------- 1. If you don't have such test already (`test_that_checks_both_formatting_and_calculation` may be such a test) create a test that does some simple usecase when calculation is involved from `SalaryManager` 2. You may want to move that test to `SalaryManagerIntegrationTest` if you wish Make SalaryCalculatorTest use SalaryCalculator ---------------------------------------------- Tests in `SalaryCalculatorTest` are all about calculation so even if they deal with manager their essence and important part is providing input to calculation and then check the result of it. Now our goal is to refactor the tests in a way so that it is easy to switch manager for calculator. The test for calculation may look like this: ``` class SalaryCalculatorTest: def test_short_period_calculation(self): manager = new SalaryManager() DAYS_WORKED = 1 result = manager.CalculateSalaryAndSendMessage(DAYS_WORKED, SALARY) assertEquals(result.contains('Salary cannot be generated'), True) ``` There are three things here: 1. preparation of the objects for tests 2. invocation of the action 3. check of the outcome Note that such test will check outcome of the calculation in some way. It may be confusing and fragile but it will do it somehow. As there should be some externally visible way to distinguish how calculation ended. Otherwise (if it does not have any visible effect) such calculation does not make sense. You can refactor like this: 1. extract creation of the `manager` to a function `createCalculator` (it is ok to call it this way as the object that is created from the test perspective is the calculator) 2. rename `manager` -> `sut` (system under test) 3. extract `manager.CalculateSalaryAndSendMessage` invocation into a function `calculate(calculator, days, salary) 4. extract the check into a function `assertPeriodIsTooShort(result)` Now the test has no direct reference to manager, it reflects the essence of what is tested. Such refactoring should be done with all tests and functions in this test class. Don't miss the opportunity to reuse some of them like `createCalculator`. Now you can change what object is created in `createCalculator` and what object is expected (and how the check is done) in `assertPeriodIsTooShort`. The trick here is to still control the size of that change. If it is too big (that is you can't make test green after the change in couple minutes in classical TDD) you may need to create a copy of the `createCalculator` and `assert...` and use them in one test only first but then gradually replace old with one in other tests. Upvotes: 1 <issue_comment>username_3: Your confusion is due to a general misunderstanding about unit testing. 
At some point people started to spread the myth that a unit is something like a single class or even a single method. This was fuelled by the relevant literature, articles and blog posts which showed how it is done with very small isolated classes. It was overlooked that the reason for that was the fact that you can not present a development process in a book or article with a real life application, as it would be by far too big for the format. It is true, that a unit test needs to be isolated. This does not mean that the "unit" needs to be completely isolated. The test itself needs to be isolated from other tests. This means that tests should not depend on each other and it should be possible to run them separately or even in parallel. The relevant unit for a unit test should be a use case. A use case is a feature of your application. In a blog engine "create blog post" is a use case. In a hotel reservation system "find free room" is a use case. Usually you will find the entry point for these in so called application services. That's the granularity you should aim for. Mock away ugly external dependencies like databases, file systems and external services. If you refactor the inner structure of your application the test will not break, because you do not change the behavior of the use case. Want to split or merge classes in your domain? The use case will be stable. Change how your domain objects interact with each other. The tests stay green. Upvotes: 0
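As a rough illustration of the backwards-compatible refactoring described above (keep the old construction path while introducing the new collaborators, so existing tests do not break), here is a minimal Python sketch; the class and method names mirror the example but are otherwise hypothetical:

```python
class SalaryCalculator:
    def calculate(self, days_worked, monthly_salary):
        daily = monthly_salary / 30
        salary = daily * days_worked
        # Same 10% penalty/bonus rule as the original method.
        return salary * (0.9 if days_worked < 15 else 1.1)


class SalaryMessageFormatter:
    def format(self, salary, monthly_salary):
        if salary < monthly_salary / 30:
            return "Salary cannot be generated. It should be greater than 1 day salary."
        return "Salary generated as per the policy."


class SalaryManager:
    # Defaulting the collaborators preserves the old no-argument construction,
    # so tests written against the original API keep passing.
    def __init__(self, calculator=None, formatter=None):
        self._calculator = calculator or SalaryCalculator()
        self._formatter = formatter or SalaryMessageFormatter()

    def calculate_salary_and_send_message(self, days_worked, monthly_salary):
        salary = self._calculator.calculate(days_worked, monthly_salary)
        return self._formatter.format(salary, monthly_salary)
```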
2018/03/18
1,220
4,986
<issue_start>username_0: I have just started looking into .NET Core with Entity Framework. I have previously used .NET Framework with Ninject but I'm now trying to use the DI built into .NET Core. I have a `TestBase` class which my tests will derive from. I want this class to be responsible for creating and deleting a test database using `[OneTimeSetUp]` and `[OneTimeTearDown]`. The problem is that I don't seem to be able to figure out how to gain access to my DI services in the setup and teardown methods. These methods cannot have parameters and my `TestBase` class must have a parameterless constructor so I can't get them from there either. ``` [SetUpFixture] public partial class TestBase { protected IEFDatabaseContext DataContext { get; set; } public TestBase(IEFDatabaseContext dataContext) { this.DataContext = dataContext; } [OneTimeSetUp] public void TestInitialise() { this.DataContext.Database.EnsureCreated(); } [OneTimeTearDown] public void TestTearDown() { this.DataContext.Database.EnsureDeleted(); } } ``` The above gives the following error: > > `TestBase` does not have a default constructor. > > > I may well be going about this the wrong way but this is how I've always done things in the past so please let me know if there is a better method when using .NET Core DI. --- `Startup` class for reference: ``` public class Startup { private readonly IConfiguration config; public Startup(IConfiguration config) { this.config = config; } public void ConfigureServices(IServiceCollection services) { services.AddDbContext( options => options.UseSqlServer(this.config.GetConnectionString("TestConnectionString")), ServiceLifetime.Singleton); services.AddScoped(provider => provider.GetService()); } } ```<issue_comment>username_1: [Integration testing ASP.NET Core](https://learn.microsoft.com/en-us/aspnet/core/testing/integration-testing) is well covered by the Microsoft documentation. Basically, you need to install the [Test Host](https://learn.microsoft.com/en-us/aspnet/core/testing/integration-testing#the-test-host) project from NuGet [`Microsoft.AspNetCore.TestHost`](https://www.nuget.org/packages/Microsoft.AspNetCore.TestHost/), then use it to launch the web environment within NUnit. Basic Example ------------- ``` public class TestClass { private TestServer _server; private HttpClient _client; [OneTimeSetUp] public void SetUp() { // Arrange _server = new TestServer(new WebHostBuilder() .UseStartup()); \_client = \_server.CreateClient(); } [OneTimeTearDown] public void TearDown() { \_server = null; \_client = null; } [Test] public async Task ReturnHelloWorld() { // Act var response = await \_client.GetAsync("/"); response.EnsureSuccessStatusCode(); var responseString = await response.Content.ReadAsStringAsync(); // Assert Assert.Equal("Hello World!", responseString); } } ``` With the `TestServer` it is possible to intervene with the DI configuration and/or the `IConfiguration` to substitute fakes in the configuration. See [Reconfigure dependencies when Integration testing ASP.NET Core Web API and EF Core](https://stackoverflow.com/questions/43543319/reconfigure-dependencies-when-integration-testing-asp-net-core-web-api-and-ef-co). Upvotes: 2 <issue_comment>username_2: Thanks to NightOwl for pointing me in the right direction. A combination of the Microsoft [article on integration testing](https://learn.microsoft.com/en-us/aspnet/core/testing/integration-testing) and the possible dupe question led me to the following solution. 
By using the `TestServer` from `Microsoft.AspNetCore.TestHost` I am able to access the DI `ServiceProvider` built in `Startup`. TestBase: ``` public partial class TestBase { protected readonly TestServer server; protected readonly IEFDatabaseContext DataContext; public TestBase() { this.server = new TestServer(new WebHostBuilder().UseStartup()); this.DataContext = this.server.Host.Services.GetService(); } [OneTimeSetUp] public void TestInitialise() { this.DataContext.Database.EnsureCreated(); } [OneTimeTearDown] public void TestTearDown() { this.DataContext.Database.EnsureDeleted(); } } ``` Startup: ``` public class Startup { private readonly IConfiguration config; public Startup(IConfiguration config) { this.config = new ConfigurationBuilder() .AddJsonFile("appsettings.json") .Build(); } public void Configure(IApplicationBuilder app, IHostingEnvironment env) { } public void ConfigureServices(IServiceCollection services) { services.AddDbContext( options => options.UseSqlServer(this.config.GetConnectionString("TestConnectionString")), ServiceLifetime.Singleton); services.AddScoped(provider => provider.GetService()); } } ``` Upvotes: 3
2018/03/18
412
960
<issue_start>username_0: I want to reshape a tensor, its shape is (?,12,12,5,512), into a (?,12,12,2560) shaped tensor. Is there anyone who can help me? My code is as below. ```python conv5_1 = Conv3D(512, (3, 3, 3), activation='relu', padding='same')(drop4_1) # conv5_1: Tensor("conv3d_10/Relu:0", shape=(?, 12, 12, 5, 512), dtype=float32) conv5_1 = Conv3D(512, (3, 3, 3), activation='relu', padding='same')(conv5_1) drop5_1 = Dropout(0.2)(conv5_1) # drop5_1: Tensor("dropout_8/cond/Merge:0", shape=(?, 12, 12, 5, 512), dtype=float32) ``` I want to make a (?, 12, 12, 2560) shaped tensor after drop5_1. Thanks<issue_comment>username_1: `keras.layers.core.Reshape()` function is helpful (see also the [document](https://keras.io/layers/core/#reshape)). ``` reshaped = Reshape((12, 12, 2560))(drop5_1) ``` Upvotes: 4 [selected_answer]<issue_comment>username_2: You can also try this ```py reshaped = tf.reshape(drop5_1 , [-1,12,12,2560]) ``` Upvotes: 0
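A minimal runnable sketch of the accepted `Reshape` approach, assuming the standard Keras functional API and that the goal is simply to merge the last two axes (5 * 512 = 2560):

```python
from keras.layers import Input, Reshape
from keras.models import Model

# Stand-in for the (batch, 12, 12, 5, 512) feature map from the question.
inputs = Input(shape=(12, 12, 5, 512))

# Reshape takes the target shape without the batch dimension.
reshaped = Reshape((12, 12, 5 * 512))(inputs)  # -> (batch, 12, 12, 2560)

model = Model(inputs, reshaped)
model.summary()
```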
2018/03/18
1,644
5,257
<issue_start>username_0: So I have been trying to have a index.php do three things. If there is a thumbnail to show the thumbnail and do the specific styling. if there isn't a thumbnail do something different and the last statement will have if the page is singular it will show something else. For some reason it keeps failing. I've seen some demos on here, but i can't figure it out ``` php /** this is the first if statement **/? php if ( is_home () || is_category() || is_archive() ): ? php if ( has_post_thumbnail() ) { ? > php optz\_post\_thumbnail(); ? php if ( is\_singular() ) : the\_title( '<h1 class="entry-title"', '' ); else : the\_title( '[', '](' . esc_url( get_permalink() ) . ') ------------------------------------------ ' ); endif; if ( 'post' === get\_post\_type() ) : ?> php endif; ? php the\_excerpt( sprintf( wp\_kses( /\* translators: %s: Name of current post. Only visible to screen readers \*/ \_\_( 'Continue reading<span class="screen-reader-text" "%s"', 'optz' ), array( 'span' => array( 'class' => array(), ), ) ), get\_the\_title() ) ); wp\_link\_pages( array( 'before' => '' . esc\_html\_\_( 'Pages:', 'optz' ), 'after' => '', ) ); ?> php optz\_posted\_on(); optz\_posted\_by(); ? php optz\_entry\_footer(); ? php /** this is the first second else statement **/ elseif (is_home () || is_category() || is_archive()) : > php the\_title( '<h2 class="entry-title"[', '](' . esc_url( get_permalink() ) . ')' ); if ( 'post' === get\_post\_type() ) : ?> php endif; ? php the\_content( sprintf( wp\_kses( /\* translators: %s: Name of current post. Only visible to screen readers \*/ \_\_( 'Continue reading<span class="screen-reader-text" "%s"', 'optz' ), array( 'span' => array( 'class' => array(), ), ) ), get\_the\_title() ) ); wp\_link\_pages( array( 'before' => '' . esc\_html\_\_( 'Pages:', 'optz' ), 'after' => '', ) ); ?> php optz\_posted\_on(); optz\_posted\_by(); ? php optz\_entry\_footer(); ? ``` ``` Nothing to see ``` \*\*Updated code:\*\* ``` php if ( is_home () || is_category() || is_archive() ) { ? php if ( has_post_thumbnail() ) { ? > php anasa\_post\_thumbnail(); ? php if ( is\_singular() ) : the\_title( '<h1 class="entry-title"', '' ); else : the\_title( '[', '](' . esc_url( get_permalink() ) . ') ------------------------------------------ ' ); endif; if ( 'post' === get\_post\_type() ) : ?> php endif; ? php the\_excerpt( sprintf( wp\_kses( /\* translators: %s: Name of current post. Only visible to screen readers \*/ \_\_( 'Continue reading<span class="screen-reader-text" "%s"', 'anasa' ), array( 'span' => array( 'class' => array(), ), ) ), get\_the\_title() ) ); wp\_link\_pages( array( 'before' => '' . esc\_html\_\_( 'Pages:', 'anasa' ), 'after' => '', ) ); ?> php anasa\_posted\_on(); anasa\_posted\_by(); ? php anasa\_entry\_footer(); ? php /** this is the second else statement **/ } elseif (is_home () || is_category() || is_archive()) {? > php the\_title( '<h2 class="entry-title"[', '](' . esc_url( get_permalink() ) . ')' ); if ( 'post' === get\_post\_type() ) : ?> php endif; ? php the\_content( sprintf( wp\_kses( /\* translators: %s: Name of current post. Only visible to screen readers \*/ \_\_( 'Continue reading<span class="screen-reader-text" "%s"', 'anasa' ), array( 'span' => array( 'class' => array(), ), ) ), get\_the\_title() ) ); wp\_link\_pages( array( 'before' => '' . esc\_html\_\_( 'Pages:', 'anasa' ), 'after' => '', ) ); ?> php anasa\_posted\_on(); anasa\_posted\_by(); ? php anasa\_entry\_footer(); ? php /** this is the last if else statement **/ } else {? 
php if ( is_singular()) { echo'hi'; }? php }} ? <!-- this is the main php thumbnail close tag --!> ```<issue_comment>username_1: Not sure if this will affect what you're doing but typically you want you *if* statement flipped: `if ( get_post_type() === 'post' ) : ?>` Upvotes: 0 <issue_comment>username_2: You have an error here ``` php /** this is the first last if else statement **/ }else :{? ``` It should be this without the colon : ``` php /** this is the first last if else statement **/ }else {? ``` Try and use the same standard for your conditional statements rather than using different conventions within the same file. Upvotes: 0 <issue_comment>username_3: Are you missing a "?" in your php closing tag? On the line just after the *elseif*, where your comment says: *"this is the first second else statement"* ``` php elseif (is_home () || is_category() || is_archive()) : ``` You also have both a colon and a curly bracket on *else* on the line where your other comment says: *this is the first last if else statement* ``` php /** this is the first last if else statement **/ }else :{? ``` Also a suggestion, if you write PHP together with Html (within views), many people use colons instead of curly brackets, so the code is easier to look into. Either way, whatever you prefer more, but I think you should stick to one of those two. Your statements ingeneral are okay. If you tweak all the statements to use either *: (colons)* or *{*, it should all work fine. Upvotes: 1
2018/03/18
2,181
7,273
<issue_start>username_0: I have been playing around with GLFX.js and have been able to setup some effects. What I am missing however is the capability to use each effect together in a sort of layering/blending. Currently if I increase the slider for say Sepia, and then increase the value for Saturation, Sepia will reset. I have an inclination that I somehow need to save the current value of the effect on the image each time the slider is updated but not sure how to go about doing that. Any help would be greatly appreciated. Thank you in advance! Here is my Javascript code: ` ``` window.onload = function() { // try to create a WebGL canvas (will fail if WebGL isn't supported) try { var canvas = fx.canvas(); } catch (e) { alert(e); return; } // convert the image to a texture var image = document.getElementById("image"); var texture = canvas.texture(image); let sepiaSlider, sepiaFilter, hueSatFltr, hueSldr, satSldr; canvas.draw(texture).update(); sepiaSlider = document.getElementById("sepia-slider"); hueSldr = document.getElementById("hue-slider"); satSldr = document.getElementById("sat-slider"); hueSldr.addEventListener("input", function(hueVal) { hueVal = this.value; console.log(hueVal); canvas.draw(texture).hueSaturation(hueVal, 0).update(); }); satSldr.addEventListener("input", function(satVal) { satVal = this.value; canvas .draw(texture) .hueSaturation(0, satVal) .update(); }); sepiaSlider.addEventListener("input", function(sepiaValue) { sepiaValue = this.value; console.log(sepiaValue); canvas .draw(texture) .sepia(sepiaValue) .update(); }); // replace the image with the canvas image.parentNode.insertBefore(canvas, image); image.parentNode.removeChild(image); }; ``` `<issue_comment>username_1: As you probably know I can't post a runnable snippet because of cross origin security restrictions, hence I only post my source code. It works and I'm able to apply "ink" and "sepia" effects together. Notice that there is only one call to `draw` and `update` for all effects. Check by yourself and tell me whether it is helpful or not. ``` glfx Ink Sepia ![](image.png) var form, canvas, image, texture ; onload = function () { canvas = fx.canvas(); form = document.forms[0]; image = document.getElementById("image"); texture = canvas.texture(image); form.addEventListener("submit", onSubmit); }; function onSubmit (ev) { var draw = canvas.draw(texture); ev.preventDefault(); if (form.elements.ink.checked) { draw = draw.ink(0.25); } if (form.elements.sepia.checked) { draw = draw.sepia(0.75); } draw.update(); image.src = canvas.toDataURL("image/png"); } ``` To make it run you need an HTTP server. Instructions for UbuntuΒ : ```none $ cd /tmp $ wget "https://i.stack.imgur.com/VqFm1.jpg?s=328&g=1" -O image.png $ wget http://evanw.github.io/glfx.js/glfx.js $ head -6 demo.html glfx $ python -m SimpleHTTPServer ``` Finally, open you web browser and type "localhost:8000/demo.html". Don't forget to press "Submit" :-) Upvotes: 1 <issue_comment>username_2: I was talking to a colleague, and he was able to help me out with a more direct solution. For anyone who is interested, it involves using classes. 
Here is the cleaned up javascript solution: ``` /*jshint esversion: 6 */ //set the intial value of canvas so it can be used throughout the program let canvas = null; //create a class to hold the intial values class CurrentSettings { constructor(canvas, texture) { this.canvas = canvas; this.texture = texture; this.hue = 0; this.sat = 0; this.sepia = 0; this.brtness = 0; this.cntrst = 0; this.vgnteSize = 0; this.vgnteAmnt = 0; this.vbrnce = 0; } //set the initial values of each effect setHue(fValue) { this.hue = fValue; this.update(); } setSaturation(fValue) { this.sat = fValue; this.update(); } setSepia(fValue) { this.sepia = fValue; this.update(); } setBrightness(fValue) { this.brtness = fValue; this.update(); } setContrast(fValue) { this.cntrst = fValue; this.update(); } setVignetteSize(fValue) { this.vgnteSize = fValue; this.update(); } setVignetteAmt(fValue) { this.vgnteAmnt = fValue; this.update(); } setVibrance(fValue) { this.vbrnce = fValue; this.update(); } //update the values if the slider is modified update() { this.canvas.draw(this.texture); if (this.hue > 0 || this.sat > 0) this.canvas.hueSaturation(this.hue, this.sat); if (this.sepia > 0) this.canvas.sepia(this.sepia); if (this.brtness > 0 || this.cntrst > 0) this.canvas.brightnessContrast(this.brtness, this.cntrst); if (this.vgnteSize > 0 || this.vgnteAmnt > 0) this.canvas.vignette(this.vgnteSize, this.vgnteAmnt); if (this.vbrnce > -1.1) this.canvas.vibrance(this.vbrnce); this.canvas.update(); } } //set the initial value of the settings let pSettings = null; //if the browser does not support webgl, return an error message window.onload = function() { try { canvas = fx.canvas(); } catch (e) { alert(e); return; } //gets the image from the dom var image = document.getElementById("image"); //convets the image from static dom to canvas var texture = canvas.texture(image); pSettings = new CurrentSettings(canvas, texture); //create the variables that will hold the event listeners let sepiaSlider, hueSldr, satSldr, brtnessSldr, cntrstSldr, vgnteSizeSldr, vgnteAmtSldr, vbrnceSldr; //draw the image onto the canvas canvas.draw(texture); //get all of the slider values sepiaSlider = document.getElementById("sepia-slider"); hueSldr = document.getElementById("hue-slider"); satSldr = document.getElementById("sat-slider"); brtnessSldr = document.getElementById("brt-slider"); cntrstSldr = document.getElementById("ctrs-slider"); vgnteSizeSldr = document.getElementById("size-vgntte-slider"); vgnteAmtSldr = document.getElementById("amnt-vgntte-slider"); vbrnceSldr = document.getElementById("vbrnce-slider"); //add an event listener to the sliders hueSldr.addEventListener("input", function(hueVal) { pSettings.setHue(this.value); }); satSldr.addEventListener("input", function(satVal) { pSettings.setSaturation(this.value); }); sepiaSlider.addEventListener("input", function(sepiaValue) { pSettings.setSepia(this.value); }); brtnessSldr.addEventListener("input", function(brtnessValue) { pSettings.setBrightness(this.value); }); cntrstSldr.addEventListener("input", function(cntrstValue) { pSettings.setContrast(this.value); }); vgnteSizeSldr.addEventListener("input", function(vgnteSizeValue) { pSettings.setVignetteSize(this.value); }); vgnteAmtSldr.addEventListener("input", function(vgnteAmtValue) { pSettings.setVignetteAmt(this.value); }); vbrnceSldr.addEventListener("input", function(vbrnceSldrValue) { pSettings.setVibrance(this.value); }); canvas.update(); image.parentNode.insertBefore(canvas, image); image.parentNode.removeChild(image); }; ``` Upvotes: 
3 [selected_answer]
2018/03/18
2,060
6,703
<issue_start>username_0: Hello whenever i press the signin button onmy website i get this error: > > Notice: Undefined index: active in C:\xampp\htdocs\repute-multipurpose-theme-v1.3\theme\login.php on line 14 > > > I was wondering if you guys know how to fix it. I keep reading about an `isset` function, but i tried that on line 14 and it gave another error so it didn't seem to work. ``` php include("config.php"); session_start(); if($_SERVER["REQUEST_METHOD"] == "POST") { // username and password sent from form $myusername = mysqli_real_escape_string($conn,$_POST['username']); $mypassword = mysqli_real_escape_string($conn,$_POST['password']); $sql = "SELECT customer_id FROM customer WHERE email_adress = '$myusername' and password = '$<PASSWORD>'"; $result = mysqli_query($conn,$sql); $row = mysqli_fetch_array($result,MYSQLI_ASSOC); $active = $row['active']; $count = mysqli_num_rows($result); // If result matched $myusername and $mypassword, table row must be 1 row if($count == 1) { $_SESSION['login_user'] = $myusername; header("location: index2.php"); }else { $error = "Your Login Name or Password is invalid"; } } ? ```<issue_comment>username_1: As you probably know I can't post a runnable snippet because of cross origin security restrictions, hence I only post my source code. It works and I'm able to apply "ink" and "sepia" effects together. Notice that there is only one call to `draw` and `update` for all effects. Check by yourself and tell me whether it is helpful or not. ``` glfx Ink Sepia ![](image.png) var form, canvas, image, texture ; onload = function () { canvas = fx.canvas(); form = document.forms[0]; image = document.getElementById("image"); texture = canvas.texture(image); form.addEventListener("submit", onSubmit); }; function onSubmit (ev) { var draw = canvas.draw(texture); ev.preventDefault(); if (form.elements.ink.checked) { draw = draw.ink(0.25); } if (form.elements.sepia.checked) { draw = draw.sepia(0.75); } draw.update(); image.src = canvas.toDataURL("image/png"); } ``` To make it run you need an HTTP server. Instructions for UbuntuΒ : ```none $ cd /tmp $ wget "https://i.stack.imgur.com/VqFm1.jpg?s=328&g=1" -O image.png $ wget http://evanw.github.io/glfx.js/glfx.js $ head -6 demo.html glfx $ python -m SimpleHTTPServer ``` Finally, open you web browser and type "localhost:8000/demo.html". Don't forget to press "Submit" :-) Upvotes: 1 <issue_comment>username_2: I was talking to a colleague, and he was able to help me out with a more direct solution. For anyone who is interested, it involves using classes. 
Here is the cleaned up javascript solution: ``` /*jshint esversion: 6 */ //set the intial value of canvas so it can be used throughout the program let canvas = null; //create a class to hold the intial values class CurrentSettings { constructor(canvas, texture) { this.canvas = canvas; this.texture = texture; this.hue = 0; this.sat = 0; this.sepia = 0; this.brtness = 0; this.cntrst = 0; this.vgnteSize = 0; this.vgnteAmnt = 0; this.vbrnce = 0; } //set the initial values of each effect setHue(fValue) { this.hue = fValue; this.update(); } setSaturation(fValue) { this.sat = fValue; this.update(); } setSepia(fValue) { this.sepia = fValue; this.update(); } setBrightness(fValue) { this.brtness = fValue; this.update(); } setContrast(fValue) { this.cntrst = fValue; this.update(); } setVignetteSize(fValue) { this.vgnteSize = fValue; this.update(); } setVignetteAmt(fValue) { this.vgnteAmnt = fValue; this.update(); } setVibrance(fValue) { this.vbrnce = fValue; this.update(); } //update the values if the slider is modified update() { this.canvas.draw(this.texture); if (this.hue > 0 || this.sat > 0) this.canvas.hueSaturation(this.hue, this.sat); if (this.sepia > 0) this.canvas.sepia(this.sepia); if (this.brtness > 0 || this.cntrst > 0) this.canvas.brightnessContrast(this.brtness, this.cntrst); if (this.vgnteSize > 0 || this.vgnteAmnt > 0) this.canvas.vignette(this.vgnteSize, this.vgnteAmnt); if (this.vbrnce > -1.1) this.canvas.vibrance(this.vbrnce); this.canvas.update(); } } //set the initial value of the settings let pSettings = null; //if the browser does not support webgl, return an error message window.onload = function() { try { canvas = fx.canvas(); } catch (e) { alert(e); return; } //gets the image from the dom var image = document.getElementById("image"); //convets the image from static dom to canvas var texture = canvas.texture(image); pSettings = new CurrentSettings(canvas, texture); //create the variables that will hold the event listeners let sepiaSlider, hueSldr, satSldr, brtnessSldr, cntrstSldr, vgnteSizeSldr, vgnteAmtSldr, vbrnceSldr; //draw the image onto the canvas canvas.draw(texture); //get all of the slider values sepiaSlider = document.getElementById("sepia-slider"); hueSldr = document.getElementById("hue-slider"); satSldr = document.getElementById("sat-slider"); brtnessSldr = document.getElementById("brt-slider"); cntrstSldr = document.getElementById("ctrs-slider"); vgnteSizeSldr = document.getElementById("size-vgntte-slider"); vgnteAmtSldr = document.getElementById("amnt-vgntte-slider"); vbrnceSldr = document.getElementById("vbrnce-slider"); //add an event listener to the sliders hueSldr.addEventListener("input", function(hueVal) { pSettings.setHue(this.value); }); satSldr.addEventListener("input", function(satVal) { pSettings.setSaturation(this.value); }); sepiaSlider.addEventListener("input", function(sepiaValue) { pSettings.setSepia(this.value); }); brtnessSldr.addEventListener("input", function(brtnessValue) { pSettings.setBrightness(this.value); }); cntrstSldr.addEventListener("input", function(cntrstValue) { pSettings.setContrast(this.value); }); vgnteSizeSldr.addEventListener("input", function(vgnteSizeValue) { pSettings.setVignetteSize(this.value); }); vgnteAmtSldr.addEventListener("input", function(vgnteAmtValue) { pSettings.setVignetteAmt(this.value); }); vbrnceSldr.addEventListener("input", function(vbrnceSldrValue) { pSettings.setVibrance(this.value); }); canvas.update(); image.parentNode.insertBefore(canvas, image); image.parentNode.removeChild(image); }; ``` Upvotes: 
3 [selected_answer]
2018/03/18
654
2,851
<issue_start>username_0: I am using an ARM embedded device but I think this is a general Linux question. I'm using the Linux watchdog daemon with the 'file' option to periodically check my application changes the time every xx seconds or so. This works fine but I've noticed if the system time is way off and NTP updates the time then the watchdog reboots. Presumably it has a hissy fit noticing that the stat on the file has changed a lot (the time was several months in the past so jumped ahead in time when corrected). I can't disable the watchdog because I have the NOWAYOUT option enabled and I have the 'tinker panic 0' option enabled for NTP because it's an embedded device that may not be connected to a network for long periods or be powered off for longer than the RTC backup can last. I think the device reboots at the instant my application calls 'utime' to modify the file. I'm not sure if this is just before or after NTP does its stuff. On a couple of devices the time doesn't seem to get permanently changed as it reboots due to the watchdog and repeats indefinitely. I thought one of the actions the watchdog performed before a reboot was to update the hardware time?<issue_comment>username_1: First off, the watchdog daemon shouldn't be updating the system time ever, and that would include prior to causing the system to reboot. The watchdog daemon isn't what actually ***causes*** the reboot, the system hardware does that when the watchdog isn't "petted". Getting a watchdog to function properly requires that you understand how your device's hardware works, and how the daemon which will be "petting" the watchdog makes the "pet" or "don't pet" decision. You've not provided enough information for me to look at either your embedded device's documentation, or the source code for the daemon itself. My suggestion would be for you to do "whatever you can" to cause NTP to set the system time prior to the watchdog having an opportunity to time out. I'd also suggest you look at the package information for the watchdog version you're using and report this as a bug to the maintainer. Just as the hardware watchdog isn't affected by changes to the system time, the software watchdog shouldn't either. Current system uptime *is* available and I'd expect the software watchdog -- the daemon -- to be able to handle changes with the wall-clock time without rebooting the system. Upvotes: 1 <issue_comment>username_2: Yes, it's a huge weakness of the watchdog service. Whenever the OS time is updated by an amount that exceeds the file check interval, the file check fails, and the system gets rebooted. What I did was abandon the `file=` check, and used `test-binary=` instead. This way, the watchdog executes a script, and the script uses the monotonic clock instead of the wall clock to check the file for changes. Upvotes: 2
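As a rough sketch of the `test-binary=` approach from the last answer, here is a hypothetical checker script (the file paths and the 60-second threshold are invented for illustration); it relies on the heartbeat file's contents changing and on Linux's CLOCK_MONOTONIC rather than the wall clock, so NTP time steps cannot trip it:

```python
#!/usr/bin/env python3
# Hypothetical checker for watchdog's test-binary= option: exit 0 while the
# heartbeat file keeps changing, non-zero once it has been static too long.
import json
import sys
import time

HEARTBEAT = "/run/myapp/heartbeat"        # file the application rewrites periodically (assumed path)
STATE = "/run/myapp/watchdog-state.json"  # scratch state kept between invocations (assumed path)
MAX_AGE = 60                              # seconds without change before reporting a fault

# CLOCK_MONOTONIC counts from boot on Linux, so values from successive runs of
# this script are comparable and wall-clock jumps cannot affect the check.
now = time.clock_gettime(time.CLOCK_MONOTONIC)

def load(path):
    try:
        with open(path) as f:
            return f.read()
    except OSError:
        return None

current = load(HEARTBEAT)
try:
    state = json.loads(load(STATE) or "{}")
except ValueError:
    state = {}

if current is not None and current != state.get("value"):
    # Heartbeat changed since the last run: record it and report success.
    with open(STATE, "w") as f:
        json.dump({"value": current, "changed_at": now}, f)
    sys.exit(0)

if state.get("changed_at") is not None and now - state["changed_at"] <= MAX_AGE:
    sys.exit(0)

sys.exit(1)  # the watchdog daemon treats a non-zero exit as a failed check
```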
2018/03/18
1,301
4,395
<issue_start>username_0: I have a dataset of images on my Google Drive. I have this dataset both in a compressed .zip version and an uncompressed folder. I want to train a CNN using Google Colab. How can I tell Colab where the images in my Google Drive are? 1. [official tutorial does not help me as it only shows how to upload single files, not a folder with 10000 images as in my case.](https://colab.research.google.com/notebooks/snippets/importing_libraries.ipynb) 2. [Then I found this answer, but the solution is not finished, or at least I did not understand how to go on from unzipping. Unfortunately I am unable to comment this answer as I don't have enough "stackoverflow points"](https://stackoverflow.com/questions/49088159/add-a-folder-with-20k-of-images-into-google-colaboratory) 3. [I also found this thread, but here all the answer use other tools, such as Github or dropbox](https://stackoverflow.com/questions/46986398/import-data-into-google-colaboratory) I hope someone could explain me what I need to do or tell me where to find help. Edit1: [I have found yet another thread asking the same question as mine:](https://stackoverflow.com/questions/48860586/how-to-upload-and-save-large-data-to-google-colaboratory-from-local-drive) Sadly, of the 3 answers, two refer to Kaggle, which I don't know and don't use. The third answer provides two links. The first link refers to the 3rd thread I linked, and the second link only explains how to upload single files manually.<issue_comment>username_1: As mentioned by @yl\_low [here](https://stackoverflow.com/questions/46986398/import-data-into-google-colaboratory) Step 1: ``` !apt-get install -y -qq software-properties-common python-software-properties module-init-tools !add-apt-repository -y ppa:alessandro-strada/ppa 2>&1 > /dev/null !apt-get update -qq 2>&1 > /dev/null !apt-get -y install -qq google-drive-ocamlfuse fuse ``` Step 2: ``` from google.colab import auth auth.authenticate_user() ``` Step 3: ``` from oauth2client.client import GoogleCredentials creds = GoogleCredentials.get_application_default() import getpass !google-drive-ocamlfuse -headless -id={creds.client_id} -secret={creds.client_secret} < /dev/null 2>&1 | grep URL vcode = getpass.getpass() !echo {vcode} | google-drive-ocamlfuse -headless -id={creds.client_id} -secret={creds.client_secret} ``` Both Step 2 and 3 will require to fill in the verification code provided by the URLs Step 4: ``` !mkdir -p drive !google-drive-ocamlfuse drive ``` Step 5: ``` print('Files in Drive:') !ls drive/ ``` Upvotes: 3 <issue_comment>username_2: To update the answer. You can right now do it from Google Colab ``` # Load the Drive helper and mount from google.colab import drive # This will prompt for authorization. drive.mount('/content/drive') !ls "/content/drive/My Drive" ``` [Google Documentation](https://colab.research.google.com/notebooks/io.ipynb) Upvotes: 5 [selected_answer]<issue_comment>username_3: Other answers are excellent, but they require everytime to authenticate in Google Drive, that is not very comfortable if you want to run top down your notebook. I had the same need, I wanted to download a single zip file containing dataset from Drive to Colab. 
I preferred to get shareable link of that file and run following cell (substitute drive\_url with your shared link): ``` import urllib drive_url = 'https://drive.google.com/uc?export=download&id=1fBVMX66SlvrYa0oIau1lxt1_Vy-XYZWG' file_name = 'downloaded.zip' urllib.request.urlretrieve(drive_url, file_name) print('Download completed!') ``` Upvotes: 3 <issue_comment>username_4: I saw and tried all the above but it didn't work for me. So here is a simple solution with simple explanation that can help you load a .zip image folder and extract images from it. * Connect to google drive ``` from google.colab import drive drive.mount('/content/drive') ``` (you will get a link sign in to your google account and copy the code and paste onto the code asked in the colab) * Install and import keras library ``` !pip install -q keras import keras ``` (the zip file is loaded into the colab) * Unzip the folder ``` ! unzip 'zip-file-path' ``` To get the path: * select file on left side of google colab * browse for the file click on the 3 dots * copy path **Now the unzipped image folder is loaded onto your colab use it as you wish** Upvotes: 2
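For the common case where the Drive folder holds a single .zip of images, a compact sketch that combines the `drive.mount` call above with Python's standard `zipfile` module (the archive path is an assumed example):

```python
import os
import zipfile

from google.colab import drive

drive.mount('/content/drive')  # prompts once for authorization

# Assumed location of the archive inside your Drive.
archive = '/content/drive/My Drive/datasets/images.zip'

with zipfile.ZipFile(archive) as zf:
    zf.extractall('/content/images')  # unpack to fast local Colab storage

print(len(os.listdir('/content/images')), 'files extracted')
```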
2018/03/18
886
3,505
<issue_start>username_0: I am having trouble getting a video player to persist playing after a window resize in my responsive web app. I am using the react-responsive library to help with this. I have a different MediaQuery element handling each window size range in my outer "content" component which handles the overall page layout using antd layout component, so I'm guessing what is happening is a different part of the component is showing for each range, causing the video to also "reset" each time a new range is detected. (The problem goes away when I remove the media queries.) How do I make it so that when the video player is currently playing the video(which is an iframe rendering an api resource), it continues to play through a window resize into a different range? I'm guessing I can change the media query ranges and nest them in some specific way but I can't figure out how to do this. the page layout in the content component: ``` const Inside = props => ( ); const Content = () => ( ); ``` the video player: ``` const CurrentVideo = (props) => { const pubDateFormatted = Moment(props.pub_date).format('MMM. D, YYYY h:mma'); return ( {props.name} ------------ *Posted by {props.user} | {pubDateFormatted}* {props.deck} ); }; CurrentVideo.propTypes = { name: PropTypes.string.isRequired, user: PropTypes.string.isRequired, pub_date: PropTypes.string.isRequired, deck: PropTypes.string.isRequired, embed_player: PropTypes.string.isRequired, }; export default CurrentVideo; ``` the different media queries set to specific window widths: ``` export const XXL = props => ( {props.children} ); export const XL = props => ( {props.children} ); export const LG = props => ( {props.children} ); export const MD = props => ( {props.children} ); export const SM = props => ( {props.children} ); export const XS = props => ( {props.children} ); ```<issue_comment>username_1: You should take a look at the [React rules for reconciliation](https://reactjs.org/docs/reconciliation.html). They say: "Whenever the root elements have different types, React will tear down the old tree and build the new tree from scratch." It looks like what's happening here is that when the screen size changes, a new component with a different type is getting created, ultimately causing `CurrentVideo` to be torn down and replaced. Don't use react-responsive here. Use React to set up the dom elements, then use [normal CSS media queries](https://www.w3schools.com/css/css3_mediaqueries.asp) to change the display based on window size. If you really insist on using react-responsive, you could try rendering the `iframe` in `Inside`, passing it a `ref` argument to allow you to store a reference to the dom element (take a look at [Refs and the DOM](https://reactjs.org/docs/refs-and-the-dom.html)), and then passing that reference down to the `CurrentVideo`. Upvotes: 2 <issue_comment>username_2: Without digging too much into your implementation, my guess would be that react-responsive is causing the CurrentVideo component to re-render. One possible way to mitigate that would be to use the `shouldComponentUpdate()` method to block re-rendering, then check if the component's props or state have actually changed enough for a re-render to be required. There's more information on the lifecycle methods and `shouldComponentUpdate()` specifically in the [React Docs](https://reactjs.org/docs/react-component.html#shouldcomponentupdate) Upvotes: 0
2018/03/18
524
2,097
<issue_start>username_0: My website has two static pages and I need two other pages but in sub folders. Under views I have a folder static\_pages and the current about page route is ``` get '/about', to: 'static_pages#about' ``` I have created a sub folder under static\_pages with the name: "es" which will include the about page in Spanish. How can I write the route for this? ``` get 'es/about', to: 'static_pages/es#about' ``` does not seem to work. And what empty method to add to the controller?
2018/03/18
551
2,302
<issue_start>username_0: I have project of my own that employ stm32f407 mcu. I am using the discovery kit on board st-link to flash my project. Furthermore, I use the cubemx tool to configure the HAL of the project. The problem is that while generating the HAL layer I checked power optimization check box which by default configure un-configured pins to analog input and I was not configuring the swdio and swclk pins. I was able to flash one and I can not connect to the project board again. I tried to use the NRST and configuring the stlink to connect under reset with no luck. The NRST pin does not do anything when connected to GND??!!!! Any idea how to erase the flashed SW to gain ability to flash again?
2018/03/18
613
2,577
<issue_start>username_0: I built a website that gets data from an API using RestSharp. I am able to post the data and display it on the website, it works perfect when I run it on Loaclhost. After completing this website, I uploaded it to my hosting online and when I run the page I get this error: **Server Error** in '/Application' Application. **Compilation Error Description:** An error occurred during the compilation of a resource required to service this request. Please review the following specific error details and modify your source code appropriately. **Compiler Error Message:** CS0246: The type or namespace name 'RestSharp' could not be found (are you missing a using directive or an assembly reference?) **Source Error:** Line 1: using System; Line 2: using RestSharp; Line 3: using RestSharp.Deserializers; Line 4: using RestSharp.Authenticators; I have tried different solutions posted online but cannot resolve this issue. Any help will be greatly appreciated.
2018/03/18
766
3,037
<issue_start>username_0: I've updated Spring Boot from version 1.5.6 to 2.0.0 and a lot of problems have started. One is the problem given in the subject. I have a class with properties ``` @Data @ConfigurationProperties("eclipseLink") public class EclipseLinkProperties { ... } ``` which I use in configuration ``` @Configuration @EnableConfigurationProperties(EclipseLinkProperties.class) public class WebDatasourceConfig { ... } ``` during compilation, he throws me away ``` 2018-03-18 18:44:58.560 INFO 3528 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'org.springframework.boot.context.properties.ConversionServiceDeducer$Factory' of type [org.springframework.boot.context.properties.ConversionServiceDeducer$Factory] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying) 2018-03-18 18:44:58.575 WARN 3528 --- [ main] ConfigServletWebServerApplicationContext : Exception encountered during context initialization - cancelling refresh attempt: org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'webDatasourceConfig': Unsatisfied dependency expressed through field 'eclipseLinkProperties'; nested exception is org.springframework.boot.context.properties.ConfigurationPropertiesBindException: Error creating bean with name 'eclipseLink-com.web.web.config.properties.EclipseLinkProperties': Could not bind properties to 'EclipseLinkProperties' : prefix=eclipseLink, ignoreInvalidFields=false, ignoreUnknownFields=true; nested exception is org.springframework.boot.context.properties.source.InvalidConfigurationPropertyNameException: Configuration property name 'eclipseLink' is not valid ``` It means ``` Configuration property name 'eclipseLink' is not valid ``` Before the Spring Boot update everything worked.<issue_comment>username_1: `eclipseLink` isn't a valid prefix. As [described in the documentation](https://docs.spring.io/spring-boot/docs/current/reference/htmlsingle/#boot-features-external-config-relaxed-binding) kebab-case should be used rather than camelCase. So your prefix should be `eclipse-link` rather than `eclipseLink`. Upvotes: 6 [selected_answer]<issue_comment>username_2: Camel case is not supported in Spring boot 2.0. It would throw InvalidConfigurationPropertyNameException: Configuration property name '\*\*\*\*\*\*\*\*' is not valid. Upvotes: 2 <issue_comment>username_3: Faced this issue when a new config in one of the .yml files was not added in all .yml files(test.yml to be specific) Upvotes: 0 <issue_comment>username_4: You can change: ``` @ConfigurationProperties("eclipseLink") ``` to: ``` @ConfigurationProperties("eclipselink") ``` You don't need to change properties file. This will avoid error. Spring will be able to find eclipseLink.\* properties. Upvotes: 3 <issue_comment>username_5: Faced same issue after upgrade spring boot version from 1.5 to 2.5 here it support kabab-case you can also change to **eclipse-link** Upvotes: 0
2018/03/18
1,200
4,457
<issue_start>username_0: In my project I have table view with cells with 3 buttons. When I added target action for them, it did not work. To avoid bad interaction with other items I created a clean project and tried to `addTarget` to button in `TableViewCell`. Even here, it did not work. In this project I set one button to each row. `addTarget` method should call `presentWebBrowser` method. But doesn't. Any ideas? Here is my code: ``` import UIKit class TableViewController: UITableViewController { let array = ["button1","button2","button3"] override func viewDidLoad() { super.viewDidLoad() tableView.register(MyCell.self, forCellReuseIdentifier: "cellID") } override func didReceiveMemoryWarning() { super.didReceiveMemoryWarning() // Dispose of any resources that can be recreated. } // MARK: - Table view data source override func tableView(_ tableView: UITableView, numberOfRowsInSection section: Int) -> Int { // #warning Incomplete implementation, return the number of rows return array.count } override func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell { let cell = tableView.dequeueReusableCell(withIdentifier: "cellID", for: indexPath) as! MyCell return cell } } class MyCell: UITableViewCell { var button: UIButton = { let button = UIButton(type: UIButtonType.system) button.setTitle("Button", for: UIControlState.normal) button.addTarget(self, action: #selector(presentWebBrowser), for: .touchUpInside) // button.titleLabel?.font = UIFont.boldSystemFont(ofSize: 8) return button }() override init(style: UITableViewCellStyle, reuseIdentifier: String?) { super.init(style: style, reuseIdentifier: reuseIdentifier) addSubview(button) button.translatesAutoresizingMaskIntoConstraints = false button.topAnchor.constraint(equalTo: topAnchor).isActive = true button.leftAnchor.constraint(equalTo: leftAnchor).isActive = true button.bottomAnchor.constraint(equalTo: bottomAnchor).isActive = true } required init?(coder aDecoder: NSCoder) { fatalError("init(coder:) has not been implemented") } @objc func presentWebBrowser(sender: UIButton!) { print("tapped") } } ```<issue_comment>username_1: First remove the add target in lazy initialization of button variable ``` var button: UIButton = { let button = UIButton(type: UIButtonType.system) button.setTitle("Button", for: UIControlState.normal) return button }() ``` Modify your code in init as ``` required init?(coder aDecoder: NSCoder) { super.init(coder: aDecoder) addSubview(button) button.addTarget(self, action: #selector(presentWebBrowser), for: .touchUpInside) button.translatesAutoresizingMaskIntoConstraints = false button.topAnchor.constraint(equalTo: topAnchor).isActive = true button.leftAnchor.constraint(equalTo: leftAnchor).isActive = true button.bottomAnchor.constraint(equalTo: bottomAnchor).isActive = true } ``` Note use of addTarget on `button` outside the lazy initialization. Upvotes: 3 [selected_answer]<issue_comment>username_2: the solution is very simple. add below code in `init` function: ``` button.addTarget(self, action: #selector(presentWebBrowser), for: .touchUpInside) ``` that's it! Upvotes: 0 <issue_comment>username_3: Must be in a lazy declaration or after the initializer: ``` lazy var button: UIButton = { let button = UIButton(type: .system) button.setTitle("Button", for: UIControlState.normal) button.addTarget(self, action: #selector(presentWebBrowser), for: .touchUpInside) return button }() ``` This is because the target view must finish initialization before a selector can be bound to it. 
Upvotes: 2 <issue_comment>username_4: Well, maybe I can help with what worked for me so far: Create UIButton --------------- ``` lazy var _actionButton: UIButton = { let button = UIButton() button.setImage(UIImage(named: "reset"), for: .normal) button.translatesAutoresizingMaskIntoConstraints = false button.addTarget(self, action: #selector(handleAction), for: .touchUpInside) return button }() ``` @objc Function -------------- ``` @objc func handleAction(){ print("test reset via action button...") } ``` In the table view cell class, specifically in the `init` method, instead of using `addSubview(_actionButton)` use `contentView.addSubview(_actionButton)` **Cheers** Upvotes: 1
2018/03/18
1,216
4,872
<issue_start>username_0: I use the library *multiprocessing* in a *flask*-based web application to start long-running processes. The function that does it is the following: ``` def execute(self, process_id): self.__process_id = process_id process_dir = self.__dependencies["process_dir"] self.fit_dependencies() process = Process(target=self.function_wrapper, name=process_id, args=(self.__parameters, self.__config, process_dir,)) process.start() ``` When I want to deploy some code on this web application, I restart a service that restarts *gunicorn*, served by *nginx*. My problem is that this restart kills all children processes started by this application as if a *SIGINT* signal were sent to all children. How could I avoid that ? **EDIT:** After reading [this post](https://stackoverflow.com/questions/21665341/python-multiprocessing-and-independence-of-children-processes), it appears that this behavior is normal. The answer suggests to use the *subprocess* library instead. So I reformulate my question: how should I proceed if I want to start long-running tasks (which are python functions) in a python script and make sure they would survive the parent process **OR** make sure the parent process (which is a gunicorn instance) would survive a deployement ? **FINAL EDIT:** I chose @noxdafox answer since it is the more complete one. First, using process queuing systems might be the best practice here. Then as a workaround, I can still use *multiprocessing* but using the *python-daemon* context (see [here](https://dpbl.wordpress.com/2017/02/12/a-tutorial-on-python-daemon/) ans [here](https://www.python.org/dev/peps/pep-3143/)) inside the function wrapper. Last, @Rippr suggests using *subprocess* with a different process group, which is cleaner than forking with *multiprocessing* but involves having standalone functions to launch (in my case I start specific functions from imported libraries).<issue_comment>username_1: I would recommend against your design as it's quite error prone. Better solutions would de-couple the workers from the server using some sort of queuing system (`RabbitMQ`, `Celery`, `Redis`, ...). Nevertheless, here's a couple of "hacks" you could try out. 1. Turn your child processes into UNIX daemons. The [python daemon](https://pypi.python.org/pypi/python-daemon/) module could be a starting point. 2. Instruct your child processes to ignore the `SIGINT` signal. The service orchestrator might work around that by issuing a `SIGTERM` or `SIGKILL` signal if child processes refuse to die. You might need to disable such feature. To do so, just add the following line at the beginning of the `function_wrapper` function: ``` signal.signal(signal.SIGINT, signal.SIG_IGN) ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: Ultimately, this problem comes down to misunderstanding of what it means to do the deployment. In non-enterprise languages like Python (compared to enterprise ones like Erlang), it is generally understood that deployment wipes out any of the preceding artefacts of running the process. As such, it would clearly be a bug if your old children/functions don't actually terminate once a new deployment is performed. To play the devil's advocate, it is even unclear from your question/spec of what your actual expectation for the deployment is β€” do you simply expect your old "functions" to run forever? How do those functions get started in the first place? 
Who's supposed to know whether or not those "functions" were modified in a given deployment, and whether or not they're supposed to be restarted and in what fashion? A lot of consideration to these very questions is given in Erlang/OTP (which are unrelated to Python), and, as such, you can't simply expect the machine to read your mind when you use a language like Python that's not even designed for such a use-case. As such, it may be a better option to separate the long-running logic from the rest of the code, and perform the deployment appropriately. As the other answer mentions, this may involve spawning a separate [UNIX `daemon`](http://mdoc.su/-/daemon.3) directly from within Python, or maybe even using an entirely separate logic to handle the situation. Upvotes: 1 <issue_comment>username_3: Adding on to @username_1's excellent [answer](https://stackoverflow.com/a/49406652/6740515), I think you can consider this alternative: ``` subprocess.Popen(['nohup', 'my_child_process'], preexec_fn=os.setpgrp) ``` Basically, the child processes are killed because they belong to the same process group as the parent. By adding the `preexec_fn=os.setpgrp` parameter, you are simply requesting the child processes to spawn in their own process groups which means they will not receive the terminate signal. Explanation taken from [here](https://stackoverflow.com/a/16928558/6740515). Upvotes: 2
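The python-daemon workaround mentioned in the final edit is only described in prose above, so here is a minimal sketch of what it might look like. The function and argument names (`long_running_task`, `function_wrapper`, `parameters`) are illustrative assumptions rather than the actual project code, and the snippet only shows the shape of the idea: detach the child inside the wrapper so that a gunicorn restart no longer tears it down.

```
import daemon  # from the python-daemon package
from multiprocessing import Process

def long_running_task(parameters):
    # Placeholder for the real long-running work.
    ...

def function_wrapper(parameters):
    # Entering DaemonContext detaches this child from the parent's session
    # and process group, so signals sent to gunicorn on redeploy should no
    # longer reach it.
    with daemon.DaemonContext():
        long_running_task(parameters)

def execute(parameters):
    process = Process(target=function_wrapper, args=(parameters,))
    process.start()
```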
2018/03/18
1,289
5,223
<issue_start>username_0: Is it possible to setup Serilog minimum log level from environment variable? If I try to configure it like this ``` "Serilog": { "MinimumLevel": "%LOG_LEVEL%", "WriteTo": [ { "Name": "RollingFile", "Args": { "outputTemplate": "{Timestamp:yyyy-MM-dd HH:mm:ss.fff zzz} [{Level}] [v{SourceSystemInformationalVersion}] {Message}{NewLine}{Exception}", "pathFormat": "%LOG_FOLDER%/sds-osdr-domain-saga-host-{Date}.log", "retainedFileCountLimit": 5 } } ] } ``` it returns error > > The value %LOG\_LEVEL% is not a valid Serilog level. > > > Is it possible to propagate log level from environment variable somehow?<issue_comment>username_1: Not sure about using an environment variable in the config file, but it's easy to do from code. Here is a class that sets the logging level dynamically. You can read your environment variable and pass to: SetLoggingLevel ```cs internal static class SerilogConfig { private const int OneDayInMilliseconds = 24 * 60 * 60 * 1000; private static Timer ResetLogLevelTimer = null; public static LoggingLevelSwitch LoggingLevel { get; set; } static SerilogConfig() { LoggingLevel = new LoggingLevelSwitch(); LogEventLevel defaultLevel = LogEventLevel.Information; bool res = Enum.TryParse(Program.Configuration["DefaultLoggingLevel"], true, out defaultLevel); LoggingLevel.MinimumLevel = res ? defaultLevel : LogEventLevel.Information; } public static void Initialize(string serviceName) { var logConfig = new LoggerConfiguration(); logConfig.MinimumLevel.ControlledBy(LoggingLevel); logConfig.MinimumLevel.Override("Microsoft", LogEventLevel.Warning); logConfig.MinimumLevel.Override("System", LogEventLevel.Error); if (Debugger.IsAttached) { Serilog.Debugging.SelfLog.Enable(msg => Console.WriteLine(msg)); logConfig.WriteTo.Console(); } Log.Logger = logConfig.CreateLogger(); } public static void SetLoggingLevel(LogEventLevel minimumLevel) { if (minimumLevel == LoggingLevel.MinimumLevel) { Log.Verbose("Requested log verbosity level change to the same level. 
No action taken."); return; } Log.Warning("Changing log verbosity level from {originalLevel} to {newLevel}", LoggingLevel.MinimumLevel.ToString(), minimumLevel.ToString()); LoggingLevel.MinimumLevel = minimumLevel; if (minimumLevel != LogEventLevel.Information) { int resetLogLevelTimeout = Int32.Parse(Program.Configuration["DetailedLoggingTimeDays"]) \* OneDayInMilliseconds; ResetLogLevelTimer = new Timer(resetLogLevelTimerCallback, null, resetLogLevelTimeout, Timeout.Infinite); } else { if (ResetLogLevelTimer != null) { ResetLogLevelTimer.Dispose(); ResetLogLevelTimer = null; } } } private static void resetLogLevelTimerCallback(object value) { if (LoggingLevel.MinimumLevel != LogEventLevel.Information) { Log.Warning("AUTO RESET: Changing log verbosity level from {0} back to Information", LoggingLevel.MinimumLevel); LoggingLevel.MinimumLevel = LogEventLevel.Information; ResetLogLevelTimer.Dispose(); ResetLogLevelTimer = null; } } } ``` Upvotes: 2 <issue_comment>username_2: After some thinking I ended up with the tiny class below ``` public class EnvironmentVariableLoggingLevelSwitch : LoggingLevelSwitch { public EnvironmentVariableLoggingLevelSwitch(string environmentVariable) { LogEventLevel level = LogEventLevel.Information; if (Enum.TryParse(Environment.ExpandEnvironmentVariables(environmentVariable), true, out level)) { MinimumLevel = level; } } } ``` and using it when configure logger ``` Log.Logger = new LoggerConfiguration() .ReadFrom.Configuration(Configuration) .MinimumLevel.ControlledBy(new EnvironmentVariableLoggingLevelSwitch("%LOG_LEVEL%")) .CreateLogger(); ``` So, if you don't declare environment variable you still may configure logging level from config file, or override it with environment variable. Upvotes: 3 <issue_comment>username_3: I think you are asking about [configuration by environment](https://learn.microsoft.com/en-us/aspnet/core/fundamentals/configuration/?tabs=basicconfiguration#configuration-by-environment) which is not specific to serilog. If the `LOG_LEVEL` is fixed with the specific environment (development, staging or production), you can set the each `LOG_LEVEL` in `appsettings..json`, and set configuration like this: ``` var config = new ConfigurationBuilder() .SetBasePath(Directory.GetCurrentDirectory()) .AddJsonFile("appsettings.json", optional: false) .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true) .Build(); ``` If you need to config the `LOG_LEVEL` from environment variable in docker-compose file or kubernetes deployment file, then you can read values from environment variables by calling `AddEnvironmentVariables`: ``` var config = new ConfigurationBuilder() .SetBasePath(Directory.GetCurrentDirectory()) .AddJsonFile("appsettings.json", optional: false) .AddEnvironmentVariables() .Build(); ``` And set the environment `Serilog:MinimumLevel` in windows or ``Serilog\_\_MinimumLevel` in linux and mac. Upvotes: 3
2018/03/18
1,060
3,478
<issue_start>username_0: I'm new to Python and trying to verify if a given port number is valid or not. **1st Attempt** ``` PortNumber = input("Enter a port number: ") if PortNumber.isdigit() == True: print("This is a VALID port number.") else: print("This is NOT a valid port number.") ``` **Output** ``` C:\> python test.py Enter a port number: a This is NOT a valid port number. C:\> python test.py Enter a port number: -1 This is NOT a valid port number. C:\> python test.py Enter a port number: 8 This is a VALID port number. C:\> python test.py Enter a port number: 88888 This is a VALID port number. C:\> ``` The only problem with this code is the port number has to be an integer between `1-65535`. **2nd attempt** ``` PortNumber = int(input("Enter a port number: ")) if 1<= PortNumber <= 65535: print('This is a VALID port number.') else: print('This is NOT a valid port number.') ``` **Output** ``` C:\> python test2.py Enter a port number: 65535 This is a VALID port number. C:\> python test2.py Enter a port number: 65536 This is NOT a valid port number. C:\> python test2.py Enter a port number: -1 This is NOT a valid port number. C:\> python test2.py Enter a port number: a Traceback (most recent call last): File "test2.py", line 1, in PortNumber = int(input("Enter a port number: ")) ValueError: invalid literal for int() with base 10: 'a' C:\> ``` I managed to filter out the numbers between `1-65535` in second code, however there is another problem with the `a` character. How can I combine both ideas in the code?<issue_comment>username_1: You can try ``` try: port = int(input("Enter a port number: ")) if 1 <= port <= 65535: print("This is a VALID port number.") else: raise ValueError except ValueError: print("This is NOT a VALID port number.") ``` Upvotes: 2 <issue_comment>username_2: You can combine your two approaches like: ``` PortNumber = input("Enter a port number: ") if PortNumber.isdigit() and 1 <= int(PortNumber) <= 65535: print("This is a VALID port number.") else: print("This is NOT a valid port number.") ``` Upvotes: 0 <issue_comment>username_3: You might want to do this: ``` while True: try: PortNumber = int(input("Enter a port number: ")) except ValueError: print("Error: expect an integer. Try again.") continue else: break if 1 <= PortNumber <= 65535: print('This is a VALID port number.') else: print('This is NOT a valid port number.') ``` The `while` loop won't let the user any further until an integer is entered. Also, there's no need to go for `is_digit()` in this approach. Upvotes: 0 <issue_comment>username_4: your solution does not check for conditions such as what if a `float` or some other datatype is entered ``` PortNumber = input("Enter a port number: \n") if not(type(PortNumber) == int): print("This is NOT a valid port number.") elif 1<= PortNumber <= 65535: print('This is a VALID port number.') else: print("This is NOT a valid port number.") ``` Upvotes: 0 <issue_comment>username_5: Building further I would probably separate the conditions for readability: ``` PortNumber = input("Enter a port number: ") cond1 = PortNumber.isdigit() # True/False cond2 = (1 <= int(PortNumber) <= 65535) # True/False if cond1 and cond2: print("This is a VALID port number.") else: print("This is NOT a valid port number.") ``` Upvotes: 2
2018/03/18
770
2,618
<issue_start>username_0: I am trying to get the links I have routed to open in a new window using the "attr" feature in jQuery with no luck. Here is the client site: <http://www.sunsetstudiosent.com> working in the "News" section. Additionally is there a way to differentiate and apply this only to links that link offsite? ``` var sBlock = $('[data-block-json\*="9dac9e62587eab820575"]'), items = sBlock.find('.summary-item'); $.each(items, function() { var $this = $(this), itemLink = $this.find('.summary-title a').attr('href'); $this.find('.summary-read-more-link').attr('href', itemLink); $this.find('.summary-metadata-item a').attr('href', itemLink); $(this).attr('target', '\_blank'); }); ```
2018/03/18
703
2,367
<issue_start>username_0: I've got a list of path from a folder, so I have: 1. `folder/subfolder1/file1` 2. `folder/subfolder1/file2` 3. `folder/subfolder2/file1` 4. `folder/subfolder2/file2` 5. `folder/subfolder3/file1` 6. `folder/subfolder3/file2` etc. From this list of path I want iteratively extract the element `file1`, `file2`, `file1`, `file2` from my first list as a separate list. It's always the `element [2]` but I'm not understanding how to iterate
2018/03/18
1,041
3,458
<issue_start>username_0: Title says it all, I'm trying to remove an specific entry, for example an entry ofwhich the users id is 1. I've made a [JSFiddle](https://jsfiddle.net/eqntaqbt/8/) to demostrate how it isn't working, for some reason not every message gets removed but there's always one left. ``` var messages = [ { user:1, message:'hello' }, { user:1, message:'hello' }, { user:1, message:'hello' } ]; messages.forEach(function(message, index){ console.log(message.message); if(message.user === 1){ console.log('remove this message!'); messages.splice(index, 1); } }); console.log(messages); ```<issue_comment>username_1: The best approach is using the function **[`filter`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/filter)**: ```js var messages = [{ user: 1, message: 'hello' }, { user: 1, message: 'hello' }, { user: 3, message: 'hello' }], result = messages.filter(({user}) => user !== 1); console.log(result); ``` ```css .as-console-wrapper { max-height: 100% !important; top: 0; } ``` **Your approach using the function [`splice`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/splice):** *The problem is the modification you're applying to the array using the function **[`splice`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/splice)**, the length of that array is modified as well.* An alternative is decreasing the index using a `for-loop`. ```js var messages = [{ user: 6, message: 'hello' },{ user: 1, message: 'hello' }, { user: 3, message: 'hello' }, { user: 1, message: 'hello' }, { user: 4, message: 'hello' }]; for (var i = 0; i < messages.length; i++) { if (messages[i].user === 1) messages.splice(i--, 1); } console.log(messages); ``` ```css .as-console-wrapper { max-height: 100% !important; top: 0; } ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: You're removing items from the array while iterating through it, which is confusing the iteration: The first step of the iteration removes the first element in the array, shifting every remaining element by one. The second step of the iteration looks at the second element in the array... which used to be the third element before you shortened the array. The iterator therefore never sees the (original) second element. One way to work around this this is to start at the end of the array and work towards the beginning; that way changes in the array length don't matter, as they'll only affect elements you've already iterated over. ```js var messages = [ { user:1, message:'A' }, { user:1, message:'B' }, { user:1, message:'C' } ]; for (var i=messages.length-1; i>-1; i--) { var message = messages[i]; if(message.user === 1){ console.log('remove this message!'); messages.splice(i, 1); } } console.log(messages); ``` Upvotes: 1 <issue_comment>username_3: ```js var messages = [ { user:1, message:'hello' }, { user: 2, message:'hello' }, { user: 3, message:'hello' } ]; var newMessages = messages.filter(message => message.user !== 1); console.log(newMessages); ``` With your method, you're modifying the array on the fly which led to inconsistencies. Upvotes: 0
2018/03/18
753
2,590
<issue_start>username_0: I'm trying to created a dynamic form in rails with a little bit of javascript I have a problem I only get one row in the output when using `pry` apparently it's because I have the same params for every field input since I use jQuery .clone, maybe someone has struggled with something similar can share some knowledge, how to dynamically add index to params in this form with javascript ? Thanks. [![dynamic form](https://i.stack.imgur.com/wjVZb.png)](https://i.stack.imgur.com/wjVZb.png) jQuery to clone the element ``` $(document).on("click", ".button-remove", function(e) { $(this).closest(".duplicate").remove(); e.preventDefault(); }); $(".btn-add").click(function(e) { e.preventDefault(); let cloned = $(".duplicate:last").clone(); cloned.insertBefore(this); cloned.find(":text").val(""); }); ``` HTML to clone ``` Number of items Category EntrΓ©e Plat Dessert Softs Alcool Autres [Remove item](#) Add new row ```<issue_comment>username_1: Why not use a hidden field? Somewhere in your html i.e. ``` ``` so something like: ``` Number of items Category EntrΓ©e Plat Dessert Softs Alcool Autres [Remove item](#) ``` You'll need to update your JS to increment the value of your hidden field but that should be easy. Upvotes: 1 <issue_comment>username_2: I think what you try to do is done in this Railscasts episode about dynamic forms: <http://railscasts.com/episodes/403-dynamic-forms> For nested dynamic forms (that I think it is not your case but just in case), you can use [cocoon gem](https://github.com/nathanvda/cocoon). Upvotes: 1 <issue_comment>username_3: I actually managed to solve this one by changed the name input of each field, I simply added an index, I'm sure there is a cleaner solution but until something better this works ! ``` function addParams(){ let items = $('.duplicate').length; items -= 1; $('.duplicate:last input.item_name').attr('name', function() { return `contribution[item_name_${items}]`; }); $('.duplicate:last input.item_quantity').attr('name', function() { return `contribution[item_quantity_${items}]`; }); $('.duplicate:last select.item_type').attr('name', function() { return `contribution[item_type_${items}]`; }); } ``` binding.pry ``` "contribution"=>{ "item_name"=>"un", "item_quantity"=>"1", "item_type"=>"1", "item_name_1"=>"deux", "item_quantity_1"=>"1", "item_type_1"=>"2", "item_name_2"=>"trois", "item_quantity_2"=>"1", "item_type_2"=>"3" } ``` Upvotes: -1 [selected_answer]
2018/03/18
466
1,672
<issue_start>username_0: In Android Studio I integrated an ImageView with an unwanted drop shadow, which I can't seem to get rid of. How do you make the picture blend into the background? I tried setting the background of the button to transparent and `android:shadowRadius="0"` did not work. [![My ImageView](https://i.stack.imgur.com/sorlS.png)](https://i.stack.imgur.com/sorlS.png) My .xml file ``` ```<issue_comment>username_1: Change the `android:background="@color/colorWhiteText"` to `android:background="#ffffffff"` as the background is important here. ``` ``` Maybe better form to set this in res\values\colors.xml: ``` #FFFFFFFF ``` and use: ``` android:background="@color/windowBackground" ``` Tests **ok** for me (no drop shadow) with dependencies: ``` implementation 'com.android.support.constraint:constraint-layout:1.0.2' ``` Upvotes: 0 <issue_comment>username_2: I just found my stupid solution to this question. Make sure the picture itself has no drop shadow, which somehow wasn't the case. I was sure I had the picture as I wanted it, but somehow in the process a drop shadow was created. Upvotes: 3 [selected_answer]<issue_comment>username_3: 1: Check the value it is set to ``` android:background="@color/colorWhiteText" ``` 2: If your image is a PNG, try using the below ``` android:src="@drawable/logo_174" ``` 3: If the image is a vector, `srcCompat` is used for vector image support on devices below API level 21 ``` android { defaultConfig { vectorDrawables.useSupportLibrary = true } } ``` Note: The code shared by you is not giving any drop shadow when I tried. Upvotes: 0
2018/03/18
986
2,161
<issue_start>username_0: i have a three columns table of 20k lines. 1st column: list of gene IDs (there can be duplicated IDs) 2nd column: a constant string 3rd column: a value What i want is to rank my list leaving with only unique gene IDs. For the duplicated gene IDs i want to leave only the ones with the highest score. here an example, Thanks in advance ``` TMCS09g1008699 ensembl 6.4 TMCS09g1008671 ensembl 6.4 TMCS09g1008672 ensembl 6.5 TMCS09g1008673 ensembl 6 TMCS09g1008674 ensembl 5.4 TMCS09g1008675 ensembl 5.4 TMCS09g1008676 ensembl 4.9 TMCS09g1008677 ensembl 4.6 TMCS09g1008677 ensembl 4.4 TMCS09g1008679 ensembl 4.3 TMCS09g1008680 ensembl 3.9 TMCS09g1008681 ensembl 3.8 TMCS09g1008682 ensembl 3.6 TMCS09g1008683 ensembl 3.5 TMCS09g1008684 ensembl 3.5 TMCS09g1008685 ensembl 3.4 TMCS09g1008686 ensembl 3.4 TMCS09g1008687 ensembl 3.4 TMCS09g1008688 ensembl 3 TMCS09g1008689 ensembl 2.6 TMCS09g1008690 ensembl 2 TMCS09g1008699 ensembl 5.9 ```<issue_comment>username_1: Could you please try following `awk` and let me know if this helps you. ``` awk '{b[$1]=a[$1]>$NF?b[$1]?b[$1]:$0:$0;a[$1]=a[$1]>$NF?a[$1]:$NF;} END{for(i in a){print b[i]}}' Input_file ``` Adding a non-one liner form of solution too now. ``` awk ' { b[$1]=a[$1]>$NF?b[$1]?b[$1]:$0:$0; a[$1]=a[$1]>$NF?a[$1]:$NF} END{ for(i in a){ print b[i]} } ' Input_file ``` Upvotes: 0 <issue_comment>username_2: You could use awk for this: * Store the highest score per gene ID in an array + Scan the input + If the score is higher than what was seen before, replace it * In the end, print the content of the array Here's one way to do that: ``` awk '{ m[$1] = m[$1] > $3 ? m[$1] : $3; } END { for (i in m) print i, "ensembl", m[i] }' file ``` If you would like to see the output sorted by gene ID, then simply pipe the above awk to `sort`. Upvotes: 0 <issue_comment>username_3: You can just use `sort`: ``` sort -k3rn file | sort -u -k1,1 ``` The first sort sorts the file by the 3rd column (`k3`) numerically (`n`) in descending order (`r`), the second one uniques the output based on the first column. Upvotes: 3 [selected_answer]
2018/03/18
1,061
3,752
<issue_start>username_0: My app displays the price of something when it boots and I have the list of currencies hidden. When the user taps the price, I want to it reveal the list of currencies hidden below and then hide them again after one is selected. Can't figure out how though, not finding any Swift code for a tap gesture recogniser? I'm you could just do something like priceLabel.isTappedUp = blah blah, years ago, maybe it was Objec C. Any ideas? Code below: ``` class ViewController: UIViewController, UIPickerViewDelegate, UIPickerViewDataSource { let baseURL = "https://apiv2.bitcoinaverage.com/indices/global/ticker/BTC" // API let currencyArray = ["AUD", "BRL","CAD","CNY","EUR","GBP","HKD","IDR","ILS","INR","JPY","MXN","NOK","NZD","PLN","RON","RUB","SEK","SGD","USD","ZAR"] // List of currencies let currencySymbolArray = ["$", "R$", "$", "Β₯", "€", "Β£", "$", "Rp", "β‚ͺ", "β‚Ή", "Β₯", "$", "kr", "$", "zΕ‚", "lei", "β‚½", "kr", "$", "$", "R"] // Currency symbols var currencySelected = "" var finalURL = "" // Pre-setup IBOutlets @IBOutlet weak var priceLabel: UILabel! @IBOutlet weak var currencyPicker: UIPickerView! override func viewDidLoad() { super.viewDidLoad() currencyPicker.delegate = self currencyPicker.dataSource = self currencyPicker.selectRow(5, inComponent:0, animated:false) // Select default currency choice to Β£ // Print out the default row price finalURL = baseURL + currencyArray[5] print(finalURL) currencySelected = currencySymbolArray[5] getBitcoinData(url: finalURL) currencyPicker.isHidden = true priceLabel.isUserInteractionEnabled = true } override func didReceiveMemoryWarning() { super.didReceiveMemoryWarning() } // Number of columns func numberOfComponents(in pickerView: UIPickerView) -> Int { return 1 } // Number of rows func pickerView(_ pickerView: UIPickerView, numberOfRowsInComponent component: Int) -> Int { return currencyArray.count // Number of rows = the amount in currency array } // Row Title func pickerView(_ pickerView: UIPickerView, titleForRow row: Int, forComponent component: Int) -> String? { return currencyArray[row] } func pickerView(_ pickerView: UIPickerView, didSelectRow row: Int, inComponent component: Int) { finalURL = baseURL + currencyArray[row] print(finalURL) currencySelected = currencySymbolArray[row] getBitcoinData(url: finalURL) } ```<issue_comment>username_1: You can try gesture ``` let tapRound = UITapGestureRecognizer(target: self, action: #selector(self.handleTap(_:))) priceLabel.isUserInteractionEnabled = true priceLabel.addGestureRecognizer(tapRound) ``` // ``` @objc func handleTap(_ sender: UITapGestureRecognizer? = nil) { self.currencyPicker.isHidden = false } ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: You can use `UIPickerView` as inputView for `UITextField` but with `UITapGestureRecognizer` and by keeping your `UILabel`, Define the following variables: ``` var currencyPicker = UIPickerView() var textField : UITextField! 
``` And then add the gesture to the label in `viewDidLoad` : ``` let tapGesture = UITapGestureRecognizer(target: self, action: #selector(displayPickerView)) tapGesture.numberOfTapsRequired = 1 priceLabel.addGestureRecognizer(tapGesture) ``` Finally add the gesture handler to display the picker, this function will create a hidden `UITextField` ones: ``` @objc private func displayPickerView(){ if textField == nil { self.textField = UITextField(frame:.zero) textField.inputView = self.currencyPicker self.view.addSubview(textField) } textField.becomeFirstResponder() } ``` let me know if that helps you. Upvotes: 2
2018/03/18
531
2,017
<issue_start>username_0: My Tcl application will work with a [hypergraph](https://en.wikipedia.org/wiki/Hypergraph), i.e. edges having single-start-multiple-end or just many-ends. Looking at available live implementations I see ::struct::graph as a potential alternative. It seem however to be limited to single-start-single-end. Is there a (preferably trivial) way of expressing hyper-edges in ::struct::graph? If not, how could I extend ::struct::graph? (...maybe there is a better solution than ::struct::graph?)<issue_comment>username_1: > > (preferably trivial) way of expressing hyper-edges in ::struct::graph? > > > Not trivial, and not necessarily adequate: The only thing I could imagine right now is to use an arc's attributes to store additional pairs of source and targets for a given, well, hyperedge: ``` struct::graph myHyperGraph myHyperGraph node insert node0 myHyperGraph node insert node1 myHyperGraph node insert node2 myHyperGraph node insert node3 myHyperGraph arc insert node0 node1 harc0 myHyperGraph arc lappend harc0 ends [list node0 node2] myHyperGraph arc lappend harc0 ends [list node0 node3] ``` Based on this piggybacking, some processing operations should be doable with reasonable effort, e.g., into an incidence matrix. Upvotes: 1 <issue_comment>username_2: Gonna post some of my progress and thoughts, allowing people to post comments. I'm leaning towards inventing "hub-nodes". Whenever there is need of hyperedges I make all participating nodes have edges to the hub. For a one-to-many hyperedge I would add a hub, add one edge from start node to hub, then one edge each from hub to the end nodes. For many-to-many I would add one hub and then add edges to all participating nodes. The direction of edges affects how `walk` works. I'm not sure what to prefer but it is simple enough for deciding later. `delete` and `walk` commands will work nicely. The only issue is making a hub-node go away when its last edge is removed, but I belive this is trivial. Upvotes: 0
2018/03/18
1,822
5,667
<issue_start>username_0: I would like to identify the best process for producing summary text in a final report. ``` x <- tribble( ~year, ~service, ~account, ~amount, "2001", "Army", "operations", 5000000, "2001", "Navy", "operations", 1500000, "2002", "Army", "operations", 6000000, "2002", "Navy", "operations", 1700000, "2001", "Army", "repair", 500000, "2001", "Navy", "repair", 300000, "2002", "Army", "repair", 400000, "2002", "Navy", "repair", 600000) ``` Desired text, for each service. ``` "Between [year.min] and [year.max], the [service] spent an average of [average amount]. The largest account in terms of spending within the [service] was [account], which ranked [rank] and fluctuated between [min amount] and [max amount], with a high of [max amount] in [year] to a low of [min] in [year]." ``` Desired Output would be in a table. The process would repeated at many sublevels (account, sub-account, etc). ``` service summary_text 1 Army concatenated 2 Navy concatenated ``` Ultimately, I would like to export the result as an html table beside sparklines, which is fairly trivial in Excel. ``` service sparkline summary_text 1 Army sparkline concatenated text 2 Navy sparkline concatenated text ```<issue_comment>username_1: Using `dplyr` and `glue` with different strategies of grouping: ``` library(dplyr) library(glue) output <- x %>% group_by(service,account) %>% mutate(amount_sum = sum(amount)) %>% group_by(service) %>% mutate(average.amount=mean(amount)) %>% filter(amount_sum == max(amount_sum)) %>% summarize( year.min=min(year), year.max=max(year), average.amount=first(average.amount), account=first(account), rank=1, min.amount =min(amount), max.amount=max(amount), year.min.amount = year[which.min(amount)], year.max.amount = year[which.max(amount)]) %>% transmute(service, summary_text= glue("Between {year.min} and {year.max}, the {service} spent an average of {average.amount}. The largest account in terms of spending within the {service} was {account}, which ranked {rank} and fluctuated between {min.amount} and {max.amount}, with a high of {max.amount} in {year.max.amount} to a low of {min.amount} in {year.min.amount}.")) output %>% pull(summary_text) # Between 2001 and 2002, the Army # spent an average of 2975000. The largest account # in terms of spending within the Army was operations, # which ranked NA and fluctuated between 5e+06 # and 6e+06, with a high of 6e+06 in 2002 to # a low of 5e+06 in 2001. # Between 2001 and 2002, the Navy # spent an average of 1025000. The largest account # in terms of spending within the Navy was operations, # which ranked NA and fluctuated between 1500000 # and 1700000, with a high of 1700000 in 2002 to # a low of 1500000 in 2001. ``` You could use `paste` or `sprintf` instead of `glue` if you want to limit external library dependencies, but your example is more readable this way. I assumed `rank` was always `1` in this example. If you want to deal with subaccounts I suggest you use the same trick as I did, before the `summarize` call use `group_by` and `mutate`, so you can create a new column constant by group. Then call `first` in `summarize`. Upvotes: 3 [selected_answer]<issue_comment>username_2: <NAME>'s answer with sparklines. 
``` library(tidyverse) library(sparkline) library(formattable) library(glue) #Data x <- tribble( ~year, ~service, ~account, ~amount, "2001", "Army", "operations", 5000000, "2001", "Navy", "operations", 1500000, "2002", "Army", "operations", 6000000, "2002", "Navy", "operations", 1700000, "2001", "Army", "repair", 500000, "2001", "Navy", "repair", 300000, "2002", "Army", "repair", 400000, "2002", "Navy", "repair", 600000) # Assemble Text table <- x %>% group_by(service, year) %>% summarise(total = sum(amount)) %>% group_by(service) %>% summarise(mean_annual_service = mean(total), # years range first.year = min(year), last.year = max(year), # min and max years, amounts year.min= year[which.min(total)], year.max = year[which.max(total)], min.amount = total[which.min(total)], max.amount = total[which.max(total)]) %>% # Final Text mutate(Description = glue('Between {first.year} and {last.year}, the average spending in the {service} was ${prettyNum(mean_annual_service, big.mark = ",")}, with a high of ${prettyNum(max.amount, big.mark = ",")} in {year.max}, and a low of ${prettyNum(min.amount, big.mark = ",")} in {year.min}') ) %>% select(service, Description) # Add Sparkline x %>% group_by(service, year) %>% summarise(total = sum(amount)) %>% summarise( Sparkline = spk_chr( total, type = "line", chartRangeMin=min(total), chartRangeMax=max(total))) %>% left_join(table) %>% formattable() %>% as.htmlwidget() %>% spk_add_deps() ``` [![Text and Sparklines](https://i.stack.imgur.com/IYRnz.png)](https://i.stack.imgur.com/IYRnz.png) Upvotes: 0
2018/03/18
934
3,329
<issue_start>username_0: I'm using Swift 4 `Codable` and I'm receiving this JSON from my web service: ``` { "status": "success", "data": { "time": "00:02:00", "employees": [ { "id": 001, "name": "foo" }, { "id": 002, "name": "bar" } ] } } ``` I want to decode only employees array into employee objects (the time property will only be saved once), but nothing works. I read a lot of materials about Swift 4 `Codable` but don't get how can I decode this array. EDIT: My employee class: ``` import Foundation struct Employee: Codable { var id: Int var time: Date enum CodingKeys: String, CodingKey { case id = "id" case time = "time" } } ``` The request: ``` Alamofire.SessionManager.default.request(Router.syncUsers) .validate(contentType: ["application/json"]) .responseJSON { response in if response.response?.statusCode == 200 { guard let jsonDict = response as? Dictionary, let feedPage = Employee(from: jsonDict as! Decoder) else { return } guard let response = response.result.value as? [String: Any] else { return } guard let data = response["data"] as? [String: Any] else { return } guard let users = data["employees"] else { return } guard let usersData = try? JSONSerialization.data(withJSONObject: users, options: .prettyPrinted) else { return } guard let decoded = try? JSONSerialization.jsonObject(with: usersData, options: []) else { return } let decoder = JSONDecoder() guard let employee = try? decoder.decode(Employee.self, from: usersData) else { print("errorMessage") return } } else { print("errorMessage") } } ```<issue_comment>username_1: When using `Codable` you cannot decode inner data without decoding the outer. But it's pretty simple, for example you can omit all CodingKeys. ``` struct Root : Decodable { let status : String let data : EmployeeData } struct EmployeeData : Decodable { let time : String let employees : [Employee] } struct Employee: Decodable { let id: Int let name: String } ``` --- ``` let jsonString = """ { "status": "success", "data": { "time": "00:02:00", "employees": [ {"id": 1, "name": "foo"}, {"id": 2, "name": "bar"} ] } } """ do { let data = Data(jsonString.utf8) let result = try JSONDecoder().decode(Root.self, from: data) for employee in result.data.employees { print(employee.name, employee.id) } } catch { print(error) } ``` Upvotes: 4 [selected_answer]<issue_comment>username_2: You can solve this kind of JSON parsing problem very easily with [quicktype](https://app.quicktype.io/?l=swift&gist=ed798bf41236c555ed1f1e0e1ccdfd73). Just paste in your JSON on the left and you'll get types and serialization/deserialization code on the right, for a variety of languages. Here are the types it produces for your JSON: ``` struct Employees: Codable { let status: String let data: EmployeesData } struct EmployeesData: Codable { let time: String let employees: [Employee] } struct Employee: Codable { let id: Int let name: String } ``` Upvotes: 2
2018/03/18
721
2,465
<issue_start>username_0: I'm trying to run a `for` loop inside of a `while` loop, but for some reason it's not being run at all. Here's the code: ``` with open("Nominees_18.csv") as nominees: reader = csv.reader(nominees) next(reader) for c, row in enumerate(reader): print(row[0], row[1], c) while True: name = input("chose a number") print(4) for c, row in enumerate(reader): print(4) if str(c) in name: print(len([i for i in row if row[i] == "y"])) if input("chose another?") == ("no" or "No"): break ``` The script asks you for a number, then asks if you want to choose another number. I put `print(4)` to test the for loop and it doesn't come up. There's further code above this but I haven't included it as I don't think it's relevant, but if you want it, then let me know. I have no idea why this could be happening. Thanks.
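A likely reason the inner loop never runs is that `csv.reader` returns a one-pass iterator: the first `for` loop consumes it entirely, so the second `for` loop has nothing left to iterate over. Below is a minimal sketch of one way around that; it assumes the intent of the list comprehension is to count the `"y"` entries in the matching row, which is an interpretation of the posted code rather than a confirmed requirement.

```
import csv

with open("Nominees_18.csv") as nominees:
    reader = csv.reader(nominees)
    next(reader)             # skip the header row
    rows = list(reader)      # materialise the rows; the reader itself is one-pass

for c, row in enumerate(rows):
    print(row[0], row[1], c)

while True:
    name = input("chose a number")
    for c, row in enumerate(rows):   # iterate over the saved list, not the spent reader
        if str(c) in name:
            print(sum(1 for value in row if value == "y"))
    if input("chose another?").lower() == "no":
        break
```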
2018/03/18
1,423
3,862
<issue_start>username_0: I am using laravel mix to bundle my js libraries and codes. I am trying to use ES6 style of importing and use ES6 codes if possible. I need to also import jQuery and it's library. So, i have imported jQuery and bootstrap like this: ``` import "jquery"; import "bootstrap"; ``` At first when i import them i was getting: ``` Uncaught ReferenceError: jQuery is not defined at Object../node_modules/bootstrap/js/transition.js (vendor.js?id=74f7f0c463b407c6bdf5:2449) ``` which is due to the bootstrap not getting jQuery. To solve this i have added this configuration to replace $, jQuery with jquery ``` mix.webpackConfig(webpack => { return { plugins: [ new webpack.ProvidePlugin({ $: 'jquery', jQuery: 'jquery', 'window.jQuery': 'jquery', }) ] }; }); ``` This works for every scripts and libraries that requires jQuery. Now the problem is with the other scripts that we add without mixing using a single js or using blade section. ``` @section('scripts') window.onload = function() { if (window.jQuery) { // jQuery is loaded console.log("Yeah!"); } else { // jQuery is not loaded console.log("Doesn't Work"); } } $().ready(function () { console.log('works'); }) @endsection ``` The console error shows: ``` datatables:125 Uncaught ReferenceError: $ is not defined at datatables:125 (anonymous) @ datatables:125 vendor.js?id=2b5ccc814110031408ca:21536 jQuery.Deferred exception: $(...).uniform is not a function TypeError: $(...).uniform is not a function at HTMLDocument. (http://localhost/assets/admin/app.js?id=da888f45698c53767fca:18419:18) at mightThrow (http://localhost/assets/admin/vendor.js?id=2b5ccc814110031408ca:21252:29) at process (http://localhost/assets/admin/vendor.js?id=2b5ccc814110031408ca:21320:12) undefined jQuery.Deferred.exceptionHook @ vendor.js?id=2b5ccc814110031408ca:21536 process @ vendor.js?id=2b5ccc814110031408ca:21324 setTimeout (async) (anonymous) @ vendor.js?id=2b5ccc814110031408ca:21358 fire @ vendor.js?id=2b5ccc814110031408ca:20986 fireWith @ vendor.js?id=2b5ccc814110031408ca:21116 fire @ vendor.js?id=2b5ccc814110031408ca:21124 fire @ vendor.js?id=2b5ccc814110031408ca:20986 fireWith @ vendor.js?id=2b5ccc814110031408ca:21116 ready @ vendor.js?id=2b5ccc814110031408ca:21596 completed @ vendor.js?id=2b5ccc814110031408ca:21606 datatables:122 Doesn't Work vendor.js?id=2b5ccc814110031408ca:21545 Uncaught TypeError: $(...).uniform is not a function at HTMLDocument. (app.js?id=da888f45698c53767fca:18419) at mightThrow (vendor.js?id=2b5ccc814110031408ca:21252) at process (vendor.js?id=2b5ccc814110031408ca:21320) ``` The problem gets solved when i compile and mix the scripts using laravel mix but when i write same scripts in the blade or use without mixing it shows the jQuery / $ is not defined error. What is the best way in this case? The master page looks like this: ``` @yield('body') @section('scripts') @show @yield('footer') ```<issue_comment>username_1: According to [this answer](https://stackoverflow.com/a/55771405/6569224) you need to import `jquery` this way ``` window.$ = window.jQuery = require('jquery'); ``` Upvotes: 4 <issue_comment>username_2: It's an old question, however, this may help others finding answer for this. The vendor js takes time to load and the inline javascript we use in blade templates fires before vendor js is completely loaded. 
To fix this, wrap inline javascript in setTimeout function like below: ``` @push('scripts-or-whatever') setTimeout(function(){ // $, jQuery, Vue is all ready to use now $(document).ready(function(){ // code here }) }, 100) @endpush ``` Upvotes: -1 <issue_comment>username_3: before: ```html ``` after: ```html ``` in layout worked for me. Upvotes: 2
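A hedged sketch of the other common fix (this roughly mirrors the stock `bootstrap.js` entry file that ships with Laravel 5.x; the exact path and file name may differ in your project): expose jQuery on `window` inside the compiled bundle so that inline Blade scripts loaded afterwards can see `$`:

```js
// resources/assets/js/bootstrap.js (compiled into app.js by Laravel Mix)
window.$ = window.jQuery = require('jquery');
require('bootstrap');
```

The inline scripts still have to run after the compiled bundle, so keep the compiled `app.js` script tag above the `scripts` section in the layout.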
2018/03/18
519
1,260
<issue_start>username_0: I have a question: is there a function in Haskell that can help me solve this? I'm trying to receive as input a String with the form of a list and then convert it into an actual list in Haskell. For example: convert `"[ [1,1] , [2,2] ]"` into `[ [1,1] , [2,2] ]` Example 2: convert `"[ [ [1,1], [1,1] ], [ [2,2] , [2,2] ] , [ [1,1] ,[1,1] ] ]"` into `[ [ [1,1], [1,1] ], [ [2,2] , [2,2] ] , [ [1,1] ,[1,1] ] ]` Thanks in advance!<issue_comment>username_1: Yes, that function is `read`. If you specify what it should read in a type annotation, you will get the desired result, provided there is an instance of `Read` for that type. ``` read "[ [1,1] , [2,2] ]" :: [[Int]] -- [[1,1],[2,2]] read "[ [ [1,1], [1,1] ], [ [2,2] , [2,2] ] , [ [1,1] ,[1,1] ] ]" :: [[[Int]]] -- [[[1,1],[1,1]],[[2,2],[2,2]],[[1,1],[1,1]]] ``` Upvotes: 4 [selected_answer]<issue_comment>username_2: One option could be to use the Aeson package and treat the text as JSON data. ``` Prelude> :set -XOverloadedStrings Prelude> import Data.Aeson Prelude Data.Aeson> jsString = "[ [ [1,1], [1,1] ], [ [2,2] , [2,2] ] , [ [1,1] ,[1,1] ] ]" Prelude Data.Aeson> decode jsString :: Maybe [[[Int]]] Just [[[1,1],[1,1]],[[2,2],[2,2]],[[1,1],[1,1]]] ``` Upvotes: 1
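A related sketch for when the input might be malformed: `readMaybe` from `Text.Read` behaves like `read` but returns `Nothing` instead of raising an error:

```
import Text.Read (readMaybe)

parseNested :: String -> Maybe [[Int]]
parseNested = readMaybe

main :: IO ()
main = do
  print (parseNested "[ [1,1] , [2,2] ]")  -- Just [[1,1],[2,2]]
  print (parseNested "not a list")         -- Nothing
```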
2018/03/18
335
1,114
<issue_start>username_0: I'm building a string and sharing it in an email. The problem is that everything ends up on the same line (it works fine when the string is built on the server side). ``` var sharing_txt = ""; sharing_txt = sharing_txt.concat("hello"); sharing_txt = sharing_txt.concat("\n"); sharing_txt = sharing_txt.concat("hello"); window.location.href = "mailto:?subject="+'hello subject'+"&body="+sharing_txt; ``` console output: hello hello Email: hellohello<issue_comment>username_1: You can pass `%0D` as the new line (carriage return). ```js var sharing_txt = ""; sharing_txt = sharing_txt.concat("Hello"); sharing_txt = sharing_txt.concat("%0D"); sharing_txt = sharing_txt.concat("Ele"); document.getElementById('mailto').setAttribute('href', "mailto:?subject=" + 'hello subject' + "&body=" + sharing_txt); ``` ```html <a id="mailto" href="#">Send</a> ``` [![enter image description here](https://i.stack.imgur.com/dr7Xl.png)](https://i.stack.imgur.com/dr7Xl.png) Upvotes: 1 <issue_comment>username_2: Instead of `\n`, use the HTML line break `<br>`. The newline character is not recognised by the browser. Try it. Upvotes: 0
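A hedged alternative sketch is to let `encodeURIComponent` do the escaping; it turns `\n` into `%0A`, which most mail clients treat as a line break (the mailto RFC technically asks for `%0D%0A`):

```js
var body = "hello\nhello";
var href = "mailto:?subject=" + encodeURIComponent("hello subject") +
           "&body=" + encodeURIComponent(body);  // "\n" becomes "%0A"
window.location.href = href;
```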
2018/03/18
549
1,941
<issue_start>username_0: I have an interface ``` public interface IIdentity<T> { T GetUser(); } ``` I have a base class that implements the interface as an abstract method ``` public abstract class BaseUser<T> : IIdentity<T> { public string UserId { get; set; } public string AuthType { get; set; } public List Claims { get; set; } public abstract T GetUser(); } ``` In the class that inherits the base class ``` public class JwtUser : BaseUser { public string Sub { get; set; } } ``` I get an error "using the generic type BaseUser<T> requires 1 argument". What do I do here? Basically I'd like my user to inherit shared properties from the base class, which it does (I think), and to implement the generic method from the base class, as I'm going to have different types of users (JWT/Windows etc). I need to abstract away the GetUser method; hope that makes sense?<issue_comment>username_1: It should be like this; for reference: [Generic Classes (C# Programming Guide)](https://learn.microsoft.com/en-us/dotnet/csharp/programming-guide/generics/generic-classes) ``` public class JwtUser : BaseUser { public string Sub { get; set; } } ``` or ``` public class JwtUser : BaseUser { public string Sub { get; set; } } ``` and when you create an instance ``` var jwtUser = new JwtUser(); ``` or ``` class JwtUser : BaseUser { } ``` Either way, you must specify a value for the `T` type parameter, since the class is generic. For example, if you take a `List`, you must initialise it with a proper type: for an integer list, `List<int> intlist = new List<int>();` Upvotes: 1 <issue_comment>username_2: You have two ways to implement this; both require setting the generic type parameter on BaseUser. You could expose that generic: ``` public class JwtUser<T> : BaseUser<T> { public string Sub { get; set; } } ``` Or just set the generic: ``` public class JwtUser : BaseUser<JwtUser> { public string Sub { get; set; } } ``` Upvotes: 3 [selected_answer]
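A self-contained sketch of how the pieces can fit together (the `GetUser` body and the self-referential type argument are illustrative assumptions, not from the original post):

```
public interface IIdentity<T>
{
    T GetUser();
}

public abstract class BaseUser<T> : IIdentity<T>
{
    public string UserId { get; set; }
    public string AuthType { get; set; }
    public abstract T GetUser();
}

// Close the generic over the concrete user type so GetUser() returns a JwtUser.
public class JwtUser : BaseUser<JwtUser>
{
    public string Sub { get; set; }
    public override JwtUser GetUser() => this;
}
```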
2018/03/18
459
1,737
<issue_start>username_0: I have two files for deployment, 1) `deploymentpackage.zip` -> It contains the database package with few shell scripts. 2) `deployment.sh` -> It is the primary shell script which first unzips the deploymentpackage.zip and then execute the list of shell files inside it. It is working as expected. But what I need is, I need to make the zip file as executable so that I dont want to deliver both `deploymentpackage.zip` and `deployment.sh` to client. So Is it possible to make the `deploymentpackage.zip` as executable so that I don't want to have another script deployment.sh. Expectation : Running this `deploymentpackage.zip` should unzip the same file and run the list of scripts inside it.<issue_comment>username_1: Write a readme file, and ask your users to chmod the script, then to execute it. For security reason I hope there is no way to auto-execute such things... Edit: received a vote down because the OP did not like it, thanks a lot :) Upvotes: 0 <issue_comment>username_2: If it's ok to assume that the user who will run the script has the `unzip` utility, then you can create a script like this: ``` #!/usr/bin/env bash # commands that you need to do ... # ... unzip <(tail -n +$((LINENO + 2)) "$0") exit ``` Make sure the script has a newline `\n` character at the end of the line of `exit`. And, it's important that the last line of the script is the `exit` command, and that the `unzip` command with `tail` is right in front of it. Then, you can append to this file the zipped content, for example with: ``` cat file.zip >> installer.sh ``` Users will be able to run `installer.sh`, which will `unzip` the zipped content at the end of the file. Upvotes: 5 [selected_answer]
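A slightly more defensive sketch of the accepted idea (the marker name, temp-file handling, and extraction directory are assumptions; `unzip` generally cannot read from a pipe, so the payload is copied to a seekable temp file first):

```
#!/usr/bin/env bash
# Installer stub: everything after the __ARCHIVE_BELOW__ line is the zip payload.
set -euo pipefail

payload_line=$(awk '/^__ARCHIVE_BELOW__$/ { print NR + 1; exit }' "$0")
tmpzip=$(mktemp)
tail -n +"$payload_line" "$0" > "$tmpzip"
unzip -o "$tmpzip" -d ./deploymentpackage
rm -f "$tmpzip"
# ... run the shell scripts that were inside the archive here ...
exit 0
__ARCHIVE_BELOW__
```

It is assembled the same way as above, e.g. `cat stub.sh deploymentpackage.zip > installer.sh && chmod +x installer.sh`.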
2018/03/18
1,892
4,879
<issue_start>username_0: It is possible to sum the values of an array if they are the same like this: ``` var COLLECTION = [ { "coords":[1335,2525], "items":[ {id: "boletus",qty: 1}, {id: "lepiota",qty: 3}, {id: "boletus",qty: 2}, {id: "lepiota",qty: 4}, {id: "carbonite",qty: 4}, ], }, { "coords":[1532,2889], "items":[ {id: "boletus",qty: 2}, {id: "lepiota",qty: 6}, {id: "boletus",qty: 1}, {id: "lepiota",qty: 4}, {id: "chamomile",qty: 4}, ], }] ``` To return something like this: ``` var COLLECTION = [ { "coords":[1335,2525], "items":[ {id: "boletus",qty: 3}, {id: "lepiota",qty: 7}, {id: "carbonite",qty: 4}, ], }, { "coords":[1532,2889], "items":[ {id: "boletus",qty: 3}, {id: "lepiota",qty: 10}, {id: "chamomile",qty: 4}, ], }] ``` Wihout losing the other parts of the array? (doing by hand is hard because I have more than 10 thousand duplicates like the example above, and the array have 600 thousand entries.<issue_comment>username_1: You could use `map()` to create new array and inside `reduce()` to group `items` objects by id and sum qty. ```js var data = [{"coords":[1335,2525],"items":[{"id":"boletus","qty":1},{"id":"lepiota","qty":3},{"id":"boletus","qty":2},{"id":"lepiota","qty":4},{"id":"carbonite","qty":4}]},{"coords":[1532,2889],"items":[{"id":"boletus","qty":2},{"id":"lepiota","qty":6},{"id":"boletus","qty":1},{"id":"lepiota","qty":4},{"id":"chamomile","qty":4}]}] const result = data.map(function({coords, items}) { return {coords, items: Object.values(items.reduce(function(r, e) { if(!r[e.id]) r[e.id] = Object.assign({}, e) else r[e.id].qty += e.qty return r; }, {}))} }) console.log(result) ``` Upvotes: 2 <issue_comment>username_2: You can use the functions **[`forEach`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/forEach)** and **[`reduce`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/reduce)** *This approach mutates the original array* ```js var COLLECTION = [ { "coords":[1335,2525], "items":[ {id: "boletus",qty: 1}, {id: "lepiota",qty: 3}, {id: "boletus",qty: 2}, {id: "lepiota",qty: 4}, {id: "carbonite",qty: 4}, ], }, { "coords":[1532,2889], "items":[ {id: "boletus",qty: 2}, {id: "lepiota",qty: 6}, {id: "boletus",qty: 1}, {id: "lepiota",qty: 4}, {id: "chamomile",qty: 4}, ], }]; COLLECTION.forEach((o) => { o.items = Object.values(o.items.reduce((a, c) => { (a[c.id] || (a[c.id] = {id: c.id, qty: 0})).qty += c.qty; return a; }, {})); }); console.log(COLLECTION); ``` ```css .as-console-wrapper { max-height: 100% !important; top: 0; } ``` **If you want to create a new array and keep the original data:** *This approach uses the function **[`map`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/map)** to create a new "cloned" array.* ```js var COLLECTION = [ { "coords":[1335,2525], "items":[ {id: "boletus",qty: 1}, {id: "lepiota",qty: 3}, {id: "boletus",qty: 2}, {id: "lepiota",qty: 4}, {id: "carbonite",qty: 4}, ], }, { "coords":[1532,2889], "items":[ {id: "boletus",qty: 2}, {id: "lepiota",qty: 6}, {id: "boletus",qty: 1}, {id: "lepiota",qty: 4}, {id: "chamomile",qty: 4}, ] }], result = COLLECTION.map(o => o); result.forEach((o) => { o.items = Object.values(o.items.reduce((a, c) => { (a[c.id] || (a[c.id] = {id: c.id, qty: 0})).qty += c.qty; return a; }, {})); }); console.log(result); ``` ```css .as-console-wrapper { max-height: 100% !important; top: 0; } ``` Upvotes: 2 <issue_comment>username_3: You could take the power of 
[`Map`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Map) and render the result by using [`Array.from`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/from) with a mapping function which builds new objects for `items`. ```js var COLLECTION = [{ coords: [1335, 2525], items: [{ id: "boletus", qty: 1 }, { id: "lepiota", qty: 3 }, { id: "boletus", qty: 2 }, { id: "lepiota", qty: 4 }, { id: "carbonite", qty: 4 }], }, { coords: [1532, 2889], items: [{ id: "boletus", qty: 2 }, { id: "lepiota", qty: 6 }, { id: "boletus", qty: 1 }, { id: "lepiota", qty: 4 }, { id: "chamomile", qty: 4 }] }]; COLLECTION.forEach(o => { var map = new Map; o.items.forEach(({ id, qty }) => map.set(id, (map.get(id) || 0) + qty)); o.items = Array.from(map, ([id, qty]) => ({ id, qty })); }); console.log(COLLECTION); ``` ```css .as-console-wrapper { max-height: 100% !important; top: 0; } ``` Upvotes: 2
2018/03/18
987
3,604
<issue_start>username_0: I have a `json` file for tweet data. The data that I want to look at is the text of the tweet. For some reason, some of the tweets are too long to put into the normal text part of the dictionary. It seems like there is a dictionary within another dictionary and I can't figure out how to access it very well. Basically, what I want in the end is one column of a data frame that will have all of the text from each individual tweet. Here is a [link](https://docs.google.com/document/d/1q-ele3faEJkx1e065x1mMw3k9aVC1JGBpCPp5geCmDM/edit?usp=sharing) to a small sample of the data that contains a problem tweet. Here is the code I have so far: ``` import json import pandas as pd tweets = [] # This reads the json file so that I can work with it. This part works correctly. with open("filelocation.txt") as source: for line in source: if line.strip(): tweets.append(json.loads(line)) print(len(tweets)) df = pd.DataFrame.from_dict(tweets) df.info() ``` When looking at the info you can see that there will be a column called extended\_tweet that only encompasses one of the two sample tweets. Within this column, there seems to be another dictionary with one of those keys being full\_text. I want to add another column to the dataframe that just has this information, along with the normal text column when the full\_text is null. My first thought was to try and read that specific column of the dataframe as a dictionary again using: ``` d = pd.DataFrame.from_dict(tweets['extended_tweet']['full_text']) ``` But this doesn't work. I don't really understand why that doesn't work, as that is how I read the data the first time. My guess is that I can't look at the specific names because I am going back to the list and it would have to read all or none. The error it gives me says "KeyError: 'full\_text' " I also tried using the recommendation provided by this [website](https://www.haykranen.nl/2016/02/13/handling-complex-nested-dicts-in-python/). But this gave me a `None value` no matter what. Thanks in advance! I tried to do what @<NAME>. suggested, however, this still gave me errors. But it gave me the idea to try this: tweet[0]['extended\_tweet']['full\_text'] This works and gives me the value that I am looking for. But I need to run through the whole thing. So I tried this: df['full'] = [tweet[i]['extended\_tweet']['full\_text'] for i in range(len(tweet))] This gives me "Key Error: 'extended\_tweet' " Does it seem like I am on the right track?<issue_comment>username_1: I would suggest flattening out the dictionaries like this: ``` tweet = json.loads(line) tweet['full_text'] = tweet['extended_tweet']['full_text'] tweets.append(tweet) ``` Upvotes: 2 <issue_comment>username_2: I don't know if the answer suggested earlier works. I never got it to work successfully. But I did figure out something else that works well for me. What I really needed was a way to display the full text of a tweet. I first loaded the tweets from the json with what I posted above. Then I noticed that in the data file, there is something called truncated. If this value is true, the tweet is cut short and the full tweet is placed within the ``` tweet[i]['extended_tweet']['full_text'] ``` In order to access it, I used this: ``` tweet_list = [] for i in range(len(tweets)): if tweets[i]['truncated']: tweet_list.append(tweets[i]['extended_tweet']['full_text']) else: tweet_list.append(tweets[i]['text']) ``` Then I can work with the data using the whole text from each tweet. Upvotes: 1 [selected_answer]
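Pulling that logic into the single DataFrame column the question asks for, as a sketch (field names follow the Twitter payload described above; the file path is the question's placeholder):

```
import json
import pandas as pd

with open("filelocation.txt") as source:
    tweets = [json.loads(line) for line in source if line.strip()]

def full_text(tweet):
    # Prefer the extended text when the tweet was truncated.
    if tweet.get("truncated") and "extended_tweet" in tweet:
        return tweet["extended_tweet"]["full_text"]
    return tweet.get("text")

df = pd.DataFrame(tweets)
df["full"] = [full_text(t) for t in tweets]
```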
2018/03/18
397
1,780
<issue_start>username_0: I now understand that async functions return promises of values, not values directly. But what I don't understand is the point of doing that in the first place. As far as I understand, async functions are used to synchronize code inside them; now that we have the value from a promise, using the await statement inside the async function, why should we return another promise? Why don't we just return the result directly, or return void?<issue_comment>username_1: Because you can't know when the asynchronous call will be done. So it returns a promise, which lets you build the rest of your logic on top of the asynchronous call by chaining `then` calls. Upvotes: 2 [selected_answer]<issue_comment>username_2: > > async functions are used to synchronize code inside them > > > They aren't. `async` functions provide syntactic sugar for promises, thus eliminating the need to use callbacks. An `async` function acts exactly like a regular function that returns a promise. There's no way for the result to be returned synchronously from an asynchronous function. If the result should be returned from an `async` function, it should be called and `await`ed inside another `async` function, and so on - possibly up to the application entry point. Upvotes: 2 <issue_comment>username_3: Yes, `async` functions are used to sequentialise the code **inside** them. They do not - cannot - stop the code execution outside of them. As you probably remember, **blocking is bad**. You cannot get the result from the future of course, but you don't want to stop the world just to wait for that function to finish either. And that's why when you call such a function, you get back a promise that you either `await`, or schedule a callback on and do other things while it waits. Upvotes: 1
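A small illustration of the point made in these answers (standard JavaScript, no further assumptions): the caller always receives a promise, and only `await` inside another async function unwraps it:

```js
async function getValue() {
  return 42;                         // the caller still receives a Promise
}

const p = getValue();
console.log(p instanceof Promise);   // true
p.then(v => console.log(v));         // 42

(async () => {
  const v = await getValue();        // awaiting unwraps the fulfilled value
  console.log(v);                    // 42
})();
```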
2018/03/18
608
2,206
<issue_start>username_0: What are the differences between SSDT and SSDT - Business Intelligence? I've installed SQL Server 2017 and then proceeded to download Business Intelligence Development Studio, but found out that it was replaced by SSDT/SSDT-BI (don't know the difference if there's any). Do I need just SSDT or SSDT-BI? I can only find SSDT-BI for Visual Studio 2012 and 2013, not for VS 2017, while SSDT for VS 2017 is available. Will this create any problems since I'm working with SQL Server 2017? I'm a complete beginner at all this and this is only for a uni project. Please keep answers as simple as possible. Thank you very much.<issue_comment>username_1: There is only SSDT nowadays, which includes support for SQL Server Database, SSAS, SSRS, and SSIS projects. The download links are [here](https://learn.microsoft.com/en-us/sql/ssdt/download-sql-server-data-tools-ssdt). The current SSDT version (15.5.2 as of this writing) allows you to target SQL Azure Database, SQL Server 2017, as well as older versions, so you don't need multiple versions of SSDT installed. SSDT will install a minimal Visual Studio shell if VS is not already installed. If you already have VS installed, those project types will be added to the existing installation. **EDIT:** With Visual Studio 2019, SSDT for SQL Server database projects remains integrated into the VS 2019 installer. Select the Data storage and processing workload during install and choose SQL Server Data Tools. However, SSAS, SSIS, and SSRS SSDT projects are now moved to separate Visual Studio extensions. These extensions can be managed post-install from within Visual Studio under Extensions-->Manage Extensions. Upvotes: 3 <issue_comment>username_2: Yeah, this got a lot of people confused. According to [this link](http://sqlblog.com/blogs/jamie_thomson/archive/2013/04/03/ssdt-naming-confusion-cleared-up-somewhat.aspx) (VS2012 & VS2013 timeframe): * SSDT is for building databases ONLY, i.e. only base functionality. * SSDT-BI is for building SSIS/SSAS/SSRS solutions. But then it looks like from VS2015 onward they merged the two together into just SSDT, so after VS2013 there is no separate SSDT-BI install. I think. Upvotes: 2
2018/03/18
250
1,112
<issue_start>username_0: I have a column STR which may contain any strings. I'm using MySql. How to find strings which don't contain letters in SQL without using Regular Expressions? As I understand RegExp in SQL is [^...]. So how to select the strings without using [^...]?<issue_comment>username_1: I am not sure which RDBMS you're using. But, if you do not want to use regular expression, you can loop through every character in the string and check the ASCII code. If they are only falling in the range 48 to 57, they are only numbers. Note : This may be very costly operation Upvotes: 0 <issue_comment>username_2: Regexp is the most sensible way of doing this. An alternative without... ``` SELECT STR FROM YourTable WHERE NOT EXISTS (SELECT * FROM (SELECT 'A' AS C UNION ALL SELECT 'B' UNION ALL SELECT 'C' /* Todo. Add remaining letters */ ) Chars WHERE INSTR(STR, C) > 0) ``` Upvotes: 1
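Another regexp-free sketch (table and column names follow the answer above): non-letter characters are unchanged by `UPPER()`/`LOWER()`, so a string with no letters upper-cases and lower-cases to the same bytes. The `BINARY` cast is needed because the usual case-insensitive collations would treat `'A'` and `'a'` as equal anyway:

```
SELECT STR
FROM YourTable
WHERE BINARY UPPER(STR) = BINARY LOWER(STR);
```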
2018/03/18
468
2,067
<issue_start>username_0: I've got a strange problem.. First, take a look at my code: This one is where I use my await.. ``` case "detail": { const lineMessages = []; let item = await this.getItem(postback.id); let lineMessage = { type: "text", text: item.longDesc }; lineMessages.push(lineMessage); return new Promise(function(resolve, reject) { if(lineMessages != []) { resolve(lineMessages); } else { let error = new Error("Cannot catch item ${postback.id}"); reject(error); } }); ``` This is the getItem(id) method.. ``` getItem(id) { return Item.find({_id: id}).exec(); } ``` But it turns out the text key on my lineMessage is undefined.. Then, the lineMessage is `LineMessages: [{"type":"text"}]` ( I once logged it on my console) Why await doesn't stop the execution in my case? It seems it tries to look up `item.longDesc` before `item` is resolved (just my guess tho). Please help
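One plausible explanation, sketched below under the assumption that `Item` is a Mongoose model: `Item.find({_id: id})` resolves to an *array* of documents, so `item.longDesc` is read off an array and comes back `undefined` even though the `await` itself worked. Fetching a single document avoids that:

```js
// getItem returns a promise for one document (or null), not an array
getItem(id) {
  return Item.findById(id).exec();
}

// caller: guard against a missing document instead of comparing an array to []
const item = await this.getItem(postback.id);
if (!item) {
  throw new Error(`Cannot catch item ${postback.id}`);
}
const lineMessages = [{ type: "text", text: item.longDesc }];
return lineMessages;
```

Note that `lineMessages != []` in the original is always true in JavaScript (two distinct array objects are never equal), so that check never takes the reject branch.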
2018/03/18
414
1,728
<issue_start>username_0: I am using Anypoint studio. I have used esper CEP engine for event detection using java file. Once the event is detected i am getting output in the console from java file as system.out.println(Object). I want the Obejct to be sent from java output to the mule flow either as a message property or payload, so I can store in MongoDB or I can reuse it for another event detection. here is my flow: [mule flow](https://i.stack.imgur.com/EClRf.png) Here I want the "event.getUnderlying()" Object to be sent to mule flow. ``` public void update(EventBean[] newData, EventBean[] oldData) { EventBean event = newData[0]; obj=event.getUnderlying(); if(a2==0){ i++; System.out.println("Event received:"+i+" "+event.getUnderlying()); ``` Thanks in Advance :)
2018/03/18
1,774
5,866
<issue_start>username_0: I would like users to be able to click on a plot, and when they do, leave a mark or a message at the point they clicked. I am using reactive values within the plotting environment, but this seems to reset the plot almost immediately after the message appears. Here is a minimal, not-fully-working example: ``` library(shiny) ## ui.R ui <- fluidPage( shinyjs::useShinyjs(), column(12, plotOutput("Locations", width=500, height=500, click="plot_click") ) ) ## server.R server <- function( input, output, session){ ## Source Locations (Home Base) source_coords <- reactiveValues(xy=c(x=1, y=2) ) ## Dest Coords dest_coords <- reactive({ if (is.null(input$plot_click) ){ list( x=source_coords$xy[1], y=source_coords$xy[2]) } else { list( x=floor(input$plot_click$x), y=floor(input$plot_click$y)) } }) ## Calculate Manhattan Distance from Source to Destination DistCost <- reactive({ list( Lost=sum( abs( c(dest_coords()$x, dest_coords()$y) - source_coords$xy ) ) ) }) ## RenderPlot output$Locations <- renderPlot({ par(bg=NA) plot.new() plot.window( xlim=c(0,10), ylim=c(0,10), yaxs="i", xaxs="i") axis(1) axis(2) grid(10,10, col="black") box() ## Source points( source_coords$xy[1], source_coords$xy[2], cex=3, pch=intToUtf8(8962)) ## Destination text(dest_coords()$x, dest_coords()$y, paste0("Distance=", DistCost() )) }) } ### Run Application shinyApp(ui, server) ```<issue_comment>username_1: The problem is that `input$plot_click` flushes itself immediately after it gets values from a user click, and returns to `NULL`. You can test this yourself by creating an empty list `stored <- list()`, and after that adding ``` stored[[length(stored)+1]] <<- as.character(c(input$plot_click$x, input$plot_click$y)) ``` inside your dest\_coords reactive. You can see that if you click the plot just once it will store three values. The first is `NULL`, the second is the clicked point's coordinates, but there will also be a third one, which is `NULL` again. So it flushes its values away immediately after pushing them to the reactives that depend on it. But those reactives will also take a dependency on any change in the input, even if it is `NULL`. The way around this is to use `eventReactive` or `observeEvent` and make sure that the `ignoreNULL` parameter is set to `TRUE` (it is actually set to `TRUE` by default). To make it work for your app, you should store all the minimum values required to create your plot in `reactiveValues` up front, and after the click is made just overwrite that data with the values provided by `input$plot_click`. 
Here is my modified example: ``` library(shiny) ## ui.R ui <- fluidPage( shinyjs::useShinyjs(), column(12, plotOutput("Locations", width=500, height=500, click="plot_click")) ) ## server.R server <- function( input, output, session){ source_coords <- reactiveValues(xy=data.frame(x=c(1,1), y=c(1,1))) observeEvent(input$plot_click, { source_coords$xy[2,] <- c(input$plot_click$x, input$plot_click$y) }) ## RenderPlot output$Locations <- renderPlot({ par(bg=NA) plot.new() plot.window( xlim=c(0,10), ylim=c(0,10), yaxs="i", xaxs="i") axis(1) axis(2) grid(10,10, col="black") box() ## Source points( source_coords$xy[1,1], source_coords$xy[1,2], cex=3, pch=intToUtf8(8962)) ## Destination text(source_coords$xy[2,1], source_coords$xy[2,2], paste0("Distance=", sum(abs(source_coords$xy[1,]-source_coords$xy[2,])))) }) } ### Run Application shinyApp(ui, server) ``` Upvotes: 2 [selected_answer]<issue_comment>username_2: I'm not sure if the intent was to only show the most recently clicked point, or to show all the points clicked. Since the answer by Pawel covers the former case (and is already an accepted answer, which means it probably was the intent), I'll post a solution to the former, for future reference in case it helps anymore ``` library(magrittr) library(shiny) ## ui.R ui <- fluidPage( shinyjs::useShinyjs(), column(12, plotOutput("Locations", width=500, height=500, click="plot_click") ) ) ## server.R server <- function( input, output, session){ initX <- 1 initY <- 2 ## Source Locations (Home Base) source_coords <- reactiveValues(xy=c(x=initX, y=initY) ) ## Dest Coords dest_coords <- reactiveValues(x=initX, y=initY) observeEvent(plot_click_slow(), { dest_coords$x <- c(dest_coords$x, floor(plot_click_slow()$x)) dest_coords$y <- c(dest_coords$y, floor(plot_click_slow()$y)) }) ## Don't fire off the plot click too often plot_click_slow <- debounce(reactive(input$plot_click), 300) ## Calculate Manhattan Distance from Source to Destination DistCost <- reactive({ num_points <- length(dest_coords$x) list( Lost= lapply(seq(num_points), function(n) { sum( abs( c(dest_coords$x[n], dest_coords$y[n]) - source_coords$xy ) ) }) ) }) ## RenderPlot output$Locations <- renderPlot({ par(bg=NA) plot.new() plot.window( xlim=c(0,10), ylim=c(0,10), yaxs="i", xaxs="i") axis(1) axis(2) grid(10,10, col="black") box() ## Source points( source_coords$xy[1], source_coords$xy[2], cex=3, pch=intToUtf8(8962)) ## Destination text(dest_coords$x, dest_coords$y, paste0("Distance=", DistCost()$Lost )) }) } ### Run Application shinyApp(ui, server) ``` Upvotes: 2
2018/03/18
591
2,604
<issue_start>username_0: Using .NET Core and C# I'm trying to make an HTTPS request to my Vizio TV, the API is somewhat documented [here](https://github.com/exiva/Vizio_SmartCast_API). When visiting the HTTP server in Chrome I receive a "NET::ERR\_CERT\_AUTHORITY\_INVALID" error. When I make the request in C# with a `HttpClient`, a `HttpRequestException` is thrown. I've tried adding the certificate to Windows but I'm just not familiar enough with TLS. I'm also not concerned about my communications being snooped on so I would like to just ignore any HTTPS errors. Here's the relevant code I'm working with. ``` public async Task Pair(string deviceName) { using (var httpClient = new HttpClient()) try { httpClient.BaseAddress = new Uri($"https://{televisionIPAddress}:9000/"); // Assume all certificates are valid? ServicePointManager.ServerCertificateValidationCallback = (sender, certificate, chain, sslPolicyErrors) => true; deviceID = Guid.NewGuid().ToString(); var startPairingRequest = new HttpRequestMessage(HttpMethod.Put, "/pairing/start"); startPairingRequest.Content = CreateStringContent(new PairingStartRequestBody { DeviceID = deviceID, DeviceName = deviceName }); var startPairingResponse = await httpClient.SendAsync(startPairingRequest); // HttpRequestException thrown here Console.WriteLine(startPairingResponse); } catch (HttpRequestException e) { Console.WriteLine(e.InnerException.Message); // prints "A security error occurred" } } StringContent CreateStringContent(object obj) { return new StringContent(JsonConvert.SerializeObject(obj), Encoding.UTF8, "application/json"); } ```<issue_comment>username_1: Resolved the issue by setting up a `HttpClientHandler` and setting `ServerCertificateCustomValidationCallback` to return true. ``` using (var handler = new HttpClientHandler { ServerCertificateCustomValidationCallback = (sender, certificate, chain, sslPolicyErrors) => true }) using (var httpClient = new HttpClient(handler)) ``` Upvotes: 4 [selected_answer]<issue_comment>username_2: Too late for the party here, but if you are looking for .net core solution then please try the below code ``` using (var handler = new HttpClientHandler { ServerCertificateCustomValidationCallback = (sender, certificate, chain, sslPolicyErrors) => true }) { using (var httpClient = new HttpClient(handler)) { //your business logic goes here } } ``` Upvotes: 2
2018/03/18
472
1,507
<issue_start>username_0: I'm trying to make the Bootstrap 4 Dropdown fade in when clicked, however I am unable to achieve this result using transitions: ``` .dropdown-menu { -webkit-transition: 0.25s; transition: 0.25s; } ``` Thanks!<issue_comment>username_1: This is a working fade transition for the dropdown in Bootstrap 4: ``` .dropdown-menu.fade { display: block; opacity: 0; pointer-events: none; } .show > .dropdown-menu.fade { pointer-events: auto; opacity: 1; } ``` Credit: <https://stackoverflow.com/a/47986695/1821637> Upvotes: 1 <issue_comment>username_2: ``` .dropdown .dropdown-menu{ display: block; opacity:0; -webkit-transition: all 200ms ease-in; -moz-transition: all 200ms ease-in; -ms-transition: all 200ms ease-in; -o-transition: all 200ms ease-in; transition: all 200ms ease-in; } .dropdown:hover .dropdown-menu { display: block; opacity: 1; } ``` Found this on the internet when I was looking for something similar on hover. Upvotes: 2 <issue_comment>username_3: **You can try this. It will also work on mobile devices.** ``` @include media-breakpoint-up(xl) { .dropdown-on-hover { &.dropdown { .dropdown-menu { display: block; visibility: hidden; opacity: 0; transition: visibility 0s, opacity 0.5s linear; } @include hover-focus { .dropdown-menu { visibility: visible; opacity: 1; } } } } } ``` Upvotes: 0
2018/03/18
758
2,660
<issue_start>username_0: While trying to tune the ANN model using GridSearchCV, I faced the following error in Google Colab. Can anyone help me on this or have faced any similar issue like this? ``` def build_classifier(optimizer): classifier = Sequential() classifier.add(Dense(units = 6, kernel_initializer = 'uniform', activation = 'relu', input_dim = 11)) classifier.add(Dense(units = 6, kernel_initializer = 'uniform', activation = 'relu')) classifier.add(Dense(units = 1, kernel_initializer = 'uniform', activation = 'sigmoid')) classifier.compile(optimizer = optimizer, loss = 'binary_crossentropy', metrics = ['accuracy']) return classifier classifier = KerasClassifier(build_fn = build_classifier) parameters = {'batch_size': [25, 32], 'epochs': [100, 500], 'optimizer': ['adam', 'rmsprop']} grid_search = GridSearchCV(estimator = classifier, param_grid = parameters, scoring = 'accuracy', cv = 10) grid_search = grid_search.fit(X_train, y_train) best_parameters = grid_search.best_params_ best_accuracy = grid_search.best_score_ ``` **Error:** ----------- ``` Epoch 1/100 InternalError Traceback (most recent call last) /usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py in _do_call(self, fn, *args) 1360 try: -> 1361 return fn(*args) 1362 except errors.OpError as e: /usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py in _run_fn(session, feed_dict, fetch_list, target_list, options, run_metadata) 1339 return tf_session.TF_Run(session, options, feed_dict, fetch_list, -> 1340 target_list, status, run_metadata) 1341 /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/errors_impl.py in __exit__(self, type_arg, value_arg, traceback_arg) 515 compat.as_text(c_api.TF_Message(self.status.status)), --> 516 c_api.TF_GetCode(self.status.status)) 517 # Delete the underlying status object from memory otherwise it stays alive InternalError: GPU sync failed ```<issue_comment>username_1: Try to limit the amount of used memory. It will slow down the training but that's what fixed that for me.. ``` config = tf.ConfigProto() config.gpu_options.per_process_gpu_memory_fraction = 0.3 set_session(tf.Session(config=config)) ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: Try to change the runtime type from 'GPU' to 'None'. This will probably give you a different error that is more useful! Upvotes: -1
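For completeness, the accepted snippet above needs its imports to run; a minimal sketch of the same fix, assuming Keras on the TensorFlow 1.x backend (on TensorFlow 2.x the rough equivalent is `tf.config.experimental.set_memory_growth`):

```
import tensorflow as tf
from keras.backend import set_session

config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.3  # cap GPU memory use
set_session(tf.Session(config=config))
```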
2018/03/18
1,799
6,314
<issue_start>username_0: I'm trying to connect to my database stored in GCloud from a PHP Laravel 5.5 app in the same Gcloud project. When I deploy my app, the homepage is displayed well but, when I try to connect the user, I get these errors that show on the browser: [![PDOException](https://i.stack.imgur.com/LTBqI.jpg)](https://i.stack.imgur.com/LTBqI.jpg) [![QueryException](https://i.stack.imgur.com/cCYQI.jpg)](https://i.stack.imgur.com/cCYQI.jpg) I followed this tutorial: [Run Laravel on Google App Engine Flexible Environment](https://cloud.google.com/community/tutorials/run-laravel-on-appengine-flexible) My app.yaml file looks like this: ``` runtime: php env: flex runtime_config: document_root: public skip_files: - .env env_variables: APP_LOG: errorlog APP_DEBUG: true APP_KEY: MY-APP-KEY STORAGE_DIR: /tmp CACHE_DRIVER: file SESSION_DRIVER: file DB_CONNECTION : mysql DB_HOST: localhost DB_PORT: 3306 DB_DATABASE: MY DB NAME DB_USERNAME: USERNAME DB_PASSWORD: <PASSWORD> DB_SOCKET: "/cloudsql/MY-PROJECT-NAME:us-central1:MY-SQL-INSTANCE-NAME" ``` In the tutorial, they said to put this: ``` beta_settings: # for Cloud SQL, set this value to the Cloud SQL connection name, # e.g. "project:region:cloudsql-instance" cloud_sql_instances: "YOUR_CLOUDSQL_CONNECTION_NAME" ``` But I removed it beacause when I run the command **gcloud app deploy**, I get this error: **An error occurred while parsing file : app.yaml at line xx column xx** In my **database.php** file, I've tryed this: ``` 'mysql' => [ 'driver' => 'mysql', 'host' => 'localhost', 'port' => '3306', 'database' => 'DBNAME', 'username' => 'USERNAME', 'password' => '<PASSWORD>', 'unix_socket' => env('DB_SOCKET', ''), 'charset' => 'utf8', 'collation' => 'utf8_unicode_ci', 'prefix' => '', 'strict' => false, 'engine' => null, ] ``` And this (providing **unix\_socket**): ``` 'mysql' => [ 'driver' => 'mysql', 'host' => 'localhost', 'port' => '3306', 'database' => 'DBNAME', 'username' => 'USERNAME', 'password' => '<PASSWORD>', 'unix_socket' => env('DB_SOCKET', '/cloudsql/MY-PROJECT-NAME:us-central1:MY-SQL-INSTANCE-NAME'), 'charset' => 'utf8', 'collation' => 'utf8_unicode_ci', 'prefix' => '', 'strict' => false, 'engine' => null, ] ``` PLEASE NOTE THAT: 1. My API is enabled. 2. Billing is enabled.<issue_comment>username_1: You have to keep this, it is vital: ``` beta_settings: # for Cloud SQL, set this value to the Cloud SQL connection name, # e.g. "project:region:cloudsql-instance" cloud_sql_instances: "YOUR_CLOUDSQL_CONNECTION_NAME" ``` Replace "YOUR\_CLOUDSQL\_CONNECTION\_NAME" with connection name you see from the following command: ``` gcloud sql instances describe YOUR_INSTANCE_NAME ``` Also I'm not sure if this is a copy paste problem but you have missing spaces in your configuration before ``` runtime_config: document_root: public # <- missing spaces here ``` Upvotes: 1 <issue_comment>username_2: Just like username_1 said, you have to keep the `beta_settings` part of the `app.yaml`, otherwise the database connection won't work, as in, I don't think the Cloud SQL Proxy executable will be included in the deployed app. Besides this, you should also make sure that the account you're using for accessing the database has the correct grants for `@cloudsqlproxy~%`. --- I've (successfully) tried deploying the default Laravel website (created via `laravel new`). The following snippets are for the latest version of Laravel (version 5.6), but from my testing this should work pretty much the same for 5.5. Here's my `app.yaml` file. 
The user you'll be using for accessing the database should have grants for `@cloudsqlproxy~%` in the Cloud SQL instance. From my experimentation, the Cloud SQL Proxy runs in the AEF Compute Instances in Unix socket mode, so my guess is that at the end of the day, whatever you set in DB\_HOST shouldn't really matter. Finally, be **very careful** with the spaces (actual spaces, not tabs) in the `app.yaml` file, and make sure you're not using some weird quotation mark character. And again, include `cloud_sql_instances` in `beta_settings`: ``` runtime: php env: flex runtime_config: document_root: public # Ensure we skip ".env", which is only for local development skip_files: - .env env_variables: # Put production environment variables here. APP_LOG: errorlog APP_KEY: INSERT_APPKEY_HERE STORAGE_DIR: /tmp CACHE_DRIVER: database SESSION_DRIVER: database DB_HOST: 127.0.0.1 DB_DATABASE: laravel DB_USERNAME: INSERT_USERNAME DB_PASSWORD: <PASSWORD> DB_SOCKET: "/cloudsql/project-name:region:cloudsql-instance-name" beta_settings: cloud_sql_instances: "project-name:region:cloudsql-instance-name" ``` Here's my `mysql` section of `config/database.php`. Mind you, I didn't change a thing, these are the default values for 5.6: ``` 'mysql' => [ 'driver' => 'mysql', 'host' => env('DB_HOST', '127.0.0.1'), 'port' => env('DB_PORT', '3306'), 'database' => env('DB_DATABASE', 'forge'), 'username' => env('DB_USERNAME', 'forge'), 'password' => env('DB_PASSWORD', ''), 'unix_socket' => env('DB_SOCKET', ''), 'charset' => 'utf8mb4', 'collation' => 'utf8mb4_unicode_ci', 'prefix' => '', 'strict' => true, 'engine' => null, ] ``` Finally, here's the `post-install-cmd` section of the `composer.json` file. In 5.5, you also have to add `php artisan optimize` **before** the `chmod` command: ``` "post-install-cmd": [ "Illuminate\\Foundation\\ComposerScripts::postInstall", "chmod -R 755 bootstrap\/cache" ] ``` Check if there's anything in your app's deployment settings that differs from what I'm posting here! The parsing error from your `gcloud app deploy` is very strange; as long as you format the file correctly and have gcloud up-to-date, it should work without issues. Upvotes: 1 [selected_answer]
2018/03/18
727
2,240
<issue_start>username_0: I'm using `Django 2` I have a model `Course` to create a course and upload a banner image as `ImageField()` ``` class Course(models.Model): id = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False) name = models.CharField(max_length=250) banner = models.ImageField(upload_to='course/%Y/%m/%d', blank=True) ``` In my `settings` file located at `app/settings/local.py` ``` # at top of settings file, defined BASE_DIR BASE_DIR = os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))) STATIC_URL = '/static/' STATICFILES_DIRS = [ os.path.join(BASE_DIR, 'static_dir') ] STATIC_ROOT = os.path.join(os.path.dirname(BASE_DIR), 'static_cdn', 'static_root') MEDIA_URL = '/media/' MEDIA_ROOT = os.path.join(os.path.dirname(BASE_DIR), 'static_cdn', 'media_root') ``` The image uploads perfectly, but its upload location is outside of the project directory. Say my project resides in the `my_project` directory, which contains the `app` module and other modules along with `static_dir` to store static files. But the `static_cdn` directory is created outside the `my_project` directory. I want it to be inside the `my_project` directory. **What is wrong there?** > > Edit 2 > > > ``` DEBUG = False STATIC_URL = '/static/' STATICFILES_DIRS = [ os.path.join(BASE_DIR, 'static_dir') ] STATIC_ROOT = os.path.join(BASE_DIR, 'static_cdn', 'static_root') ```<issue_comment>username_1: Remove one os.path.dirname() call, and it should work fine ``` BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__))) ``` Or you can keep BASE\_DIR as it is and change MEDIA\_ROOT and STATIC\_ROOT ``` STATIC_ROOT = os.path.join(BASE_DIR, 'static_cdn', 'static_root') MEDIA_ROOT = os.path.join(BASE_DIR, 'static_cdn', 'media_root') ``` I think the second option is best. Upvotes: 3 [selected_answer]<issue_comment>username_1: Answer to EDIT 2: As you have DEBUG = False, you are running in production mode. Please read <https://docs.djangoproject.com/en/2.0/howto/static-files/deployment/> Also you can use <http://whitenoise.evans.io/en/stable/> If you set DEBUG = True it should work fine. Recommendation: do not set DEBUG = True in production. Upvotes: 0
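As a hedged side note to the DEBUG discussion above (a standard Django pattern, sketched with placeholder routes): when `DEBUG` is `True`, uploaded media can be served locally by adding the media route in `urls.py`; in production a web server should serve `MEDIA_ROOT` instead.

```
# urls.py -- development-only serving of uploaded media
from django.conf import settings
from django.conf.urls.static import static

urlpatterns = [
    # ... your routes ...
]

if settings.DEBUG:
    urlpatterns += static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)
```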
2018/03/18
4,663
17,162
<issue_start>username_0: I'm just starting to get the hang of Flutter, but I'm having trouble figuring out how to set the enabled state of a button. From the docs, it says to set `onPressed` to null to disable a button, and give it a value to enable it. This is fine if the button continues to be in the same state for the lifecycle. I get the impression I need to create a custom Stateful widget that will allow me to update the button's enabled state (or onPressed callback) somehow. So my question is how would I do that? This seems like a pretty straightforward requirement, but I can't find anything in the docs on how to do it. Thanks.<issue_comment>username_1: I think you may want to introduce some helper functions to `build` your button as well as a Stateful widget along with some property to key off of. * Use a StatefulWidget/State and create a variable to hold your condition (e.g. `isButtonDisabled`) * Set this to true initially (if that's what you desire) * When rendering the button, **don't directly set the `onPressed`** value to either `null` or some function `onPressed: () {}` * **Instead**, conditionally set it using a ternary or a helper function *(example below)* * Check the `isButtonDisabled` as part of this conditional and return either `null` or some function. * When the button is pressed (or whenever you want to disable the button) use `setState(() => isButtonDisabled = true)` to flip the conditional variable. * Flutter will call the `build()` method again with the new state and the button will be rendered with a `null` press handler and be disabled. Here's is some more context using the Flutter counter project. ``` class MyHomePage extends StatefulWidget { @override _MyHomePageState createState() => new _MyHomePageState(); } class _MyHomePageState extends State { int \_counter = 0; bool \_isButtonDisabled; @override void initState() { \_isButtonDisabled = false; } void \_incrementCounter() { setState(() { \_isButtonDisabled = true; \_counter++; }); } @override Widget build(BuildContext context) { return new Scaffold( appBar: new AppBar( title: new Text("The App"), ), body: new Center( child: new Column( mainAxisAlignment: MainAxisAlignment.center, children: [ new Text( 'You have pushed the button this many times:', ), new Text( '$\_counter', style: Theme.of(context).textTheme.display1, ), \_buildCounterButton(), ], ), ), ); } Widget \_buildCounterButton() { return new RaisedButton( child: new Text( \_isButtonDisabled ? "Hold on..." : "Increment" ), onPressed: \_isButtonDisabled ? null : \_incrementCounter, ); } } ``` In this example I am using an inline ternary to conditionally set the `Text` and `onPressed`, but it may be more appropriate for you to extract this into a function (you can use this same method to change the text of the button as well): ``` Widget _buildCounterButton() { return new RaisedButton( child: new Text( _isButtonDisabled ? "Hold on..." : "Increment" ), onPressed: _counterButtonPress(), ); } Function _counterButtonPress() { if (_isButtonDisabled) { return null; } else { return () { // do anything else you may want to here _incrementCounter(); }; } } ``` Upvotes: 9 [selected_answer]<issue_comment>username_2: According to the [docs](https://docs.flutter.io/flutter/material/RaisedButton-class.html): > > If the `onPressed` callback is null, then the button will be disabled > and by default will resemble a flat button in the `disabledColor`. > > > So, you might do something like this: ```dart RaisedButton( onPressed: calculateWhetherDisabledReturnsBool() ? 
null : () => whatToDoOnPressed, child: Text('Button text') ); ``` Upvotes: 8 <issue_comment>username_3: The simple answer is `onPressed : null` gives a disabled button. Upvotes: 7 <issue_comment>username_4: For a specific and limited number of widgets, wrapping them in a widget [IgnorePointer](https://docs.flutter.io/flutter/widgets/IgnorePointer-class.html) does exactly this: when its `ignoring` property is set to true, the sub-widget (actually, the entire subtree) is not clickable. ``` IgnorePointer( ignoring: true, // or false child: RaisedButton( onPressed: _logInWithFacebook, child: Text("Facebook sign-in"), ), ), ``` Otherwise, if you intend to disable an entire subtree, look into AbsorbPointer(). Upvotes: 5 <issue_comment>username_5: **Disables click:** ``` onPressed: null ``` **Enables click:** ``` onPressed: () => fooFunction() // or onPressed: fooFunction ``` **Combination:** ```dart onPressed: shouldEnable ? fooFunction : null ``` Upvotes: 6 <issue_comment>username_6: You can also use the AbsorbPointer, and you can use it in the following way: ``` AbsorbPointer( absorbing: true, // by default is true child: RaisedButton( onPressed: (){ print('pending to implement onPressed function'); }, child: Text("Button Click!!!"), ), ), ``` If you want to know more about this widget, you can check the following link [Flutter Docs](https://docs.flutter.io/flutter/widgets/AbsorbPointer-class.html) Upvotes: 4 <issue_comment>username_7: Enable and Disable functionality is same for most of the widgets. Ex, button , switch, checkbox etc. Just set the `onPressed` property as shown below `onPressed : null` returns **Disabled widget** `onPressed : (){}` or `onPressed : _functionName` returns **Enabled widget** Upvotes: 4 <issue_comment>username_8: This is the easiest way in my opinion: ``` RaisedButton( child: Text("PRESS BUTTON"), onPressed: booleanCondition ? () => myTapCallback() : null ) ``` Upvotes: 4 <issue_comment>username_9: You can set also blank condition, in place of set null ``` var isDisable=true; RaisedButton( padding: const EdgeInsets.all(20), textColor: Colors.white, color: Colors.green, onPressed: isDisable ? () => (){} : myClickingData(), child: Text('Button'), ) ``` Upvotes: 0 <issue_comment>username_10: I like to use flutter\_mobx for this and work on the state. Next I use an observer: ``` Container(child: Observer(builder: (_) { var method; if (!controller.isDisabledButton) method = controller.methodController; return RaiseButton(child: Text('Test') onPressed: method); })); ``` On the Controller: ``` @observable bool isDisabledButton = true; ``` Then inside the control you can manipulate this variable as you want. Refs.: [Flutter mobx](https://pub.dev/packages/flutter_mobx) Upvotes: -1 <issue_comment>username_11: For disabling any **Button** in flutter such as `FlatButton`, `RaisedButton`, `MaterialButton`, `IconButton` etc all you need to do is to set the `onPressed` and `onLongPress` properties to **null**. 
Below is some simple examples for some of the buttons: **FlatButton (Enabled)** ``` FlatButton( onPressed: (){}, onLongPress: null, // Set one as NOT null is enough to enable the button textColor: Colors.black, disabledColor: Colors.orange, disabledTextColor: Colors.white, child: Text('Flat Button'), ), ``` [![enter image description here](https://i.stack.imgur.com/4T5j1.jpg)](https://i.stack.imgur.com/4T5j1.jpg) [![enter image description here](https://i.stack.imgur.com/poccN.jpg)](https://i.stack.imgur.com/poccN.jpg) **FlatButton (Disabled)** ``` FlatButton( onPressed: null, onLongPress: null, textColor: Colors.black, disabledColor: Colors.orange, disabledTextColor: Colors.white, child: Text('Flat Button'), ), ``` [![enter image description here](https://i.stack.imgur.com/tpdXI.jpg)](https://i.stack.imgur.com/tpdXI.jpg) **RaisedButton (Enabled)** ``` RaisedButton( onPressed: (){}, onLongPress: null, // Set one as NOT null is enough to enable the button // For when the button is enabled color: Colors.lightBlueAccent, textColor: Colors.black, splashColor: Colors.blue, elevation: 8.0, // For when the button is disabled disabledTextColor: Colors.white, disabledColor: Colors.orange, disabledElevation: 0.0, child: Text('Raised Button'), ), ``` [![enter image description here](https://i.stack.imgur.com/tR8It.jpg)](https://i.stack.imgur.com/tR8It.jpg) **RaisedButton (Disabled)** ``` RaisedButton( onPressed: null, onLongPress: null, // For when the button is enabled color: Colors.lightBlueAccent, textColor: Colors.black, splashColor: Colors.blue, elevation: 8.0, // For when the button is disabled disabledTextColor: Colors.white, disabledColor: Colors.orange, disabledElevation: 0.0, child: Text('Raised Button'), ), ``` [![enter image description here](https://i.stack.imgur.com/4k0W8.jpg)](https://i.stack.imgur.com/4k0W8.jpg) **IconButton (Enabled)** ``` IconButton( onPressed: () {}, icon: Icon(Icons.card_giftcard_rounded), color: Colors.lightBlueAccent, disabledColor: Colors.orange, ), ``` [![enter image description here](https://i.stack.imgur.com/vjbtz.jpg)](https://i.stack.imgur.com/vjbtz.jpg) [![enter image description here](https://i.stack.imgur.com/yRBlp.jpg)](https://i.stack.imgur.com/yRBlp.jpg) **IconButton (Disabled)** ``` IconButton( onPressed: null, icon: Icon(Icons.card_giftcard_rounded), color: Colors.lightBlueAccent, disabledColor: Colors.orange, ), ``` [![enter image description here](https://i.stack.imgur.com/5j0qk.jpg)](https://i.stack.imgur.com/5j0qk.jpg) **Note**: Some of buttons such as `IconButton` have only the `onPressed` property. Upvotes: 3 <issue_comment>username_12: This answer is based on updated Buttons `TextButton/ElevatedButton/OutlinedButton` for `Flutter 2.x` Still, buttons are enabled or disabled based on `onPressed` property. If that property is null then button would be disabled. If you will assign function to `onPressed` then button would be enabled. In the below snippets, I have shown how to enable/disable button and update it's style accordingly. > > This post also indicating that how to apply different styles to new > Flutter 2.x buttons. 
> > > [![enter image description here](https://i.stack.imgur.com/pMnRY.png)](https://i.stack.imgur.com/pMnRY.png) ``` import 'package:flutter/material.dart'; void main() { runApp(MyApp()); } class MyApp extends StatelessWidget { @override Widget build(BuildContext context) { return MaterialApp( title: 'Flutter Demo', theme: ThemeData( primarySwatch: Colors.blue, visualDensity: VisualDensity.adaptivePlatformDensity, ), home: MyHomePage(title: 'Flutter Demo Home Page'), ); } } class MyHomePage extends StatefulWidget { MyHomePage({Key key, this.title}) : super(key: key); final String title; @override _MyHomePageState createState() => _MyHomePageState(); } class _MyHomePageState extends State { bool textBtnswitchState = true; bool elevatedBtnSwitchState = true; bool outlinedBtnState = true; @override Widget build(BuildContext context) { return Scaffold( appBar: AppBar( title: Text(widget.title), ), body: Padding( padding: const EdgeInsets.all(16.0), child: Column( children: [ Row( mainAxisAlignment: MainAxisAlignment.spaceBetween, children: [ TextButton( child: Text('Text Button'), onPressed: textBtnswitchState ? () {} : null, style: ButtonStyle( foregroundColor: MaterialStateProperty.resolveWith( (states) { if (states.contains(MaterialState.disabled)) { return Colors.grey; } else { return Colors.red; } }, ), ), ), Column( children: [ Text('Change State'), Switch( value: textBtnswitchState, onChanged: (newState) { setState(() { textBtnswitchState = !textBtnswitchState; }); }, ), ], ) ], ), Divider(), Row( mainAxisAlignment: MainAxisAlignment.spaceBetween, children: [ ElevatedButton( child: Text('Text Button'), onPressed: elevatedBtnSwitchState ? () {} : null, style: ButtonStyle( foregroundColor: MaterialStateProperty.resolveWith( (states) { if (states.contains(MaterialState.disabled)) { return Colors.grey; } else { return Colors.white; } }, ), ), ), Column( children: [ Text('Change State'), Switch( value: elevatedBtnSwitchState, onChanged: (newState) { setState(() { elevatedBtnSwitchState = !elevatedBtnSwitchState; }); }, ), ], ) ], ), Divider(), Row( mainAxisAlignment: MainAxisAlignment.spaceBetween, children: [ OutlinedButton( child: Text('Outlined Button'), onPressed: outlinedBtnState ? () {} : null, style: ButtonStyle( foregroundColor: MaterialStateProperty.resolveWith( (states) { if (states.contains(MaterialState.disabled)) { return Colors.grey; } else { return Colors.red; } }, ), side: MaterialStateProperty.resolveWith((states) { if (states.contains(MaterialState.disabled)) { return BorderSide(color: Colors.grey); } else { return BorderSide(color: Colors.red); } })), ), Column( children: [ Text('Change State'), Switch( value: outlinedBtnState, onChanged: (newState) { setState(() { outlinedBtnState = !outlinedBtnState; }); }, ), ], ) ], ), ], ), ), ); } } ``` Upvotes: 4 <issue_comment>username_13: If you are searching for a quick way and don't care about letting the user actually clicking more then once on a button. You could do it also the following way: ``` // Constant whether button is clicked bool isClicked = false; ``` and then checking in the onPressed() function whether the user has already clicked the button or not. 
``` onPressed: () async { if (!isClicked) { isClicked = true; // await Your normal function } else { Toast.show( "You click already on this button", context, duration: Toast.LENGTH_LONG, gravity: Toast.BOTTOM); } } ``` Upvotes: -1 <issue_comment>username_14: You can use this code in your app for button with loading and disable: ``` class BtnPrimary extends StatelessWidget { bool loading; String label; VoidCallback onPressed; BtnPrimary( {required this.label, required this.onPressed, this.loading = false}); @override Widget build(BuildContext context) { return ElevatedButton.icon( icon: loading ? const SizedBox( child: CircularProgressIndicator( color: Colors.white, ), width: 20, height: 20) : const SizedBox(width: 0, height: 0), label: loading ? const Text('Waiting...'): Text(label), onPressed: loading ? null : onPressed, ); } } ``` I hope useful Upvotes: 2 <issue_comment>username_15: This is the easiest way to disable a button in Flutter is assign the `null` value to the `onPressed` ``` ElevatedButton( style: ElevatedButton.styleFrom( primary: Colors.blue, // background onPrimary: Colors.white, // foreground ), onPressed: null, child: Text('ElevatedButton'), ), ``` Upvotes: 3 <issue_comment>username_16: see below of the possible solution, add a 'ValueListenableBuilder' of 'TextEditingValue' listening to controller (TextEditingController) and return your function call if controller.text is not empty and return 'null' if it is empty. // valuelistenablebuilder wraped around button ``` ValueListenableBuilder( valueListenable: textFieldController, builder: (context, ctrl, \_\_) => ElevatedButton( onPressed: ctrl.text.isNotEmpty ? yourFunctionCall : null, child: Text( 'SUBMIT', style: GoogleFonts.roboto(fontSize: 20.0), ), ), ), ``` //texfield ``` TextField(controller: textFieldController, onChanged: (newValue) { textFieldText = newValue; }, ), ``` builder will listen to controller and enables button only when textfield is in use. I hope this answers the question. let me know.. Upvotes: 0 <issue_comment>username_17: There are two ways of doing this: 1- <https://stackoverflow.com/a/49354576/5499531> 2- You can use a MaterialStatesController: ``` final _statesController = MaterialStatesController(); ``` and then change the state to: ``` _statesController.update( MaterialState.disabled, true, // or false depending on your logic ); ``` On your button ``` ElevatedButton( onPressed: _onPressed, statesController: _statesController, child: Text("Awesome"), ), ``` In addition you can change the button style when is disable: in your theme setup: ``` .... elevatedButtonTheme: ElevatedButtonThemeData( style: ElevatedButton.styleFrom( backgroundColor: colors.primary500, // set your own color textStyle: button, // set your own style onPrimary: colors.onPrimary100, // set your own color enableFeedback: true, disabledBackgroundColor: colors.primary300, // set your own color disabledForegroundColor: colors.primary300, // set your own color disabledMouseCursor: SystemMouseCursors.forbidden, // when is disable the change the cursor type ), ), ... ``` Upvotes: 2
2018/03/18
298
1,204
<issue_start>username_0: In JavaScript, we have 6 primitive data types (each with their own object wrapper) and 1 object data type. Where / how does V8 store a value's data type?<issue_comment>username_1: The data type is part of the value. The type of JS values is a [sum type](https://en.wikipedia.org/wiki/Sum_type) that lets us distinguish the primitive types and objects. For example `typeof` is an operator that lets us access (parts of) the bit that stores the type.

Of course, an optimising compiler is free to drop that information when it can prove that a certain variable will only ever store values of the same type, so in the implementation the information might be moved to an annotation on the variable.
Upvotes: 2 [selected_answer]<issue_comment>username_2: Your only access to these types is with [typeof](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/typeof "typeof"). There are more primitive types but they are not visible in a normal JavaScript environment. If you want to see how these are handled inside the engine, I recommend watching [this video](https://www.youtube.com/watch?v=EhpmNyR2Za0 "this video"), which I coincidentally watched today.
Upvotes: 0
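To see the coarse-grained type tag that the language exposes for each value, the `typeof` operator mentioned above is the quickest experiment; this small snippet just lists what it reports (results shown as comments). Note that this only shows the tag JavaScript defines; how V8 physically stores it (tagged pointers, hidden classes, and so on) is an engine implementation detail not visible from script.

```
typeof 42;           // "number"
typeof "hi";         // "string"
typeof true;         // "boolean"
typeof undefined;    // "undefined"
typeof Symbol();     // "symbol"
typeof {};           // "object"
typeof null;         // "object"   (a well-known historical quirk)
typeof function(){}; // "function" (functions are objects with a callable tag)
```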
2018/03/18
434
1,679
<issue_start>username_0: I want to iterate a DataSetIterator and add its contents into a DataSet. Iterating is easy:

```
while (iterator.hasNext()) {
 DataSet next = iterator.next();
 dataSet.addRow(next, dataSet.numExamples()); // doesn't work
}
```

If the DataSetIterator batch size is 1, when I do `dataSet.addRow(next, 1);` this just replaces the first element with the next one. If the batch size is 2, it raises the exception: `Exception in thread "main" java.lang.IllegalArgumentException: NDArrayIndex is out of range. Beginning index: 2 must be less than its size: 2`

I also want to know how to add a DataSet into another DataSet.<issue_comment>username_1: The exception thrown by the statement:

```
dataSet.addRow(next, dataSet.numExamples()); // doesn't work
```

should give you a clue of why it is not working.

The likely reason the above is raising an exception is because the row index specified by the second parameter of addRow() is 0 based, so valid values range from 0 to numExamples() - 1.

Regarding adding rows to a DataSet, check if there is an append() method in the DataSet or an addRow() method that does not require the caller to specify the row index.

To merge one DataSet instance with another, check if there is a merge() method available.

Hope this helps!
Upvotes: 0 <issue_comment>username_2: The `DataSet` class has the static `merge()` method. You pass it a `List` of `DataSet`s and it returns all the `DataSet`s contained in the `List` merged into one single `DataSet`.

```
ArrayList<DataSet> data_list = new ArrayList<>();

// add some DataSet objects into data_list

DataSet allData = DataSet.merge(data_list); // all DataSets in data_list are merged into allData
```
Upvotes: 2 [selected_answer]
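Putting the two answers together, a sketch of collecting every batch from a `DataSetIterator` and merging them into one `DataSet` could look like the fragment below. The class names are the usual ND4J/DL4J ones; exact import paths may differ between library versions, and this would sit inside whatever method builds your data.

```
import java.util.ArrayList;
import java.util.List;
import org.nd4j.linalg.dataset.DataSet;
import org.nd4j.linalg.dataset.api.iterator.DataSetIterator;

List<DataSet> batches = new ArrayList<>();
while (iterator.hasNext()) {
    batches.add(iterator.next()); // each call returns one batch as a DataSet
}
DataSet allData = DataSet.merge(batches); // merge all batches into a single DataSet
```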
2018/03/18
672
2,077
<issue_start>username_0: Suppose `L = [{'C', 'T'}, {'L'}, {'M'}]` let `c1 = 'C' and c2 = 'M'` I want to union if c1 and c2 are in different sets. How would I check that 'C' is in a different set than 'M' so that I can union it efficiently. I am trying to avoid multiple loops. (If c1 and c2 are in same set then do nothing) For this example the output would be: `[{'C', 'T', 'M'}, {'L'}]`<issue_comment>username_1: The first step is to find the two sets that contain "C" and "M". Here's a solution that uses a generator expression to do so: ``` try: i1 = next(i for i, values in enumerate(L) if 'C' in values) i2 = next(i for i, values in enumerate(L) if 'M' in values) except StopIteration: # "C" or "M" wasn't found pass else: if i1 != i2: set1 = L[i1] set1.update(L.pop(i2)) ``` This is in my opinion a good and readable solution, but if you *really* want to avoid looping over the data twice, you can merge the two generator expressions into a single loop: ``` i1 = i2 = None for i, values in enumerate(L): if 'C' in values: i1 = i if 'M' in values: i2 = i if i1 is not None and i2 is not None: if i1 != i2: set1 = L[i1] set1.update(L.pop(i2)) break ``` Both solutions modify `L` in-place. They do not create a new list. Upvotes: 2 <issue_comment>username_2: If I understood correctly, this pretty straight forward function should do the trick. Uses one loop. ``` >>> def merge(sets, c1, c2): ... merged = set() ... other = [merged] ... for s in sets: ... if c1 in s or c2 in s: ... merged.update(s) ... else: ... other.append(s) ... return other ... >>> L = [{'C', 'T'}, {'L'}, {'M'}] >>> c1 = 'C' >>> c2 = 'M' >>> >>> merge(L, c1, c2) [set(['C', 'M', 'T']), set(['L'])] ``` I did not use the fact that each letter can appear only once since this would result in such a minor optimization (set membership test runs in O(1)) that I don't see the point in sacrificing readbility here. Upvotes: 1
2018/03/18
616
1,786
<issue_start>username_0: So I'm having some trouble creating a barplot with strings as the x axis and their averages as the bar heights. Say I have a column called `fruits = c("strawberry", "apple", "strawberry", "banana", "apple"....)` and a corresponding column with its `count = c(2, 3, 4, 2,...)`

I've tried doing

```
barplot(as.numeric(fruits$amount), names.arg = fruits$type)
```

but that seems to give a bar for every single occurrence of the fruit, so I'm getting 100+ bars even though I have only around 10 types of fruit. I've also tried converting it to a table beforehand and plotting that, but that also doesn't work.

```
test <- table(as.numeric(fruits$amount), row.names = fruits$type)
barplot(test)
```

I'm new to R so I apologize if this is an obvious fix/dumb question. Any suggestions? Thanks!<issue_comment>username_1: 
```
#DATA
df1 = data.frame(fruits = c("strawberry", "apple", "strawberry", "banana", "apple"),
 counts = c(2, 3, 4, 2, 4))

#Summarize
temp = aggregate(counts~fruits, df1, sum)

#Plot
barplot(temp[["counts"]], names.arg = temp[["fruits"]], las = 1, horiz = TRUE)
```
Upvotes: 1 <issue_comment>username_2: Maybe this would be helpful:

```
fruits <- data.frame(amount=c(2,3,5,1,7), type=c("Strawberry", "Apple", "Pear", "Banana", "Orange"))

b <- barplot(as.numeric(fruits$amount), names.arg = fruits$type, horiz = T, las=1, xlim=c(0,10), col="steelblue")
abline(v=mean(fruits$amount), col="red", lwd=2)
axis(1,mean(fruits$amount), col.axis ="red")
text(y = b, x = fruits$amount, pos = 4, cex = 2, col="darkblue")
```

The output is:

[![enter image description here](https://i.stack.imgur.com/t8Fyf.png)](https://i.stack.imgur.com/t8Fyf.png)
Upvotes: 3 [selected_answer]
2018/03/18
1,200
4,094
<issue_start>username_0: I am trying to solve a homework task involving date validation: "*What’s the day? Design a program to take 3 inputs, one for day, one for month and one for year. Get your program to validate if this is an actual day, and, if it is, output the day of the week it is!*" I have done the first validation bit with the following code but can't do the second bit: ``` import datetime print("This program will check if a date is correct and output what day of the week it was.") day = input("Please input the day>") month = input("Please input the month in number format>") year = input("Please input the year, it must be after 1900>") date_range = False leap_year_check = 0 if (date in range(1,31)) and (month in range (1, 12)) and (year in range(1900, 2018)): date_range = True else: date_range = False if date_range == True: leap_year_check = year % 4 if leap_year_check == 0: if month == 2 and day in range(1, 29): print("The date entered is a correct date") elif month == "1" or "3" or "5" or "7" or "8" or "11" or "12" and day in range (1, 31): print("The date entered is a correct date") elif month == "4" or "6" or "10" or "9" and day in range (1, 30): print("The date entered is a correct date") elif leap_year_check != 0: if month == 2 and day in range(1, 28): print("The date entered is a correct date") elif month == "1" or "3" or "5" or "7" or "8" or "11" or "12" and day in range (1, 31): print("The date entered is a correct date") elif month == "4" or "6" or "10" or "9" and day in range (1, 30): print("The date entered is a correct date") if date_range == False: print("The date entered is incorrect") ```<issue_comment>username_1: Simply attempt to create it: ``` >>> datetime.date(2018,2,34) Traceback (most recent call last): File "", line 1, in ValueError: day is out of range for month ``` You catch the exception and consider it: ``` d = None try: d = datetime.date(year, month, day) except ValueError: print("The date entered is incorrect") if d is not None: print("The date entered is a correct date") ``` Upvotes: 2 <issue_comment>username_2: A Python program should have the following skeleton: ``` # import statements # ... # global constants # ... # functions # ... def main(): # TODO: the main task: parse command line arguments, etc pass if __name__ == "__main__": main() ``` Now, your task is to implement date validation. Think of the big steps, and create some more functions accordingly: ``` def main(): year, month, day = inputYearMonthDay() print(isValidYearMonthDay(year, month, day)) ``` You already have the content of `inputYearMonthDay`, it could be something like this: ``` def inputYearMonthDay(): print("This program will check if a date is correct and output what day of the week it was.") dayStr = input("Please input the day>") monthStr = input("Please input the month in number format>") yearStr = input("Please input the year, it must be after 1900>") return int(year), int(month), int(day) ``` What would be the big steps of `isValidYearMonthDay`? ``` def isValidYearMonthDay(year, month, day): if not validateSaneYear(year) or not validateSaneMonth(month) or not validateSaneDay(day): return False # TODO validate days per month, with special treatment for february in leap years return False ``` `validateSaneYear` could be: ``` def validateSaneYear(year): if 1900 <= year < 2018: return True print("Invalid year: {}; expected to be in the range [1900,2018)".format(year)) return False ``` `validateSaneMonth` and `validateSaneDay` could be implemented similarly. And so on. 
If you break down the problem into its big steps, and each step into its own smaller steps, you can reduce the big problem to tiny sub-problems that can be solved and tested individually, building up to a complete program that does something interesting. Good luck! Upvotes: 0
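For the second half of the task (printing the day of the week), the `try`/`except` idea from the first answer combines naturally with `strftime`; a compact sketch of the whole program could look like this (the "after 1900" rule from the assignment is handled with a plain comparison):

```
import datetime

day = int(input("Please input the day>"))
month = int(input("Please input the month in number format>"))
year = int(input("Please input the year, it must be after 1900>"))

try:
    d = datetime.date(year, month, day)
except ValueError:
    print("The date entered is incorrect")
else:
    if year <= 1900:
        print("The year must be after 1900")
    else:
        print("The date entered is a correct date")
        print("It falls on a", d.strftime("%A"))  # e.g. "Sunday"
```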
2018/03/18
2,548
7,505
<issue_start>username_0: When my website is on 100% zoom it looks alright but... * when I zoom in it all goes right * when I zoom out it all goes left How do I make it so my website zooms in and out without effecting the layout of the website from the center? e.g. Like this website <http://www.johnlewis.com> I am a beginner at coding. HTML and CSS is shown below. HTML ``` <NAME> <NAME> =========== * Contact * CV * Portfolio * Home A BIT ABOUT ME --------------- this crap is only here to show what it would look like mate I don't want it sounding boring like my CV and needs some life so give it some josh ### INTERESTS ![](CopenhagenJosh.png) ![](CopenhagenJosh.png) ![](CopenhagenJosh.png) ![](CopenhagenJosh.png) ![](CopenhagenJosh.png) ![](CopenhagenJosh.png) Hello Hello Hello Hello Hello PORTFOLIO ---------- This is my creative work ayoooo below... ![](AppIcon.png) ![](AppIcon.png) ![](AppIcon.png) ![](AppIcon.png) ![](AppIcon.png) ![](AppIcon.png) CV --- ![](CopenhagenJosh.png) CONTACT -------- ``` CSS ``` body { font-family: 'Lato', sans-serif; font-style:italic } html{ padding: 0px; margin: 0px; background: url(sky.jpg); background-size:contain; background-repeat:no-repeat; display: inline-block; } /* FOOTER NAVIGATION */ #nav-div { opacity: 1; font-size: 15px; } #nav-div h1{ color: lightskyblue; cursor: pointer; width: px; float: left; margin-left: 600px; margin-top: 0px; margin-bottom: 0px; padding: 0px; font-size: 25px; } #nav-div h1:hover{ color: white; transition:all 0.40s; } #nav-div ul{ margin: 0px; padding: 0px; width: 100%; height: 80px; background: ; line-height: 80px; float:right; border-bottom: px solid black; margin-right: 300px; } #nav-div ul a{ text-decoration: none; color: lightskyblue; padding: 25px; } #nav-div ul a:hover{ color:white; transition:all 0.40s; font-style:italic; } #nav-div ul li { list-style-type: none; display: inline-block; float: right; font-style:normal; font-size: 15px; } #main-left{ float: left; display: inline-block; width: 40%; height: 250px; margin-top: 100px; } #main-right{ float: left; padding: px; display: inline-block; width: 22%; height: 175px; margin-top: 100px; } #main-right img{ float: left; margin-left ; width: 150px; height: 150px; padding: 10px; display: inline-block; } #main-social{ float: left; display: inline-block; margin-left: 7%; width: 10%; height: 250px; margin-top: 100px; } #main-left h2{ width: 300px; height: 50px; font-size: 35px; color: white; display: inline-block; margin-left: 300px; margin-right: %; margin-top: 0; margin-bottom:0; float: left; } #main-left p{ width: 250px; height: 100px; margin-top: px; margin-left: 300px; margin-right: 5%; font-size: 17px; color: darkgrey; display: inline-block; position: relative; } #main-left h3{ margin-top: px; width: 150px; height: 30px; margin-left: 300px; font-size: 20px; color: white; display: inline-block; float: left; } #interests { width: 100%; height:125px; margin-top:px; margin-left: 300px; display: inline-block; } #interests img{ padding: 14px; Height: 100px; Width: 100px; margin-bottom: 0px; } #intereststitles{ width: 100%; height:100px; margin-top:0px; margin-left: 300px; display: inline-block; color: white; } #intereststitles p{ padding: 14px; margin-top:0px; Height: 10px; Width: 100px; font-style: normal; display: inline-block; text-align: center } #portfolio { width: 100%; height: 100%; background-color: gray; opacity: 1; } #portfolio-left{ background-color: gray; float: left; width: 25%; height: 100px; } #portfolio-left h2{ width: 200px; height: 50px; font-size: 35px; 
color: white; display: inline-block; margin-left: 200px; margin-right: ; margin-top: 25px; margin-bottom:0; float: left; } #portfolio-right{ background-color: gray; float: right; width: 75%; height: 100px; margin-top: 0px; } #portfolio-right p{ font-size: 20px; color: white; display: inline-block; margin-left: 0%; margin-right: %; margin-top: 36px; margin-bottom:0; padding: 0px; float: left; } #portfolio-1{ margin-left: 0%; width: 100%; display: inline-block; text-align: center; padding:0%; margin:0; background-color: gray; } #portfolio-1 img{ display: inline-block; padding: 0px; width:33%; } #CV { width: 100%; height: 900px; background-color: skyblue; opacity: 1; } #CV-left{ float: left; width: 430px; height: 100px; } #CV-left h2{ font-size: 35px; color: white; display: inline-block; margin-left: 200px; margin-right: ; margin-top: 25px; margin-bottom:0; float: left; } #CV-right{ float: right; width: 75%; height: 100px; margin-top: 0px; } #CV-right p{ font-size: 20px; color: white; display: inline-block; margin-left: 0%; margin-right: %; margin-top: 40px; margin-bottom:0; padding: 0px; float: left; } #contact { width: 100%; height: 500px; background-color: mediumpurple; opacity: 1; } #contact-left{ float: left; width: 50%; height: 900px; } #contact-left h2{ font-size: 35px; color: white; display: inline-block; margin-left: 200px; margin-right: ; margin-top: 25px; margin-bottom:0; float: left; } #contact-right{ float: right; width: 50%; height: 900px; } ```<issue_comment>username_1: You need a `div` as a wrapper for your webpage, and set a fixed `width` or `max-width` if you want the content to be able to become smaller on smaller screens with `margin` set as auto for `margin-left` and `margin-right` to keep the page centered. Which is precisely what the webpage you mentioned does - [![Wrapper](https://i.stack.imgur.com/1UVba.png)](https://i.stack.imgur.com/1UVba.png) Upvotes: 2 <issue_comment>username_2: You can center center content horizontally in CSS by putting it inside a block-level element (like a `div`), with left and right margins set to "auto". Take a look at [Centering in CSS: A Complete Guide > Horizontally > Block level element](https://css-tricks.com/centering-css-complete-guide/#horizontal-block). In your case, you would probably want to add a `div` around all the content you currently have in the `body`, give it a set width, and add `margin: 0 auto`. Note that you probably want to use `max-width` instead of just `width` to support smaller browsers (see [CSS Layout - width and max-width](https://www.w3schools.com/css/css_max-width.asp)). In the example site you mentioned, there's a `div` with an id of "wrapper" that contains all the centered content. Upvotes: 0
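A minimal version of the wrapper idea from both answers, applied to this page: put everything that is currently inside `<body>` into one container and center that container. The class name and width below are only examples.

```
<body>
  <div class="page-wrapper">
    <!-- all existing page content goes here -->
  </div>
</body>
```

```
.page-wrapper {
  max-width: 960px;   /* pick the design width you want */
  margin: 0 auto;     /* equal left/right margins keep the layout centered at any zoom level */
}
```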
2018/03/18
416
1,518
<issue_start>username_0: I'm trying to show an alert when a button is clicked, but it is not working for some reason:

```
```

This is my jQuery code, I haven't given any id or class to it.

```
$(":button").click(function(){
 $(":button").alert("pressed");
});
```
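For the question above: `alert` is a global browser function, not a jQuery method, so calling `.alert()` on a jQuery object throws an error. A minimal working handler (assuming jQuery is loaded and the code runs after the button exists in the DOM, e.g. inside a ready handler) would be:

```
$(function(){                      // run after the DOM is ready
  $(":button").click(function(){
    alert("pressed");              // plain alert(), not $(...).alert(...)
  });
});
```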
2018/03/18
576
1,965
<issue_start>username_0: Having a bit of trouble passing props down from a parent component to a child component with es6. ```js // Parent component const Carousel = () => ( ); // Child component const CarouselItem = () => ( ); // Child component where prop is inserted const Title = ({ title }) => ( [{title} -------](/mowat.html) ); ``` Do I need to pass the prop from the Carousel to the CarouselItem then to the Title component?<issue_comment>username_1: The answer is yes. With React 16.2 and before, you have to explicitly pass props down the component tree to reach a component descendant. ``` const CarouselItem = ({ title }) => ( {/\* ^^^^^^^^^^^^^ pass prop to grandchild \*/} ); ``` Starting from React 16.3, you can use the [ContextApi](https://medium.freecodecamp.org/replacing-redux-with-the-new-react-context-api-8f5d01a00e8c) to skip this and inject state to designate components and circumvent the actual component tree. This functionality is also available with libraries like [Redux](http://redux.js.org) or [Mobx](https://mobx.js.org) which take advantage of the *unofficial* [context api](https://reactjs.org/docs/context.html) (not the same as the one coming out in 16.3). Upvotes: 3 [selected_answer]<issue_comment>username_2: Yes, the most straight-forward way would be to pass the prop through `CarouselItem` as you say. ``` const CarouselItem = (props) => ``` However, doing this multiple levels deep can get a little unwieldy, and has even gotten the name "prop drilling". To solve that, a new context API is being added in React 16.3, which allows passing data down through multiple levels. [Here's a good blog post about that.](https://medium.com/dailyjs/reacts-%EF%B8%8F-new-context-api-70c9fe01596b) Upvotes: 1 <issue_comment>username_3: use spread operator `{...props}` ``` const CarouselItem = (props) => ( ); ``` CarouselItem is parent and Title is child recieving all props from parent Upvotes: 1
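A self-contained sketch of the "pass it down each level" approach described above. The markup inside each component is only illustrative, since the full JSX of the original components is not shown; the prop names follow the question.

```
const Title = ({ title }) => (
  <a href="/mowat.html">
    <h2>{title}</h2>
  </a>
);

const CarouselItem = ({ title }) => (
  <div className="carousel-item">
    <Title title={title} /> {/* pass the prop down another level */}
  </div>
);

const Carousel = () => (
  <div className="carousel">
    <CarouselItem title="My first slide" />
  </div>
);
```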
2018/03/18
637
2,136
<issue_start>username_0: I have a corpus of text that I'm parsing with a regular expression to find the most common words. Currently I'm using [`.match(/(?!'.*')\b\[\w'\]+\b/g)`](https://regex101.com/r/5rgYDx/1). My problem is that `\w` does not match on non-alphanumeric characters and my emoji never get parsed.

Specifically, I'm trying to make a regex that will identify words (including contractions) and emoji, separating on the word boundary. As an example I'd like to be able to take `"Hey there! , let's go to the moon "` and get

```
Array(
 "Hey",
 "there",
 "",
 "let's",
 "go",
 "to",
 "the",
 "moon",
 "",
 "")
```
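One way to approach this in modern JavaScript engines (ES2018+) is a Unicode property escape combined with the `u` flag. This is only a sketch: the emoji below are stand-ins for the ones in the original example, and multi-codepoint emoji (skin tones, ZWJ sequences) would need extra handling.

```
const text = "Hey there! 😀, let's go to the moon 🚀🌕";

const tokens = text.match(/[\w']+|\p{Extended_Pictographic}/gu);
// -> ["Hey", "there", "😀", "let's", "go", "to", "the", "moon", "🚀", "🌕"]
```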
2018/03/18
1,267
3,855
<issue_start>username_0: I have a table with cus and rus. For some cus there are many rows but all of the rus are none. In this case I would like to return a message, e.g. "no ru unit." For cus that have rus I would like to return their rows. But not the rows that have none for rus.... I made up some data to work with. In this case I should get all rows for cu-1, one row with the message for cu-2, and three rows for cu-3. ``` create table cu_ru ( cu varchar2(30), ru varchar2(30) ) insert into cu_ru (cu, ru) values ('cu-1', 'ru-1b'); insert into cu_ru (cu, ru) values ('cu-1', 'ru-1a'); insert into cu_ru (cu, ru) values ('cu-2', 'None'); insert into cu_ru (cu, ru) values ('cu-2', 'None'); insert into cu_ru (cu, ru) values ('cu-2', 'None'); insert into cu_ru (cu, ru) values ('cu-3', 'ru-3a'); insert into cu_ru (cu, ru) values ('cu-3', 'None'); insert into cu_ru (cu, ru) values ('cu-3', 'None'); insert into cu_ru (cu, ru) values ('cu-3', 'ru-3b'); insert into cu_ru (cu, ru) values ('cu-3', 'ru-3c'); insert into cu_ru (cu, ru) values ('cu-3', 'None'); insert into cu_ru (cu, ru) values ('cu-3', 'None'); ``` My not working effort: ``` select distinct t.cu, (case when ( select count(cu) from cu_ru where ru not like 'None' and cu = t.cu group by cu ) is null then 'no ru unit' else ru end) as ru from cu_ru t order by cu, ru ``` The output is: ``` CU RU cu-1 ru-1a cu-1 ru-1b cu-2 no ru unit cu-3 None cu-3 ru-3a cu-3 ru-3b cu-3 ru-3c ``` How can I drop "cu-3 None" from my output<issue_comment>username_1: Hmmm. I'm inclined to think: ``` select cr.* from cu_ru cr where cr.ru <> 'None' union all select distinct cr.cu, 'None' from cu_ru cr where cr.ru = 'None' and not exists (select 1 from cu_ru cr2 where cr2.cu = cr.cu and cr2.ru <> 'None'); ``` **[EDITED by LF, to show that it really returns 3 rows for CU\_2]** ``` SQL> select cr.* 2 from cu_ru cr 3 where cr.ru <> 'None' 4 union all 5 select cr.cu, 'None' 6 from cu_ru cr 7 where not exists (select 1 from cu_ru cr2 where cr2.cu = cr.cu and cr2.ru <> 'None'); CU RU ------------------------------ ------------------------------ cu-1 ru-1b cu-1 ru-1a cu-3 ru-3a cu-3 ru-3b cu-3 ru-3c cu-2 None cu-2 None cu-2 None 8 rows selected. SQL> ``` Upvotes: 0 <issue_comment>username_2: How about this: ``` SQL> select cu, decode(ru, 'None', 'no ru unit', ru) ru 2 from (select distinct cu, ru, 3 count(distinct ru) over (partition by cu) cdr 4 from cu_ru 5 ) 6 group by cu, ru, cdr 7 having sum(decode(ru, 'None', 1, 0)) = 0 8 or cdr = 1 9 order by 1, 2; CU RU ------------------------------ ------------------------------ cu-1 ru-1a cu-1 ru-1b cu-2 no ru unit cu-3 ru-3a cu-3 ru-3b cu-3 ru-3c 6 rows selected. SQL> ``` Upvotes: 3 [selected_answer]<issue_comment>username_3: This is a simplified variation of @username_2's query using Standard SQL, should run in any DBMS supporting Windowed Aggregates: ``` SELECT cu, CASE WHEN ru = 'None' THEN 'no ru unit' ELSE ru end FROM ( -- get distinct cu/ru combinations SELECT cu, ru, -- check if there's any other value besides None for a cu MAX(CASE WHEN ru <> 'None' THEN 1 ELSE 0 END) Over (PARTITION BY cu) AS OtherFlag FROM cu_ru GROUP BY cu, ru ) dt WHERE ru <> 'None' -- don't show None OR OtherFlag = 0 -- unless there's only None ``` Upvotes: 2
2018/03/18
539
2,000
<issue_start>username_0: I found a gif I would like to play on my LaunchScreen.storyboard, and since my research supported that Swift 4 didn't natively support playing gifs, I found a CocoaPod named [SwiftGif](https://github.com/swiftgif/SwiftGif) that auto-plays gifs in a UIImageView. With SwiftGif, you have to programmatically assign the gif to the UIImageView. `let jeremyGif = UIImage.gif(name: "jeremy")` If you just select the gif from the dropdown of images in the Storyboard, it won't play. The problem is that I can't give the LaunchScreen.storyboard file (or a view inside of it) a custom class, or I will get [this error](https://i.stack.imgur.com/FRprn.png). What can I do? I'm using Xcode 9.2, Swift 4, MacBook Pro (13-inch, 2017, Four Thunderbolt 3 Ports), and macOS High Sierra 10.13.2. Thank you!<issue_comment>username_1: Unfortunately the launch screen only allows for static images to be placed. Animations or custom code is not allowed. However, you can make the feeling of an animated launch screen by presenting a view with the same image and position as the launch screen and animating the view right after launch. Upvotes: 3 [selected_answer]<issue_comment>username_2: In order to implement a GIF animation on the launch screen, you'll need to do two things: 1. Add the first frame of the GIF as a static image in the launch screen 2. When the app launches add a view in the root view controller that loads the animated GIF at the same spot that you’ve put the static image in the launch screen This way, when the app launches the user will get the impression that the launch screen is animating. You can read a detailed explanation on how to do it [on this blog post](https://www.amerhukic.com/animating-launch-screen-using-gif). I've also set up [an example project on GitHub](https://github.com/amerhukic/AnimatedGifLaunchScreen-Example) that shows how to implement it using the [SwiftyGif](https://github.com/swiftgif/SwiftGif) pod that you used. Upvotes: 2
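A bare-bones sketch of the "fake it right after launch" idea from both answers, using only UIKit rather than a GIF library: overlay an image view in the root view controller, animate a frame sequence, then remove it. The frame names, count and timing here are placeholders.

```
// In the root view controller, e.g. in viewDidLoad()
let frames = (1...10).compactMap { UIImage(named: "launch_frame_\($0)") }
let imageView = UIImageView(frame: view.bounds)
imageView.image = UIImage.animatedImage(with: frames, duration: 1.0)
view.addSubview(imageView)

// Remove the overlay once the animation has played
DispatchQueue.main.asyncAfter(deadline: .now() + 1.0) {
    imageView.removeFromSuperview()
}
```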
2018/03/18
528
1,946
<issue_start>username_0: I created a simple Maven project with Eclipse (Oxygen.2 Release (4.7.2)) with the standard src/main/resources folder and added it to the classpath. The problem is that Eclipse adds an exclusion pattern of `**` to the src/main/resources folder. Here are some pics to better explicate the situation: [![enter image description here](https://i.stack.imgur.com/LXeVs.png)](https://i.stack.imgur.com/LXeVs.png) [![enter image description here](https://i.stack.imgur.com/9dkFS.png)](https://i.stack.imgur.com/9dkFS.png) You can reproduce the situation yourself, just remember to run Maven -> Update project... According to [this](https://stackoverflow.com/questions/7754255/maven-m2eclipse-excludes-my-resources-all-the-time) answer this is not a bug but it's the correct behaviour. So the question is: **how do i read a resource file from src/main/resources?** I can't use the explicit path src/main/resources since there will be no such path in the compiled results and **I can't use .getResource or .getResourceAsStream since Eclipse put an exclusion pattern of `**` on that path**.<issue_comment>username_1: I write this answer in case someone else has the same problem. I found that the exclusion pattern in Eclipse on src/main/resource folder is normal (see the answer linked above). The exclusion means that it's not Eclipse handling the src/main/resources folder compilation but it's Maven (the Maven plugin of Eclipse to be precise, M2Eclipse). The fact that those resources weren't found in the classpath was due to an exclusion present in the pom.xml: ``` src/main/resources \*\*/\* ``` Upvotes: 4 [selected_answer]<issue_comment>username_2: step 1. delete the project from eclipse step 2. import the project freshly step 3. Go to the folder required to be added as source folder. [![enter image description here](https://i.stack.imgur.com/r00MW.jpg)](https://i.stack.imgur.com/r00MW.jpg) Upvotes: -1
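Once the exclusion mentioned in the first answer is removed from the pom.xml (and the project is updated via Maven -> Update Project so the resources are copied to the output folder), a file in `src/main/resources` is read through the classpath rather than through the source path. A typical sketch, where `MyClass` and `config.properties` are just example names and exception handling is elided:

```
import java.io.InputStream;
import java.util.Properties;

// src/main/resources/config.properties ends up at the classpath root
try (InputStream in = MyClass.class.getResourceAsStream("/config.properties")) {
    if (in == null) {
        throw new IllegalStateException("config.properties not found on the classpath");
    }
    Properties props = new Properties();
    props.load(in);
}
```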
2018/03/18
354
1,269
<issue_start>username_0: I've got an expression like this:

```
/* pierwszy */ using System;/* drugi */
```

and I want to match all comments in this line. But this regex:

```
\/\*(.*)\*\/
```

is unfortunately not working, because it matches that:

```
pierwszy */ using System;/* drugi
```

So, as you can see, it matches the whole expression. Anybody knows how to write a regex to match subgroups, not the whole expression?
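The usual fix for this is to make the quantifier non-greedy (`.*?`) so that each `/* ... */` pair is matched on its own instead of spanning from the first `/*` to the last `*/`. A JavaScript-flavoured sketch (the same idea works in most PCRE-style engines):

```
const line = '/* pierwszy */ using System;/* drugi */';

const comments = line.match(/\/\*(.*?)\*\//g);
// -> ["/* pierwszy */", "/* drugi */"]
```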
2018/03/18
1,863
5,251
<issue_start>username_0: I have this project structure: ``` /project | |____/bin | | |____/obj | | |____/include | | | |____aaa.h | | | |____bbb.h | | | |____ccc.h | | |____/src | | | |____aaa.c | | | |____bbb.c | | | |____ccc.c | | |____/test | | | |____aaa_test.c | | | |____bbb_test.c | | | |____ccc_test.c | | |____Makefile ``` I need to write a Makefile to test every module in this project. For each file in the `test` folder I should compile the relative module (same name without "\_test") in `src` folder and create the executable in the folder `bin`. List of commands to execute: ``` gcc -Wall -g -I include -c src/aaa.c -o obj/aaa.o gcc -Wall -g -I include -c test/aaa_test.c -o obj/aaa_test.o gcc obj/aaa.o obj/aaa_test.o -o bin/aaa_test.exe gcc -Wall -g -I include -c src/bbb.c -o obj/bbb.o gcc -Wall -g -I include -c test/bbb_test.c -o obj/bbb_test.o gcc obj/bbb.o obj/bbb_test.o -o bin/bbb_test.exe gcc -Wall -g -I include -c src/ccc.c -o obj/ccc.o gcc -Wall -g -I include -c test/ccc_test.c -o obj/ccc_test.o gcc obj/ccc.o obj/ccc_test.o -o bin/ccc_test.exe ``` My current Makefile: ``` CC := gcc CCFLAGS := -Wall -g INC := -I include BINDIR := bin OBJDIR := obj SRCDIR := src TESTDIR := test all: $(BINDIR)/aaa_test.exe $(BINDIR)/bbb_test.exe $(BINDIR)/ccc_test.exe NAME = aaa $(BINDIR)/$(NAME)_test.exe: $(OBJDIR)/$(NAME).o $(OBJDIR)/$(NAME)_test.o $(CC) $^ -o $@ $(OBJDIR)/$(NAME).o: $(SRCDIR)/$(NAME).c $(CC) $(CCFLAGS) $(INC) -c $< -o $@ $(OBJDIR)/$(NAME)_test.o: $(TESTDIR)/$(NAME)_test.c $(CC) $(CCFLAGS) $(INC) -c $< -o $@ NAME = bbb $(BINDIR)/$(NAME)_test.exe: $(OBJDIR)/$(NAME).o $(OBJDIR)/$(NAME)_test.o $(CC) $^ -o $@ $(OBJDIR)/$(NAME).o: $(SRCDIR)/$(NAME).c $(CC) $(CCFLAGS) $(INC) -c $< -o $@ $(OBJDIR)/$(NAME)_test.o: $(TESTDIR)/$(NAME)_test.c $(CC) $(CCFLAGS) $(INC) -c $< -o $@ NAME = ccc $(BINDIR)/$(NAME)_test.exe: $(OBJDIR)/$(NAME).o $(OBJDIR)/$(NAME)_test.o $(CC) $^ -o $@ $(OBJDIR)/$(NAME).o: $(SRCDIR)/$(NAME).c $(CC) $(CCFLAGS) $(INC) -c $< -o $@ $(OBJDIR)/$(NAME)_test.o: $(TESTDIR)/$(NAME)_test.c $(CC) $(CCFLAGS) $(INC) -c $< -o $@ ``` Can I read the name of each file in the `test` folder and optimize my Makefile?<issue_comment>username_1: That Makefile looks strange to begin with. I'd advise you get rid of `$(OBJDIR)` and `$(BINDIR)` so Make can just compile to the current directory, and we can use its built-in rules to save us having to write our own. And we can use `$(wildcard )` to find the `*_test.c` source files and determine the appropriate targets using `$(patsubst )` like this: ``` CC := gcc CFLAGS := -Wall -g CFLAGS += -Iinclude SRCDIR := src TESTDIR := test # Source files locations VPATH = $(SRCDIR):$(TESTDIR) all-tests: $(patsubst $(TESTDIR)/%.c,%,$(wildcard $(TESTDIR)/*_test.c)) %_test: %_test.o %.o $(LINK.c) $^ $(LDLIBS) -o $@ ``` (Note that I changed `CCFLAGS` to `CFLAGS`, so we can just allow the default `%.o: %.c` rule to do its thing). It's now much shorter and simpler, and lower-maintenance. Upvotes: 0 <issue_comment>username_2: I see nothing wrong with having regular source files (`foo.c`), test sources (`foo_test.c`), header files (`foo.h`), object files (`foo.o`) and executables (`foo.exe`) in separate directories, and it's not difficult to write a makefile that will handle it. 
Let's start with your current makefile: ``` NAME = aaa $(BINDIR)/$(NAME)_test.exe: $(OBJDIR)/$(NAME).o $(OBJDIR)/$(NAME)_test.o $(CC) $^ -o $@ $(OBJDIR)/$(NAME).o: $(SRCDIR)/$(NAME).c $(CC) $(CCFLAGS) $(INC) -c $< -o $@ $(OBJDIR)/$(NAME)_test.o: $(TESTDIR)/$(NAME)_test.c $(CC) $(CCFLAGS) $(INC) -c $< -o $@ # repeated for bbb and ccc ``` But instead of copy-and-paste with a `NAME` variable, we'll write these as [pattern rules](https://www.gnu.org/software/make/manual/make.html#Pattern-Rules): ``` $(BINDIR)/%_test.exe: $(OBJDIR)/%.o $(OBJDIR)/%_test.o $(CC) $^ -o $@ $(OBJDIR)/%.o: $(SRCDIR)/%.c $(CC) $(CCFLAGS) $(INC) -c $< -o $@ $(OBJDIR)/%_test.o: $(TESTDIR)/%_test.c $(CC) $(CCFLAGS) $(INC) -c $< -o $@ # no repetition needed ``` And if you want you can combine those last two rules into one, using the [vpath directive](https://www.gnu.org/software/make/manual/make.html#Selective-Search) to tell Make where to search for sources: ``` $(OBJDIR)/%.o: %.c $(CC) $(CCFLAGS) $(INC) -c $< -o $@ vpath %.c $(SRCDIR) $(TESTDIR) ``` Now instead of spelling out all of the tests: ``` all: $(BINDIR)/aaa_test.exe $(BINDIR)/bbb_test.exe $(BINDIR)/ccc_test.exe ``` You can have Make look in the test directory with [`wildcard`](https://www.gnu.org/software/make/manual/make.html#Wildcard-Function): ``` TEST_SOURCES := $(wildcard $(TESTDIR)/*.c) TESTS := $(patsubst $(TESTDIR)/%_test.c,$(BINDIR)/%_test.exe,$(TEST_SOURCES)) all: $(TESTS) ``` P.S. You can also use wildcard to find all sources of a module according to your naming convention, like src/aaa\_1.c and src/aaa\_2.c, but that's another matter and this answer is getting long. Upvotes: 3 [selected_answer]
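Putting the pieces of this answer together, the whole makefile could end up looking roughly like the untested sketch below. It assumes the directory layout from the question, that `bin/` and `obj/` already exist, and that recipe lines are indented with real tabs.

```
CC := gcc
CCFLAGS := -Wall -g
INC := -I include

BINDIR := bin
OBJDIR := obj
SRCDIR := src
TESTDIR := test

TEST_SOURCES := $(wildcard $(TESTDIR)/*_test.c)
TESTS := $(patsubst $(TESTDIR)/%_test.c,$(BINDIR)/%_test.exe,$(TEST_SOURCES))

all: $(TESTS)

$(BINDIR)/%_test.exe: $(OBJDIR)/%.o $(OBJDIR)/%_test.o
	$(CC) $^ -o $@

$(OBJDIR)/%.o: %.c
	$(CC) $(CCFLAGS) $(INC) -c $< -o $@

vpath %.c $(SRCDIR) $(TESTDIR)
```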
2018/03/18
1,117
3,400
<issue_start>username_0: SQL PLUS: I am trying to return ONLY the most frequent 'MAKE' from one table and the 'NAME' of those customers from another table. This is what I have:

```
SELECT sv.make, c.first, c.MI, c.last
FROM Sales s INNER JOIN Sale_Vehicles sv
ON s.VIN = sv.VIN
INNER JOIN Customers c
ON s.cust_ID = c.cust_ID
GROUP BY sv.make, c.first, c.MI, c.last
ORDER BY sv.make, COUNT (*) DESC;
```

This returns the most frequent 'MAKE' at the top of the results with the 2nd and 3rd below it. How do I only return the most frequent?
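One possible approach, sketched with the table and column names from the posted query and assuming Oracle (since SQL*Plus is mentioned): first find the single most frequent make in a subquery, then return the customers who bought that make. Ties for the most frequent make are not handled here.

```
SELECT sv.make, c.first, c.MI, c.last
FROM   Sales s
JOIN   Sale_Vehicles sv ON s.VIN = sv.VIN
JOIN   Customers c      ON s.cust_ID = c.cust_ID
WHERE  sv.make = (SELECT make
                  FROM (SELECT sv2.make
                        FROM   Sales s2
                        JOIN   Sale_Vehicles sv2 ON s2.VIN = sv2.VIN
                        GROUP BY sv2.make
                        ORDER BY COUNT(*) DESC)
                  WHERE ROWNUM = 1);
```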
2018/03/18
490
1,678
<issue_start>username_0: i've just started learning Laravel and I have problem generating seed for my test table. Console error says: "Base table or view not found: 1146 Table 'laravel.testms' doesn't exists..." My table is called "testm" - I have no idea why it looks for testm**s** TestmFactory.php ``` use Faker\Generator as Faker; $factory->define(App\Testm::class, function (Faker $faker) { return [ 'test' => $faker->paragraph ]; }); ``` TestmTableSeeder.php ``` use Illuminate\Database\Seeder; class TestmTableSeeder extends Seeder { /** * Run the database seeds. * * @return void */ public function run() { factory(App\Testm::class, 5)->create(); } } ``` DatabaseSeeder.php ``` public function run() { $this->call(LinksTableSeeder::class); $this->call(TestmTableSeeder::class); } ``` app/Testm.php ``` class Testm extends Model { // Below line fixed my code :-) protected $table = 'testm'; protected $fillable = [ 'test' ]; } ```<issue_comment>username_1: Try adding this to your model ``` protected $table = 'testm'; ``` Upvotes: 2 <issue_comment>username_2: From [Laravels documentation](https://laravel.com/docs/5.6/eloquent#eloquent-model-conventions): > > By convention, the "snake case", plural name of the class will be used as the table name unless another name is explicitly specified. > > > And in order to explicitly define the table name in the model, `Testm.php` in your case, you would want to add the following code to the class: ``` protected $table = 'testm'; ``` Hope this helps! Upvotes: 5 [selected_answer]
2018/03/18
480
1,669
<issue_start>username_0: In Excel, I have to add up the time allocated to different departments per month and also annually. The workbook has 12 sheets, one for each month from January to December. It looks like this (Jan):

[![January Sheet](https://i.stack.imgur.com/RZ2iC.png)](https://i.stack.imgur.com/RZ2iC.png)

The same is repeated for the other months of the year, but the time allocated may be different and some departments may not be mentioned at all. The task I have to do is to add up all the time effort allocated to the different departments in the different months and show it on a sheet. Something along the lines of:

[![Time Effort for different months](https://i.stack.imgur.com/6DmVl.png)](https://i.stack.imgur.com/6DmVl.png)

And at last, in another sheet, I have to add up all the time effort for the different departments for the whole year. Something along the lines of:

[![Time effort for the whole year](https://i.stack.imgur.com/cyC3s.png)](https://i.stack.imgur.com/cyC3s.png)

I was wondering if there is a way I could achieve this using VBA scripting. Kindly help.
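Since the exact sheet layout is only shown in the screenshots, the VBA sketch below makes assumptions that would need adjusting: each month sheet is named after the month, department names are in column A and the time effort in column B starting at row 2, and the yearly totals are written to an existing sheet called "Summary".

```
Sub SummariseYear()
    Dim monthNames As Variant, m As Variant, k As Variant
    Dim ws As Worksheet, out As Worksheet
    Dim totals As Object
    Dim r As Long, lastRow As Long
    Dim dept As String

    Set totals = CreateObject("Scripting.Dictionary")
    monthNames = Array("January", "February", "March", "April", "May", "June", _
                       "July", "August", "September", "October", "November", "December")

    For Each m In monthNames
        Set ws = ThisWorkbook.Worksheets(m)
        lastRow = ws.Cells(ws.Rows.Count, "A").End(xlUp).Row
        For r = 2 To lastRow
            dept = ws.Cells(r, "A").Value
            If Len(dept) > 0 Then
                totals(dept) = totals(dept) + ws.Cells(r, "B").Value
            End If
        Next r
    Next m

    Set out = ThisWorkbook.Worksheets("Summary")
    r = 2
    For Each k In totals.Keys
        out.Cells(r, "A").Value = k
        out.Cells(r, "B").Value = totals(k)
        r = r + 1
    Next k
End Sub
```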
2018/03/18
516
1,257
<issue_start>username_0: I have a huge tab-delimited file such as the following : ``` 3 Line1 0 100 A 4 Line1 100 200 A 7 Line1 200 300 B 2 Line1 300 400 B 12 Line1 400 500 C 10 Line1 500 600 C ``` For all the rows that have the letters (A, B, ect), I need to combine their values based upon the number in the first column. For example, what should be the result is below: ``` 7 A 9 B 22 C ``` I am currently using Pandas + Python to figure this out.<issue_comment>username_1: Suppose the df is as below: ``` val id line col1 col2 0 3 Line1 0 100 A 1 4 Line1 100 200 A 2 7 Line1 200 300 B 3 2 Line1 300 400 B 4 12 Line1 400 500 C 5 10 Line1 500 600 C ``` Then, I think you can use `groupby` followed by `sum`: ``` result_df = df.groupby('col2')['val'].sum().to_frame('Sum') print(result_df) ``` Result: ``` Sum col2 A 7 B 9 C 22 ``` Upvotes: 2 [selected_answer]<issue_comment>username_2: You have to use *join()* method ``` Table1.join(table2.set_index(''key"),on='key') ``` Upvotes: 0 <issue_comment>username_3: ``` df = pd.DataFrame({'Col1':[3,4,7,2,12,10],'Col2':['A','A','B','B','C','C']}) df.groupby('Col2').sum() ``` Upvotes: 0
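If the data is still in the tab-delimited file rather than already loaded into a DataFrame, the read step can be combined with the groupby from the accepted answer. The column names below are invented, since the file has no header row, and `data.tsv` is a placeholder path.

```
import pandas as pd

cols = ['val', 'id', 'start', 'end', 'letter']
df = pd.read_csv('data.tsv', sep='\t', header=None, names=cols)

result = df.groupby('letter')['val'].sum()
print(result)
# letter
# A     7
# B     9
# C    22
```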
2018/03/18
1,703
6,221
<issue_start>username_0: Given is a struct which holds a struct with some byte code and an instruction pointer. It implements the pattern of fetch, decode, and execute: ``` use std::convert::TryFrom; /// Trait for a virtual machine. pub struct VirtualMachine { code: CodeMemory, instruction_pointer: usize, } impl VirtualMachine { pub fn new(byte_code: Vec) -> VirtualMachine { VirtualMachine { code: CodeMemory::new(byte\_code), instruction\_pointer: 0, } } /// Run a given program. pub fn run(&mut self) -> Result<(), &str> { loop { let opcode = self.fetch(); if opcode.is\_err() { return Err(opcode.unwrap\_err()); } let instruction = self.decode(opcode.unwrap()); if instruction.is\_err() { return Err("Bad opcode!"); } let instruction = instruction.unwrap(); if instruction == Instruction::Halt { return Ok(()); } self.execute(instruction); } } fn fetch(&mut self) -> Result { self.code.fetch(self.instruction\_pointer) } fn decode(&mut self, opcode: u8) -> Result { Instruction::try\_from(opcode) } fn execute(&mut self, instruction: Instruction) { self.inc\_instruction\_pointer(); match instruction { Instruction::Nop => (), Instruction::Halt => panic!("The opcode 'halt' should exit the loop before execute!"), } } fn inc\_instruction\_pointer(&mut self) { self.instruction\_pointer += 1; } } struct CodeMemory { byte\_code: Vec, } impl CodeMemory { fn new(byte\_code: Vec) -> CodeMemory { CodeMemory { byte\_code } } fn fetch(&self, index: usize) -> Result { if index < self.byte\_code.len() { Ok(self.byte\_code[index]) } else { Err("Index out of bounds!") } } } #[derive(Debug, PartialEq)] pub enum Error { UnknownInstruction(u8), UnknownMnemonic(String), } #[derive(Debug, Copy, Clone, PartialEq)] pub enum Instruction { Nop, // ... Halt, } impl TryFrom for Instruction { type Error = Error; fn try\_from(original: u8) -> Result { match original { 0x01 => Ok(Instruction::Nop), 0x0c => Ok(Instruction::Halt), n => Err(Error::UnknownInstruction(n)), } } } ``` The compiler complains that: ```none error[E0499]: cannot borrow `*self` as mutable more than once at a time --> src/lib.rs:20:26 | 18 | pub fn run(&mut self) -> Result<(), &str> { | - let's call the lifetime of this reference `'1` 19 | loop { 20 | let opcode = self.fetch(); | ^^^^ mutable borrow starts here in previous iteration of loop ... 23 | return Err(opcode.unwrap_err()); | ------------------------ returning this value requires that `*self` is borrowed for `'1` error[E0499]: cannot borrow `*self` as mutable more than once at a time --> src/lib.rs:26:31 | 18 | pub fn run(&mut self) -> Result<(), &str> { | - let's call the lifetime of this reference `'1` 19 | loop { 20 | let opcode = self.fetch(); | ---- first mutable borrow occurs here ... 23 | return Err(opcode.unwrap_err()); | ------------------------ returning this value requires that `*self` is borrowed for `'1` ... 26 | let instruction = self.decode(opcode.unwrap()); | ^^^^ second mutable borrow occurs here error[E0499]: cannot borrow `*self` as mutable more than once at a time --> src/lib.rs:38:13 | 18 | pub fn run(&mut self) -> Result<(), &str> { | - let's call the lifetime of this reference `'1` 19 | loop { 20 | let opcode = self.fetch(); | ---- first mutable borrow occurs here ... 23 | return Err(opcode.unwrap_err()); | ------------------------ returning this value requires that `*self` is borrowed for `'1` ... 
38 | self.execute(instruction); | ^^^^ second mutable borrow occurs here ``` I think I understand the problem described by the compiler, but I can't find a solution or pattern how to implement this in Rust in a safe way. Is it possible to mutate a struct field inside a loop? I'm using Rust 1.34 to use the `TryFrom` trait.<issue_comment>username_1: You probably do not want to be returning `Result<_, &str>` from any of your functions. If you use `Result<_, &'static str>` or `Result<_, String>` you should have much less fighting with the borrow checker. Even better would be to use a dedicated error type, but that is beyond the scope of this answer. The reason that returning a `Result<_, &str>` is problematic is that it ends up tying the lifetime of the return value to the lifetime of `self`, which limits how you can use `self` during the lifetime of the result. Upvotes: 0 <issue_comment>username_2: There are two things preventing your code sample from compiling. First, you have a number of methods declared as taking `&mut self` when they don't need to be. * `VirtualMachine::fetch` only calls `CodeMemory::fetch` which does not require a mutable self. * `VirtualMachine::decode` does not even access any of the fields of `VirtualMachine` Secondly, as pointed out in [@fintella's answer](https://stackoverflow.com/a/49352170/155423), `CodeMemory::fetch` returns a string slice as an error. You don't specify a lifetime of this string slice so it is inferred to be the same as the lifetime of the `CodeMemory` instance, which in turn is tied back to the lifetime of the `VirtualMachine` instance. The effect of this is that the lifetime of the immutable borrow taken when you call `fetch` lasts for the whole scope of the return value from `fetch` - in this case pretty much the whole of your loop. In this case, the string slice you are returning as an error message is a string literal, which has static scope, so you can fix this by changing the definition of `CodeMemory::fetch` to: ``` fn fetch(&self, index: usize) -> Result { /\* ... \*/ } ``` and `VirtualMachine::fetch` to: ``` fn fetch(&self) -> Result { /\* ... \*/ } ``` After making those changes, it [compiles for me](https://play.rust-lang.org/?gist=9df23505d0321acb269ab70bebba07e0&version=nightly). Upvotes: 3 [selected_answer]
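Spelled out, the two changes the accepted answer describes are dropping `&mut` from the methods that do not need it and giving the error string a `'static` lifetime; a sketch of the adjusted signatures, with the bodies unchanged from the question:

```
// CodeMemory
fn fetch(&self, index: usize) -> Result<u8, &'static str> {
    if index < self.byte_code.len() {
        Ok(self.byte_code[index])
    } else {
        Err("Index out of bounds!")
    }
}

// VirtualMachine
fn fetch(&self) -> Result<u8, &'static str> {
    self.code.fetch(self.instruction_pointer)
}

fn decode(&self, opcode: u8) -> Result<Instruction, Error> {
    Instruction::try_from(opcode)
}
```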
2018/03/18
505
1,839
<issue_start>username_0: How to print content of all files of a directory if one of its file contain required data..... using bash? I tried using 'grep' recursively but it only prints content of a single file! what to do ? for example, If I have two directories abc and xyz.abc has a file f1.txt("This is File1.") and f2.txt("This is File2.").xyz hav a file x.txt("fake f1")and y.txt("fake f2"). Now if I grep for File1, then output should be : This is File1. This is File2.<issue_comment>username_1: Try this : ``` find . -type f -exec bash -c 'grep -q "required data" "$1" && cat "$1"' -- {} \; ``` * `.` current directory * `-type f` : only *files* * `-exec` : execute * executed: `bash` taking $1 argument from following `{}`, and using boolean logic to do some actions : `true condition && cat file`, short version for `if condition; then action; fi` * `--` end of options security * `{}` the current file place holder * \; the find's end syntax Upvotes: 1 <issue_comment>username_2: The following bash script will print the contents of all files of a directory if any file in that directory contains the required data: ``` find . -type d | while read dir; do find ${dir} -maxdepth 1 -type f | while read file; do if grep -q "search string" ${file}; then cat ${dir}/* 2>/dev/null; break; fi; done; done ``` Replace `search string` with the string that you want to search. You can either run this directly on the command-line or put it in a bash script and then run it. **Explanation:** 1. Fetch all the directories using `find`. - For each directory found above, `find` all the files in that directory only. - For each file found above, if the file contains the required content, print the contents of all the files in that directory and exit that directory. - If no file contains the required content, do nothing. Upvotes: 0
2018/03/18
517
1,831
<issue_start>username_0: So, I have a `clojure.lang.PersistentArrayMap` with a key and another `clojure.lang.PersistentArrayMap` inside, like:

```
{:foo {:bar "bar"}}
```

When I use `for` with the bindings:

```
(for [[key value] {:foo {:bar "bar"}}]
 do-something)
```

It works fine, but when I try to use `let`, it doesn't work...

```
(let [[key value] {:foo {:bar "bar"}}]
 do-something)
```

Can somebody help me to understand how let binding works? Thanks!
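The difference is what gets bound: `for` iterates the map, so each iteration binds one key/value entry, and an entry destructures like a vector; `let` binds the map itself, and a map cannot be destructured positionally with `[key value]`. Two small sketches of getting at the entry with `let`:

```
;; take the first (and here only) entry, then destructure it
(let [[k v] (first {:foo {:bar "bar"}})]
  [k v])              ; => [:foo {:bar "bar"}]

;; or use map destructuring when the key is known
(let [{inner :foo} {:foo {:bar "bar"}}]
  inner)              ; => {:bar "bar"}
```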
2018/03/18
498
1,898
<issue_start>username_0: In my application, users have their own profile page in which there is their profile picture (I'm using react router to deal with routes on client side). What I want to do is: when a user sends link to his profile on messenger/slack/twitter then the thumbnail of the link should be his profile photo. I know that I need to do this by sending correct meta tag, i.e.

But I am not sure how to implement this, what should I know about it? I found a thing called prerender, but I am not sure if it's for React
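A hedged sketch of the usual approach: a package such as react-helmet (one common choice, not the only one) can set the Open Graph tags from a component, but Messenger/Slack/Twitter crawlers read the raw HTML without running JavaScript, so server-side rendering or a prerendering service still has to put the tag into the initial HTML response. The `user` field names below are placeholders.

```
import { Helmet } from 'react-helmet';

const ProfilePage = ({ user }) => (
  <div>
    <Helmet>
      <meta property="og:image" content={user.avatarUrl} />
      <meta property="og:title" content={user.name} />
    </Helmet>
    {/* rest of the profile page */}
  </div>
);
```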
2018/03/18
557
2,236
<issue_start>username_0: I want to store the technical specifications of the Vehicles in a mysql table. Where table design should be optimized enough to manage various DB Operations (Such as Updating or Selecting datasets) using PHP. There are more than 100 fields of vehicle features and specifications, for which I am very much confused which database architecture to be followed for best optimization. For example, the fields are: **engineType, displacement, mileage, topSpeed, wheelSize, groundClearance, rearAcVents, frontAcVtents, cdPlayer** and so on... Should I create individual column for each new specification or feature or store all the specs and json in a single column with json encoded data? If i create columns, then there would be n number of columns logically. That would reach the maximum limit of mysql columns and may affect performance as well?
2018/03/18
1,271
4,185
<issue_start>username_0: I have to set exif data on my Image. in Android 8 my code works fine but in Android 7 It report the error below and after that no exif data are saved on image: > > W/ExifInterface: Given tag (GPSLatitude) value didn't match with one of expected formats: URATIONAL (guess: STRING) > > > This is my code: ``` public void geoTag(String filename, double lng, double lat){ ExifInterface exif; try { exif = new ExifInterface(filename); int num1Lat = (int)Math.floor(lat); int num2Lat = (int)Math.floor((lat - num1Lat) * 60); double num3Lat = (lat - ((double)num1Lat+((double)num2Lat/60))) * 3600000; int num1Lon = (int)Math.floor(lng); int num2Lon = (int)Math.floor((lng - num1Lon) * 60); double num3Lon = (lng - ((double)num1Lon+((double)num2Lon/60))) * 3600000; exif.setAttribute(ExifInterface.TAG_GPS_LATITUDE, num1Lat+"/1,"+num2Lat+"/1,"+num3Lat+"/1000"); exif.setAttribute(ExifInterface.TAG_GPS_LONGITUDE, num1Lon+"/1,"+num2Lon+"/1,"+num3Lon+"/1000"); ........... ```<issue_comment>username_1: @greenapps....thank U for your time. Basically I have my code that retrieve position from map and save on SharedPref : ``` xml version='1.0' encoding='utf-8' standalone='yes' ? 11.812703;42.081890 ``` After that, I have a method that split that string in the longitude and latitude: ``` public final void notifyMediaStoreScanner(final File file,Activity mainActivityCatched) { double longituderetrieve; double latituderetrieve; try { MediaStore.Images.Media.insertImage(mainActivityCatched.getContentResolver(), file.getAbsolutePath(), file.getName(), null); mainActivityCatched.sendBroadcast(new Intent( Intent.ACTION_MEDIA_SCANNER_SCAN_FILE, Uri.fromFile(file))); } catch (FileNotFoundException e) { e.printStackTrace(); } // SharedPref getPosition = new SharedPref(mainActivityCatched); SharedPreferences getLatLng = mainActivityCatched.getSharedPreferences("settingModeLatLng",MODE_PRIVATE); String LatLngShared = getLatLng.getString("LatLng","currentLatLng"); latituderetrieve = Double.parseDouble(LatLngShared.substring(0,LatLngShared.indexOf(";"))); longituderetrieve = Double.parseDouble(LatLngShared.substring(LatLngShared.indexOf(";")+1)); try { geoTag(file.getAbsolutePath(),longituderetrieve,latituderetrieve); } catch (IOException e) { e.printStackTrace(); } } ``` ...finally I pass all values to another method to set position in exif data. My original code was: ``` public void geoTag(String filename, double longi, double lati) throws IOException { ExifInterface exif; try { exif = new ExifInterface(filename); int num1Lat = (int)Math.floor(lat); int num2Lat = (int)Math.floor((lat - num1Lat) * 60); double num3Lat = (lat - ((double)num1Lat+((double)num2Lat/60))) * 3600000; int num1Lon = (int)Math.floor(lng); int num2Lon = (int)Math.floor((lng - num1Lon) * 60); double num3Lon = (lng - ((double)num1Lon+((double)num2Lon/60))) * 3600000; exif.setAttribute(ExifInterface.TAG_GPS_LATITUDE, num1Lat+"/1,"+num2Lat+"/1,"+num3Lat+"/1000"); exif.setAttribute(ExifInterface.TAG_GPS_LONGITUDE, num1Lon+"/1,"+num2Lon+"/1,"+num3Lon+"/1000"); if (lat > 0) { exif.setAttribute(ExifInterface.TAG_GPS_LATITUDE_REF, "N"); } else { exif.setAttribute(ExifInterface.TAG_GPS_LATITUDE_REF, "S"); } if (lng > 0) { exif.setAttribute(ExifInterface.TAG_GPS_LONGITUDE_REF, "E"); } else { exif.setAttribute(ExifInterface.TAG_GPS_LONGITUDE_REF, "W"); } exif.saveAttributes(); } catch (IOException e) { Log.e("PictureActivity", e.getLocalizedMessage()); } ``` The strange thing with my code is in Android api26 works.... 
Any suggestion is very much appreciated! Alex Upvotes: 0 <issue_comment>username_2: I had the same issue. The fix is to use long instead of double for num3Lat and num3Lon. Cast them to long and everything works. Upvotes: 1
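A sketch of that fix dropped into the question's first `geoTag()` snippet (variable names come from the question; the only real change is truncating the third component to a long before building the "num/denom" strings):

```java
// The EXIF GPS tags expect rational values like "num/denom" where both parts
// are integers; a double numerator such as "123456.789/1000" is what triggers
// the "didn't match with one of expected formats: URATIONAL" warning.
long num3Lat = (long) ((lat - (num1Lat + num2Lat / 60.0)) * 3600000);
long num3Lon = (long) ((lng - (num1Lon + num2Lon / 60.0)) * 3600000);

exif.setAttribute(ExifInterface.TAG_GPS_LATITUDE,
        num1Lat + "/1," + num2Lat + "/1," + num3Lat + "/1000");
exif.setAttribute(ExifInterface.TAG_GPS_LONGITUDE,
        num1Lon + "/1," + num2Lon + "/1," + num3Lon + "/1000");
```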
2018/03/18
1,238
2,897
<issue_start>username_0: I do a merge on table and need to update data if they are different. In MSSQL I usually do it by checking a checksum this way: ``` WHEN MATCHED AND CHECKSUM(TARGET.Field1,TARGET.Field2, ... TARGET.Field25) <> CHECKSUM(TARGET.Field1,TARGET.Field2, ... TARGET.Field25) THEN UPDATE SET FIELD1 = FIELD1 ``` How to achieve the same in Oracle?<issue_comment>username_1: Try with `ORA_HASH`, such as this example: ``` when matched and ora_hash(scol1 || scol2 || scol3 || ... || scol25) <> ora_hash(...) then update set field1 = field2 ``` `||` is the *concatenation* operator. Upvotes: 0 <issue_comment>username_2: Oracle's STANDARD\_HASH function "computes a hash value for a given expression" (see the [documentation](https://docs.oracle.com/database/121/SQLRF/functions183.htm#SQLRF55647)). Use the checksums in the WHERE clause of the UPDATE (in the MERGE statement). Tables for testing (Oracle 12c) ``` -- 2 tables create table table1 as select 1 id, 1 a1, 1 b1, 1 c1, 1 d1, 1 e1, 1 f1 from dual; create table table2 as select 2 id, 2 a2, 2 b2, 2 c2, 2 d2, 2 e2, 2 f2 from dual; ``` SHA256 checksum ``` -- eg select standard_hash ( T.id || T.a1 || T.b1 || T.c1 || T.d1 || T.e1 || T.f1, 'SHA256' ) from table1 T ; -- output SHA256 2558A34D4D20964CA1D272AB26CCCE9511D880579593CD4C9E01AB91ED00F325 ``` MERGE ``` merge into table1 T using ( select id, a2, b2, c2, d2, e2, f2 from table2 ) T2 on ( T.id = T2.id ) when matched then update set T.a1 = T2.a2 , T.b1 = T2.b2 , T.c1 = T2.c2 , T.d1 = T2.d2 , T.e1 = T2.e2 , T.f1 = T2.f2 where standard_hash ( T.id || T.a1 || T.b1 || T.c1 || T.d1 || T.e1 || T.f1, 'SHA256' ) <> standard_hash ( T2.id || T2.a2 || T2.b2 || T2.c2 || T2.d2 || T2.e2 || T2.f2, 'SHA256' ) when not matched then insert ( T.id, T.a1, T.b1, T.c1, T.d1, T.e1, T.f1 ) values ( T2.id, T2.a2, T2.b2, T2.c2, T2.d2, T2.e2, T2.f2 ) ; -- 1 row merged ``` After the MERGE statement has been executed, the tables contain: ``` SQL> select * from table1; ID A1 B1 C1 D1 E1 F1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 SQL> select * from table2; ID A2 B2 C2 D2 E2 F2 2 2 2 2 2 2 2 ``` Modify table2 and MERGE again: ``` update table2 set a2 = 20, c2 = 30, f2 = 50 where id = 2 ; insert into table2 ( id, b2, d2, e2 ) values (3, 33, 333, 3333 ) ; select * from table2; ID A2 B2 C2 D2 E2 F2 2 20 2 30 2 2 50 3 33 333 3333 ``` Execute the MERGE statement again. Table1 now contains: ``` SQL> select * from table1; ID A1 B1 C1 D1 E1 F1 1 1 1 1 1 1 1 2 20 2 30 2 2 50 3 33 333 3333 ``` Upvotes: 4 [selected_answer]
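One caveat that applies to both hashing approaches above: concatenating columns with a bare `||` can make different rows produce the same input string ('ab' || 'c' and 'a' || 'bc' both become 'abc'), and NULL columns silently vanish from the concatenation. A small refinement is to put a delimiter that cannot occur in the data between the columns, for example:

```sql
-- sketch: same idea as the answers above, just with an explicit separator
standard_hash ( T.id || '|' || T.a1 || '|' || T.b1 || '|' || T.c1, 'SHA256' )
```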
2018/03/18
1,231
2,855
<issue_start>username_0: I need to calculate the complexity of the following code: ``` function(a) n=length(a) i=1 while i<=n for j=n to i+1 print(a) i = i+5 ``` The while loop is running n/5 times, but I am confused by the for loop. Is it n/5 as well? Any guidance would be appreciated. Thank you.
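A worked count, assuming `i = i+5` sits in the while body after the inner loop (which is what the question's n/5 estimate suggests): for i = 1, 6, 11, ... the inner loop runs n - i times, so the total number of prints is the sum of (n - i) over about n/5 values of i, which comes to roughly n^2/10. Constants do not matter, so the running time is Θ(n^2), not a constant times n/5. A quick empirical check (a Python sketch of the pseudocode, not the original):

```python
def count_ops(n):
    """Count how many times print(a) would run for a list of length n."""
    ops = 0
    i = 1
    while i <= n:
        for j in range(n, i, -1):   # "for j = n to i+1" runs n - i times
            ops += 1
        i += 5
    return ops

for n in (10, 100, 1000):
    print(n, count_ops(n), n * n / 10)   # the last two columns track each other
```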
2018/03/18
780
3,341
<issue_start>username_0: I have a usercontrol which datacontext is bind to a "SelectedSchedule", with a click on a button a window is open where the "SelectedSchedule" can be edited, works fine. In this window there is a combobox with some "SelectedSchedule" to choose, which SelectedItem-Property is bind to the "SelectedSchedule". When I now choose another object in the combobox it did not get the new object, just nothing happen/change. What I'm doing wrong? User-Control-XAML: ``` ``` User-Control-ViewModel ``` private Schedule mSelectedSchedule; public Schedule SelectedSchedule { get { return mSelectedSchedule; } set { mSelectedSchedule = value; OnPropertyChanged("SelectedSchedule"); } } public EmployeeWeekCheckButon_VM(Schedule GivenSchedule) { SelectedSchedule = GivenSchedule; } private void Edit() { Forms.Tracking.View.frmEditTracking newForm = new Forms.Tracking.View.frmEditTracking(SelectedSchedule); newForm.ShowDialog(); OnPropertyChanged("SelectedSchedule"); } private void Delete() { SelectedSchedule = null; } ``` Edit-Window-XAML: ``` ``` Edit-Window-ViewModel: ``` private Schedule _SelectedSchedule; public Schedule SelectedSchedule { get { return _SelectedSchedule; } set { _SelectedSchedule = value; OnPropertyChanged("SelectedSchedule"); } } private ObservableCollection \_ListOfSchedule; public ObservableCollection ListOfSchedule { get { return \_ListOfSchedule; } set { \_ListOfSchedule = value; OnPropertyChanged("ListOfSchedule"); } } public frmEditTracking\_VM(Schedule GivenSchedule) { SelectedSchedule = GivenSchedule; } private void SaveAndClose() { SelectedSchedule.isTracked = true; OnClosingRequest(); } ```<issue_comment>username_1: Try to set the binding in two-way mode ``` SelectedItem="{Binding SelectedSchedule, Mode=TwoWay}" ``` and when the dialog is closed you need to set the new value because there is no link between the dialog and the viewmodel "SelectedSchedule" property ``` newForm.ShowDialog(); SelectedSchedule = newForm.SelectedSchedule; ``` Upvotes: 2 [selected_answer]<issue_comment>username_1: To understand why edition is working but assignation don't you can try: ``` private Schedule mSelectedSchedule2; public Schedule SelectedSchedule2 { get { return mSelectedSchedule2; } set { mSelectedSchedule2 = value; OnPropertyChanged("SelectedSchedule2"); } } private Schedule _SelectedSchedule; public Schedule SelectedSchedule { get { return _SelectedSchedule; } set { _SelectedSchedule = value; OnPropertyChanged("SelectedSchedule"); } } public EmployeeWeekCheckButon_VM(Schedule GivenSchedule) { SelectedSchedule = GivenSchedule; SelectedSchedule2 = GivenSchedule; SelectedSchedule.Name = "Test"; Debug.WriteLine(SelectedSchedule.Name) //it's Test Debug.WriteLine(SelectedSchedule2.Name) //it's Test SelectedSchedule = new Schedule(); SelectedSchedule.Name = "Test2"; Debug.WriteLine(SelectedSchedule.Name) //it's Test2 Debug.WriteLine(SelectedSchedule2.Name) //it's still Test because //it's referencing the first object } ``` Upvotes: 0
2018/03/18
1,064
4,123
<issue_start>username_0: I'm bashing my head against the wall to try and figure out how to programmatically get a list of images in an Azure Container Registry. Everything seems to fall down to looking in Docker.DotNet's own *local* instantiation of an image list, and push/pulling to an ACR via that local repository - but nothing is showing me how to get a list of images (and their tags) from the ACR itself. In digging through their rest API for azure, it looks like only a slim set of "management" options are available (getting a list of ACRs, getting the properties of an ACR, but nothing shows me it dives deeper than that). I can get a list of image names, and then their image name tags via the Azure CLI -- **but** I'm looking to get an enumerable list of images in a C# app (inside a web-api, matter of fact). Essentially - what I want to do is have a list of running images remotely, in docker -- and compare *those* to what's up in the ACR, to give a "hey, there's a newer version of this image available". Has anyone done this? To any effect? Is it simple like this (for Docker): ``` var _credentials = new BasicAuthCredentials("MY_REG_USERNAME", "MY_REG_PASSWORD"); var _config = new DockerClientConfiguration(new Uri("MY_REGISTRY_NAME.azurecr.io"), _credentials); DockerClient _client = _config.CreateClient(); var myList = await _client.Images.ListImagesAsync( new Docker.DotNet.Models.ImagesListParameters() { All = true } ); ``` or impossible? I've messed around with IoT hubs and getting device twin lists and the like, with the DeviceClient -- is there nothing like this for the ACR?<issue_comment>username_1: I was facing the same puzzle for a while and the answer is: **For image operations (including the tag list you were asking about) Microsoft supports the docker registry API v2.** <https://docs.docker.com/registry/spec/api> What does it mean? An Example: Azure REST API is for Azure resource operations only. There you can use Bearer Token authentication and for example make a GET request like this: [https://management.azure.com/subscriptions/SubscriptionGUID/resourceGroups/ContainerRegistry/providers/Microsoft.ContainerRegistry/registries/YourRegistryName?api-version=2017-10-01](https://management.azure.com/subscriptions/%3CSubscriptionGUID%3E/resourceGroups/ContainerRegistry/providers/Microsoft.ContainerRegistry/registries/YourRegistryName?api-version=2017-10-01) But as you already know this will not give you access to operations on the content of the ACR. Instead you need to call a different end-point, namely the Registry end-point, and very importantly, you need to use basic authentication with username and password: [https://yourregistryname-on.azurecr.io/v2/imagename/tags/list](https://yourregistry-on.azurecr.io/v2/%3Cimagename%3E/tags/list) What username and password is it? Well, there are 2 types possible: 1. The admin user you can enable on the ACR in the Azure portal 2. You can configure users in the ACR under Access Control with different types of access (more secure). As the username you can use the underlying GUID, visible in the query string in the URL when selecting it in Azure portal. Password/key can be configured there as well. Upvotes: 4 [selected_answer]<issue_comment>username_2: You can use <https://myregistry.azurecr.io/v2/_catalog> URL with basic authentication. To get username and password for basic auth, use below command from Powershell. 
**az acr credential show --name myregistry** Upvotes: 2 <issue_comment>username_3: While not ideal to use from within .NET, another CLI-centric way to get all the images is using `az acr manifest` (the successor to `az acr repository show-manifests`). This lists all the images, whether tagged or not, present in a given repository. For example: ```bash MY_REGISTRY= MY_REPOSITORY= az acr manifest list-metadata \ --registry $MY_REGISTRY \ --name $MY_REPOSITORY \ --query '[:].[digest, imageSize, tags[:]]' \ -o table ``` Shows every image digest, size, and list of tags associated with that image, whether tagged or not. Upvotes: 0
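To make the registry-API answer concrete from C#, a rough sketch (the registry URL, credentials and repository name are placeholders, there is no error handling, and the JSON responses are printed raw rather than deserialized into models):

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class AcrCatalogSketch
{
    static async Task Main()
    {
        var user = "MY_REG_USERNAME";
        var pass = "MY_REG_PASSWORD";
        var basic = Convert.ToBase64String(Encoding.ASCII.GetBytes($"{user}:{pass}"));

        using var client = new HttpClient { BaseAddress = new Uri("https://myregistry.azurecr.io") };
        client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic", basic);

        // list repositories in the registry
        Console.WriteLine(await client.GetStringAsync("/v2/_catalog"));

        // list tags for one repository; the response looks like {"name":"...","tags":["..."]}
        Console.WriteLine(await client.GetStringAsync("/v2/myimage/tags/list"));
    }
}
```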
2018/03/18
701
2,171
<issue_start>username_0: I am getting an error and do not know how to fix it. I have to calculate square root with approximation and it should stop on the 20th element. * Unable to resolve symbol: aprox in this context, compiling:(/home/jdoodle.clj:2:2) Code: ``` (defn msqrt [n] (aprox (n 1.0 1))) (defn aprox [n prox part] (if (= part 20) prox (+ (/ n (* 2 (aprox n part (+ part 1)))) (/ (aprox n prox (+ part 1))2))) ) (msqrt 9) ```<issue_comment>username_1: In Clojure, the order you declare functions in matters. They don't get hoisted up like in Javascript. You can fix this by either: 1. Moving your `defn aprox` above `defn msqrt` 2. Adding `declare aprox` to the top of your file to make it available <https://clojuredocs.org/clojure.core/declare> Upvotes: 2 [selected_answer]<issue_comment>username_2: The [corrected](https://stackoverflow.com/a/49352010/1562315) answer compiles, but, but does not do what it ought to. The first reference to `part` in the `aprox` function ought to be to `prox`: ``` (defn aprox [n prox part] (if (= part 20) prox (+ (/ n (* 2 (aprox n prox (inc part)))) (/ (aprox n prox (inc part)) 2)))) ``` I've taken the chance to improve layout and brevity. It works: ``` > (msqrt 9) => 3.0 ``` But it makes the same recursive call twice. It only need do so once: ``` (defn aprox [n prox part] (if (= part 20) prox (let [deeper-prox (aprox n prox (inc part))] (+ (/ n (* 2 deeper-prox)) (/ deeper-prox 2))))) ``` There are now about twenty recursive calls instead of about a million (2^20). And we can see how this works. In the recursive call, * the `part` argument counts up to `20`, creating a call stack twenty deep; * the `prox` argument uses the recursion to generate and return an improved estimate of itself; * the `n` argument is passed on unaltered. We can get the same effect by repeating the improvement process twenty times and folding it into the `msqrt` function: ``` (defn msqrt [n] (loop [prox 1.0, part 20] (if (pos? part) (recur (+ (/ n (* 2 prox)) (/ prox 2)) (dec part)) prox))) ``` Upvotes: 0
2018/03/18
452
1,576
<issue_start>username_0: Whenever I create a new branch in a remote repo, there is no way for me to get it to my local repo. I've tried: `git fetch --all`, which actually updates the remote branches, but they don't show up with `git branch` `git branch -r | grep -v '\->' | while read remote; do git branch --track "${remote#origin/}" "$remote"; done` is the only thing that will work for me, but it grabs *all* branches that remote ever had. On large projects I end up getting upward of 100 branches from `git branch`<issue_comment>username_1: Try `git branch -a` to list all branches (local and remote). Then `git checkout YOUR_REMOTE_NAME/YOUR_BRANCH_NAME`, example `git checkout origin/dev` Upvotes: 0 <issue_comment>username_2: `git branch` only shows your local branches. Specify `-r` to show remote branches, or `-a` to show both local and remote branches. To check out a local copy of a remote branch, run `git checkout` where is the name of the branch without the remote name prefix. An example: ``` $ git branch -a * master remotes/origin/HEAD -> origin/master remotes/origin/foo remotes/origin/master $ git checkout foo Branch 'foo' set up to track remote branch 'foo' from 'origin'. Switched to a new branch 'foo' $ git branch -a * foo master remotes/origin/HEAD -> origin/master remotes/origin/foo remotes/origin/master $ ``` You can read more about remote branches in [3.5 Git Branching - Remote Branches](https://git-scm.com/book/en/v2/Git-Branching-Remote-Branches) of [Pro Git](https://git-scm.com/book). Upvotes: 2 [selected_answer]
2018/03/18
1,139
3,750
<issue_start>username_0: I am getting the following error in my `Angular5` applications. ``` Error: call to Function() blocked by CSP vendor.bundle.js:50077:40 bind_constructFunctionN self-hosted:1214:16 Function self-hosted:1132:24 evalExpression http://localhost:9000/assets/ui/vendor.bundle.js:50077:40 jitStatements http://localhost:9000/assets/ui/vendor.bundle.js:50095:12 ../../../compiler/esm5/compiler.js/JitCompiler.prototype._interpretOrJit http://localhost:9000/assets/ui/vendor.bundle.js:50678:20 ../../../compiler/esm5/compiler.js/JitCompiler.prototype._compileTemplate http://localhost:9000/assets/ui/vendor.bundle.js:50606:43 ../../../compiler/esm5/compiler.js/JitCompiler.prototype._compileComponents/< http://localhost:9000/assets/ui/vendor.bundle.js:50505:56 forEach self-hosted:5732:9 ../../../compiler/esm5/compiler.js/JitCompiler.prototype._compileComponents http://localhost:9000/assets/ui/vendor.bundle.js:50505:9 ../../../compiler/esm5/compiler.js/JitCompiler.prototype._compileModuleAndComponents/< http://localhost:9000/assets/ui/vendor.bundle.js:50375:13 then http://localhost:9000/assets/ui/vendor.bundle.js:16489:77 ../../../compiler/esm5/compiler.js/JitCompiler.prototype._compileModuleAndComponents http://localhost:9000/assets/ui/vendor.bundle.js:50374:16 ../../../compiler/esm5/compiler.js/JitCompiler.prototype.compileModuleAsync http://localhost:9000/assets/ui/vendor.bundle.js:50268:32 ../../../platform-browser-dynamic/esm5/platform-browser-dynamic.js/CompilerImpl.prototype.compileModuleAsync http://localhost:9000/assets/ui/vendor.bundle.js:79292:34 ../../../core/esm5/core.js/ http://localhost:9000/assets/ui/main.bundle.js:1:1 Content Security Policy: The page’s settings blocked the loading of a resource at self (β€œdefault-src”). localhost:9000:1 Content Security Policy: The page’s settings blocked the loading of a resource at self (β€œdefault-src”). Source: /* You can add global styles to this fil.... localhost:9000:1 Content Security Policy: The page’s settings blocked the loading of a resource at self (β€œdefault-src”). Source: call to eval() or related function blocked by CSP. vendor.bundle.js:50077 ``` The Angular application sends the `Content-Security-Policy default-src 'self'` header in initial `GET` request but the page doesn't get loaded. I suppose Angular code is trying to fetch files from some source outside of `self`. However, from the stack, I see that the requests are only being sent to `localhost` which I suppose should be `self`. Why isn't the code working then? My setup is a bit different though. I have compiled all the Angular code and moved its `js` files to `public` folder of `play` framework (my server).<issue_comment>username_1: The issue seem to be not the source but that browser didn't allow use of Function to avoid XSS attacks. I could make the code work by adding the following in `application.conf` in `play` (server). However, I am not sure if this is the right way as I suppose I am allowing use of eval and Function which might allow XSS attacks. Is this the right way considering that it is not my code which is calling Function at the first place. `contentSecurityPolicy = "default-src 'self' ; script-src 'self' 'unsafe-inline' 'unsafe-eval'; style-src 'self' 'unsafe-inline'"` It seems that the code generated by Angular cli uses eval and Function. So probably, for the moment, relaxing CSP policy is the only workaround. 
- github.com/angular/angular-cli/issues/6872 Happy to accept other answers. Upvotes: 3 [selected_answer]<issue_comment>username_2: I was getting this error, but after building with `ng build --prod` I didn't need `'unsafe-eval'` anymore; without `--prod` I still need `'unsafe-eval'`. Running Flask/Angular 5.0.3 Upvotes: 1
2018/03/18
1,591
6,430
<issue_start>username_0: I'm trying to build a collection view without using interface builder, but I haven't been able to adjust the width of the cells. The cells appear to stay at their minimum square size, where I'd like the cells to stretch the width of the view. What am I getting wrong with the constraints? Specifically: ``` func collectionView(_ collectionView: UICollectionView, layout collectionViewLayout: UICollectionViewLayout, sizeForItemAt indexPath: IndexPath) -> CGSize { return CGSize(width: view.frame.width, height: 50) } ``` Doesn't seem to be adjusting the width or height for the cells, full example: ``` import UIKit class UserCollectionViewController: UICollectionViewController { override func viewDidLoad() { super.viewDidLoad() navigationItem.title = "foo" collectionView?.backgroundColor = UIColor.red collectionView?.alwaysBounceVertical = true collectionView?.register(UserCell.self, forCellWithReuseIdentifier: "cellId") } override func collectionView(_ collectionView: UICollectionView, numberOfItemsInSection section: Int) -> Int { return 20 } override func collectionView(_ collectionView: UICollectionView, cellForItemAt indexPath: IndexPath) -> UICollectionViewCell { let userCell = collectionView.dequeueReusableCell(withReuseIdentifier: "cellId", for: indexPath) return userCell } func collectionView(_ collectionView: UICollectionView, layout collectionViewLayout: UICollectionViewLayout, sizeForItemAt indexPath: IndexPath) -> CGSize { return CGSize(width: view.frame.width, height: 50) } } class UserCell:UICollectionViewCell { override init(frame: CGRect) { super.init(frame: frame) setupView() } required init?(coder aDecoder: NSCoder) { fatalError("init(coder:) has not been implemented") } let nameLabel: UILabel = { let label = UILabel() label.text = "foo" label.translatesAutoresizingMaskIntoConstraints = false label.font = UIFont.boldSystemFont(ofSize: 14) return label }() func setupView() { addSubview(nameLabel) let hConstraints = NSLayoutConstraint.constraints(withVisualFormat: "H:|[v0]|", options: NSLayoutFormatOptions(), metrics: nil, views: ["v0": nameLabel]) addConstraints(hConstraints) } } ```<issue_comment>username_1: Create an extension on UIView and add this code: ``` extension UIView { func addConstraintsWithformat(format: String, views: UIView...) 
{ // abstracting contstraints code var viewsDict = [String: UIView]() for(index,view) in views.enumerated() { let key = "v\(index)" viewsDict[key] = view view.translatesAutoresizingMaskIntoConstraints = false } addConstraints(NSLayoutConstraint.constraints(withVisualFormat: format, options: NSLayoutFormatOptions(), metrics: nil, views: viewsDict)) } } ``` Then Create a custom class for your `collectionViewCell` and add this code: ``` private func setUpContainerView() { let containerView = UIView() addSubview(containerView) addConstraintsWithformat(format: "H:|-90-[v0]|", views: containerView) addConstraintsWithformat(format: "V:[v0(50)]|", views: containerView) addConstraint(NSLayoutConstraint(item: containerView, attribute: .centerY, relatedBy: .equal, toItem: self, attribute: .centerY, multiplier: 1, constant: 0)) // set up labels and contents inside the ContainerView and set their constraints with addConstraintsWithformat } ``` Call this code in `override func setUpViews() {}` Upvotes: 0 <issue_comment>username_2: In terms of what's wrong with the constraints, you're missing the vertical constraints: ``` func setupView() { addSubview(nameLabel) var constraints: [NSLayoutConstraint] = [] constraints += NSLayoutConstraint.constraints(withVisualFormat: "H:|[v0]|", metrics: nil, views: ["v0": nameLabel]) constraints += NSLayoutConstraint.constraints(withVisualFormat: "V:|[v0]|", metrics: nil, views: ["v0": nameLabel]) addConstraints(constraints) } ``` But that's not why your cells are the wrong size. That's because your `sizeForItemAt` will not be called because you haven't declared `UICollectionViewDelegateFlowLayout` conformance. Set a breakpoint where you have it now, and you'll see it's not called. You can fix this by moving it into an extension where you define the conformance: ``` extension UserCollectionViewController: UICollectionViewDelegateFlowLayout { func collectionView(_ collectionView: UICollectionView, layout collectionViewLayout: UICollectionViewLayout, sizeForItemAt indexPath: IndexPath) -> CGSize { return CGSize(width: view.bounds.width, height: 50) } } ``` Or, better, since they're all the same, I'd just set `itemSize` for the whole layout: ``` override func viewDidLoad() { super.viewDidLoad() let layout = collectionView!.collectionViewLayout as! UICollectionViewFlowLayout layout.itemSize = CGSize(width: view.bounds.width, height: 50) } override func viewWillTransition(to size: CGSize, with coordinator: UIViewControllerTransitionCoordinator) { super.viewWillTransition(to: size, with: coordinator) let layout = self.collectionView!.collectionViewLayout as! UICollectionViewFlowLayout layout.itemSize = CGSize(width: size.width, height: 50) } ``` Upvotes: 1 <issue_comment>username_3: > > What am I getting wrong with the constraints? > > > Nothing. The width of a UICollectionViewCell under a UICollectionViewFlowLayout has nothing to do with its internal constraints. (There is supposed to be a feature that allows this, but it has never worked for me.) It is up to you to set the width explicitly, either by setting the flow layout's `itemSize` or by implementing `collectionView(_:layout:sizeForItemAt:)` in the collection view's delegate (which is usually a UICollectionViewController, and in any case must also explicitly adopt UICollectionViewDelegateFlowLayout as you've been told in username_2's answer). 
So just change this: ``` class UserCollectionViewController: UICollectionViewController { ``` to this: ``` class UserCollectionViewController : UICollectionViewController, UICollectionViewDelegateFlowLayout { ``` and you're done. Upvotes: 2 [selected_answer]
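A small variant of the extension in username_2's answer that derives the width from the collection view and the flow layout's insets, so the cells stay full-width even if `sectionInset` or `contentInset` are ever made non-zero (a sketch; it assumes the default flow layout is still in use):

```swift
extension UserCollectionViewController: UICollectionViewDelegateFlowLayout {
    func collectionView(_ collectionView: UICollectionView,
                        layout collectionViewLayout: UICollectionViewLayout,
                        sizeForItemAt indexPath: IndexPath) -> CGSize {
        let flow = collectionViewLayout as! UICollectionViewFlowLayout
        let horizontal = flow.sectionInset.left + flow.sectionInset.right
            + collectionView.contentInset.left + collectionView.contentInset.right
        return CGSize(width: collectionView.bounds.width - horizontal, height: 50)
    }
}
```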
2018/03/18
1,160
4,646
<issue_start>username_0: I have simple HTML code which hits my PHP file on my server and uploads files. I need to do the same thing in Ionic, but it doesn't work. Below is my `home.html` code ``` Select image to upload: ``` I tried Ionic's `file chooser` and `file transfer` plugins but failed to implement them. I also tried `valor's ng upload`; it works, but it only allows sending pictures from the device, whereas from the browser it allows every kind of file. I want to upload any kind of file from Android to the server. I am unable to understand the file chooser / file transfer code and I have been unable to find ready-to-use code.
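A rough TypeScript sketch of the plugin route mentioned in the question, for the Ionic 3 / ionic-native 4.x generation of `@ionic-native/file-chooser` and `@ionic-native/file-transfer` (the upload URL `https://example.com/upload.php`, the `fileKey` of `'file'` and the file name are placeholders that must match the PHP script, and both providers still have to be registered in the app module):

```typescript
import { Component } from '@angular/core';
import { FileChooser } from '@ionic-native/file-chooser';
import { FileTransfer, FileTransferObject, FileUploadOptions } from '@ionic-native/file-transfer';

@Component({ selector: 'page-home', templateUrl: 'home.html' })
export class HomePage {
  constructor(private fileChooser: FileChooser, private transfer: FileTransfer) {}

  uploadAnyFile() {
    this.fileChooser.open()                       // URI of whatever file the user picked
      .then(uri => {
        const ft: FileTransferObject = this.transfer.create();
        const options: FileUploadOptions = {
          fileKey: 'file',            // must match $_FILES['file'] in the PHP script
          fileName: 'upload.bin',
          chunkedMode: false,
        };
        return ft.upload(uri, 'https://example.com/upload.php', options);
      })
      .then(result => console.log('uploaded, HTTP ' + result.responseCode))
      .catch(err => console.error('upload failed', err));
  }
}
```

On some devices the `content://` URI returned by the chooser has to be resolved to a `file://` path (for example with the FilePath plugin) before FileTransfer will accept it.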
2018/03/18
1,363
5,135
<issue_start>username_0: This problem asks you to write two functions involving matrices/grids of integers, which are represented by two-dimensional lists. (a) Write the function multiply\_perimeter, which takes in a two-dimensional list of integers (representing a grid of numbers, of any size) and multiplies the values that are on the perimeter of the grid by a given multiplier parameter, mutating the argument list. The perimeter of the grid is defined as the outermost rows and columns of the grid. For instance, in the grid represented by L = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]], the function call multiply\_perimeter(L, 2) would operate on the bolded values in the grid below, mutating the grid by doubling the perimeter values to the result on the right. ``` 1 2 3 4 2 4 6 8 5 6 7 8 10 6 7 16 9 10 11 12 would become 18 20 22 24 ``` **Here's what I have so far:** ``` def multiply_perimeter(L: [[int]], multiplier: int) -> None: for x in L: x[0] = x * multiplier x[-1] = x * multiplier for y in x: ```
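One possible completion of the attempted function (a sketch, not necessarily what the course expects). Note that in the attempt, `x[0] = x * multiplier` assigns a repeated copy of the whole row to the first cell instead of multiplying the number in it, and the perimeter also has to include every value of the first and last rows, not just their ends:

```python
def multiply_perimeter(L: [[int]], multiplier: int) -> None:
    """Multiply the values on the perimeter of grid L in place."""
    last_row = len(L) - 1
    for row_index, row in enumerate(L):
        if row_index == 0 or row_index == last_row:
            # the top and bottom rows are entirely on the perimeter
            for col_index in range(len(row)):
                row[col_index] *= multiplier
        else:
            # middle rows: only the first and last entries are on the perimeter
            row[0] *= multiplier
            row[-1] *= multiplier

L = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]]
multiply_perimeter(L, 2)
print(L)   # [[2, 4, 6, 8], [10, 6, 7, 16], [18, 20, 22, 24]]
```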
2018/03/18
1,841
6,501
<issue_start>username_0: I have an RDD which looks like the following: > > ( (tag\_1, set\_1), (tag\_2, set\_2) ) , ... , ( (tag\_M, set\_M), (tag\_L, set\_L) ), ... > > > And for each pair from the RDD I'm going to compute the expression ![expression](https://i.stack.imgur.com/CdFkv.png) for k=0,..,3 and to find the sum: p(0)+...p(3). For each pair of pairs, n\_1 is the length of the set in the first pair and n\_2 is the length of the set in the second pair. For now I wrote the following: ``` val N = 1000 pairRDD.map({ case ((t1,l1), (t2,l2)) => (t1,t2, { val n_1 = l1.size val n_2 = l2.size val vals = (0 to 3).map(k => { val P1 = (0 to (n_2-k-1)) .map(j => 1 - n_1/(N-j.toDouble)) .foldLeft(1.0)(_*_) val P2 = (0 to (k-1)) .map(j => (n_1-j.toDouble)*(n_2-j.toDouble)/(N-n_2+k.toDouble-j.toDouble)/(k.toDouble-j.toDouble) ) .foldLeft(1.0)(_*_) P1*P2 }) vals.sum.toDouble }) }) ``` The problem is that it seems to run really slowly, and I hope there are some features of Scala/Spark that I don't know about that could reduce the execution time here. Edit: 1) In the first place I have a csv-file with 2 columns: tag and message\_id. For each tag I'm finding messages where it could be found and creating pairs like I described above (tagIdsZipped). The code is [here](https://pastebin.com/kcmLPexb) 2) Then I want to compute the expression for each pair and write it down to a file. Actually, I also would like to filter the result, but it would be even longer, so I'm not even trying for now. 3) No, actually I don't, but the problems happened when I tried to use this code; previously I did the following: ``` val tagPairsWithMeasure: RDD[(Tag, Tag, Measure)] = tagIdsZipped.map({ case ((t1,l1), (t2,l2)) => (t1,t2, { val numer = l1.intersect(l2).size val denom = Math.sqrt(l1.size)*Math.sqrt(l2.size) numer.toDouble / denom }) }) ``` and everything worked fine. (see 4) ) 4) In the file I described in 1) there are about 25 million rows (~1.2 GB). I'm computing on a Xeon E5-2673 @2.4GHz and 32 GB RAM. It took about 1.5h to execute the code with the function I described in 3). I see that there are more operations now, but it took about 3 hours and only about 25% of the task was done. The main problem is I will have to work with about 3 times more data, but I can't even do it on a 'smaller' one. Thank you in advance!
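One thing worth noting about the expression itself: p(k) only ever uses the two set sizes, never the set contents, so the id sets do not need to survive into the pair RDD at all. A sketch of that idea (here `tagSets: RDD[(String, Set[Long])]` stands in for the data behind tagIdsZipped, and the cartesian/filter pairing is only illustrative of however the pairs are actually built):

```scala
val N = 1000.0

// same p(0) + ... + p(3) as in the question, hoisted into a plain function
def pSum(n1: Int, n2: Int): Double =
  (0 to 3).map { k =>
    val p1 = (0 until n2 - k).map(j => 1.0 - n1 / (N - j)).product
    val p2 = (0 until k).map { j =>
      (n1 - j.toDouble) * (n2 - j.toDouble) /
        (N - n2 + k - j.toDouble) / (k - j.toDouble)
    }.product
    p1 * p2
  }.sum

// shrink each record to (tag, size) before pairing, so the shuffle moves
// a few integers per tag instead of whole id sets
val tagSizes = tagSets.mapValues(_.size)
val measures = tagSizes.cartesian(tagSizes)
  .filter { case ((t1, _), (t2, _)) => t1 < t2 }
  .map { case ((t1, n1), (t2, n2)) => (t1, t2, pSum(n1, n2)) }
```

Whether this helps depends on how large the id sets are relative to the number of tag pairs, so it is an assumption to verify against the real data rather than a guaranteed win.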
2018/03/18
1,060
3,120
<issue_start>username_0: I was writing a recursive function for fibonacci sequence and wanted to use a counter to keep track of the recursive calls i.e. how many calls it takes to finish this function in this case it is 15. [![enter image description here](https://i.stack.imgur.com/O2Ecw.jpg)](https://i.stack.imgur.com/O2Ecw.jpg) ``` fibonacci(num) { if(num<2) { return num; } return fibonacci(num-1)+fibonacci(num-2);} ``` I understand this function is really well-known and it's not the best approach to solve fibonacci sequence from a runtime standpoint. I still have a hard time printing out the number of steps it took<issue_comment>username_1: ``` let counter = 0; function fibonacci(num) { counter++; if(num < 2) { return num; } return fibonacci(num-1)+fibonacci(num-2); } console.log(counter); ``` Upvotes: 1 <issue_comment>username_2: ```js let counter = 0; function fibonacci(num) { counter++; if(num < 2) { return num; } return fibonacci(num-1)+fibonacci(num-2); } fibonacci(5);// if number is 5 console.log('Number of Times '+ counter); ``` Upvotes: 0 <issue_comment>username_3: One surprising way this is made easy is by use of continuation passing style We add a continuation parameter `k` and assign it a default value of `console.log`. In the base case, when `n` is less than 2, we call the continuation with `n` and a count of `1`. In the inductive case, `n` is 2 or greater, so we call `fib` for `n - 1` and `n - 2`, capture the return values of each in a continuation, and lastly call the input continuation with the combined result, adding a `+ 1` for the current `n` You'll also notice it does not require global variables ```js const fib = (n = 0, k = console.log) => n < 2 ? k (n, 1) : fib (n - 1, (a, aCount) => fib (n - 2, (b, bCount) => k (a + b, aCount + bCount + 1))) for (let i = 0; i < 10; i = i + 1) fib (i) // 0 1 // 1 1 // 1 3 // 2 5 // 3 9 // 5 15 // 8 25 // 13 41 // 21 67 // 34 109 ``` The default continuation just logs to the console, but we can provide our own continuation to do something more interesting with the result - this is also one technique used to simulate the return of multiple values ``` for (let n = 0; n < 10; n = n + 1) fib (n, (res, count) => console.log ("after %d computations, fib (%d) returned %d", count, n, res)) // after 1 computations, fib (0) returned 0 // after 1 computations, fib (1) returned 1 // after 3 computations, fib (2) returned 1 // after 5 computations, fib (3) returned 2 // after 9 computations, fib (4) returned 3 // after 15 computations, fib (5) returned 5 // after 25 computations, fib (6) returned 8 // after 41 computations, fib (7) returned 13 // after 67 computations, fib (8) returned 21 ``` Here's the same function using the older `function` syntax ``` const fib = function (n = 0, k = console.log) { if (n < 2) return k (n, 1) else return fib (n - 1, function (a, aCount) { return fib (n - 2, function (b, bCount) { return k (a + b, 1 + aCount + bCount) }) }) } ``` Upvotes: 0
2018/03/18
2,248
8,557
<issue_start>username_0: With firebase functions, you can utilize express to achieve nice functionality, like middleware etc. I have used [this example](https://github.com/firebase/functions-samples/blob/master/authorized-https-endpoint/functions/index.js) to get inspiration of how to write https firebase functions, powered by express. However, my issue is that [the official firebase documentation on how to do unit testing](https://firebase.google.com/docs/functions/unit-testing), does not include a https-express example. So my question is how to **unit test** the following function (typescript)?: ``` // omitted init. of functions import * as express from 'express'; const cors = require('cors')({origin: true}); const app = express(); app.use(cors); // The function to test app.get('helloWorld, (req, res) => { res.send('hello world'); return 'success'; }); exports.app = functions.https.onRequest(app); ```<issue_comment>username_1: You can use [postman](https://www.getpostman.com/) application for unit test. Enter following url with your project name <https://us-central1-your-project.cloudfunctions.net/hello> ``` app.get('/hello/',(req, res) => { res.send('hello world'); return 'success'; }); ``` Upvotes: -1 <issue_comment>username_2: Would something like `mock-express` work for you? It should allow you to test paths without actually forcing you to make the express server. <https://www.npmjs.com/package/mock-express> Upvotes: 0 <issue_comment>username_3: You can use [`supertest`](https://github.com/visionmedia/supertest) paired with the [guide](https://firebase.google.com/docs/functions/unit-testing#testing_http_functions) from Firebase. Below is a very basic example of testing your app, however, you can make it more complex/better by integrating `mocha`. ``` import * as admin from 'firebase-admin' import * as testFn from 'firebase-functions-test' import * as sinon from 'sinon' import * as request from 'supertest' const test = testFn() import * as myFunctions from './get-tested' // relative path to functions code const adminInitStub = sinon.stub(admin, 'initializeApp') request(myFunctions.app) .get('/helloWorld') .expect('hello world') .expect(200) .end((err, res) => { if (err) { throw err } }) ``` Upvotes: 2 <issue_comment>username_4: For local & no-network unit tests, you could refactor the `app.get("helloWorld", ...)` callback into a separate function and call it with mock objects. A general approach would be something like this: main.js: ``` // in the Firebase code: export function helloWorld(req, res) { res.send(200); } app.get('helloWorld', helloWorld); ``` main.spec.js: using jasmine & sinon ``` // in the test: import { helloWorld } from './main.js'; import sinon from 'sinon'; const reqMock = {}; const resMock = { send: sinon.spy() }; it('always responds with 200', (done) => { helloWorld(reqMock, resMock); expect(resMock.send.callCount).toBe(1); expect(resMock.send).toHaveBeenCalledWith(200); }); ``` Upvotes: -1 <issue_comment>username_5: Testing is about building up confidence or trust. I would start by Unit Testing the function in FireBase. Without more requirements defined, I would follow the documentation. Once those Unit Tests pass you can consider what type of testing you want at the Express level. Keeping in mind that you have already tested the function, the only thing to test at the Express level is if the mapping is correct. A few tests at that level should be sufficient to ensure that the mappings have not become "stale" as a result of some set of changes. 
If you want to test the Express level and above without having to involve the DB, then you would look at a mocking framework to act like the database for you. Hope this helps you think about what tests you need. Upvotes: 0 <issue_comment>username_6: I've gotten this to work using [firebase-functions-test](https://www.npmjs.com/package/firebase-functions-test) and [node-mocks-http](https://www.npmjs.com/package/node-mocks-http). I have this utility class FunctionCaller.js : ```js 'use strict'; var httpMocks = require('node-mocks-http'); var eventEmitter = require('events').EventEmitter; const FunctionCaller = class { constructor(aYourFunctionsIndex) { this.functions_index = aYourFunctionsIndex; } async postFunction(aFunctionName,aBody,aHeaders,aCookies) { let url = (aFunctionName[0]=='/') ? aFunctionName : `/${aFunctionName}`; let options = { method: 'POST', url: url, body: aBody }; if (aHeaders) options.headers = aHeaders; if (aCookies) { options.cookies = {}; for (let k in aCookies) { let v = aCookies[k]; if (typeof(v)=='string') { options.cookies[k] = {value: v}; } else if (typeof(v)=='object') { options.cookies[k] = v; } } } var request = httpMocks.createRequest(options); var response = httpMocks.createResponse({eventEmitter: eventEmitter}); var me = this; await new Promise(function(resolve){ response.on('end', resolve); if (me.functions_index[aFunctionName]) me.functions_index[aFunctionName](request, response); else me.functions_index.app(request, response); }); return response; } async postObject(aFunctionName,aBody,aHeaders,aCookies) { let response = await this.postFunction(aFunctionName,aBody,aHeaders,aCookies); return JSON.parse(response._getData()); } async getFunction(aFunctionName,aParams,aHeaders,aCookies) { let url = (aFunctionName[0]=='/') ? aFunctionName : `/${aFunctionName}`; let options = { method: 'GET', url: url, query: aParams // guessing here }; if (aHeaders) options.headers = aHeaders; if (aCookies) { options.cookies = {}; for (let k in aCookies) { let v = aCookies[k]; if (typeof(v)=='string') { options.cookies[k] = {value: v}; } else if (typeof(v)=='object') { options.cookies[k] = v; } } } var request = httpMocks.createRequest(options); var response = httpMocks.createResponse({eventEmitter: eventEmitter}); var me = this; await new Promise(function(resolve){ response.on('end', resolve); if (me.functions_index[aFunctionName]) me.functions_index[aFunctionName](request, response); else me.functions_index.app(request, response); }); return response; } async getObject(aFunctionName,aParams,aHeaders,aCookies) { let response = await this.getFunction(aFunctionName,aParams,aHeaders,aCookies); return JSON.parse(response._getData()); } }; module.exports = FunctionCaller; ``` and my app is mounted as app : ```js exports.app = functions.https.onRequest(expressApp); ``` and my firebase.json contains : ```js "rewrites": [ : : : { "source": "/path/to/function", "function": "app" } ] ``` In my test file at the top I do : ```js const FunctionCaller = require('../FunctionCaller'); let fire_functions = require('../index'); const fnCaller = new FunctionCaller(fire_functions); ``` and then in the test I do : ```js let response = await fnCaller.postFunction('/path/to/function',anObject); ``` and it calls my function with anObject as request.body and returns the response object. I'm using node 8 on Firebase to get async/await etc. 
Upvotes: 0 <issue_comment>username_7: This works with Jest ```js import supertest from 'supertest' import test from 'firebase-functions-test' import sinon from 'sinon' import admin from 'firebase-admin' let undertest, adminInitStub, request const functionsTest = test() beforeAll(() => { adminInitStub = sinon.stub(admin, 'initializeApp') undertest = require('../index') // inject with the exports.app methode from the index.js request = supertest(undertest.app) }) afterAll(() => { adminInitStub.restore() functionsTest.cleanup() }) it('get app', async () => { let actual = await request.get('/') let { ok, status, body } = actual expect(ok).toBe(true) expect(status).toBeGreaterThanOrEqual(200) expect(body).toBeDefined() }) ``` Upvotes: 3
2018/03/18
1,898
6,477
<issue_start>username_0: I am trying to **efficiently** deduct which conditions caused an if statement to be overlooked by the program without using a sequence of if statements to verify each variable's relative integrity individually. Is this possible? ``` bool state = false; int x = 0; int y = 1; int z = 3; if(x == 0 && y == 1 && z == 2) { // Do something... state == true; } if(state == false) { std::cout << "I did not execute the if statement because the following conditions were not met: " << std::endl; /*Find a way to make the program output that z != 3 stopped the conditional from running without directly using if(z != 2)*/ } ```<issue_comment>username_1: If this is something you want to display to the end user, and not just while debugging, as suggested in the comments, you can design a simple data structure for yourself. It would be a list / vector / array of entries, each of which contain a) a value to compare against, b) a value to test, and optionally c) a description of the test. Then simply iterate the list, and check if equality holds for all of them. If not, you can stop the flow of the programme and print out the description. To more directly answer your question: no, there is nothing in C++ that would allow you to examine the results of previous statements. The statements and operations you see in the source code get compiled and possibly won't even be trivially recognisable among the assembly instructions. Being able to check the results would mean the data has to be stored somewhere, which would be an incredible waste of memory and processing time. That is why you have to do this yourself. Upvotes: 0 <issue_comment>username_2: You could introduce a counter as a "condition" between each of the conditions in the `if` to see when short-circuit evaluation of operator `&&` prohibits execution of the latter conditions: ``` int nrOfConditionFailing = 1; if(x == 0 && nrOfConditionFailing++ && y == 1 && nrOfConditionFailing++ && z == 2) { state = true; } if (!state) { cout << "failed due to condition nr " << nrOfConditionFailing << endl; } ``` If you want to check all the conditions, you cannot do it in a single if-statement; Short-circuit evaluation of operator && will prevent the latter conditions to be even checked/evaluated if one of the former conditions evaluates to false. However, you could do such a check as an expression that marks a bit in an unsigned int for each condition that is not met: ``` int x = 1; int y = 1; int z = 3; unsigned int c1 = !(x == 0); unsigned int c2 = !(y == 1); unsigned int c3 = !(z == 2); unsigned int failures = (c1 << 0) | (c2 << 1) | (c3 << 2); if (failures) { for(int i=0; i<3; i++) { if (failures & (1 << i)) { cout << "condition " << (i+1) << " failed." << endl; } } } else { cout << "no failures." << endl; } ``` Upvotes: 2 [selected_answer]<issue_comment>username_3: > > Is this possible? > > > It is not possible in the way you were thinking about the problem. 
You can solve your problem instead by running each test individually, storing the result, and then identifying which of them were `false`:

```
std::vector<std::tuple<std::string, bool>> tests = {
    {"x==0", x == 0}, // test name as a string followed by the actual test
    {"y==1", y == 1},
    {"z==2", z == 2}
};

if (!std::all_of(tests.begin(), tests.end(),
                 [](std::tuple<std::string, bool> &t) { return std::get<1>(t); })) {
    std::cout << "The following tests failed: ";
    // remove all tests that passed
    tests.erase(
        std::remove_if(tests.begin(), tests.end(),
                       [](std::tuple<std::string, bool> &t) { return std::get<1>(t); }),
        tests.end());
    // This will only print out the tests that failed
    std::transform(tests.begin(), tests.end(),
                   std::ostream_iterator<std::string>(std::cout, " "),
                   [](std::tuple<std::string, bool> &t) { return std::get<0>(t); });
    std::cout << std::endl;
} else {
    // what to do if all tests were true
}
```

This will evaluate all tests (i.e., it won't use `&&`'s short-circuiting) and print all the ones that failed. You could likely wrap this into a `class` to make this more generalizable and user friendly.

Upvotes: 0 <issue_comment>username_4: The original code tests each variable individually. The `&&` series is exactly equivalent to a series of if...else statements. There's nothing inefficient about one compared to the other, and there's nothing "clever" about using some tricky solution that achieves the same end result as straightforward code. I might write:

```
char const *reason = nullptr;
if (x != 0)
    reason = "x failed";
else if (y != 1)
    reason = "y failed";
else if (z != 2)
    reason = "z failed";

if (reason)
    std::cout << reason << '\n';
else {
    // success code here...
}
```

Upvotes: 0 <issue_comment>username_5: I would typically do something like the following to determine if a series of validity checks worked and to mark which ones failed.

```
unsigned long ulFlags = 0;

int x = 0;
int y = 1;
int z = 3;

ulFlags |= (x == 0) ? 0 : 0x0001;  // if bit set then condition failed.
ulFlags |= (y == 1) ? 0 : 0x0002;  // if bit set then condition failed.
ulFlags |= (z == 2) ? 0 : 0x0004;  // if bit set then condition failed.

if (ulFlags == 0) {
    // Do something since all conditions are met and valid ...
} else {
    std::cout << "I did not execute if statement because: " << std::hex << ulFlags << std::endl;
    /* Find a way to make the program output that z != 3 stopped the
       conditional from running without directly using if(z != 2) */
}
```

Upvotes: 0 <issue_comment>username_6: This is the same idea as some of the other answers, but with a template to simplify the syntax to use it. Stores all the individual checks in an `std::array` and one additional bool to be able to re-check the full statement without going through the individual results again. No dynamic allocation is a plus as well.

```
#include <array>
#include <iostream>
#include <type_traits>

template <typename... N>
struct what_failed {
    what_failed(N... n) : arr{n...}, status{(... && n)} {
        static_assert(std::conjunction_v<std::is_same<N, bool>...>, "Only pass bools");
    }
    std::array<bool, sizeof...(N)> arr;
    bool status;
    operator bool() { return status; }
};

int main() {
    auto check = what_failed(2 == 5, 2 < 5, 2 > 5, 1 == 1);
    if (check)
        std::cout << "Check: All true";
    else {
        std::cout << "Check: ";
        for (auto c : check.arr)
            std::cout << c << ' ';
    }
    return 0;
}
```

This requires c++17 due to fold expressions and template deduction in a constructor, but that can be worked around for c++11 with a couple of extra help-templates. Upvotes: 0
2018/03/18
2,322
7,881
<issue_start>username_0: So I'm finishing up my code, and encountered an error and don't know why! The String "Size" is null when it gets to the "baseArea()" Method. No errors are displayed. EDIT: Added the Main method of the class, where everything is referenced, please note this is not the full code in the class ! here is the relevant code: ``` public class OrderingSystem { private Canvas canvas; private double Price; private String Topping1 = setTopping1(); private String Topping2 = setTopping2(); private String Sauce = setSauce(); private String Size; private String Crust; private double BaseArea; /** * Constructor for the ordering system. */ public OrderingSystem() { canvas = new Canvas("Pizza Ordering", 900, 650); } /** * Method to draw the outline of the order screen. */ public void drawOrderScreen() { canvas.setForegroundColor(Color.BLACK); // vertical dividers canvas.drawLine(300, 0, 300, 600); canvas.drawLine(600, 0, 600, 600); // halfway divider canvas.drawLine(0, 300, 900, 300); setSauce(); startToppings(); startOrdering(); setSize(); baseArea(BaseArea, Size); Crust(); } public String setSize(){ System.out.print("What size would you like: Large, Medium or Small? : "); Scanner sizescanner = new Scanner(System.in); String Size = sizescanner.nextLine(); if (Size.equals("Large")){ System.out.print( "Large selected ! "); } else if (Sauce.equals("Medium")){ System.out.print( "Medium selected !" ); } else if (Sauce.equals("Small")){ System.out.print( "Small selected !" ); } else { sizescanner.reset(); System.out.print("Invalid Size! "); setSize(); } return Size; } public double baseArea(double baseArea,String Size){ if (Size.equals("Large")){ baseArea = 176.7150; } else if (Size.equals("Medium")){ baseArea = 113.0976; } else if (Size.equals("Small")){ baseArea = 78.54; } return baseArea; } ```
2018/03/18
2,048
7,487
<issue_start>username_0: React Native app crashing without any error log. No output on "`react-native log-android`" terminal, no red screen with error, Android emulator just crashes. Tried running with Expo, again crashes with no error Happens when working with `TextInput`. I have some ideas how I can fix the code, but want to understand why is app crashing without error log and making debugging much more difficult?<issue_comment>username_1: Run adb logcat You can find your app name in your package.json, under the name field. Also when looking for errors specifically dealing with react native, try running `adb logcat | grep 'redbox'` This way you don't have to scan the entire logfile. Or try the method shown in [this article](https://medium.com/@taufiq_ibrahim/using-adb-logcat-for-react-native-debugging-38256bda007c): > > Run `adb logcat *:S ReactNative:V ReactNativeJS:V` in a terminal to see your Android app's logs. > > > Upvotes: 5 <issue_comment>username_2: While running your Android Emulator for debugging also run > > Android Studio -> Tools -> Android -> Android Device Monitor > > > It will tell you exactly what is crashing the app under `LogCat` tab. Upvotes: 0 <issue_comment>username_3: two easy steps solved my problem... open android studio and remove offline bundle from src/main/assest..(\*\* if any) open MainApplication.java and remove the following import.. import com.facebook.react.BuildConfig Upvotes: 0 <issue_comment>username_4: my bug was fixed by deleting the build folder(inside android/app) and running ```sh npx react-native run-android ``` Upvotes: 6 <issue_comment>username_5: I had the same issue, maybe you made the same mistakes i did. Make sure u didn't change the app's package name (ex: com.myapp.app). I lost a lot of time on this. Seems like someone else published a app with the same name on google play, and i didnt change it correctly in each place i should on mine. Anyway, if that is not your problem, it is probably in something u changed in build.gradle or other config file, try to remember where u last changed something. Upvotes: 0 <issue_comment>username_6: In case react native app crashes without an error message checking the problem in Android Studio's Logcat works perfect. But don't forget to create a filter with your package name. 1. Open Android Studio 2. Open Logcat 3. Click on "Edit Filter Configuration" in the right top of the logcat screen 4. Fill in your criteria (don't forget package name) 5. Run you react native project on an android device or emulator [![enter image description here](https://i.stack.imgur.com/FQTm6.png)](https://i.stack.imgur.com/FQTm6.png) Upvotes: 3 <issue_comment>username_7: My RN App works correctly before, suddenly one day it does not work, keeps crashing without error log. All I have to do is: delete `build` and `.gradle` folder under `android` folder, then run again, it works. Hope it may help someone. Upvotes: 2 <issue_comment>username_8: This may be because SOLoader is absent. 
Ensure `implementation 'com.facebook.soloader:soloader:0.9.0+'` is added under `dependencies` in android/app/build.gradle. Clean your build: `cd android` `./gradlew clean` Try bundling: `./gradlew bundleRelease` Exit the android folder: `cd ../` Try running: `npx react-native run-android --variant=release` Upvotes: 2 <issue_comment>username_9: I was running an Android project on my Linux PC, then I made some changes to it on a macOS PC, and when I git pulled the project back to Linux I faced these crashes. Inside the `android` folder, just run: ```sh ./gradlew clean ``` and then try to run it again. Upvotes: 4 <issue_comment>username_10: A very simple solution for me: delete this file -> gradle-wrapper.properties and run the android folder with Android Studio. Upvotes: -1 <issue_comment>username_11: My app kept crashing because I had changed the name of the app in a few places. Make sure your app name is the same everywhere. Check if the folder name in ``` android/app/src/main/java/com/ ``` has the same name as that of your app in the manifest file in ``` android/app/src/main/ ``` Upvotes: 2 <issue_comment>username_12: Was having this issue on Windows 10. What fixed it for me in the end was: 1. delete `C:\Users\%userprofile%\.gradle\caches` 2. delete the `your-react-app/android` folder completely - then 'Discard Changes' in git so you get the original files back 3. re-run `npx react-native start` and `npx react-native run-android` Upvotes: 0 <issue_comment>username_13: This worked for me: run `react-native start --reset-cache`. After this, if the app throws some import error, kill the task and run `react-native start` or `npm start` normally and open the app. Upvotes: 1 <issue_comment>username_14: Deleting my `node_modules`, reinstalling them and running ``` npx react-native run-android ``` fixed it for me Upvotes: 1 <issue_comment>username_15: In case none of these work, don't hesitate to just filter on Warning or higher, as sometimes there will be a low-level error that will not be caught by the RN filters. I was only able to find my issue(s) with the following: ``` adb logcat '*:W' ``` Upvotes: 4 <issue_comment>username_16: In case you tried any of the given suggestions and none worked, use Android Studio to run the app in debug mode and then watch the logs in the debug console. For me, it was a styling rule error. I did `paddingTop: Platform.OS === 'ios' && 50` which caused the app to exit immediately on Android. The debug console log I got was `E/unknown:ViewManager: Error while updating prop paddingTop` To correct that, I did `paddingTop: Platform.OS === 'ios' ? 150 : 0` Upvotes: -1 <issue_comment>username_17: A tip: my emulator API version was 32 and my targetSdkVersion was 31. I created another emulator whose API level is 31 and the error is gone. Upvotes: 0 <issue_comment>username_18: Well first of all, does it crash when it opens or does it crash at a specific screen? Mine crashed because I did not put my text inside a `Text` component inside the `TouchableOpacity`. Upvotes: 0 <issue_comment>username_19: Delete your node modules, then run `npm install`. After this run `gradlew clean` (Windows) or `./gradlew clean` (macOS), then run `npx react-native run-android --variant=release` or `npx react-native run-android` Upvotes: -1 <issue_comment>username_20: For me the issue had several causes. I was using react-native-video, then at some point installed react-native-track-player. These packages were supposed to run together in the app. And so it was working fine until I ran into an issue and I needed to clear my build folder.
After clearing it, my app refused to open but was installing. What I did to fix the issue: * Clear my node\_modules and reinstalled (This did not fix the issue) * Then I uninstalled **react-native-track-player** and started using **react-native-sound-player**. * Reinstalled react-native-video and followed the installation guide from [here](https://www.npmjs.com/package/react-native-video) * Cleared my build again and performed all necessary npm installs In my case, I found that using **react-native-video** and **react-native-track-player** together was the major source of my problem and I think it's related to both packages using different versions of exoplayer. So I would say you uninstall one of them or find a way to work around the exoplayer versions. Just incase, I was using **react-native-video** version **5.2.1** and **react-native-track-player** version **3.2.0** Upvotes: 0
2018/03/18
599
2,465
<issue_start>username_0: First I execute a stored procedure that creates a #temp table and fills it with data. After that I want to join some #temp columns with columns from other tables. The first query executes fine, but on the second query an error occurs (#temp object is invalid).

```
if (con.State == ConnectionState.Closed)
{
    con.Open();
}
IsInTransaction = true;
trans = con.BeginTransaction();
da = new SqlDataAdapter("Execute SP_Statement_Temp", con);
da.SelectCommand.Transaction = trans;
DataTable DatTemp = new DataTable();
da.Fill(DatTemp);

SelectString = "Select Distinct #temp.IdentityID, TblMasterTypeOfIdentity.TypeOfIdentity,TblIdentity.IdentityName, '' As 'Opening Balance' , '' As 'Closing Balance' from #temp inner join TblIdentity on TblIdentity.IdentityID=#temp.IdentityID inner join TblMasterTypeOfIdentity on TblMasterTypeOfIdentity.TypeOfIdentityID=#temp.TypeOfIdentityID";
CmdString = SelectString + " " + WhereString + " " + OrderBy;
da = new SqlDataAdapter(CmdString, con);
da.SelectCommand.Transaction = trans;
DataTable datDetail = new DataTable();
da.Fill(datDetail);
trans.Commit();
IsInTransaction = false;
con.Close();
```

<issue_comment>username_1: That's going to be because the `#temp` table is dropped as soon as the stored procedure that created it finishes. > > A local temporary table created in a stored procedure is dropped automatically when the stored procedure is finished. The table can be referenced by any nested stored procedures executed by the stored procedure that created the table. The table cannot be referenced by the process that called the stored procedure that created the table. > > > <https://learn.microsoft.com/en-us/sql/t-sql/statements/create-table-transact-sql> Upvotes: 3 [selected_answer]<issue_comment>username_2: If you want to create a temporary table you can use across connections, you can use a double hash (`##`) instead of a single one (`#`). This creates a global temporary table, as opposed to a local temporary table. So, if you change the SQL inside `Execute SP_Statement_Temp` to create a temp table called `##temp` instead of `#temp`, you should be able to use that in the SQL. This has been asked before, see e.g. [Local and global temporary tables in SQL Server](https://stackoverflow.com/questions/2920836/local-and-global-temporary-tables-in-sql-server/2921091) Upvotes: 2
2018/03/18
586
2,403
<issue_start>username_0: I typically work with C++ but off late have to program a lot in Python. Coming from a C++ background, I am finding dynamic typing to be very inconvenient when I have to modify an existing codebase. I know I am missing something very basic and hence turning to the stackoverflow community to understand best practices. Imagine, there is a class with a number of methods and I need to edit an existing method. Now, in C++, I could explicitly see the datatype of every parameter, check out the .h files of the corresponding class if need be and could quickly understand what's happening. In python on the other hand, all I see are some variable names. I am not sure if it is a list or a dictionary or maybe some custom datastructure with its getters and setters. To figure this out, I need to look at some existing usages of this function or run the code with breakpoints and see what kind of datastructure am I getting. I find either methods to be very time consuming. Is there a faster way to resolve this problem? How should I quickly determine what's the datatype of a particular variable? The general impression is that code is easier to read/write in Python, but I am not finding it very quick to read python code because of lack of types. What am I missing here?<issue_comment>username_1: These are the following things i follow: 1. Comment clearly what is being returned and what is the input in the `docstring` 2. Use a debug(or a Flag) variable, which is by default set to `False`, and keep a if block as follows. ``` if debug: print(type(variable)) ``` So, in that way, you would be sure to see what is the type of the variable. Upvotes: 0 <issue_comment>username_2: In Python, you can see the data type of any variable by using ``` type(variable_name) ``` It will show you data type of that variable. Such as int, bool, str, etc. Upvotes: 0 <issue_comment>username_3: I feel your pain, too! I frequently switch between Python and C++, so paradigm shifting does give me paranoia. However, I've been readjusting my codes with: [Type Annotations](https://docs.python.org/3/library/typing.html) It doesn't improve runtime performance, but it provides sense of comfort when reading through tens of thousands line of codes. Also, you can run your python programs with this to further verify your type annotations: [mypy](http://mypy-lang.org/) Upvotes: 1
2018/03/18
624
2,245
<issue_start>username_0: **Scenario:** I have 2 tables, Car Table, and Employee Table. Pretty simple tables, both tables however include a location code: ``` ╔════════════════╗ ╔═══════════╗ β•‘ Employee Table β•‘ β•‘ Car Table β•‘ ╠════════════════╣ ╠═══════════╣ β•‘ empID β•‘ β•‘ CarID β•‘ β•‘ empName β•‘ β•‘ CarName β•‘ β•‘ LocID β•‘ β•‘ LocID β•‘ β•šβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β• β•šβ•β•β•β•β•β•β•β•β•β•β•β• ``` This is to represent Employees working at specific locations, and also shows which vehicles are in the respective locations. What I am trying to display with this query is a count of each car, and employee GROUPED BY Location ID, for example: ![Example of what I want](https://i.stack.imgur.com/dsCMA.png) Every time I try a query, it always adds multiple location ids in the query, even though I want to group by it. Is this possible with one query?<issue_comment>username_1: ``` select e.LocId, EmpCount, CarCount from (select LocId, count(*) as EmpCount from Employee group by LocId) as e inner join (select LocId, count(*) as CarCount from Car group by LocId) as c on c.LocId = e.LocId ``` Here's the general idea though I don't think that's perfect syntax for Access as I'm sure it must be missing a pair of parentheses somewhere. Since you have a many to many relationship you'll need to be able to collapse to a single row before joining them up. I'm also assuming that every location will be represented by both an employee and a car. Perhaps you can get away with a left outer join if that's not completely true. As I believe that Access doesn't allow full outer join you'll need another workaround if you can't make assumptions along those lines. It might also work to use `count(distinct)` but I think it'll be faster using the above approach if the tables are large. ``` select e.LocId, count(distinct e.EmpId) as EmpCount, count(distinct c.CarId) as CarCount from Employee as e inner join Car as c on c.LocId = e.LocId ``` Upvotes: 2 <issue_comment>username_2: Try this: ``` SELECT CASE WHEN (A.LocID IS NULL) THEN B.LocID ELSE A.LocID END LocID, Count(A.CarID), Count(B.empID) FROM Car A FULL OUTER JOIN Employee B ON A.LocID=B.LocID GROUP BY A.LocID, B.LocID; ``` Upvotes: 0
2018/03/18
964
1,715
<issue_start>username_0: i have a text file with 3 columns tab separated: 1st column: a gene ID 2nd column: a value 3rd column: a list of genes associated to the one in the 1st column comma separated (number of genes can vary across lines) ``` TMCS09g1008699 6.4 TMCS09g1008677,TMCS09g1008681,TMCS09g1008685 TMCS09g1008690 5.3 TMCS09g1008686,TMCS09g1008680,TMCS09g1008675,TMCS09g1008690 ``` etc.. what i want is this: ``` TMCS09g1008699 6.4 TMCS09g1008677 TMCS09g1008699 6.4 TMCS09g1008681 TMCS09g1008699 6.4 TMCS09g1008685 TMCS09g1008690 5.3 TMCS09g1008686 TMCS09g1008690 5.3 TMCS09g1008680 TMCS09g1008690 5.3 TMCS09g1008675 TMCS09g1008690 5.3 TMCS09g1008690 ``` could someone help me?<issue_comment>username_1: ``` $ awk 'BEGIN{FS=OFS="\t"} {n=split($3,f3,","); for(i=1;i<=n;i++) print $1,$2,f3[i]}' file ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: Here is an R solution using packages from the `tidyverse`: ``` library(tidyverse); df %>% mutate(V3 = str_split(V3, ",")) %>% unnest(); # V1 V2 V3 #1 TMCS09g1008699 6.4 TMCS09g1008677 #2 TMCS09g1008699 6.4 TMCS09g1008681 #3 TMCS09g1008699 6.4 TMCS09g1008685 #4 TMCS09g1008690 5.3 TMCS09g1008686 #5 TMCS09g1008690 5.3 TMCS09g1008680 #6 TMCS09g1008690 5.3 TMCS09g1008675 #7 TMCS09g1008690 5.3 TMCS09g1008690 ``` Explanation: `str_split` column 3 based on `","`; expand the resulting `list` entries with `unnest`. --- Sample data ----------- ``` df <- read.table(text = "TMCS09g1008699 6.4 'TMCS09g1008677,TMCS09g1008681,TMCS09g1008685' TMCS09g1008690 5.3 'TMCS09g1008686,TMCS09g1008680,TMCS09g1008675,TMCS09g1008690'", header = F) ``` Upvotes: 1
2018/03/18
694
1,619
<issue_start>username_0: The script that I am using to import external HTML which has my Bootstrap based Navbar, here is the Javascript: ``` var link = document.querySelector('link[rel="import"]'); // Clone the <template> in the import. var template = link.import.querySelector('template'); var clone = document.importNode(template.content, true); document.querySelector('#navBar').appendChild(clone); ``` In my HTML page, all I am doing is including a reference to external html file in section and adding a div in my section with defined class name as follows: ``` section --> section --> ```
2018/03/18
674
2,590
<issue_start>username_0: I'm currently working on a bot project and stuck in some line of codes. What I'm trying to do is convert the user input into an array and then modify it before sending it back to the user. It's like ``` user=> hello bot => :indicator_h::indicator_e::indicator_l::indicator_l::indicator_o: ``` As you can see, the letter after `_` are based on user input `:indicator_{userinput#1}` and then adds another `:` at the end. I'm using `split()` to convert user's input into array. I've made the bot send `:indicator_h:`, but it only worked for a single word, when I send more than one characters, it will send `:indicator_h,e,l,l,o:` instead of separate 'em. I don't know how to ask this properly since my English is not that good, and maybe someone already asked about this. If so, please give me a link to that thread before marking this question as a duplicate. :) Thank you in Advanced, Cheers!<issue_comment>username_1: Try this, ```js var s = "hello world" console.log(s.split("").filter(v=> v!==" ").map(i=>":indicator_" + i + ":").join("")) ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: You can split the input string (`"hello"`) into individual characters, use the [`Array.prototype.map`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/map) to transform each of the elements into the corresponding indicator, and then join all the modified elements into one string. ``` function toIndicators(str) { return str.split("").map(char => `:indicator_${char}:`).join(""); } toIndicators("hello"); // => ":indicator_h::indicator_e::indicator_l::indicator_l::indicator_o:" ``` The `split("")` call converts the string into an array containing all the individual characters. The `map` call executes the provided function on all of the elements of the array (all of the characters). The function passed to map inserts the single character into the correct position in the indicator template string. The `join("")` call concatenates all the elements of the array into one string with no delimiter between them. Upvotes: 0 <issue_comment>username_3: `input_massage` creates an array of letters out of the user's input. Then iterates through each letter and converts it to `:indicator_{userinput#1}:`. Then converts it back to a string for the bot. ```js function input_massage(input) { return input.split('') .map(function(letter) { return `:indicator_${letter}:` }) .join(''); } console.log(input_massage('hello')) ``` Upvotes: 1
2018/03/18
737
2,747
<issue_start>username_0: So far I can create my own Data Models in a swift file. Something like: User.swift: ``` class User { var name: String var age: Int init?(name: String, age: Int) { self.name = name self.age = age } } ``` When I create a Core Data model, ie. a *UserData* entity, (1) do I have to add the same number of attributes as in my own data model, so in this case two - the name and age? Or (2) can it has just one attribute eg. name (and not age)? My core data model: UserData * name * age The second problem I have is that when I start the fetch request I get a strange error in Xcode. This is how I start the fetchRequest (AppDelegate is set up like it is suggested in the documentation): ``` var users = [User]() var managedObjectContext: NSManagedObjectContext! ... func loadUserData() { let dataRequest: NSFetchRequest = UserData.fetchRequest() do { users = try managedObjectContext.fetch(dataRequest) .... } catch { // do something here } } ``` The error I get is "**Cannot assign values of type '[UserData] to type [User]**. What does this error mean? In the official documentation are some of the errors described, but not this particularly one.<issue_comment>username_1: You cannot use custom (simple) classes as Core Data model. Classes used as Core Data model **must** be a subclass of `NSManagedObject`. If your entity is named `UserData` you have to declare the array ``` var users = [UserData]() ``` --- The `User` class is useless. Upvotes: 1 [selected_answer]<issue_comment>username_2: If you are designing a user model in core data you don't have to write the class yourself. In fact by default Xcode will generate subclasses of NSManagedObject automatically once you create them in your project's, but you can also manually generate them if you would like to add additional functionality: [![enter image description here](https://i.stack.imgur.com/6Yhwm.png)](https://i.stack.imgur.com/6Yhwm.png) Then you can go to Editor and manually generate the classes [![enter image description here](https://i.stack.imgur.com/PyKOV.png)](https://i.stack.imgur.com/PyKOV.png) Doing this will give you `User+CoreDataClass.swift` and `User+CoreDataProperties.swift`. I see in your question you are asking about how the core data model compares to your 'own' model, but if you're using core data then that IS the model. The generated User class, which subclasses NSManagedObject, is all you need. Then you might fetch the users like this: ``` let userFetch = NSFetchRequest(entityName: "User") do { users = try managedObjectContext.executeFetchRequest(userFetch) as! [User] } catch { fatalError("Failed to fetch users: \(error)") } ``` Upvotes: 1