<issue_start>username_0: I am building a PWA from my Angular app and I am getting the following error when I run `ng build --prod`: ``` ERROR in app\app.module.ts(108,64): Error during template compile of 'AppModule' Function calls are not supported in decorators but 'Environment' was called in 'environment' 'environment' calls 'Environment'. ``` However, it makes no sense because I added `export` to the class as you can see here: **environment.prod.ts** ``` import { BaseEnvironment } from './base-environment'; import { ProspectBuilderModel } from '../app/models/prospect'; export class Environment extends BaseEnvironment { production: boolean = true; prospectBuilderModel: ProspectBuilderModel = { buildQuote: false, buildAcknowledge: false, buildOrganizationInfo: false, buildFinancialInfo: false, buildTradeInfo: false, buildPermissiblePurpose: false, buildUserSetup: false, buildPackageSelection: false, buildPaymentOptions: false, buildOrderOptions: false, buildVerifyOrganizationInfo: false, buildDocusignAuthorization: false, buildDocusignContract: false }; } export const environment = new Environment(); ``` **base-environment.ts** ``` import { ProspectBuilderModel } from '../app/models/prospect'; export abstract class BaseEnvironment { abstract production: boolean; abstract prospectBuilderModel: ProspectBuilderModel; } ``` **app.module.ts** ``` ... ], imports: [ BrowserModule, AppRoutingModule, FormsModule, MultiselectDropdownModule, ReactiveFormsModule, HttpModule, ToastrModule.forRoot(), BrowserAnimationsModule, NgxMyDatePickerModule.forRoot(), PopoverModule.forRoot(), ModalModule.forRoot(), ServiceWorkerModule.register('/ngsw-worker.js', { enabled: environment.production }) ], providers: [ ... 
``` Does anyone know what I can do to resolve this error?<issue_comment>username_1: I am assuming that your code was working before you added the line *ServiceWorkerModule.register('/ngsw-worker.js', { enabled: environment.production })* Normally when we access the environment variable in the app module, it refers to the environment files auto-generated by the Angular CLI (which are basically constants and exported as such). I have never tried creating an object of a class and passing it off in the app module; this might be causing the problem. Just try passing `true` directly in this line instead of getting it from an object and see if that works. Upvotes: 0 <issue_comment>username_2: Below is the code for the solution I finally came up with. Hope this helps anyone who runs into this type of issue. Essentially, I just modified my environment file so that it does not create an instance of the Environment class. Apparently the Angular AOT compiler does not accept the instantiation: ``` export const environment = { production: true, prospectBuilderModel: { buildQuote: false, buildAcknowledge: false, buildOrganizationInfo: false, buildFinancialInfo: false, buildTradeInfo: false, buildPermissiblePurpose: false, buildUserSetup: false, buildPackageSelection: false, buildPaymentOptions: false, buildOrderOptions: false, buildVerifyOrganizationInfo: false, buildDocusignAuthorization: false, buildDocusignContract: false } } ``` Upvotes: 1 [selected_answer]<issue_comment>username_3: Had the same problem; here are possible workarounds (if you don't have dynamically loaded properties): ### 1. As in your answer, just a plain object. But you lose default properties and information about "to implement" properties. ### 2. Use static classes: here you lose the 'to implement' properties, since abstract statics are [not possible](https://github.com/microsoft/TypeScript/issues/14600), but you have the types and defaults.
``` // default.env.ts export class BaseEnvironment { public static production: boolean = true; public static specialPropToImpl: boolean = true; } // your.env.ts export class Environment extends BaseEnvironment { public static specialPropToImpl = true; } export const environment = Environment; ``` ### 3. Use a default-properties object, an interface and object spread: here you have the 'to implement' properties, types and defaults, but it is a little ugly to implement. ``` // default.env.ts export interface EnvProperties { production: boolean; specialPropToImpl: boolean; } export const defaultEnv = { production: true }; // your.env.ts export const environment: EnvProperties = { ...defaultEnv, specialPropToImpl: true }; ``` ### 4. Just one more exported constant next to the environment: you keep your subclasses, but every env file must define it. Since new environment files are usually created by copy-pasting an old one, this can work; depending on the production value, you change the constant manually. Not a very flexible or maintainable solution, but simple. ``` // your.env.ts export const enablePwa = true // app.module.ts import {environment, enablePwa} from 'env.ts' //... ServiceWorkerModule.register('/ngsw-worker.js', { enabled: enablePwa }) ``` Upvotes: 2
<issue_start>username_0: I need to subtract current REQ\_START\_DATE from ( previous ACTUAL\_START\_DATE + RUN\_DURATION) in order to check the timing whether job is requested before completion of the previous execution. Tried using below query: ``` WITH delay_in_start AS ( SELECT LOG_ID, LOG_DATE, OWNER, JOB_NAME, REQ_START_DATE, ACTUAL_START_DATE,run_duration, ROW_NUMBER() OVER (PARTITION BY job_name ORDER BY req_start_date desc) RN FROM dba_scheduler_job_run_details t ) SELECT cast(A.req_start_date as date) - cast((B.ACTUAL_START_DATE + b.run_duration) as date) Consumption, a.*, b.* FROM delay_in_start A LEFT JOIN delay_in_start B ON B.JOB_NAME = A.JOB_NAME AND A.RN = B.RN - 1 where cast(A.req_start_date as date) > (cast(B.ACTUAL_START_DATE as date) + b.run_duration) ``` But not sure of the output. Can someone help?<issue_comment>username_1: > > lag function doesn't help in date datatype. > > > No, That's not right and you may be looking for something like this. ``` SELECT JOB_NAME, CASE WHEN REQ_START_DATE >= LAG ( actual_start_date + run_duration ) OVER ( PARTITION BY JOB_NAME ORDER BY LOG_DATE ) THEN 1 ELSE 0 END FROM dba_scheduler_job_run_details; ``` Upvotes: 1 <issue_comment>username_2: Based on your description, you seem to be comparing the wrong way around; your where-clause filter is looking for requested dates *after* the completion of the previous job, based on its actual start time and duration - not before. So you can perhaps just change the logic to: ``` where cast(A.req_start_date as date) < (cast(B.ACTUAL_START_DATE as date) + b.run_duration) ``` I'm not sure why you're casting the timestamps to dates though; you're losing precision, which could mean you miss jobs that ran very close together (sub-second gap). On my test instance I see 23 records with those casts - all from `ORACLE_APEX_MAIL_QUEUE` - but 36 if I leave them as timestamps. 
You can also use `lag()` instead of self-joining based on row number: ``` select lag_actual_start_date + lag_run_duration - req_start_date as consumption, t.* from ( select dba_scheduler_job_run_details.*, lag(actual_start_date) over (partition by job_name order by req_start_date) as lag_actual_start_date, lag(run_duration) over (partition by job_name order by req_start_date) as lag_run_duration from dba_scheduler_job_run_details ) t where req_start_date < lag_actual_start_date + lag_run_duration order by job_name, req_start_date; ``` Which gives the 'consumption' as an interval - positive as I've switched the terms round in that subtraction too. The results currently include only the actual start date and duration from the previous row; if you wanted other fields then you could include lag clauses for those too. I would also avoid `*`, but I don't know which columns you are actually interested in, from either the current or previous row. Upvotes: 3 [selected_answer]<issue_comment>username_3: Used below query to get the data of all the jobs whose differs in req\_start\_date and actual\_start\_date excluding those job's details which gets delayed because of previous job run delayed (i.e. 
we don't want to be alerted if the delayed start is due to the previous run not having completed) and which take more than 60 seconds: ``` SELECT extract(DAY FROM(t.actual_start_date - t.req_start_date)) * 24 * 60 * 60 + (extract(hour FROM(t.actual_start_date - t.req_start_date))) * 60 * 60 + (extract(minute FROM(t.actual_start_date - t.req_start_date))) AS duration, t.* FROM (SELECT d.*, lag(actual_start_date) over(PARTITION BY job_name ORDER BY req_start_date) AS lag_actual_start_date, lag(run_duration) over(PARTITION BY job_name ORDER BY req_start_date) AS lag_run_duration FROM dba_scheduler_job_run_details d) t WHERE ((lag_actual_start_date IS NOT NULL AND req_start_date > lag_actual_start_date + lag_run_duration) OR -- For Jobs running at frequent time interval (lag_actual_start_date IS NULL AND actual_start_date > req_start_date)) -- For Jobs scheduled at single time interval AND extract(DAY FROM(t.actual_start_date - t.req_start_date)) * 24 * 60 * 60 + (extract(hour FROM(t.actual_start_date - t.req_start_date))) * 60 * 60 + (extract(minute FROM(t.actual_start_date - t.req_start_date))) > 1 ORDER BY log_date DESC; ``` Upvotes: 0
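The lag-based overlap check used in the accepted answer is not Oracle-specific. Here is a minimal sketch of the same idea using SQLite's `LAG` window function (available in SQLite 3.25+, which ships with modern Python builds); the job rows and epoch-second timestamps are hypothetical, made up for the demo:

```python
import sqlite3

# Hypothetical job-run rows: (job_name, req_start, actual_start, run_seconds).
# Times are integer epoch seconds to keep the arithmetic simple.
rows = [
    ("JOB_A", 0,   0,   120),  # finishes at t=120
    ("JOB_A", 100, 130, 60),   # requested at t=100, before previous finish -> overlap
    ("JOB_A", 300, 300, 60),   # requested well after previous finish -> no overlap
]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE runs (job_name TEXT, req_start INT, actual_start INT, run_seconds INT)")
conn.executemany("INSERT INTO runs VALUES (?, ?, ?, ?)", rows)

# Same idea as the Oracle query: compare each request time with the previous
# run's actual_start + duration, per job, ordered by request time.
overlaps = conn.execute("""
    SELECT req_start
    FROM (
        SELECT req_start,
               LAG(actual_start + run_seconds) OVER (
                   PARTITION BY job_name ORDER BY req_start
               ) AS prev_finish
        FROM runs
    )
    WHERE prev_finish IS NOT NULL AND req_start < prev_finish
""").fetchall()

print(overlaps)  # [(100,)] -- only the run requested at t=100 overlaps
```

The `LAG(...) IS NOT NULL` guard mirrors the answer's handling of the first run of each job, which has no previous row to compare against.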
<issue_start>username_0: ``` g++ -std=c++14 -m32 -I/export/home/ab0599/local/include -L/export/home/ab0599/local/lib xerces_tst1.cpp -o xerces_tst1 -lxerces-c-3.2 Undefined first referenced symbol in file __1cG__CrunKpure_error6F_v_ /export/home/ab0599/local/lib/libxerces-c-3.2.so __1cG__CrunIex_alloc6FI_pv_ /export/home/ab0599/local/lib/libxerces-c-3.2.so __1cG__CrunIex_throw6Fpvpkn0AQstatic_type_info_pF1_v_v_ /export/home/ab0599/local/lib/libxerces-c-3.2.so __1cG__CrunKcross_cast6Fpvpkn0AQstatic_type_info_4_1_ /export/home/ab0599/local/lib/libxerces-c-3.2.so typeinfo for xercesc_3_2::XMLException /var/tmp//ccjKE5ec.o xercesc_3_2::XMLPlatformUtils::Initialize(char const*, char const*, xercesc_3_2::PanicHandler*, xercesc_3_2::MemoryManager*) /var/tmp//ccjKE5ec.o __1cG__CrunGex_get6F_pv_ /export/home/ab0599/local/lib/libxerces-c-3.2.so __1cG__CrunMex_rethrow_q6F_v_ /export/home/ab0599/local/lib/libxerces-c-3.2.so xercesc_3_2::XMLUni::fgXercescDefaultLocale /var/tmp//ccjKE5ec.o __1cG__CrunSregister_exit_code6FpG_v_v_ /export/home/ab0599/local/lib/libxerces-c-3.2.so __1cG__CrunHex_skip6F_b_ /export/home/ab0599/local/lib/libxerces-c-3.2.so xercesc_3_2::XMLPlatformUtils::Terminate() /var/tmp//ccjKE5ec.o __1cG__CrunIex_clean6F_v_ /export/home/ab0599/local/lib/libxerces-c-3.2.so __1cG__CrunKex_rethrow6F_v_ /export/home/ab0599/local/lib/libxerces-c-3.2.so c::c(N6, (int0_t)) /export/home/ab0599/local/lib/libxerces-c-3.2.so c::c(n6, (int0_t)) /export/home/ab0599/local/lib/libxerces-c-3.2.so __1c2K6Fpv_v_ /export/home/ab0599/local/lib/libxerces-c-3.2.so __1c2k6Fpv_v_ /export/home/ab0599/local/lib/libxerces-c-3.2.so ld: fatal: symbol referencing errors collect2: error: ld returned 1 exit status ``` Please, help to solve the problem. I tried diffferent combination of parameters but nothing help.<issue_comment>username_1: Please check the (in my opinion) mismatching paths given in "-L/export/home/sian/local/lib" and shown in "/export/home/ab0599/local/lib". 
If you have a link to that path, that's OK. Run `ldd /export/home/ab0599/local/lib/libxerces-c-3.2.so` and try to resolve the (probably missing) dependencies. If the dependencies are satisfied, you probably have a version mismatch (most likely your libxerces was built against a different version of a common dependency than your xerces\_tst1). Upvotes: 0 <issue_comment>username_2: I have solved the problem. The problem was that I had configured the library with the Sun CC compiler: ``` ./configure --prefix=/export/home/ab0599/local --disable-network CC=cc CXX=CC CFLAGS=-m64 CXXFLAGS=-m64 ``` but tried to compile the example with gcc. The two compilers use incompatible C++ ABIs, which is why the Sun Studio runtime symbols (`__1cG__Crun...`) were unresolved. I have now rebuilt the library with gcc and the problem is solved. Upvotes: 1
<issue_start>username_0: I am having problems in decoding JSON response using Swift 4 Decoding Functionality. I have main construct and it has one inner construct var hr\_employees: [Employee]? = []. The problem is JSON not mapping for 'var hr\_employees: [Employee]? = []. I am getting correct values forthe three root values response\_status,access\_level,session\_token. ////////////Code for Struct//////////////////////// ``` struct EmployeeData: Codable { var response_status:Int=0 var access_level:Int=0 var session_token:String="" var hr_employees: [Employee]? = [] } private enum CodingKeys: String, CodingKey { case response_status="response_status" case access_level="access_level" case session_token="session_token" case hr_employees="hr_employees" } init() { } init(from decoder: Decoder) throws { let values = try decoder.container(keyedBy: CodingKeys.self) response_status = try values.decode(Int.self, forKey: .response_status) do{ session_token = try values.decode(String.self, forKey: .session_token) }catch { print( "No value associated with key title (\"session_token\").") } do{ access_level = try values.decode(Int.self, forKey: .access_level) } catch { print( "No value associated with key access_level ") } } ``` /////////////////Inner Struct/////////////////////// ``` struct Employee: Codable { var userId:Int=0 var nameFirst:String="" var nameLast:String="" var position:String="" var company:String="" var supervisor:String="" var assistant:String="" var phone:String="" var email:String="" var address:String="" var gender:String="" var age:Int=0 var nationality:String="" var firstLanguage:String="" var inFieldOfView:String = "0" var photo:String="user-default" var status:String="3" } ``` ////////////Following is the JSON////////////////////// ``` { "response_status":1 ,"access_level":2 ,"hr_employees":[ { "user_id":4226 ,"name_last":"Sampe" ,"name_first":"Frederica" ,"position":"Systems Maint" ,"phone":"123456" ,"email":"<EMAIL>" ,"address":"00100 Helsinki 1" 
,"age":67 ,"company":"Omega Enterprise" } ,{ "user_id":5656 ,"name_last":"Aalto" ,"name_first":"Antero" ,"position":"Programming Methodology and Languages Researcher" ,"supervisor":"<NAME>" ,"phone":"123456" ,"email":"<EMAIL>" ,"address":"00100 Finland " ,"age":51 ,"company":"Omega Fire Related Equipment" } ] } ```<issue_comment>username_1: One problem is that what is in the JSON does not match your definition of Employee. For example `nameFirst` is not present and `name_first` is. Another is that you have a custom implementation of `init(from:)`, and it never fetches the `hr_employees` value! Upvotes: 1 <issue_comment>username_2: Quite a few things for you to improve on: 1. Your `Struct`s can be improved to harness automation capability of the `Codable` protocol. 2. You need to understand why you're using a `CodingKeys` enum * and in your case... also where best to have it (hint: inside the `Struct` itself) 3. You need to know which parameters need to be optional and why * this depends on your json structure ofcourse 4. If the parameters are to have a default value then there's a whole different process you need to follow; like having your own `init(from:Decoder)` * which you have to a certain extent but doesn't really handle everything in it's current state --- Based on your given JSON example, you can simply do the following. **However...** do note that this is not designed to provide default values. i.e. If a key is missing in the json, like `status` for example, then the parameter `status` in your `Employee` struct will be `nil` rather than a default value of `"3"`. ``` struct EmployeeData: Codable { var responseStatus: Int var accessLevel: Int /* sessionToken is optional because as per your JSON it seems it not always available */ var sessionToken: String? var hrEmployees: [Employee] /* CodingKeys is inside the struct It's used if the JSON key names are different than the ones you plan to use. i.e. 
JSON has keys in snake_case but we want camelCase */ enum CodingKeys: String, CodingKey { case responseStatus = "response_status" case accessLevel = "access_level" case sessionToken = "session_token" case hrEmployees = "hr_employees" } } ``` --- ``` struct Employee: Codable { var userId: Int var nameFirst: String var nameLast: String var position: String var company: String var supervisor: String? var assistant: String? var phone: String var email: String var address: String var gender: String? var age: Int var nationality: String? var firstLanguage: String? var inFieldOfView: String? var photo: String? var status: String? enum CodingKeys: String, CodingKey { case userId = "user_id" case nameFirst = "name_first" case nameLast = "name_last" case firstLanguage = "first_language" case inFieldOfView = "in_field_of_view" /* Keys names that are same in json as well as in your model need not have a raw string value but must be defined if it's to be encoded/decoded from the json else it can be omitted and a default value will be required which won't affect the encoding or decoding process */ case position case company case supervisor case assistant case phone case email case address case gender case age case nationality case photo case status } } ``` --- Check: ``` do { let employeeData = try JSONDecoder().decode(EmployeeData.self, from: jsonAsData) print(employeeData) } catch { /* If it comes here then debug, it's most probably nil keys meaning you need more optional parameters in your struct */ print(error) } ``` --- *If you want default values in your `Struct` and the above example is a dealbreaker for you then check the following answer:* * <https://stackoverflow.com/a/44575580/2857130> Upvotes: 0
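The core of the accepted advice, treating keys that may be absent as optionals or supplying defaults at decode time, is language-agnostic. A minimal sketch of the same idea in Python (not Swift); the field subset and helper name are made up for illustration:

```python
import json
from dataclasses import dataclass
from typing import Optional

@dataclass
class Employee:
    user_id: int
    name_first: str
    name_last: str
    supervisor: Optional[str] = None  # absent for some records in the JSON
    status: str = "3"                 # default applied when the key is missing

def decode_employee(raw: str) -> Employee:
    data = json.loads(raw)
    # Pass through only the keys the dataclass declares; anything missing
    # falls back to the defaults declared above instead of failing the decode.
    known = Employee.__dataclass_fields__.keys()
    return Employee(**{k: v for k, v in data.items() if k in known})

e = decode_employee('{"user_id": 4226, "name_first": "Frederica", "name_last": "Sampe"}')
print(e.status, e.supervisor)  # 3 None
```

This mirrors the Swift trade-off: optionals make missing keys explicit (`None`/`nil`), while declared defaults give you a value either way.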
<issue_start>username_0: RxJS provides a function called `forkJoin`. It allows you to input multiple `Observables` and wait for all of them to finish. I am wondering whether the resulting array will contain the results in the same order as the input observables. If it will not, which operator does maintain the same order? I've been looking into the [docs](https://github.com/Reactive-Extensions/RxJS/blob/master/doc/api/core/operators/forkjoin.md) and was not able to find the answer.<issue_comment>username_1: It will return results in the same order, as described in [these official docs](http://reactivex.io/rxjs/class/es6/Observable.js~Observable.html#static-method-forkJoin). It's also worth mentioning that it emits only the latest value of each stream: ``` var source = Rx.Observable.forkJoin( Rx.Observable.of(1,2,3), Rx.Observable.of(4) ); source.subscribe(x => console.log("Result", x)); // LOG: Result [3,4] ``` Upvotes: 6 [selected_answer]<issue_comment>username_2: Anti-answer =========== For those of us who actually want to get results sorted by task finish time, you can use the following alternative to forkJoin: ```js const { merge, of } = rxjs; const { bufferCount, take, delay } = rxjs.operators; let t1 = of(1).pipe(delay(1000)); let t2 = of(2).pipe(delay(3000)); let t3 = of(3).pipe(delay(4000)); let t4 = of(4).pipe(delay(2000)); // in forkJoin(t1,t2,t3,t4) we get: [1,2,3,4] // in this we get sorted by finish time: [1,4,2,3] // bufferCount(4) joins the 4 results into an array // take(1) takes one buffer and unsubscribes after merge(t1,t2,t3,t4) .pipe(bufferCount(4), take(1)) .subscribe(console.log) ``` Upvotes: 3
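The input-order guarantee that forkJoin gives has a close analogue outside RxJS. For illustration only (this is Python's asyncio, not RxJS): `asyncio.gather` likewise returns results in the order the awaitables were passed in, regardless of which finishes first. The task function and delays are made up for the demo:

```python
import asyncio

async def task(value, delay):
    # Simulate work that takes `delay` seconds, then yield a result.
    await asyncio.sleep(delay)
    return value

async def main():
    # task(1, ...) finishes last, but its result still appears first,
    # because gather preserves argument order, not completion order.
    return await asyncio.gather(task(1, 0.03), task(2, 0.01), task(3, 0.02))

print(asyncio.run(main()))  # [1, 2, 3]
```

The "anti-answer" above is the opposite design choice: merge-plus-buffer deliberately trades this positional guarantee away to get completion order instead.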
<issue_start>username_0: Well, we're preparing for an exam on parabolas, and instead of typing everything a million times I thought I'd make a little Python script. It's done, but something's bugged. I've been stuck on it for around 30 minutes and just can't figure it out, as my Python is a bit rusty. You can find my code at: <https://repl.it/@Rrrei/CurvySecondaryService><issue_comment>username_1: As mentioned before, `for x in x:` is indeed bad naming :p. You can easily get confused about which `x` you are talking about, the inner or the outer one. Also, `a = input(..)` makes `a` a string, and a string multiplied by a number repeats the string in Python, e.g.: ``` '1'*5 == '11111' ``` To solve this, wrap the input in `int`: `a=int(input(...))` Upvotes: 1 <issue_comment>username_2: You are going to append the string values. Python is not typesafe and `a` is a string after calling `input()`... Here you can see a safe example with corrected values and better names (clean code). [check my code](https://repl.it/repls/ExcellentSaddlebrownFactors) It's the correct version of yours. If you want the same form `b=0` `c=0`.... Upvotes: 0
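The string-vs-int pitfall the first answer describes can be shown in a few lines; the literal `"5"` below stands in for whatever `input()` returns:

```python
# input() always returns a string, and multiplying a string by an integer
# repeats it rather than doing arithmetic:
a = "5"          # what input() would give you for the keystroke 5
print(a * 3)     # '555'

# Converting to int first gives the arithmetic the script expected:
a = int("5")
print(a * 3)     # 15
```

The same applies to `+`: `"1" + "2"` concatenates to `"12"`, which is the "append the string values" behaviour the second answer mentions.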
<issue_start>username_0: How can I use a hash here? Can I write this class better? ``` class Bank def initialize(name, coordinates, country, max_credit_value, min_credit_value, interest_rate, credit_time) @name = name @coordinates = coordinates @country = country @max_credit_value = max_credit_value @min_credit_value = min_credit_value @interest_rate = interest_rate @credit_time = credit_time end end ``` Perhaps there is a better way?<issue_comment>username_1: You can define the arguments in an array so you don't have to keep everything in sync. Use a little bit of metaprogramming to get them into instance variables: ``` class Bank def initialize opts={} keys = %i{name coordinates country max_credit_value min_credit_value interest_rate credit_time} raise ArgumentError, "invalid options: expected #{keys}, got #{opts.keys}" unless keys.sort == opts.keys.sort keys.each { |key| instance_variable_set "@#{key}", opts[key] } end end ``` and then initialize with ``` Bank.new name: "Foobar", coordinates: [2, 5], country: "US", etc... ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: If you aim for the shortest code and your constructor contains only assignments, then maybe this is *better*: ``` class Bank < Struct.new(:name, :coordinates, :country, :max_credit_value, :min_credit_value, :interest_rate, :credit_time) end ``` Upvotes: 1
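For comparison only (this is Python, not Ruby): the `dataclass` decorator plays much the same role as Ruby's `Struct.new` in the second answer, generating the constructor from a single field list so nothing has to be kept in sync by hand. The field values below are made up for the demo:

```python
from dataclasses import dataclass

@dataclass
class Bank:
    # Declare the fields once; __init__ (and __repr__, __eq__) are generated.
    name: str
    coordinates: tuple
    country: str
    max_credit_value: float
    min_credit_value: float
    interest_rate: float
    credit_time: int

# Keyword construction mirrors the opts-hash style from the first answer.
b = Bank(name="Foobar", coordinates=(2, 5), country="US",
         max_credit_value=10000.0, min_credit_value=100.0,
         interest_rate=0.05, credit_time=12)
print(b.name, b.credit_time)  # Foobar 12
```

Unlike the Ruby metaprogramming version, missing keyword arguments fail at construction time with a normal `TypeError` rather than a hand-rolled `ArgumentError`.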
<issue_start>username_0: I have an [example spring boot project](https://github.com/MaxHoefl/demo-mongodb) which uses mongo db for persisting objects. I would like to test it with an embedded mongo db but I get an error which is very similar to [this question](https://stackoverflow.com/questions/45956233/easiest-way-to-configure-embedded-mongodb) (see comments of the accepted answer): For some reason `IFeatureAwareVersion` from `flapdoodle` cannot be found. ``` Caused by: org.springframework.beans.BeanInstantiationException: Failed to instantiate [org.springframework.data.mongodb.core.MongoTemplate]: Factory method 'mongoTemplate' threw exception; nested exception is java.lang.NoClassDefFoundError: de/flapdoodle/embed/mongo/distribution/IFeatureAwareVersion at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:185) ~[spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.ConstructorResolver.instantiateUsingFactoryMethod(ConstructorResolver.java:579) ~[spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] ... 
65 common frames omitted Caused by: java.lang.NoClassDefFoundError: de/flapdoodle/embed/mongo/distribution/IFeatureAwareVersion at cz.jirutka.spring.embedmongo.EmbeddedMongoFactoryBean.(EmbeddedMongoFactoryBean.java:47) ~[embedmongo-spring-1.3.1.jar:1.3.1] at com.example.MongoConfig.mongoTemplate(MongoConfig.java:37) ~[test-classes/:na] at com.example.MongoConfig$$EnhancerBySpringCGLIB$$108c2b8.CGLIB$mongoTemplate$0() ~[test-classes/:na] at com.example.MongoConfig$$EnhancerBySpringCGLIB$$108c2b8$$FastClassBySpringCGLIB$$629c796a.invoke() ~[test-classes/:na] at org.springframework.cglib.proxy.MethodProxy.invokeSuper(MethodProxy.java:228) ~[spring-core-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.context.annotation.ConfigurationClassEnhancer$BeanMethodInterceptor.intercept(ConfigurationClassEnhancer.java:361) ~[spring-context-5.0.4.RELEASE.jar:5.0.4.RELEASE] at com.example.MongoConfig$$EnhancerBySpringCGLIB$$108c2b8.mongoTemplate() ~[test-classes/:na] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0\_92] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0\_92] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0\_92] at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0\_92] at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:154) ~[spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] ... 66 common frames omitted Caused by: java.lang.ClassNotFoundException: de.flapdoodle.embed.mongo.distribution.IFeatureAwareVersion at java.net.URLClassLoader.findClass(URLClassLoader.java:381) ~[na:1.8.0\_92] at java.lang.ClassLoader.loadClass(ClassLoader.java:424) ~[na:1.8.0\_92] at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331) ~[na:1.8.0\_92] at java.lang.ClassLoader.loadClass(ClassLoader.java:357) ~[na:1.8.0\_92] ... 
78 common frames omitted ``` Here is the pom ``` xml version="1.0" encoding="UTF-8"? 4.0.0 com.example demo-mongodb 0.0.1-SNAPSHOT jar demo-mongodb Demo project for Spring Boot org.springframework.boot spring-boot-starter-parent 2.0.0.RELEASE UTF-8 UTF-8 1.8 Finchley.M8 org.springframework.boot spring-boot-starter-actuator org.springframework.boot spring-boot-starter-data-mongodb org.springframework.boot spring-boot-starter-data-jpa org.springframework.boot spring-boot-starter-web com.h2database h2 runtime org.springframework.boot spring-boot-devtools runtime org.springframework.boot spring-boot-starter-test test de.flapdoodle.embed de.flapdoodle.embed.mongo test cz.jirutka.spring embedmongo-spring 1.3.1 test org.springframework.cloud spring-cloud-dependencies ${spring-cloud.version} pom import org.springframework.boot spring-boot-maven-plugin spring-milestones Spring Milestones https://repo.spring.io/milestone false ``` The Spring boot start parent `2.0.0.RELEASE` manages the version of `flapdoodle` which is `2.0.3` and I see the class `IFeatureAwareVersion` in the `de.flatpdoodle.embed.mongo-2.0.3.jar`. 
When I look at the dependancy hierarchy I see * `embedmongo-spring: 1.3.1 [test]` + `de.flatpdoodle.embed.mongo: 2.0.3 (managed from 1.46.4) (ommitted for conflict with 2.0.3) [test]` + `de.flatpdoodle.embed.process: 1.40.1 (ommitted for conflict with 2.0.2) [test]` Here is the test ``` @RunWith(SpringRunner.class) @SpringBootTest(classes=MongoConfig.class) @TestPropertySource(locations = "classpath:application-test.properties") public class Test_TimezoneDao { private static final Logger LOG = LoggerFactory.getLogger(Test_TimezoneDao.class); //@Autowired private TimezoneDao dao; @Autowired private CounterService counterService; @Autowired private MongoTemplate mongoTemplate; @Test public void test() { TimeZone tz = new TimeZone(); tz.setId(counterService.getNextSequence()); tz.setOffset(9); tz.setTz("Asia/Singapore"); TimeZone tz2 = new TimeZone(); tz2.setId(counterService.getNextSequence()); tz2.setOffset(11); tz2.setTz("Australia/Sydney"); mongoTemplate.save(tz); List tzs = mongoTemplate.findAll(TimeZone.class, "timezones"); for(TimeZone t : tzs) { LOG.info(t.toString()); } } } ``` and here is the test config ``` @SpringBootConfiguration @Profile("test") @ComponentScan(basePackages= {"com.example.demomongodb"}) @EnableJpaRepositories(basePackages= {"com.example.demomongodb"}) @EnableMongoRepositories(basePackages= {"com.example.demomongodb"}) @EntityScan(basePackages= {"com.example.demomongodb"}) @EnableAutoConfiguration public class MongoConfig { public MongoConfig() { System.out.println("============= LOADING MONGO CONFIG ==============="); } @Bean public MongoTemplate mongoTemplate() throws IOException { EmbeddedMongoFactoryBean mongo = new EmbeddedMongoFactoryBean(); mongo.setBindIp("localhost"); MongoClient mongoClient = mongo.getObject(); MongoTemplate mongoTemplate = new MongoTemplate(mongoClient, "test_or_whatever_you_want_to_call_this_db"); return mongoTemplate; } } ``` Thanks for the help<issue_comment>username_1: embedmongo-spring is kinda deprecated, 
embedded Mongo support is now integrated directly into Spring Boot, not pulled in via a third-party dependency. Just remove this dependency and you should be done. You can also take a look at this issue, which explains the situation a bit further: <https://github.com/flapdoodle-oss/de.flapdoodle.embed.mongo/issues/260> Upvotes: 1 <issue_comment>username_2: The de.flapdoodle.embed dependency will be added through cz.jirutka.spring. ``` [INFO] +- cz.jirutka.spring:embedmongo-spring:jar:1.3.1:compile [INFO] | +- de.flapdoodle.embed:de.flapdoodle.embed.process:jar:1.40.1:compile [INFO] | | +- net.java.dev.jna:jna:jar:4.5.2:compile [INFO] | | \- net.java.dev.jna:jna-platform:jar:4.5.2:compile [INFO] | \- org.mongodb:mongo-java-driver:jar:3.11.2:compile [INFO] +- de.flapdoodle.embed:de.flapdoodle.embed.mongo:jar:2.2.0:test ``` This issue is fixed by excluding de.flapdoodle.embed.process from the cz.jirutka.spring dependency. ```
<dependency>
    <groupId>cz.jirutka.spring</groupId>
    <artifactId>embedmongo-spring</artifactId>
    <version>RELEASE</version>
    <exclusions>
        <exclusion>
            <artifactId>de.flapdoodle.embed.process</artifactId>
            <groupId>de.flapdoodle.embed</groupId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>de.flapdoodle.embed</groupId>
    <artifactId>de.flapdoodle.embed.mongo</artifactId>
    <scope>test</scope>
</dependency>
``` Upvotes: 0
<issue_start>username_0: I haven't updated anything recently that I can think of that would have impacted nacl and the signing process for paramiko, but now it will not work. I have reinstalled paramiko, and netmiko and made sure all of the crypto libraries are up to date. I am at a loss what else to do, any thoughts? ``` from netmiko import ConnectHandler ... def main(device_list): username = input("\nWhat is your username? -> ") password = getpass.getpass("\nWhat is your password? -> ") for host in device_list: juniper_device = { 'device_type': 'juniper', 'ip': host, 'username': username, 'password': <PASSWORD>, 'verbose': False } ``` Netmiko uses the Paramiko library to make SSH connections. This bit of code is what sets up the device definition. Here is the stack trace associated with the creation of the device and the opening of an ssh connection. ``` Traceback (most recent call last): File "./get_running-config.py", line 5, in from netmiko import ConnectHandler File "/usr/local/lib/python3.4/dist-packages/netmiko/\_\_init\_\_.py", line 8, in from netmiko.ssh\_dispatcher import ConnectHandler File "/usr/local/lib/python3.4/dist-packages/netmiko/ssh\_dispatcher.py", line 4, in from netmiko.a10 import A10SSH File "/usr/local/lib/python3.4/dist-packages/netmiko/a10/\_\_init\_\_.py", line 2, in from netmiko.a10.a10\_ssh import A10SSH File "/usr/local/lib/python3.4/dist-packages/netmiko/a10/a10\_ssh.py", line 4, in from netmiko.cisco\_base\_connection import CiscoSSHConnection File "/usr/local/lib/python3.4/dist-packages/netmiko/cisco\_base\_connection.py", line 3, in from netmiko.base\_connection import BaseConnection File "/usr/local/lib/python3.4/dist-packages/netmiko/base\_connection.py", line 13, in import paramiko File "/usr/local/lib/python3.4/dist-packages/paramiko/\_\_init\_\_.py", line 22, in from paramiko.transport import SecurityOptions, Transport File "/usr/local/lib/python3.4/dist-packages/paramiko/transport.py", line 57, in from paramiko.ed25519key 
import Ed25519Key File "/usr/local/lib/python3.4/dist-packages/paramiko/ed25519key.py", line 22, in import nacl.signing File "/usr/local/lib/python3.4/dist-packages/nacl/signing.py", line 19, in import nacl.bindings File "/usr/local/lib/python3.4/dist-packages/nacl/bindings/\_\_init\_\_.py", line 17, in from nacl.bindings.crypto\_box import ( File "/usr/local/lib/python3.4/dist-packages/nacl/bindings/crypto\_box.py", line 27, in crypto\_box\_SEEDBYTES = lib.crypto\_box\_seedbytes() AttributeError: cffi library '\_sodium' has no function, constant or global variable named 'crypto\_box\_seedbytes' ```<issue_comment>username_1: Unless you post the code, some guessing is needed: Maybe the problem is similar to <https://github.com/mitmproxy/mitmproxy/issues/2372>, where it was solved by installing the `cryptography` package in version 1.9. Upvotes: 0 <issue_comment>username_2: After a lot of tinkering with the cryptography and subsequent modules, I just fresh installed all related libraries and things work now. Not really sure how it got in this state unfortunately but it was easier to just start over. Upvotes: 1
2018/03/20
598
2,160
<issue_start>username_0: I have a very large table with billions of rows. The following statements return the same result, but the first one (2 step query) took 22 seconds while the second one (table join) took 3 minutes. ColID is the Identity column with primary key An index is created based on colA and colB --- ``` select @valA = colA, @valB = colB from LargeTable where colID = 1234 select top 1000 * from LargeTable where colA = @valA and colB = @valB ``` --- ``` select top 1000 a.* from LargeTable a join LargeTable b on a.colA = b.colA and a.colB = b.colB where b.colID = 1234 ``` From Comment : Upload the query plan <https://www.brentozar.com/pastetheplan/?id=rJbHzoCKM><issue_comment>username_1: The plan uploaded does at least show the problem, the hash join. The hash table is being created using the rows returned from OptionArchive b based on the ID (That's the clustered index seek). You have an index on option\_type and expiration? (You specified only 2 columns in the index in the question), but the query is selecting a.mid\_bid\_ask and also joining on 3 columns (strike, option\_type, expiration), so I suspect it has decided to 'tip' from seek to scan (and thus change from Nested Loop / Index Seek + Row ID Lookup) to the Hash + Scan, to ensure it is getting the field it needs. That could be tested as a theory relatively easily, include mid\_bid\_ask in the non-clustered index. It does not help that the query in the question + schema information does not match the query used to generate the plan though, we still do not know what the indexes are properly yet. The Plan itself, in XML version is advising us of the missing index it believes it wants : ``` ``` So that supports the notion of the covering index (including mid\_bid\_ask) and that the existing index may only cover 2 of the 3 columns. Upvotes: 0 <issue_comment>username_2: I found the problem. 
The index I created has 4 columns, and the first column is not in the query at all, therefore the whole index is not used. I dropped and recreated the index with first column removed, and it's working fine now. Upvotes: 1
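The fix above follows from a general rule about composite indexes: an index can only be seeked when the query constrains its leading column(s). Here is a minimal sketch of that rule using SQLite rather than SQL Server (the table is an invented stand-in, and `colX` plays the role of the unused leading column), but the leading-column behaviour is the same in spirit:

```python
import sqlite3

# Sketch of the leading-column rule behind the accepted fix, using SQLite.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE LargeTable (colID INTEGER PRIMARY KEY, colX INT, colA INT, colB INT)")

# Index whose leading column never appears in the predicate
conn.execute("CREATE INDEX ix_bad ON LargeTable (colX, colA, colB)")
plan_bad = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM LargeTable WHERE colA = 1 AND colB = 2"
).fetchall()

conn.execute("DROP INDEX ix_bad")
# Index whose leading columns match the predicate
conn.execute("CREATE INDEX ix_good ON LargeTable (colA, colB)")
plan_good = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM LargeTable WHERE colA = 1 AND colB = 2"
).fetchall()

print(plan_bad)   # a SCAN: the index cannot be seeked without a value for colX
print(plan_good)  # a SEARCH using ix_good
```

The first plan falls back to a scan because `colX`, the leading column of `ix_bad`, never appears in the `WHERE` clause; the second can seek directly on `(colA, colB)`, which is exactly why dropping the unused leading column fixed the query.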
2018/03/20
606
2,123
<issue_start>username_0: I'm plotting a wind vector over an Atlantic map. To represent the vector I have use this code, where u, v, lon and lat are variables that I get from a database (in netcdf4 format): ``` ugrid,newlons = shiftgrid(0.,u,lon, start = False) vgrid,newlons = shiftgrid(0.,v,lon,start=False) uproj,vproj,xx,yy = \ map.transform_vector(u,v,newlons,lat,31,31,returnxy=True,masked=True) Q = map.quiver(xx,yy,uproj,vproj,scale=2000,color='b') ``` I have to do something in a wrong way because as you can see the graphic shows windvector between -100 and 0. However, my database have data between -75 and 0. ![m6jtr.jpg](https://i.stack.imgur.com/m6jtr.jpg) Thanks, I.
2018/03/20
360
1,012
<issue_start>username_0: I'm sure there's an awk solution to this, but I've been fumbling with it for far too long. I'm trying to print repeats of a string from one file, based on the corresponding line value from another file. For example: **file1.txt** ``` Hello Beautiful World ``` **file2.txt** ``` 2 4 3 ``` **desired\_output\_file.txt** ``` Hello Hello Beautiful Beautiful Beautiful Beautiful World World World ```<issue_comment>username_1: You may use `awk`: ``` awk 'FNR==NR{a[FNR]=$0; next} {for (i=1; i<=$1; i++) print a[FNR]}' file1 file2 ``` ``` Hello Hello Beautiful Beautiful Beautiful Beautiful World World World ``` --- **References:** * [Effective AWK Programming](https://www.gnu.org/s/gawk/manual/gawk.pdf) * [Awk Tutorial](http://www.grymoire.com/Unix/Awk.html) Upvotes: 1 <issue_comment>username_2: another `awk` ``` $ paste file1 file2 | awk '{while($2--) print $1}' Hello Hello Beautiful Beautiful Beautiful Beautiful World World World ``` Upvotes: 3 [selected_answer]
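For completeness, the same lockstep idea also works in plain POSIX shell without awk; a sketch, using the file names and contents from the question:

```shell
# Recreate the question's inputs
printf 'Hello\nBeautiful\nWorld\n' > file1.txt
printf '2\n4\n3\n' > file2.txt

# Pair each word with its count, then echo the word that many times
paste file1.txt file2.txt | while read -r word count; do
  i=0
  while [ "$i" -lt "$count" ]; do
    echo "$word"
    i=$((i + 1))
  done
done
```

This is slower than the awk answers for large files (one `echo` per output line), but it needs nothing beyond the shell itself.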
2018/03/20
4,046
13,559
<issue_start>username_0: I am having this issue ``` system3:postgres saurabh-gupta2$ docker build -t postgres . Sending build context to Docker daemon 38.91kB Step 1/51 : FROM registry.access.redhat.com/rhel7/rhel Get https://registry.access.redhat.com/v2/: Service Unavailable ``` --- ``` docker run -t apline Unable to find image 'apline:latest' locally docker: Error response from daemon: Get https://registry-1.docker.io/v2/: Service Unavailable. See 'docker run --help'. ``` I have looked for a solution that says to set proxy, but I have set the proxy for the wifi. <https://docs.docker.com/docker-for-mac/networking/#httphttps-proxy-support> Still, it is not working. I have set proxy for docker too. It is not working. in Preference -> proxies Docker version 17.12 ce I also want to know, if the proxy is the issue, then how can I check it is set, and what is the workaround for this?<issue_comment>username_1: Here are a few suggestions: 1. Try restarting your Docker service. 2. Check your network connections. 3. Check your [proxy settings](https://blog.codeship.com/using-docker-behind-a-proxy/) (e.g. in `/etc/default/docker`). If the above doesn't help, this could be a temporary issue with the Docker services (as per *Service Unavailable*). Related: [GH-842 - 503 Service Unavailable at http://hub.docker.com](https://github.com/docker/hub-feedback/issues/842). I had this problem for a few days, and then it just worked. You can consider raising the issue at the [`docker/hub-feedback` repo](https://github.com/docker/hub-feedback/issues), checking the [Docker Community Forums](https://forums.docker.com), or contacting [Docker Support](https://success.docker.com/support) directly.
Upvotes: 6 <issue_comment>username_2: For me I had this issue when I first installed Docker and ran ``` docker run hello-world ``` I got an authentication required error when I ran ``` curl https://registry-1.docker.io/v2/ && echo Works ``` All I needed to do was to restart my MacOS and then run the command again; it just started pulling the image and I got the message ``` Hello from Docker! This message shows that your installation appears to be working correctly. ``` Upvotes: 4 <issue_comment>username_3: I tried running on Windows, and got this problem after an update. I tried restarting the docker service as well as my PC, but nothing worked. When running: ``` curl https://registry-1.docker.io/v2/ && echo Works ``` I got back: ``` {"errors":[{"code":"UNAUTHORIZED","message":"authentication required","detail":null}]} Works ``` Eventually, I tried: <https://github.com/moby/moby/issues/22635#issuecomment-284956961> By changing the fixed address to 8.8.8.8: [![enter image description here](https://i.stack.imgur.com/SOvoG.png)](https://i.stack.imgur.com/SOvoG.png) Which worked for me! I still got the unauthorized message for `curl https://registry-1.docker.io/v2/` but I managed to pull images from docker hub. Upvotes: 4 <issue_comment>username_4: Try to reload the daemon, then restart the docker service. ``` systemctl daemon-reload ``` Upvotes: 2 <issue_comment>username_5: Just to add, in case anyone else comes across this issue. On a Mac I had to log out and log back in. ``` docker logout docker login ``` Then it prompts for username (NOTE: Not email) and password. (You need an account on <https://hub.docker.com> to pull images down.) Then it worked for me. Upvotes: 2 <issue_comment>username_6: ``` NTLM PROXY AND DOCKER If your company is behind an MS Proxy Server that uses the proprietary NTLM protocol.
You need to install the **Cntlm** Authentication Proxy. After this, SET the proxy in /etc/systemd/system/docker.service.d/http-proxy.conf with the following format: [Service] Environment="HTTP_PROXY=http://<>:3182" In addition you can set in the .DockerFile: export http_proxy=http://<>:3182 export https_proxy=http://<>:3182 export no_proxy=localhost,127.0.0.1,10.0.2.* Followed by: systemctl daemon-reload systemctl restart docker ``` This worked for me. Upvotes: 2 <issue_comment>username_7: Got this from a network filter (LuLu on macOS) blocking traffic to/from Docker-related processes. Upvotes: 1 <issue_comment>username_8: For me the problem was solved by restarting the docker daemon: ``` sudo systemctl restart docker ``` Upvotes: 2 <issue_comment>username_9: Run `export DOCKER_CONTENT_TRUST=0` and then try it again. Upvotes: -1 <issue_comment>username_10: One option which worked for me on Mac. Click on the Docker icon in the tray. Open Preferences -> Proxies. Click on Manual Proxy and specify the Web Server (HTTP) proxy and Secure Web Server (HTTPS) proxy in the same format as we specify in the HTTPS\_PROXY env variable. Choose Apply and Restart. This worked for me. Upvotes: 3 <issue_comment>username_11: It's clearly a proxy issue: **docker proxies https connections to the wrong place**. Bear in mind that docker proxy settings may be different from the operating system (and curl) ones. Here's how I managed to solve the issue: First of all, find out where you are proxying your docker https requests: ``` # docker info | grep Proxy Http Proxy: http://:8080 Https Proxy: https://:8080 No Proxy: localhost,127.0.0.1 ``` and double check your https settings.
In my case, I realized that the "Https proxy" was set to `https://...` instead of `http://...`, so I corrected it in the `/etc/sysconfig/docker` file (I'm using RHEL7) and, after a docker restart with: ``` # systemctl restart docker ``` the proxy variable shows up successfully updated: ``` # docker info | grep Proxy Http Proxy: http://:8080 Https Proxy: http://:8080 No Proxy: localhost,127.0.0.1 ``` and everything works fine :-) Upvotes: 3 <issue_comment>username_12: I had this issue when I first installed Docker and ran ``` docker run hello-world ``` I was on a corporate network and switching to my personal network solved the issue for me. Upvotes: 1 <issue_comment>username_13: The answers provided here are amazing, but if you are new to this and don't read the full error, you may see `net/http: TLS handshake timeout.` at the end of it; that message simply means you have a slow internet connection. Toodles Upvotes: 1 <issue_comment>username_14: I solved this issue for `$ sudo docker run hello-world` by following the [Docker doc](https://docs.docker.com/config/daemon/systemd/). If you are behind a corporate HTTP proxy server, this may solve your problem. The [Docker doc](https://docs.docker.com/config/daemon/systemd/) also covers other HTTP proxy settings. Upvotes: -1 <issue_comment>username_15: I had the following entries in my /etc/hosts file: ``` 192.168.127.12 registry-1.docker.io 172.16.31.10 auth.docker.io 172.16.31.10 production.cloudflare.docker.com ``` Just by commenting them out, I fixed the problem. Upvotes: 1 <issue_comment>username_16: In my case, stopping Proxifier fixed it. I added a rule to route any connections from `vpnkit.exe` as `Direct` and it now works. Upvotes: -1 <issue_comment>username_17: One of the things you might need to check is whether the registry requires a VPN. Enable your VPN and try pulling again. Thanks.
Upvotes: -1 <issue_comment>username_18: ``` docker logout docker login ``` This might solve your problem Upvotes: 5 <issue_comment>username_19: 1. Recheck Proxy Settings with the following commands docker info | grep Proxy 2. Check VPN Connectivity 3. If VPN not using CHECK NET connectivity 4. Reinstall Docker and repeat the above steps. 5. Enjoy Upvotes: 2 <issue_comment>username_20: I had this same issue when working on an **Ubuntu** server. I was getting the following error: ``` deploy@my-comp:~$ docker login -u my-username -p my-password WARNING! Using --password via the CLI is insecure. Use --password-stdin. Error response from daemon: Get https://registry-1.docker.io/v2/: dial tcp 192.168.127.12:443: connect: connection refused ``` **Here are the things I tried that did not work**: * Restarting the docker service using `sudo docker systemctl restart docker` * Powering off and restarting the Ubuntu server. * Changing the name server to 8.8.8.8 in the `/etc/resolv.conf` file **Here's what worked for me**: I tried checking if the server has access to the internet using the following `netcat` command: ``` nc -vz google.com 443 ``` And it returned this output: ``` nc: connect to google.com port 443 (tcp) failed: Connection refused nc: connect to google.com port 443 (tcp) failed: Network is unreachable ``` Instead of something like this: ``` Ncat: Version 7.70 ( https://nmap.org/ncat ) Ncat: Connected to 172.16.17.3210:443. Ncat: 0 bytes sent, 0 bytes received in 0.07 seconds. ``` I tried checking again if the server has access to the internet using the following `wget` command: ``` wget -q --spider http://google.com ; echo $? 
``` And it returned: ``` 4 ``` Instead of: ``` 0 ``` **Note**: Anything other than 0 in the output means your system is not connected to the internet I then tried one last time to check if the server has access to the internet, using the following `Nmap` command: ``` nmap -p 443 google.com ``` And it returned: ``` Starting Nmap 7.01 ( https://nmap.org ) at 2021-02-16 11:50 WAT Nmap scan report for google.com (172.16.31.10) Host is up (0.00052s latency). Other addresses for google.com (not scanned): fc00:db20:35b:7399::5 rDNS record for 172.16.31.10: los02s04-in-f14.1e100.net PORT STATE SERVICE 443/tcp closed https Nmap done: 1 IP address (1 host up) scanned in 1.21 seconds ``` Instead of something like this: ``` Starting Nmap 7.01 ( https://nmap.org ) at 2021-02-16 11:50 WAT Nmap scan report for google.com (172.16.31.10) Host is up (0.00052s latency). Other addresses for google.com (not scanned): fc00:db20:35b:7399::5 rDNS record for 172.16.31.10: los02s04-in-f14.1e100.net PORT STATE SERVICE 443/tcp open https Nmap done: 1 IP address (1 host up) scanned in 1.21 seconds ``` **Note**: The state of port `443/tcp` is **closed** instead of **open** All this was enough to make me realize that connections to the internet were not allowed on the server. All I had to do was speak with the team in charge of infrastructure to fix the network connectivity issue to the internet on the server. And once that was fixed my docker command started working fine. **Resources**: [9 commands to check if connected to internet with shell script examples](https://www.golinuxcloud.com/commands-check-if-connected-to-internet-shell/) That's all. **I hope this helps** Upvotes: 2 <issue_comment>username_21: Many good answers above, but mine is a bit different, with Mac and Docker Desktop UI.
In my case, it is a Docker Desktop proxy setting that needs to be turned off when I am outside of the corporate firewall/proxy: ``` ERROR message from docker CLI: Username: xxx Password: *** Error response from daemon: Get https://registry-1.docker.io/v2/: Service Unavailable ``` 1. My env: a Mac with the Docker UI (i.e. called Docker Desktop, shown as a whale icon), running outside of the corp firewall/proxy. 2. I am able to Sign In with the Docker Desktop UI. 3. However, whether docker login or docker pull, I kept getting the above error, and I got sidetracked into the user id, resetting the daemon, ... 4. Finally, I got to the Docker Desktop UI. Sure enough, there is a proxy setting that I had set up a long time ago and totally forgot about! 5. Yes, when I am outside of the firewall, I need to turn off the proxy setting here. Docker Desktop -> Preferences -> Resources -> Proxies. Turn off the manual proxy configuration. Then docker pull works (without docker login, as I was pulling a public image)! Thanks PS. I think the difference in behavior of Docker Desktop and the Docker CLI contributes to the confusion. I am able to log in to docker through the GUI, while the CLI keeps erroring out without good enough diagnostic information. Upvotes: 1 <issue_comment>username_22: Ok, I had a similar issue and nothing seemed to help: restarting docker, disabling IPv6; nslookup and dig all seemed fine. What worked for me was going to my Docker Desktop -> Preferences -> Experimental Features and unchecking Use new virtualization framework. Upvotes: -1 <issue_comment>username_23: Use `--tls` in the pull request. For example, if the original pull request is `docker pull dgraph/dgraph:v21.03.0` use this instead: `docker --tls pull dgraph/dgraph:v21.03.0` Upvotes: -1 <issue_comment>username_24: The docker login terminal command worked for me. If your machine requires a VPN, you must connect to the VPN first and then try docker login. Upvotes: -1 <issue_comment>username_25: Just reloading the system helped for me.
(Windows 10 64x) Upvotes: -1 <issue_comment>username_26: On my Windows 11, all I did was to first log in to my account ``` docker login ``` Upvotes: 2 <issue_comment>username_27: Using Linux. For me it worked by doing: 1. $ docker logout 2. log out of hub.docker.com 3. log in to hub.docker.com 4. $ docker login Upvotes: 0 <issue_comment>username_28: Have you created a repo with the matching tag on the destination Docker Hub? It might be that your container image has nowhere to be pushed to. Upvotes: -1 <issue_comment>username_29: Check whether containers are enabled or not: * Go to --> turn Windows features on/off, then enable the checkbox for containers * Restart Windows. Upvotes: 0 <issue_comment>username_30: Using the `root` account instead of my regular user account solved it for me. Upvotes: 0
2018/03/20
1,208
2,849
<issue_start>username_0: I want to use apply to replace values in a dataframe, if >8 1 star, if >10 2 stars, if >12 3 stars, else NS. ``` mydata<-data.frame(A=1:10, B=3:12, C=5:14) apply(mydata, 2, function(x) ifelse(x > 12, "***"|x > 10, "**"|x >= 8, "*"|x <8, "NS", x)) ```<issue_comment>username_1: This is not a place you need to use apply, just select rows using bracket notation: ``` ns <- mydata < 8 s3 <- mydata > 12 s2 <- mydata > 10 & mydata <= 12 s1 <- mydata >= 8 & mydata <= 10 mydata[ns] <- 'NS' mydata[s3] <- '***' mydata[s2] <- '**' mydata[s1] <- '*' ``` NOTE: the conditional statements have to be before the assignment because once you assign a character value to any cell in a column, that column is converted from `numeric` to `character` and future conditional statements will use lexicographic comparison (1 < a < b < c < A) rather than numeric comparison. Upvotes: 2 <issue_comment>username_2: Note that to use nested `ifelse` statements, you need to put the next `ifelse` as the third argument (the `else` argument) in the parent `ifelse`. So you could try the following: ``` ifelse(mydata > 12, "***", ifelse(mydata >= 10, "**", ifelse(mydata >= 8, "*", "NS"))) ``` Output: ``` A B C [1,] "NS" "NS" "NS" [2,] "NS" "NS" "NS" [3,] "NS" "NS" "NS" [4,] "NS" "NS" "*" [5,] "NS" "NS" "*" [6,] "NS" "*" "**" [7,] "NS" "*" "**" [8,] "*" "**" "**" [9,] "*" "**" "***" [10,] "**" "**" "***" ``` Hope this helps! 
Upvotes: 4 [selected_answer]<issue_comment>username_3: ``` ifelse(mydata >= 8 & mydata <= 10, "*", ifelse(mydata > 10 & mydata <= 12, "**", ifelse(mydata > 12, "***", "NS" ))) A B C [1,] "NS" "NS" "NS" [2,] "NS" "NS" "NS" [3,] "NS" "NS" "NS" [4,] "NS" "NS" "*" [5,] "NS" "NS" "*" [6,] "NS" "*" "*" [7,] "NS" "*" "**" [8,] "*" "*" "**" [9,] "*" "**" "***" [10,] "*" "**" "***" ``` Upvotes: 2 <issue_comment>username_4: You can use `cut` and set the labels: ``` mydata<-data.frame(A=1:10, B=3:12, C=5:14) as.data.frame(lapply(mydata, function(x) cut(x, breaks = c(-Inf, 8, 10, 12, Inf), labels = c("NS","*","**","***")))) # A B C # 1 NS NS NS # 2 NS NS NS # 3 NS NS NS # 4 NS NS NS # 5 NS NS * # 6 NS NS * # 7 NS * ** # 8 NS * ** # 9 * ** *** # 10 * ** *** ``` Upvotes: 2 <issue_comment>username_5: A `tidyverse` alternative with `case_when`: ``` mydata %>% mutate_all(funs(case_when(. > 12 ~ '***', . > 10 & . <= 12 ~ '**', . >= 8 & . <= 10 ~ '*', . < 8 ~ 'NS'))) ``` which gives: > > > ``` > A B C > 1 NS NS NS > 2 NS NS NS > 3 NS NS NS > 4 NS NS * > 5 NS NS * > 6 NS * * > 7 NS * ** > 8 * * ** > 9 * ** *** > 10 * ** *** > > ``` > > Upvotes: 3
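Outside R, the same right-closed bucketing that `cut` performs can be sketched in a few lines of plain Python with the standard `bisect` module. The breaks and labels below are copied from the `cut` answer above, so it keeps that answer's boundary behaviour (8 still falls in "NS"):

```python
import bisect

breaks = [8, 10, 12]                  # upper edges of (-inf,8], (8,10], (10,12], (12,inf)
labels = ["NS", "*", "**", "***"]

def star(value):
    # bisect_left returns the index of the first break >= value,
    # which selects the right-closed interval containing it
    return labels[bisect.bisect_left(breaks, value)]

# Column C from the question holds the values 5..14
print([star(v) for v in range(5, 15)])
# ['NS', 'NS', 'NS', 'NS', '*', '*', '**', '**', '***', '***']
```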
2018/03/20
713
2,747
<issue_start>username_0: My output from SQL is as follows: ``` ------------------------------------ | Name | Identifier | Date | Value | ------------------------------------- | A | Bid | XX/XX | 10 | ------------------------------------- | A | Ask | XX/XX | 11 | ------------------------------------- | B | Bid | YY/YY | 20 | ------------------------------------- | B | Ask | YY/YY | 21 | ------------------------------------- ``` My desired output preferably directly from SQL or with the help of Python or Excel is as follows: ``` -------------------------------- | Name | Date | Bid | Ask | -------------------------------- | A | XX/XX | 10 | 11 | -------------------------------- | B | YY/YY | 20 | 21 | -------------------------------- ``` What is the best way to accomplish this in either SQL, Python or Excel? My problem is that the next step in which I wish to use this data only handles inputs that are in the form of the "desired output" table. EDIT: The original query is as follows: ``` SELECT * FROM table where Name (LIKE 'A' or LIKE 'B') and Date between 'AA/AA' and 'ZZ/ZZ' ```<issue_comment>username_1: In Python Pandas you could use PD.melt and specify the columns you want to keep the same. The others will get pivoted. For more info: <https://pandas.pydata.org/pandas-docs/stable/generated/pandas.melt.html> Upvotes: 0 <issue_comment>username_2: You can achieve the desired output using pivot. It is a functionality which can be found in all data analysis framework like excel, SQL etc. 
For Excel, you can follow this link to achieve the desired result: <http://www.excel-easy.com/data-analysis/pivot-tables.html> SQL: I have written a dynamic SQL query using the pivot function ``` create table tbl1 ( name varchar(100),Identifier varchar(100), Date_val varchar(100), Value int); INSERT INTO tbl1 values ('A','Bid','XX/XX',10),('A','Ask','XX/XX',11),('b','Bid','YY/YY',20), ('b','Ask','YY/YY',21) DECLARE @cols AS NVARCHAR(MAX), @query AS NVARCHAR(MAX) select @cols = STUFF((SELECT ',' + QUOTENAME(Identifier) from tbl1 group by Identifier order by Identifier FOR XML PATH(''), TYPE ).value('.', 'NVARCHAR(MAX)') ,1,1,'') set @query = 'SELECT name,Date_val,' + @cols + ' from ( select name,Identifier, Date_val, value from tbl1 ) x pivot ( sum(value) for Identifier in (' + @cols + ') ) p ' execute (@query) ``` Upvotes: 1
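When no `PIVOT` operator is available (SQLite, MySQL, or PostgreSQL without extensions), the same reshape can be written as conditional aggregation, which is fully portable. A runnable sketch using SQLite and the sample rows from the question:

```python
import sqlite3

# The bid/ask pivot from the question, expressed as conditional aggregation:
# one MAX(CASE ...) column per Identifier value, grouped by Name and Date.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE quotes (Name TEXT, Identifier TEXT, Date TEXT, Value INT)")
conn.executemany("INSERT INTO quotes VALUES (?, ?, ?, ?)", [
    ("A", "Bid", "XX/XX", 10), ("A", "Ask", "XX/XX", 11),
    ("B", "Bid", "YY/YY", 20), ("B", "Ask", "YY/YY", 21),
])

rows = conn.execute("""
    SELECT Name, Date,
           MAX(CASE WHEN Identifier = 'Bid' THEN Value END) AS Bid,
           MAX(CASE WHEN Identifier = 'Ask' THEN Value END) AS Ask
    FROM quotes
    GROUP BY Name, Date
    ORDER BY Name
""").fetchall()
print(rows)  # [('A', 'XX/XX', 10, 11), ('B', 'YY/YY', 20, 21)]
```

Unlike the dynamic-SQL answer above, the `Bid`/`Ask` columns here are hard-coded; that is fine when the set of identifiers is fixed and known in advance, as in the question.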
2018/03/20
888
2,953
<issue_start>username_0: Is there a way to check if it's DST (Daylight Saving Time) with UTC, **without using conversion**? I don't want to use conversion because it's ambiguous on the 28 october at 2 am. This: ``` using System; namespace Rextester { public class Program { public static void PrintSeasonKindTime(DateTime utcDate) { // Convert dt to Romance Standard Time TimeZoneInfo tzi = TimeZoneInfo.FindSystemTimeZoneById("W. Europe Standard Time"); DateTime localDate = TimeZoneInfo.ConvertTime(utcDate, tzi); Console.WriteLine("Local date: " + localDate.ToString("yyyy.MM.d HH:mm:ss") + (tzi.IsDaylightSavingTime(localDate) ? " is summer" : " is winter")); } public static void Main(string[] args) { DateTime currentUTCDateTime = new DateTime(2018,10,27,23,59,0, DateTimeKind.Utc); double nbMinutes = 1.0; PrintSeasonKindTime(currentUTCDateTime); PrintSeasonKindTime(currentUTCDateTime.AddMinutes(nbMinutes)); } } } ``` Will display this: ``` Local date: 2018.10.28 01:59:00 is summer Local date: 2018.10.28 02:00:00 is winter ``` While I wish the following display: ``` Local date: 2018.10.28 01:59:00 is summer Local date: 2018.10.28 02:00:00 is summer ``` Since the time change is at 2018.10.28 03:00:00 local time in the specified time zone not at 2 am (see here [enter link description here](https://www.timeanddate.com/time/change/switzerland)). However, that behaviour is "ambiguously" correct since it's two times 2 am on 28th October; once at 00:00 UTC (2 am summer time) and once at 1 am (2 am winter time). Do you have any idea?<issue_comment>username_1: It is impossible to get Daylight saving time from UTC, because it is UTC - check UTC description [here](https://en.wikipedia.org/wiki/Coordinated_Universal_Time#Daylight_saving_time) You need the current timezone. With this information you can use [TimeZoneInfo.IsDaylightSavingTime](https://msdn.microsoft.com/en-us/library/bb460642(v=vs.110).aspx) to determine if the current TimeZone is currently in DST. 
If you are storing data serverside (eg. web) you should always try to get the users timezone and convert to UTC. Upvotes: 2 <issue_comment>username_2: Just use [`TimeZoneInfo.IsDaylightSavingTime`](https://learn.microsoft.com/en-gb/dotnet/api/system.timezoneinfo.isdaylightsavingtime), passing in your UTC `DateTime` value. That will *effectively* convert that UTC value into the given time zone and check whether the result is in daylight saving time, although whether it actually performs that conversion or just checks whether it *would* be in DST is a different matter. Note that this is never ambiguous, as a UTC value is never ambiguous. `TimeZoneInfo.IsDaylightSavingTime` returns `true` when passed 2018-10-28T00:00:00Z in the time zone you're interested in. Upvotes: 4 [selected_answer]
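The accepted answer's point, that a UTC instant is never ambiguous, can be illustrated outside .NET as well. Below is a Python sketch; the IANA zone Europe/Zurich is an assumption on my part (chosen because the question links to Switzerland's clock change), not something taken from the original code:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# Assumed zone: Europe/Zurich (the question links to Switzerland's DST change)
zone = ZoneInfo("Europe/Zurich")

def is_dst(utc_dt: datetime) -> bool:
    # Attach the unambiguous UTC instant to the zone, then ask for its DST offset
    return bool(utc_dt.astimezone(zone).dst())

# DST ended on 2018-10-28 at 03:00 local time, i.e. 01:00 UTC
print(is_dst(datetime(2018, 10, 27, 23, 59, tzinfo=timezone.utc)))  # True  (01:59 CEST)
print(is_dst(datetime(2018, 10, 28, 0, 0, tzinfo=timezone.utc)))    # True  (02:00 CEST)
print(is_dst(datetime(2018, 10, 28, 1, 0, tzinfo=timezone.utc)))    # False (02:00 CET)
```

Note the middle case: 00:00 UTC maps to the first of the two local 02:00s and is reported as summer time, which is exactly the behaviour the question asked for and exactly what `TimeZoneInfo.IsDaylightSavingTime` gives when fed the UTC value directly.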
2018/03/20
537
1,938
<issue_start>username_0: I would like to dynamically add a watermark to a report that is generated in Stimulsoft. The watermark can not be hard-coded and should only appear if the report was generated in the TEST environment. I have a variable that checks if the report was created in the test environment: isTestEnv Which means that if the watermark was added to the page the old fashioned way I would use: ``` if(isTestEnv == true) { Page1.Watermark.Enabled = true; } else { Page1.Watermark.Enabled = false; } ``` But this is not the case. I have to add the watermark when generating the report. Does anyone know how? The text is the same on all pages; it simply says "TEST". But how to push that into a report is the mystery.<issue_comment>username_1: You can set the report page watermark to some report variable at design time and, in your code, set the value for the report variable. Something like this: ``` StiReport report = new StiReport(); report.Load("REPORT_TEMPLATE_PATH"); //You can check if this variable exists or not using an if condition report.Dictionary.Variables["WATERMARK_VARIABLE_NAME"] = "YOUR_TEXT"; report.Show(); //or report.ShowWithWpf(); ``` Upvotes: 0 <issue_comment>username_2: You can use this code and set your watermark image in your report: ``` Stimulsoft.Base.StiLicense.loadFromFile("../license.key"); var options = new Stimulsoft.Viewer.StiViewerOptions({showTooltips:false}); var viewer = new Stimulsoft.Viewer.StiViewer(options, "StiViewer", false); var report = new Stimulsoft.Report.StiReport({isAsyncMode: true}); report.loadFile("Backgroundimg.mrt"); var page = report.pages.getByIndex(0); page.watermark.image = Stimulsoft.System.Drawing.Image.fromFile('test.jpg'); page.watermark.aspectRatio = true; page.watermark.imageStretch = true; page.watermark.imageShowBehind= true; report.renderAsync(function () { viewer.report = report; viewer.renderHtml("viewerContent"); }); ``` Upvotes: 1
2018/03/20
865
3,392
<issue_start>username_0: I'm trying to call a promise function recursively. The following call service.getSentenceFragment() returns up to 5 letters from a sentence, i.e. 'hello' from 'helloworld'. Providing a nextToken value as a parameter to the call returns the next 5 letters in the sequence, i.e. 'world'. The following code returns 'hellohelloworldworld' and does not log to the console. ``` var sentence = ''; getSentence().then(function (data) { console.log(sentence); }); function getSentence(nextToken) { return new Promise((resolve, reject) => { getSentenceFragment(nextToken).then(function(data) { sentence += data.fragment; if (data.nextToken != null && data.nextToken != 'undefined') { getSentence(data.NextToken); } else { resolve(); } }).catch(function (reason) { reject(reason); }); }); } function getSentenceFragment(nextToken) { return new Promise((resolve, reject) => { service.getSentenceFragment({ NextToken: nextToken }, function (error, data) { if (data) { if (data.length !== 0) { resolve(data); } } else { reject(error); } }); }); } ```<issue_comment>username_1: Because when you do this: ``` getSentence(data.NextToken); ``` a new Promise chain is started, and the current chain stays pending forever. So you may do: ``` getSentence(data.NextToken).then(resolve, reject) ``` ... but actually you could beautify the whole thing to: ``` async function getSentence(){ let sentence = "", token; do { const partial = await getSentenceFragment(token); sentence += partial.fragment; token = partial.NextToken; } while(token) return sentence; } ``` And watch out for this trap in `getSentenceFragment` - if `data` is truthy but `data.length` is 0, your code reaches a dead end and the Promise will time out ``` // from your original getSentenceFragment...
if (data) { if (data.length !== 0) { resolve(data); } /* implicit else: dead end */ // else { return undefined } } else { reject(error); } ``` Instead, combine the two `if` statements using `&&`, now our Promise will *always* resolve or reject ``` // all fixed! if (data && data.length > 0) resolve(data); else reject(error); ``` Upvotes: 2 <issue_comment>username_2: You could recursively call a promise like so: ``` getSentence("what is your first token?") .then(function (data) { console.log(data); }); function getSentence(nextToken) { const recur = (nextToken,total) => //no return because there is no {} block so auto returns getSentenceFragment(nextToken) .then( data => { if (data.nextToken != null && data.nextToken != 'undefined') { return recur(data.NextToken,total + data.fragment); } else { return total + data.fragment; } });//no catch, just let it go to the caller return recur(nextToken,""); } function getSentenceFragment(nextToken) { return new Promise((resolve, reject) => { service.getSentenceFragment({ NextToken: nextToken }, function (error, data) { if (data) { if (data.length !== 0) { resolve(data); } } else { reject(error); } }); }); } ``` Upvotes: 1
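Putting the async/await version from the first answer together into something runnable requires a stand-in for `service`; the fragment table below is invented purely for illustration:

```javascript
// Mocked fragment store: each token yields up to 5 letters plus the next token.
// These tokens and fragments are made up for the sketch.
const fragments = {
  start: { fragment: "hello", nextToken: "t1" },
  t1:    { fragment: "world", nextToken: null },
};

function getSentenceFragment(token) {
  // Simulates the callback-based service as an already-resolved Promise
  return Promise.resolve(fragments[token]);
}

async function getSentence() {
  let sentence = "";
  let token = "start";
  do {
    const part = await getSentenceFragment(token);
    sentence += part.fragment;
    token = part.nextToken;
  } while (token);
  return sentence;
}

getSentence().then((s) => console.log(s)); // helloworld
```

Because each `await` stays inside the same async function, the chain resolves exactly once with the full sentence, instead of forking a new, orphaned chain on every recursive call as in the question's code.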
2018/03/20
644
2,697
<issue_start>username_0: I made a private app using shopify-api-node package which is working perfect in local development using ngrok. I registered a webhook "products/update" which is working fine. registered url local: `https://example.ngrok.io/webhooks/product-update` but the same webhook registered with production environment, webhook is not getting fired registered url in production: `https://custom.example.in/webhooks/product-update` *When both local and production servers are running, webhooks are fired to local server (tunneled via ngrok) only.* I am using nginx in production as a reverse proxy. I have checked access.log file of nginx but there is no webhook request fired by shopify. I tried creating a new private app but no help. What can be possible issue? Thanks in advance.<issue_comment>username_1: You need to provide more information. How do you know the webhook in production is not firing? Have you proven that when you created the webhook itself that is was created? What does the webhook object look like when you inspect it? Is it all good? If the webhook exists, and the object all looks good, the next thing to investigate is your public production server. 99/100 webhook problems are not Shopify but something developers do wrong. Ensure you have done everything right before you ask what is wrong! Upvotes: 0 <issue_comment>username_2: Some common problems are: * An invalid SSL certificate, e.g. one that is self-signed or one that is missing intermediate certificates. You can check for problems using <https://www.ssllabs.com/ssltest/> * Your server or app is not configured to accept `POST` requests, instead it only accepts other methods. From the command line you could check this by making a POST request with `curl` to your webhook endpoint. * Your app has implemented webhook verification logic and you are trying to verify production webhooks using your development app secret. 
* The webhook that you created has been cancelled because Shopify was not receiving a 200 status response quickly enough. You can use the Webhooks API to list webhooks and verify that yours is still registered. Upvotes: 3 [selected_answer]<issue_comment>username_3: Make sure you have set up your Next.js server's SSL with ***certificate.crt***, ***private.key*** and ***ca\_bundle.crt***. Example: ``` var options = { key: fs.readFileSync(__dirname + "/../certificates/server.key"), cert: fs.readFileSync(__dirname + "/../certificates/server.crt"), ca: fs.readFileSync(__dirname + "/../certificates/ca_bundle.crt"), }; http.createServer(server.callback()).listen(8081); https.createServer(options, server.callback()).listen(443); ``` Upvotes: 0
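On the webhook-verification point: Shopify signs each webhook with an HMAC-SHA256 of the raw request body, keyed by the app's shared secret and base64-encoded into the `X-Shopify-Hmac-Sha256` header. A minimal sketch of the check (the function name and framing here are my own; wire it into whatever server framework you use):

```python
import base64
import hashlib
import hmac

def verify_webhook(raw_body: bytes, hmac_header: str, app_secret: str) -> bool:
    """Return True if hmac_header matches the HMAC-SHA256 of raw_body."""
    digest = hmac.new(app_secret.encode("utf-8"), raw_body, hashlib.sha256).digest()
    expected = base64.b64encode(digest).decode("utf-8")
    # compare_digest avoids leaking information through timing differences
    return hmac.compare_digest(expected, hmac_header)
```

A webhook signed with the development secret will fail this check against the production secret, and if the endpoint then returns a non-200 status it silently looks like "webhooks not firing".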
2018/03/20
1,558
5,465
<issue_start>username_0: `delete_blob()` seems to delete only the files inside the container and from folders and subfolders inside the container. But I'm seeing the below error in Python while trying to delete a folder from a container. > > Client-Request-ID=7950669c-2c4a-11e8-88e7-00155dbf7128 Retry policy did not allow for a retry: Server-Timestamp=Tue, 20 Mar 2018 14:25:00 GMT, Server-Request-ID=54d1a5d6-b01e-007b-5e57-c08528000000, HTTP status code=404, Exception=The specified blob does not exist.ErrorCode: BlobNotFound`BlobNotFound`The specified blob does not exist.RequestId:54d1a5d6-b01e-007b-5e57-c08528000000Time:2018-03-20T14:25:01.2130063Z. > > > azure.common.AzureMissingResourceHttpError: The specified blob does not exist.ErrorCode: BlobNotFound > `BlobNotFound`The specified blob does not exist. > RequestId:54d1a5d6-b01e-007b-5e57-c08528000000 > Time:2018-03-20T14:25:01.2130063Z > > > Could anyone please help here?<issue_comment>username_1: There are two things to understand here: you can delete specific files, folders, images... (blobs) using **delete\_blob**, but if you want to delete **containers**, you have to use **delete\_container**, which will delete all blobs within. Here's a sample that I created which deletes blobs inside a path/virtual folder: ``` from azure.storage.blob import BlockBlobService block_blob_service = BlockBlobService(account_name='yraccountname', account_key='accountkey') print("Retrieving blobs in specified container...") blob_list=[] container="containername" def list_blobs(container): try: global blob_list content = block_blob_service.list_blobs(container) print("******Blobs currently in the container:**********") for blob in content: blob_list.append(blob.name) print(blob.name) except: print("The specified container does not exist. Please check the container name or if it exists.") list_blobs(container) print("The list() is:") print(blob_list) print("Delete this blob: ",blob_list[1]) # DELETE A SPECIFIC BLOB FROM THE CONTAINER block_blob_service.delete_blob(container,blob_list[1],snapshot=None) list_blobs(container) ``` Please refer to the code in my repo, with an explanation in the Readme section, as well as new storage scripts: <https://github.com/adamsmith0016/Azure-storage> Upvotes: 3 <issue_comment>username_2: In Azure Blob Storage, a folder as such doesn't exist. It is just a prefix for a blob's name. For example, if you see a folder named `images` and it contains a blob called `myfile.png`, then essentially the blob's name is `images/myfile.png`. Because the folders don't really exist (they are virtual), you can't delete the folder directly. What you need to do is delete all blobs individually in that folder (or, in other words, delete the blobs whose name begins with that virtual folder name/path). Once you have deleted all the blobs, that folder automatically goes away. In order to accomplish this, first you would need to fetch all blobs whose name starts with the virtual folder path. For that you will use the [`list_blobs`](https://github.com/Azure/azure-storage-python/blob/master/azure-storage-blob/azure/storage/blob/baseblobservice.py#L1190) method and specify the virtual folder path in the `prefix` parameter. This will give you a list of blobs starting with that prefix. Once you have that list, you will delete the blobs one by one. Upvotes: 4 [selected_answer]<issue_comment>username_3: For others searching for the solution in Python, this worked for me. First make a variable that stores all the blobs in the folder that you want to remove. Then, for every blob in that folder, remove it by stating the name of the container and then the blob's full name. By removing all the files in a folder, the folder is deleted in Azure.
``` def delete_folder(self, containername, foldername): folders = [blob for blob in blob_service.block_blob_service.list_blobs(containername) if blob.name.startswith(foldername)] if len(folders) > 0: for folder in folders: blob_service.block_blob_service.delete_blob(containername, folder.name) print("deleted folder", folder.name) ``` Upvotes: 2 <issue_comment>username_4: You cannot delete a non-empty folder in Azure blobs, but you can achieve it if you delete the files inside the sub-folders first. The below workaround will start deleting from the files up to the parent folder. ``` from azure.storage.blob import BlockBlobService blob_client = BlockBlobService(account_name='', account_key='') containername = 'XXX' foldername = 'XXX' def delete_folder(containername, foldername): folders = [blob.name for blob in blob_client.list_blobs(containername, prefix=foldername)] folders.sort(reverse=True, key=len) if len(folders) > 0: for folder in folders: blob_client.delete_blob(containername, folder) print("deleted folder", folder) ``` Upvotes: 2 <issue_comment>username_5: Use `list_blobs(name_starts_with=folder_name)` and `delete_blob()` Complete code: ```py from azure.storage.blob import BlobServiceClient blob_service_client = BlobServiceClient.from_connection_string(conn_str=CONN_STR) blob_client = blob_service_client.get_container_client(AZURE_BLOBSTORE_CONTAINER) for blob in blob_client.list_blobs(name_starts_with=FOLDER_NAME): blob_client.delete_blob(blob.name) ``` Upvotes: 2
2018/03/20
558
1,918
<issue_start>username_0: Why the second function didn't use the **"use strict";** mode (*it shows me window object in console*): ``` function test() { console.log(this); } test(); // will be global or window, it's okay "use strict"; function test2() { console.log(this); } test2(); // will be global, BUT WHY? It must be undefined, because I have used strict mode! ``` But if I define strict mode **in the body** of the second function, all will be as I expect. ``` function test() { console.log(this); } test(); // will be global or window function test2() { "use strict"; console.log(this); } test2(); ``` My question is simple — why it happens?<issue_comment>username_1: Because `"use strict"` only has effects if it's the first statement of the current script/function. From [the MDN docs](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Strict_mode) : > > To invoke strict mode for an entire script, put the exact statement `"use strict";` (or `'use strict';`) **before any other statements** > > > Likewise, to invoke strict mode for a function, put the exact statement `"use strict";` (or `'use strict';`) in the function's body **before any other statements.** > > > Upvotes: 2 <issue_comment>username_2: See [the MDN documentation](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Strict_mode): > > To invoke strict mode for an entire script, put the exact statement "use strict"; (or 'use strict';) before any other statements. > > > and > > Likewise, to invoke strict mode for a function, put the exact statement "use strict"; (or 'use strict';) in the function's body before any other statements. > > > In your first code block, you have `"use strict";` but it isn't the *first* statement in the script, so it has no effect. In your second, it is the first statement in a function, so it does. Upvotes: 3 [selected_answer]
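The placement rule is easy to check directly. A minimal sketch (note that inside an ES module or any other already-strict context, *both* functions would be strict):

```javascript
// A directive is only honored as the FIRST statement of a script or
// function body; anywhere else it is just a harmless string literal.

function looseThis() {
  return this; // globalThis when called bare in a sloppy-mode script
}

function strictThis() {
  "use strict"; // first statement of this function body, so it applies
  return this; // always undefined when called without a receiver
}

console.log(typeof looseThis(), typeof strictThis());
// e.g. "object undefined" when run as a sloppy-mode script
```

Only the second assertion is mode-independent: a function whose body starts with the directive always sees `this === undefined` on a bare call.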
2018/03/20
1,815
6,133
<issue_start>username_0: I am getting started with Apache Isis (I have Windows 10) and, following their tutorial (<https://isis.apache.org/guides/ugfun/ugfun.html#_ugfun_getting-started_helloworld-archetype>), I installed Java and Maven and added them to the path. Then I created a folder inside which, when I run the command `mvn -v`, I see the following output: ``` E:\Apache isis\test_project>mvn -v Apache Maven 3.5.3 (3383c37e1f9e9b3bc3df5050c29c8aff9f295297; 2018-02-24T20:49:05+01:00) Maven home: D:\Development softwares\apache-maven-3.5.3-bin\apache-maven-3.5.3\bin\.. Java version: 9.0.4, vendor: Oracle Corporation Java home: C:\Program Files\Java\jre-9.0.4 Default locale: en_US, platform encoding: Cp1252 OS name: "windows 10", version: "10.0", arch: "amd64", family: "windows" ``` When I run the command specified in their docs to generate the app: ``` mvn archetype:generate \ -D archetypeGroupId=org.apache.isis.archetype \ -D archetypeArtifactId=helloworld-archetype \ -D archetypeVersion=1.16.2 \ -D groupId=com.mycompany \ -D artifactId=myapp \ -D version=1.0-SNAPSHOT \ -B ``` I get the following error: ``` PS C:\Users\Nitish> cd .\Desktop\ PS C:\Users\Nitish\Desktop> mvn archetype:generate \ >> -D archetypeGroupId=org.apache.isis.archetype \ >> -D archetypeArtifactId=simpleapp-archetype \ >> -D archetypeVersion=1.16.2 \ >> -D groupId=com.mycompany \ >> -D artifactId=myapp \ >> -D version=1.0-SNAPSHOT \ >> -B [INFO] Scanning for projects... [INFO] ------------------------------------------------------------------------ [INFO] BUILD FAILURE [INFO] ------------------------------------------------------------------------ [INFO] Total time: 0.114 s [INFO] Finished at: 2018-03-20T15:42:22+01:00 [INFO] ------------------------------------------------------------------------ [ERROR] The goal you specified requires a project to execute but there is no POM in this directory (C:\Users\Nitish\Desktop). Please verify you invoked Maven from the correct directory.
-> [Help 1] [ERROR] [ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch. [ERROR] Re-run Maven using the -X switch to enable full debug logging. [ERROR] [ERROR] For more information about the errors and possible solutions, please read the following articles: [ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MissingProjectException -D : The term '-D' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. At line:2 char:5 + -D archetypeGroupId=org.apache.isis.archetype \ + ~~ + CategoryInfo : ObjectNotFound: (-D:String) [], CommandNotFoundException + FullyQualifiedErrorId : CommandNotFoundException -D : The term '-D' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. At line:3 char:5 + -D archetypeArtifactId=simpleapp-archetype \ + ~~ + CategoryInfo : ObjectNotFound: (-D:String) [], CommandNotFoundException + FullyQualifiedErrorId : CommandNotFoundException -D : The term '-D' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. At line:4 char:5 + -D archetypeVersion=1.16.2 \ + ~~ + CategoryInfo : ObjectNotFound: (-D:String) [], CommandNotFoundException + FullyQualifiedErrorId : CommandNotFoundException -D : The term '-D' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. 
At line:5 char:5 + -D groupId=com.mycompany \ + ~~ + CategoryInfo : ObjectNotFound: (-D:String) [], CommandNotFoundException + FullyQualifiedErrorId : CommandNotFoundException -D : The term '-D' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. At line:6 char:5 + -D artifactId=myapp \ + ~~ + CategoryInfo : ObjectNotFound: (-D:String) [], CommandNotFoundException + FullyQualifiedErrorId : CommandNotFoundException -D : The term '-D' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. At line:7 char:5 + -D version=1.0-SNAPSHOT \ + ~~ + CategoryInfo : ObjectNotFound: (-D:String) [], CommandNotFoundException + FullyQualifiedErrorId : CommandNotFoundException -B : The term '-B' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. At line:8 char:5 + -B + ~~ + CategoryInfo : ObjectNotFound: (-B:String) [], CommandNotFoundException + FullyQualifiedErrorId : CommandNotFoundException ``` Am I missing something?<issue_comment>username_1: As I was using PowerShell, I had to quote each argument and use the command: ``` mvn archetype:generate "-DarchetypeGroupId=org.apache.isis.archetype" "-DarchetypeArtifactId=helloworld-archetype" "-DarchetypeVersion=1.16.2" "-DgroupId=com.mycompany" "-DartifactId=myapp" "-Dversion=1.0-SNAPSHOT" "-B" ``` Upvotes: 1 <issue_comment>username_2: For Unix, it should be as below.
``` mvn archetype:generate \ -DarchetypeGroupId=org.apache.beam \ -DarchetypeArtifactId=beam-sdks-java-maven-archetypes-examples \ -DarchetypeVersion=2.14.0 \ -DgroupId=org.example \ -DartifactId=word-count-beam \ -Dversion="0.1" \ -Dpackage=org.apache.beam.examples \ -DinteractiveMode=false ``` If you want to use it in PowerShell, replace the backslash \ with a backtick ` Upvotes: 0
2018/03/20
725
2,716
<issue_start>username_0: I am trying to mock Spring's applicationContext.getBean(String, Class) method in the below manner - ``` when(applicationContext.getBean(Mockito.anyString(), Mockito.eq(SomeClass.class))).thenAnswer( new Answer() { @Override public SomeClass answer(InvocationOnMock invocation) throws Throwable { // doing some stuff and returning the object of SomeClass } }); ``` The scenario is that the applicationContext.getBean method will be called a number of times, each time with a different string but the same class (SomeClass.class). The problem I am getting here is that the answer is only invoked for the very first invocation of the getBean method. With each subsequent invocation of getBean I am getting the below exception - ``` ******org.mockito.exceptions.misusing.InvalidUseOfMatchersException: Invalid use of argument matchers! 2 matchers expected, 1 recorded: This exception may occur if matchers are combined with raw values: //incorrect: someMethod(anyObject(), "raw String"); When using matchers, all arguments have to be provided by matchers. For example: //correct: someMethod(anyObject(), eq("String by matcher"));****** ``` Any idea if I am missing something?<issue_comment>username_1: When using Mockito, you need to tell it how many times you will call your mock. You need to make as many `when` calls as you will need. For instance, if you know you'll test your method three times, you need to declare three separate `when` statements. PS: I only used Mockito for a bit; it is possible that there is a solution to tell Mockito to use the same `Answer` multiple times in a row. --- **EDIT** [<NAME>](https://stackoverflow.com/users/180719/olivier-gr%C3%A9goire) pointed out in the comments that you can use a single `when` call, but call `thenAnswer` multiple times. It is more elegant this way. Upvotes: 0 <issue_comment>username_2: @Kshitij I have faced the same issue in my previous project.
I came across the doc below: <http://static.javadoc.io/org.mockito/mockito-core/2.16.0/org/mockito/Mockito.html#resetting_mocks> Section 17 explains how to use reset(). Example: let's say we have an EvenOddService which finds whether a number is even or odd; then a test case with reset() will look like: ``` when(evenOddService.isEven(20)).thenReturn(true); Assert.assertEquals(evenOddController.isEven(20), true); //allows to reuse the mock reset(evenOddService); Assert.assertEquals(evenOddController.isEven(20), true); ``` Hope this helps :). Upvotes: 1 <issue_comment>username_3: I've added this to my test class: ``` @AfterEach public void resetMock(){ Mockito.reset(myMockedService); } ``` Upvotes: 0
2018/03/20
930
3,269
<issue_start>username_0: I am using the docker image <https://hub.docker.com/_/mongo/> (latest MongoDB version). I run the command ``` docker run --name some-mongo -d mongo ``` Then I install [Studio 3T](https://studio3t.com) and enter connection information like this [![enter image description here](https://i.stack.imgur.com/SDB43.png)](https://i.stack.imgur.com/SDB43.png) but I can't connect. What is the correct connection to declare in Studio 3T in this case? How do I connect to a MongoDB instance (in Docker) from Studio 3T?<issue_comment>username_1: You need to publish the port you want to use in your docker command. e.g. ``` docker run -p 127.0.0.1:27017:27017 --name some-mongo -d mongo ``` This opens the port of the container on your host machine. Upvotes: 5 [selected_answer]<issue_comment>username_2: You need to find the IP address where the Docker container is running. On Mac, Docker runs in the background inside a Linux VM that has its own IP. Thus `localhost` will not work. To find the IP, run `docker-machine env default` and set this IP in the Server field. Upvotes: 0 <issue_comment>username_3: * Click New Connection * Enter the Connection name * Click on `From URI` * Enter the URI in the following format `mongodb://{username}:{password}@{ip_address}:{port}/?authSource=admin` * Click `OK` * Click `Test Connection` * Works? + No: Check your username, password, etc + Yes: Congrats! Upvotes: 3 <issue_comment>username_4: For those on Windows: kindly check in Task Manager and make sure you don't have a local installation of the MongoDB server running, then use `localhost` in the address/connection string. Upvotes: 0 <issue_comment>username_5: I was running MongoDB with WSL2 and Docker, so I just needed to add the connection "from URI" and set up the ip\_address with the IP from WSL2. Explanation: 1. URI: this connection string is the way we tell Studio 3T how to connect to our MongoDB server. 2.
ip\_address: Because Docker creates containers that are similar to servers, each container has an ip\_address, so to get access to this MongoDB server, we need to find the ip\_address of the container that we need. I used this URI: **mongodb://{username}:{password}@{ip\_address}:{port}/?authSource=admin** **username** = MONGO\_INITDB\_ROOT\_USERNAME **password** = <PASSWORD> **port** = 27017 (the docker container port is set up with the docker parameter "-p") **ip\_address** = the IP from WSL2 in my case, or localhost if you are running Docker locally. This was my command to run the container for the first time: **docker container run -d -e MONGO\_INITDB\_ROOT\_USERNAME=mongouser -e MONGO\_INITDB\_ROOT\_PASSWORD=<PASSWORD> -p 27017:27017 -v mongo\_vol:/data/db mongo:4.4.3** Upvotes: 0 <issue_comment>username_6: I had a problem connecting Studio 3T with MongoDB earlier this week, and it consumed my entire working day, so I gave up on using Studio 3T to leave time for other things and looked for an alternative. Today I found this short video tutorial by Studio 3T on how to connect Studio 3T with MongoDB. Following the directions, I was surprised that a connection was made successfully. I recommend you watch/follow this video: <https://www.youtube.com/watch?v=_Ka3-HGNlYE> If it helps you, remember to tick this answer. Upvotes: 0
2018/03/20
639
2,295
<issue_start>username_0: Something very strange is happening in Heroku. I normally install my packages when I do a git push heroku master, but now what worked for months for me does not work anymore. Here is the error: ``` remote: -----> Python app detected remote: -----> Installing pip remote: -----> Installing requirements with pip remote: Obtaining django-widget-tweaks from git+https://github.com/julianogouveia/django-widget-tweaks/#egg=django-widget-tweaks (from -r /tmp/build_843fc05109f047351641f1b9e1db069d/requirements.txt (line 2)) remote: Cloning https://github.com/julianogouveia/django-widget-tweaks/ to /app/.heroku/src/django-widget-tweaks remote: Complete output from command python setup.py egg_info: remote: usage: -c [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...] remote: or: -c --help [cmd1 cmd2 ...] remote: or: -c --help-commands remote: or: -c cmd --help remote: remote: error: invalid command 'egg_info' remote: remote: ---------------------------------------- remote: Command "python setup.py egg_info" failed with error code 1 in /app/.heroku/src/django-widget-tweaks/ remote: ! Push rejected, failed to compile Python app. remote: remote: ! Push failed remote: Verifying deploy... remote: remote: ! Push rejected to partnersbit-prod. ``` What is crazier is that the egg\_info error is a setuptools/pip error, and Heroku always uses the latest version of pip with setuptools. Does anyone have an idea of how to start debugging this on Heroku? The Heroku documentation doesn't say anything about this, and the already-opened Stack Overflow question about this problem is incomplete and not well documented.<issue_comment>username_1: What seems to be happening is that Heroku is making some enhancements to the python-3.6.4 runtime and these are causing this egg\_info error. I have solved it for now by changing the Python runtime version from 3.6.4 to 3.6.3. I'll not close this question until I know how to go back to the 3.6.4 version without errors.
Upvotes: 2 <issue_comment>username_2: For me, what helped was changing `runtime.txt` from python-3.6.4 -> python-3.6.3 -> python-3.6.4, sending each change in a separate push. Upvotes: 1
2018/03/20
258
832
<issue_start>username_0: Let's say we have a list of functions ``` funcs = [int, float] ``` and we want to apply them to one argument `0` to get `[0, 0.0]` Clearly we can (edit: and should!) do ``` [f(0) for f in funcs] ``` but is there any other machinery for this in the standard library, akin to the `map` function?
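There is no dedicated "apply each function to x" helper in the standard library, so the comprehension stays the idiomatic answer; the `map`-based spellings below are equivalent (a sketch, nothing beyond the standard library):

```python
funcs = [int, float]

# The idiomatic spelling: a comprehension over the functions.
comprehension = [f(0) for f in funcs]

# map() works too, but needs a helper that means "apply to 0".
mapped = list(map(lambda f: f(0), funcs))

# operator.methodcaller can express "call it with 0" without a lambda.
from operator import methodcaller
called = list(map(methodcaller("__call__", 0), funcs))

print(comprehension, mapped, called)  # [0, 0.0] [0, 0.0] [0, 0.0]
```

All three produce the same list; the comprehension is the clearest, and the `methodcaller` variant is mostly a curiosity.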
2018/03/20
1,584
6,123
<issue_start>username_0: How can I implement the mongoose plugin using nestjs? ``` import * as mongoose from 'mongoose'; import uniqueValidator from 'mongoose-unique-validator'; import mongoosePaginate from 'mongoose-paginate'; import mongoose_delete from 'mongoose-delete'; const UsuarioSchema = new mongoose.Schema({ username: { type: String, unique: true, required: [true, 'El nombre de usuario es requerido'] }, password: { type: String, required: [true, 'La clave es requerida'], select: false } }); UsuarioSchema.plugin(uniqueValidator, { message: '{PATH} debe ser único' }); UsuarioSchema.plugin(mongoosePaginate); UsuarioSchema.plugin(mongoose_delete, { deletedAt : true, deletedBy : true, overrideMethods: true }); ``` Error: First param to `schema.plugin()` must be a function, got "undefined"<issue_comment>username_1: Here is an example of using the `timestamp` plugin: ``` import { Schema } from 'mongoose'; import * as timestamp from 'mongoose-timestamp'; export const ConversationSchema = new Schema({ users: [String], }).plugin(timestamp); ``` Try replacing ``` import uniqueValidator from 'mongoose-unique-validator'; import mongoosePaginate from 'mongoose-paginate'; import mongoose_delete from 'mongoose-delete'; ``` with ``` import * as uniqueValidator from 'mongoose-unique-validator'; import * as mongoosePaginate from 'mongoose-paginate'; import * as mongoose_delete from 'mongoose-delete'; ``` Upvotes: 0 <issue_comment>username_2: If you followed the official doc, you can add plugins in this file: ``` export const databaseProviders = [ { provide: 'DbConnectionToken', useFactory: async () => { (mongoose as any).Promise = global.Promise; mongoose .plugin(pluginOne) .plugin(pluginTwo); return await mongoose.connect('mongodb://localhost/nest', { useMongoClient: true, }); }, }, ]; ``` Keep in mind: if you register the plugins in each schema file, you register the same plugins multiple times. The best way is to register plugins only once.
Upvotes: -1 <issue_comment>username_3: This is a snippet for those who are using [mongoose-paginate](https://www.npmjs.com/package/mongoose-paginate) plugin with nestjs. You can also install [@types/mongoose-paginate](https://www.npmjs.com/package/@types/mongoose-paginate) for getting the typings support 1. Code for adding the paginate plugin to the schema: ``` import { Schema } from 'mongoose'; import * as mongoosePaginate from 'mongoose-paginate'; export const MessageSchema = new Schema({ // Your schema definitions here }); // Register plugin with the schema MessageSchema.plugin(mongoosePaginate); ``` 2. Now in the Message interface document ``` export interface Message extends Document { // Your schema fields here } ``` 3. Now you can easily get the paginate method inside the service class like so ``` import { Injectable } from '@nestjs/common'; import { InjectModel } from '@nestjs/mongoose'; import { PaginateModel } from 'mongoose'; import { Message } from './interfaces/message.interface'; @Injectable() export class MessagesService { constructor( // The 'PaginateModel' will provide the necessary pagination methods @InjectModel('Message') private readonly messageModel: PaginateModel, ) {} /\*\* \* Find all messages in a channel \* \* @param {string} channelId \* @param {number} [page=1] \* @param {number} [limit=10] \* @returns \* @memberof MessagesService \*/ async findAllByChannelIdPaginated(channelId: string, page: number = 1, limit: number = 10) { const options = { populate: [ // Your foreign key fields to populate ], page: Number(page), limit: Number(limit), }; // Get the data from database return await this.messageModel.paginate({ channel: channelId }, options); } } ``` Upvotes: 4 <issue_comment>username_4: Try this: ``` import * as mongoose from 'mongoose'; import * as uniqueValidator from 'mongoose-unique-validator'; import * as mongoosePaginate from 'mongoose-paginate'; import * as mongoose_delete from 'mongoose-delete'; const UsuarioSchema = new 
mongoose.Schema({ username: { type: String, unique: true, required: [true, 'El nombre de usuario es requerido'] }, password: { type: String, required: [true, 'La clave es requerida'], select: false } }); UsuarioSchema.plugin(uniqueValidator, { message: '{PATH} debe ser único' }); UsuarioSchema.plugin(mongoosePaginate); UsuarioSchema.plugin(mongoose_delete, { deletedAt : true, deletedBy : true, overrideMethods: true }); export default UsuarioSchema; ``` Then you can use it like this: ``` import UsuarioSchema from './UsuarioSchema' ``` Upvotes: 2 <issue_comment>username_5: The NestJS documentation has a better way to add plugins, either to an individual schema: ``` @Module({ imports: [ MongooseModule.forFeatureAsync([ { name: Cat.name, useFactory: () => { const schema = CatsSchema; schema.plugin(require('mongoose-autopopulate')); return schema; }, }, ]), ], }) export class AppModule {} ``` Or at the global level: ``` import { Module } from '@nestjs/common'; import { MongooseModule } from '@nestjs/mongoose'; @Module({ imports: [ MongooseModule.forRoot('mongodb://localhost/test', { connectionFactory: (connection) => { connection.plugin(require('mongoose-autopopulate')); return connection; } }), ], }) export class AppModule {} ``` Upvotes: 2 <issue_comment>username_6: If you're using forRootAsync, this worked for me, as below: ``` MongooseModule.forRootAsync({ useFactory: async (ConfigService: ConfigService) => { const connectionName: string = process.env.NODE_ENV === 'production' ? 'DATABASE_URL' : 'LOCAL_DATABASE_URL'; mongoose.plugin(require('mongoose-unique-validator')); // import mongoose the normal way and you can import your plugin as desired return { uri: ConfigService.get(connectionName), }; }, inject: [ConfigService], }), ``` Upvotes: 2
2018/03/20
429
1,381
<issue_start>username_0: HTML: ``` Search ``` This is from an example using Bootstrap v4.alpha2. (I already found out that float- should be used instead of pull- :() But it does not work in Bootstrap v4.0.0, which I have installed using bower. What is the solution to get the form on the right edge? Or can the classes not be combined? I must say: Bootstrap pretends to make styling easier, but for me, it is getting worse and worse -- so many changes between versions :-( Edit: I will give you the whole code block: ``` [Navbar](#) * [Home (current)](#) * [Features](#) * [Pricing](#) * [About](#) Search ``` Hopefully, with everything bootstrap v4.0.0 compatible!<issue_comment>username_1: If you put everything inside a Bootstrap column, it works as expected: ```html Search ``` Upvotes: 3 <issue_comment>username_2: The Bootstrap 4 Navbar is flexbox, so float won't work for alignment. The easiest method is to use auto-margins to push the form to the right. Just use `ml-auto` which is `margin-left:auto;`... <https://www.codeply.com/go/Xhdz5MQIDS> ``` [Navbar](#) * [Home (current)](#) * [Features](#) * [Pricing](#) * [About](#) Search ``` Related question: [Bootstrap 4 align navbar items to the right](https://stackoverflow.com/questions/41513463/bootstrap-4-align-navbar-items-to-the-right/41513784#41513784) Upvotes: 5 [selected_answer]
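The navbar markup in this entry was lost in formatting; a minimal Bootstrap 4 sketch of the accepted `ml-auto` approach might look like the following (only the `ml-auto` class comes from the answer, the surrounding navbar markup is illustrative):

```html
<nav class="navbar navbar-expand-lg navbar-light bg-light">
  <a class="navbar-brand" href="#">Navbar</a>
  <div class="collapse navbar-collapse">
    <ul class="navbar-nav">
      <li class="nav-item"><a class="nav-link" href="#">Home</a></li>
    </ul>
    <!-- ml-auto (margin-left: auto) pushes the form to the right edge -->
    <form class="form-inline ml-auto">
      <input class="form-control mr-sm-2" type="search" placeholder="Search">
      <button class="btn btn-outline-success" type="submit">Search</button>
    </form>
  </div>
</nav>
```

Because the flex container absorbs the auto margin as free space, no float or pull class is needed.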
2018/03/20
1,011
3,973
<issue_start>username_0: On one of the Oracle DB instances I am working on, I am observing different-than-normal behavior when recompiling packages. Typically (as in the question [Frequent error in Oracle ORA-04068: existing state of packages has been discarded](https://stackoverflow.com/questions/1761595/frequent-error-in-oracle-ora-04068-existing-state-of-packages-has-been-discarde)), the following error is expected on the first call after PL/SQL package recompilation: ``` ERROR at line 1: ORA-04068: existing state of packages has been discarded ORA-04061: existing state of package body "PACKAGE.NAME" has been invalidated ORA-06508: PL/SQL: could not find program unit being called: "PACKAGE.NAME" ORA-06512: at line 1 ``` But the second call should work fine, assuming the package has no errors of course. This behavior was present previously in that environment. In the meantime we upgraded from 11g R2 to 12c R1, and enabled edition-based redefinition. Now the problem I am experiencing is that I keep getting just: ``` ORA-04061: existing state of package body "PACKAGE.NAME" has been invalidated ORA-06508: PL/SQL: could not find program unit being called: "PACKAGE.NAME" ORA-06512: at line 1 ``` So no ORA-04068 anymore, and the only way to fix it is to reconnect the session or call DBMS\_SESSION.RESET\_PACKAGE() manually (but I don't control all code that may be affected anyway); otherwise the problem persists on every call. Are there any DB parameters controlling this that could be tweaked? The problem is not specific to any particular PL/SQL package, and it seems that it can be triggered by normal package invalidation when something the package references changes. Thank you in advance.<issue_comment>username_1: Oracle does this because recompiling a PL/SQL package invalidates any session variables in use. There isn't much we can do to avoid this, except by using good deployment practices.
Don't deploy changes while the database is in use, make sure all the connections are properly disconnected, etc. Easier said than done in this age of CI/CD, zero downtime and other exciting innovations. So there is one thing in the back of the locker: `pragma serially_reusable;`. This instruction means the package's state is maintained for the duration of **a single server call**. For instance, if we have a PL/SQL block which calls an SR procedure three times, any variables altered by that procedure will maintain their value across the three calls. But the next time we run the block - in the same session - the variables will have been reset to their starting values. There are several limitations to serially reusable PL/SQL - for instance, it can't be used in SQL queries. But the big attraction from your perspective is no more ORA-04068 or ORA-04061 errors. No session state, nothing to invalidate. `pragma serially_reusable` must be declared at the package level, and in the body as well as the spec. So you must be sure that none of the packaged procedures need to maintain state across server calls. Upvotes: 3 <issue_comment>username_2: I got this error when I put DDL instructions like these into a procedure: ``` execute immediate ('drop sequence seq_table_1'); execute immediate ('create sequence seq_table_1 increment by 1 start with 1'); ``` Even though I don't call this (private) procedure anywhere in the package, I got this error by calling any other procedures implemented after(!) this procedure in the package body. Putting in the `pragma serially_reusable;` didn't help either. But when I moved the implementation of this procedure to the end of the package body, the error disappeared. <issue_comment>username_3: The problem is with the timestamp. If you have a script where you first create the package and then try to call it, it is possible that the timestamp is the same (especially if the server is fast).
I had the same error and solved it by entering dbms\_lock.sleep (2) after creating the package. Upvotes: 0
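Since ORA-04068 normally clears on the second call, client applications often wrap the first invocation in a single retry. A minimal Python sketch of that pattern — note this helps only the normal one-shot failure, not the persistent ORA-04061 described in the question, and the `call_package` function and error text below are stand-ins, not from this thread:

```python
def retry_once_on_state_discard(call, is_state_error):
    """Run `call()`; if it raises a package-state error, retry exactly once.

    `is_state_error` decides whether an exception is ORA-04068/ORA-04061.
    """
    try:
        return call()
    except Exception as exc:
        if is_state_error(exc):
            return call()  # second call succeeds once the state is re-initialized
        raise

# Demonstration with stand-in objects (no real Oracle connection):
attempts = []

def call_package():
    attempts.append(1)
    if len(attempts) == 1:  # first call fails, as right after a recompilation
        raise RuntimeError("ORA-04068: existing state of packages has been discarded")
    return "ok"

result = retry_once_on_state_discard(
    call_package,
    lambda exc: "ORA-04068" in str(exc) or "ORA-04061" in str(exc))
print(result)  # ok
```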
2018/03/20
948
3,569
<issue_start>username_0: this is such a simple problem but for some reason, I can't wrap my head around Array of Objects or Object Arrays. All I have to do is take in 5 user inputs, and create a class called Height, create object array and store user inputs into obj array and print the average. I'm kinda stuck. ``` class Height{ int total=0; int count=0; public Height(int y) { total=total+y; count++; } public void print() { System.out.print("The average is: "+total/count); } } public class ObjectArray { public static void main(String[] args){ Scanner s=new Scanner(System.in); System.out.println("Enter 5 heights in inches: "); int[] x=new int[5]; int total=0; for(int i=0;i ``` Maybe I'm over complicating it. The problem right now is that I cannot invoke h.print();. I've tried different iterations, ex: taking out the print method and doing all the printing after every iteration.<issue_comment>username_1: Your approach is wrong. Your `Height` class appears to be responsible for the evaluation of the mean value. Hence, you should put all values inside a single `Height` instance, instead of generating a new instance for each user value. However, `h` is an array of `Height` objects, while the `print()` method is defined on a single `Height` instance. In order to call such a method, you have to access one of the objects contained in `h`, that is `h[0].print()`. Upvotes: 3 [selected_answer]<issue_comment>username_2: I'm assuming that your goal is simply to print the average of all the heights received via user input. Your code in your main method is a tad confusing, so correct me if I'm wrong in any of the examples I give here. You should, instead of creating the `x[]` array, simply add the user input for the five values to `Height.total` in a *for loop,* and increase the `Height.count` variable by one each loop through. 
This should look something like this (note `total` and `count` need to be `static` for this to compile, and `nextInt()` matches the `int` fields): ``` for (int i = 1; i <= 5; i++) { // System.out.println("Please enter the next height: "); Height.total += s.nextInt(); Height.count++; } ``` Then, you can run `Height.print();`. I would also recommend adding a `System.out.print("");` command to let the user know that they should enter the next value. That's the comment I left in the example code I gave above. Upvotes: 0 <issue_comment>username_3: You have to design your `Height` in a way that matches your requirements: * you need different `Height` instances, each with a value * you need to know how many instances there are For that, you need a private value, and a static counter : ``` class Height { private int value = 0; private static int count = 0; // static => nb of instances public Height(int y) { value = y; count++; } public static int averageOf(Height... heights) { return Arrays.stream(heights).mapToInt(h -> h.value).sum() / count; } } ``` To get the average, because it doesn't depend on a particular instance, you can have a static method that sums all the values of the `Height`s given and divides by the number of instances. Use it like: ``` public static void main(String[] args) { Scanner s = new Scanner(System.in); int nb = 5; System.out.println("Enter " + nb + " heights in inches: "); Height[] heights = new Height[nb]; for (int i = 0; i < heights.length; i++) { heights[i] = new Height(Integer.parseInt(s.nextLine())); } System.out.println("Average is " + Height.averageOf(heights)); } ``` Upvotes: 0
2018/03/20
776
2,724
<issue_start>username_0: I have scoured Google and Stack Overflow, and just can't get to the bottom of this issue. I cannot get the following PHP code to connect to SQL. It's a simple PHP web document that I am using to test out some things. SQL is sqlexpress 2016, and it's running on IIS with PHP 7.x installed. The PHP code executes fine, so it's something with the code or the database is my guess. Things I've tried: * I've run an echo in PHP to resolve the name, and it resolves it fine. * I've connected from a separate server to the SQL server using TCP, and it connects fine. * I've tried both a PDO connection and mysqli, and both come back with the same error. The PDO code I've used is: ``` <?php $servername = 'RemoteServerName\SqlInstance'; $username = 'iislogon'; $password = '<PASSWORD>'; try { $conn = new PDO("mysql:host=$servername;dbname=netdata", $username, $password); // set the PDO error mode to exception $conn->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION); echo "Connected successfully"; } catch(PDOException $e) { echo "Connection failed: " . $e->getMessage(); } ?> ``` The mysqli code is: ``` <?php $servername = 'RemoteServerName\SqlInstance'; $username = 'iislogin'; $password = '<PASSWORD>'; $dbname = 'netdata'; ?> <?php $conn = new mysqli($servername, $username, $password, $dbname); ?> <?php if ($conn->connect_error) { die("Connection failed: " . $conn->connect_error); } echo "Connected successfully"; ?> ``` Both return the same error of host not found. What other issues could be causing this? I'm new to coding PHP, so any help would be appreciated.<issue_comment>username_1: I think you didn't escape your backslash with another backslash. Try this: ``` <?php $servername = 'RemoteServerName\\SqlInstance'; ?> ``` Upvotes: 0 <issue_comment>username_2: mysqli and PDO starting with mysql: are supposed to connect to MySQL, not SQLExpress. 
If you want to use SQLExpress you should use something like [sqlsrv\_connect](http://php.net/manual/pt_BR/function.sqlsrv-connect.php) or adjust your pdo string to a [SQLExpress compatible one](http://php.net/manual/pt_BR/ref.pdo-dblib.php). Take a look at [this thread](https://stackoverflow.com/questions/29151466/cannot-connect-to-mysql-using-mysqli-in-php-script) too. Upvotes: 2 [selected_answer]<issue_comment>username_3: Look at this description <http://php.net/manual/en/pdo.construct.php> and specifically: In general, a DSN consists of the PDO driver name, followed by a colon, followed by the PDO driver-specific connection syntax. Further information is available from the PDO driver-specific documentation. Are you sure your dsn is correct and you have the PHP module enabled? See <http://php.net/manual/en/ref.pdo-mysql.php> Upvotes: 1
2018/03/20
355
1,259
<issue_start>username_0: How can I combine a vuejs condition with a vuejs variable and an elixir variable? ``` ```
2018/03/20
401
1,488
<issue_start>username_0: What I want to do is fairly simple. I have a form in LibreOffice Base with a text box to enter some data and a button to execute a macro. Now I want the macro, on clicking the button, to get the entered value of the text box and print it with the Print("...") function. This is what I got so far. Not much, but maybe a start: ``` Sub TestMacro dim oForm as object dim oTextbox as object dim content as object oForm = thisComponent.drawpage.forms.getByName("form_a") oTextbox = oForm.getByName("testbox") content = oTextbox.getText() Print(content) End Sub ``` Any kind of help is appreciated!<issue_comment>username_1: I found the answer on my own. The key was to give the subroutine a parameter, since the macro is executed by the button. From the event you can get the Parent, which is the form; from that you can get the text box and its current value. Works just fine for me. Here is the code ``` Sub TestMacro(oEvent as object) DIM oForm AS OBJECT DIM oField as object DIM oTField as object 'gets the button oTField = oEvent.Source.Model 'gets the parent of the button which is the form oForm = oTField.Parent 'gets the text field based on its name oField = oForm.getByName("testbox") 'prints the value of the textfield Print(oField.getCurrentValue) End Sub ``` Upvotes: 2 [selected_answer]<issue_comment>username_2: Using Access2Base API will make it easy. Upvotes: -1
2018/03/20
423
1,704
<issue_start>username_0: I'm developing a service which has to copy multiple files from a central node to remote servers. The problem is that each time the service is executed, there are new servers and new files to dispatch to these servers. I mean, in each execution, I have the information of which files have to be copied to each server and in which directory. Obviously, this information is very dynamically changing, so I would like to be able to automatize this task. I tried to get a solution with **Ansible, FTP and SCP over Python**. * I think Ansible is very difficult to automatize every scp task in each execution. * SCP is ok but I need to build each SCP command in Python to launch it. * FTP Is too much for this problem because there are not many files to dispatch to a single server. Is there any better solution than what I thinked about?
2018/03/20
420
1,732
<issue_start>username_0: Does Game Maker have the capability to do multiplayer in the form of many different rooms? I'm trying to create an online Arena style gladiator type game, and I can't have an obscene amount of characters in one arena, so I need many different rooms, created upon need. Can I have different rooms functioning at the same time in game maker, or do I need to cycle thru them to accomplish everything? The game will be turn based, and will be one where you'd log on to take your action, similar to how Frozen Synapse can be run over an extended length of time. Does anyone have any ideas of how to accomplish this or something similar?<issue_comment>username_1: *Yes* you can have multiple rooms functioning at the same time. I haven't worked much with GameMaker multiplayer, but it is possible since you can do all basic networking stuff. How hard it is depends on how you want the game to work: **Peer to peer** or with a **host-server**. You can take a look at this tutorial about servers and clients if you haven't already: <https://help.yoyogames.com/hc/en-us/articles/216754698-Networking-Overview> Upvotes: 1 <issue_comment>username_2: If you're using a gamemaker server, the short answer is **no**. However, if you'd like to use a dedicated server, you can make it possible, because then the server hosts and stores all game information, and the client (players) only render the information that is important to them. **Gamemaker Studio** **can not** on its own run two rooms at the same time naturally, so if you'd like to create something more complex, it's the safest to create a dedicated server elsewhere. Dedicated servers only transact information, but the game renders from the client. Upvotes: 0
2018/03/20
1,277
4,466
<issue_start>username_0: I need to change this ``` data = models.CharField(max_length=500, null=True) ``` to this ``` data = JSONField(null=True, blank=True, default={}) ``` From what I understand, I have to write a custom migration. The closest info I managed to find is [here](https://stackoverflow.com/questions/49274563/django-change-models-field-type-from-charfield-to-jsonfield?noredirect=1&lq=1), but I am completely clueless what to do with RunPython, if that is even the correct thing to do here.<issue_comment>username_1: You can do it in the following steps: 1. Add a new field of type JSONField and run 'makemigrations': ``` data = models.CharField(max_length=500, null=True) data_json = JSONField(null=True, blank=True, default={}) ``` 2. Create a data migration (using RunPython) ```python import json from django.db import migrations def forwards_func(apps, schema_editor): MyModel = apps.get_model("myapp", "MyModel") for obj in MyModel.objects.all(): try: obj.data_json = json.loads(obj.data) obj.save() except json.decoder.JSONDecodeError as e: print('Cannot convert {} object'.format(obj.pk)) class Migration(migrations.Migration): dependencies = [] # WRITE YOUR LAST MIGRATION HERE operations = [ migrations.RunPython(forwards_func, migrations.RunPython.noop), ] ``` 3. Remove the old data field and run 'makemigrations': data\_json = JSONField(null=True, blank=True, default={}) 4. Rename the json field and run 'makemigrations': data = JSONField(null=True, blank=True, default={}) P.S. You can use one migration for all these steps. I described all the steps for better understanding. 
Upvotes: 2 <issue_comment>username_2: Suppose you have a model like this ``` class SampleModel(models.Model): name = models.CharField(max_length=120) age = models.IntegerField() address = models.CharField(max_length=100) ``` and you need to change the `address` to a `JSONField`. So you have to define a `temporary field` as below ``` from django.db import models from jsonfield import JSONField class SampleModel(models.Model): name = models.CharField(max_length=120) age = models.IntegerField() address = models.CharField(max_length=100) temp_address = JSONField(default={}) ``` Then run the commands, `python manage.py makemigrations app_name` and `python manage.py migrate app_name` Then comes the role of [DATA MIGRATION](https://docs.djangoproject.com/en/2.0/topics/migrations/#data-migrations). What you have to do is run the command `python manage.py makemigrations app_name --empty` This will create an `empty migration file` in your directory. Change that file into something like this **0003\_auto\_20180320\_1525.py** (migration file) ``` # -*- coding: utf-8 -*- # Generated by Django 1.11 on 2018-03-20 15:25 from __future__ import unicode_literals import json from django.db import migrations from sample.models import SampleModel class Migration(migrations.Migration): def forward_data_migration(apps, schema): for sample in SampleModel.objects.all(): try: sample.temp_address = json.loads(sample.address) sample.save() except json.decoder.JSONDecodeError as e: print('Cannot convert {} object'.format(sample.pk)) def revert_migration(apps, schema): pass dependencies = [ ('sample', '0002_samplemodel_temp_address'), ] operations = [ migrations.RunPython( code=forward_data_migration, reverse_code=revert_migration ) ] ``` Migrate this change, then create another empty migration file as before, and write a `RemoveField`/`RenameField` script as below **0004\_auto\_20180320\_1530.py** ``` # -*- coding: utf-8 -*- # Generated by Django 1.11 on 2018-03-20 15:30 from __future__ import unicode_literals from django.db import migrations, models class Migration(migrations.Migration): dependencies = [ ('sample', '0003_auto_20180320_1525'), ] operations = [ migrations.RemoveField( model_name='samplemodel', name='address' ), migrations.RenameField( model_name='samplemodel', old_name='temp_address', new_name='address' ) ] ``` Migrate this too. That's it. Upvotes: 1 [selected_answer]
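As username_1 notes at the end of the first answer, all of these steps can also live in a single migration file. A sketch of what that one file might look like (untested here; the app label `myapp`, model `MyModel`, field names and the dependency are placeholders to adapt — the JSONField import should match whichever JSON field package the project uses):

```python
import json

from django.db import migrations
from jsonfield import JSONField  # placeholder: match your project's JSON field


def forwards_func(apps, schema_editor):
    # Use the historical model, then copy the CharField text into the JSON field.
    MyModel = apps.get_model("myapp", "MyModel")
    for obj in MyModel.objects.all():
        try:
            obj.data_json = json.loads(obj.data)
            obj.save()
        except json.decoder.JSONDecodeError:
            print('Cannot convert {} object'.format(obj.pk))


class Migration(migrations.Migration):

    dependencies = [("myapp", "0001_initial")]  # your last migration here

    operations = [
        migrations.AddField("mymodel", "data_json",
                            JSONField(null=True, blank=True, default=dict)),
        migrations.RunPython(forwards_func, migrations.RunPython.noop),
        migrations.RemoveField("mymodel", "data"),
        migrations.RenameField("mymodel", "data_json", "data"),
    ]
```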
2018/03/20
1,234
3,410
<issue_start>username_0: I would like to get the current and next months using shell script, I have tried this command: ``` $ date '+%b' mar $ date +"%B %Y" --date="$(date +%Y-%m-15) next month" March 2018 ``` But it always displays only the current month. Could you please help me if there is something wrong with the commands.<issue_comment>username_1: ``` $ date -d "next month" '+%B %Y' April 2018 ``` Check [this post](https://stackoverflow.com/questions/22764358/bash-getting-next-month-script/22764898#22764898) about specific caveats Note: your command works just fine for me (archlinux, bash4, date GNU coreutils 8.29) Upvotes: 3 <issue_comment>username_2: I wouldn't rely on `date` alone to do this. Instead, perform a little basic math on the month number. ``` this_month=$(date +%-m) # GNU extension to avoid leading 0 next_month=$(( this_month % 12 + 1 )) next_month_name=$(date +%B --date "2018-$next_month-1") ``` Since you are using `bash`, you don't need to use `date` at all to get the current month; the built-in `printf` can call the underlying date/time routines itself, saving a fork. ``` $ printf -v this_month '%(%-m)T\n' $ echo $this_month 3 ``` Upvotes: 2 <issue_comment>username_3: What variant and version of `date` are you running? "Solaris 5.2" was never released, though SunOS 5.2 was a kernel in Solaris 2.2 (EOL in 1999). See the [Solaris OS version history](https://en.wikipedia.org/wiki/Solaris_(operating_system)#Version_history). The Solaris 10 (SunOS 5.10) [man page for date](https://docs.oracle.com/cd/E19253-01/816-5165/6mbb0m9dc/index.html) does not support GNU's `--date=` syntax used in the question, so I'm guessing you're using some version of [date from GNU coreutils](https://www.gnu.org/software/coreutils/manual/html_node/Examples-of-date.html). 
Here's a solution using BSD `date` (note, this is academic in the face of BSD's `date -v 1m`): ``` date -jf %s $((1728000+$(date -jf %Y-%m-%d $(date +%Y-%m-15) +%s))) +"%B %Y" ``` There are three date calls here. BSD's date allows specifying a format (GNU can intuit most formats on its own). The parent call is the one that takes the final time (as seconds since the 1970 epoch), expressing it in the desired "Month Year" format. Seconds are determined by adding 20 days to the epoch time of the current month on the 15th day. Since no month has 15+20 days, this is always the following month. Here's a direct translation of that logic to GNU date: ``` date +"%B %Y" --date="@$((1728000+$(date +%s --date=$(date +%Y-%m-15))))" ``` Here's a simpler solution using GNU date, with one fewer `date` call: ``` date +"%B %Y" --date="$(date +%Y-%m-15) 20 days" ``` (A [bug](https://stackoverflow.com/questions/22764358/getting-next-month-with-date-command/22764898#22764898) in GNU date will give you the wrong month if you run `date --date="next month"` on the 31st.) Upvotes: 0 <issue_comment>username_4: The below one works in Red Hat 4.4.7-23, Linux version 2.6.32-754.2.1.el6.x86\_64. Just use "month" for future months and "month ago" for previous months. Don't confuse with adding +/- signs to the number. Check out. ``` > date "+%B-%Y" #current month November-2018 > date -d" 1 month" "+%B-%Y" December-2018 > date -d" 1 month ago" "+%B-%Y" October-2018 > ``` More.. ``` > date -d" 7 month ago" "+%B-%Y" April-2018 > date -d" 7 month " "+%B-%Y" June-2019 > ``` Upvotes: 0
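When portability across `date` variants becomes a headache, the same computation is easy in Python, using the same anchor-to-the-15th trick to dodge the day-31 bug (a sketch):

```python
import calendar
import datetime

def next_month(today=None):
    """Return (year, month) of the month after `today`; safe even on the 31st."""
    today = today or datetime.date.today()
    # Anchoring to the 15th and adding 30 days always lands in the next month,
    # mirroring the %Y-%m-15 trick used with `date` above.
    anchor = today.replace(day=15) + datetime.timedelta(days=30)
    return anchor.year, anchor.month

year, month = next_month(datetime.date(2018, 3, 31))
print(calendar.month_name[month], year)  # April 2018
```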
2018/03/20
1,364
3,735
<issue_start>username_0: I'm writing a Python program that returns out of how many combinations in a list you can create a triangle. Example: ``` --> test([1,1,3]) 0 #you cant make a triangle out of 1,1,3 (the only combination in this list) --> test([2,789,5,3,3237,4]) 3 #you can make a triangle out of [2,5,4],[5,3,4] and [2,4,3] ``` I only managed to write a function that checks if you can create a triangle out of 3 given edges: ``` def check(a,b,c): n = max((a,b,c)) x = 0 y = 0 for i in [a,b,c]: if i != n: if x == 0: x = i elif y == 0: y = i return (x+y)>n ```
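For reference, the counting the triangle question asks for is compact with `itertools.combinations` plus the triangle inequality (sorting each triple so only one comparison is needed) — a sketch:

```python
from itertools import combinations

def count_triangles(edges):
    """Count the 3-element combinations of `edges` that form a valid triangle."""
    count = 0
    for triple in combinations(edges, 3):
        a, b, c = sorted(triple)  # c is the longest side
        if a + b > c:             # triangle inequality
            count += 1
    return count

print(count_triangles([1, 1, 3]))                # 0
print(count_triangles([2, 789, 5, 3, 3237, 4]))  # 3
```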
2018/03/20
760
3,349
<issue_start>username_0: I am using the `connectivity` plugin in my Flutter app to check for the connection status, but occasionally hitting the error `PlatForm Exception(No active stream to cancel, null)` even though I have handled the null case. I have subscribed to the stream in `initState` and cancelled the subscription in `dispose`; my code looks something like this. ``` StreamSubscription streamConnectionStatus; ------------ //remaining code ------------ @override void initState() { getConnectionStatus(); } getConnectionStatus() async { streamConnectionStatus = new Connectivity() .onConnectivityChanged .listen((ConnectivityResult result) { // Got a new connectivity status! if (result == ConnectivityResult.mobile || result == ConnectivityResult.wifi) { setState(() { boolHasConnection = true; }); } else { setState(() { boolHasConnection = false; }); } }); @override void dispose() { try { streamConnectionStatus?.cancel(); } catch (exception, stackTrace) { print(exception.toString()); updateError(exception.toString(), stackTrace); } finally { super.dispose(); } } ``` This is actually driving me crazy, but I am guessing I am missing something, or do I have to change the code? Many thanks, Mahi
Example: ``` class ConnectivityExample extends StatefulWidget { @override _ConnectivityExampleState createState() => new _ConnectivityExampleState(); } class _ConnectivityExampleState extends State { StreamSubscription streamConnectionStatus; bool boolHasConnection; @override void initState() { getConnectionStatus(); } Future getConnectionStatus() async { streamConnectionStatus = new Connectivity() .onConnectivityChanged .listen((ConnectivityResult result) { debugPrint(result.toString()); if (result == ConnectivityResult.mobile || result == ConnectivityResult.wifi) { setState(() { boolHasConnection = true; }); } else { setState(() { boolHasConnection = false; }); } }); } // dispose function inside class @override void dispose() { try { streamConnectionStatus?.cancel(); } catch (exception, stackTrace) { print(exception.toString()); } finally { super.dispose(); } } @override Widget build(BuildContext context) { return new Container( color: Colors.white, alignment: Alignment.center, child: new Text(boolHasConnection.toString()), ); } } ``` Hope that helps! Upvotes: 0 <issue_comment>username_2: I encountered a similar issue. This is what helped me. I had subscribed the stream exposed by connectivity plugin in different widgets in the same widget tree. I removed the subscription from child widgets and retained the subscription only in the parent and passed on the connection status to the children from parent. By doing so my code got more cleaner and the stream subscription was maintained / disposed only at one place. Then I didn't encounter this issue any more. Upvotes: 3 [selected_answer]
2018/03/20
1,724
7,110
<issue_start>username_0: I would like to use generics to create an abstract service that can be subclassed So far I have created 3 entities: ``` @Inheritance(strategy = InheritanceType.JOINED) @Entity @Table(name = "logging_event_base") abstract class LoggingEventBaseEntity constructor( @Id @GeneratedValue(strategy = GenerationType.IDENTITY) @Column(name = "id") val id: Long = -1, @Column(name = "tracking_id") private val trackingID: String = "", @Column(name = "descriptive_message") private val descriptiveMessage: String = "", @Column(name = "message_code") private val messageCode: String = "", @Column(name = "time_stamp") private val timeStamp: LocalDateTime = LocalDateTime.now(), @OneToOne(mappedBy = "loggingEvent", cascade = [(CascadeType.ALL)]) val messageContentLogging: LoggingEventContentEntity? = null) { @OneToMany(mappedBy = "loggingEvent", cascade = [(CascadeType.ALL)]) private val _properties: MutableSet = mutableSetOf() val properties: Set get() = \_properties fun addProperty(property: LoggingEventPropertyEntity) { this.\_properties.add(property) } fun addProperty(name: String, value: String) { this.addProperty(LoggingEventPropertyEntity(name = name, value = value)) } } @Entity @Table(name = "logging\_event\_log") class LoggingEventLogEntity( id: Long = -1, trackingID: String = "", descriptiveMessage: String = "", messageCode: String = "", timeStamp: LocalDateTime = LocalDateTime.now(), messageContentLogging: LoggingEventContentEntity? = null, @Column(name = "log\_level") var logLevel: String = "", @Column(name = "log\_status") var logStatus: String = "") : LoggingEventBaseEntity(id, trackingID, descriptiveMessage, messageCode, timeStamp, messageContentLogging) { } @Entity @Table(name = "logging\_event\_exception") class LoggingEventExceptionEntity( id: Long = -1, trackingID: String = "", descriptiveMessage: String = "", messageCode: String = "", timeStamp: LocalDateTime = LocalDateTime.now(), messageContentLogging: LoggingEventContentEntity? 
= null, @Column(name = "error\_type") val errorType: ErrorEventTypes = ErrorEventTypes.UNKNOWN, @Column(name = "status") val status: ErrorEventStatuses = ErrorEventStatuses.OPEN, @Column(name = "stack\_trace") val stackTrace: String = "") : LoggingEventBaseEntity(id, trackingID, descriptiveMessage, messageCode, timeStamp, messageContentLogging) {} ``` 3 repositories: ``` @NoRepositoryBean interface LoggingEventBaseRepository : CrudRepository @Transactional interface LoggingEventRepository : LoggingEventBaseRepository @Transactional interface LoggingEventLogRepository : LoggingEventBaseRepository @Transactional interface LoggingEventExceptionRepository : LoggingEventBaseRepository ``` And 3 service interfaces: ``` interface LoggingEventBaseService { fun retrieveLogs(): List fun retrieveLog(id: Long): T? fun addLog(loggingEvent: T): T } interface LoggingEventLogService : LoggingEventBaseService interface LoggingEventExceptionService : LoggingEventBaseService ``` Which should be implemented with the following classes: ``` @Service @Transactional class LoggingEventBaseServiceJPA>( val loggingEventRepository: R, val logPropertyRepository: LoggingEventPropertyRepository, val logMessageContentRepository: LoggingEventContentRepository) : LoggingEventLogService { override fun retrieveLogs(): E = loggingEventRepository.findAll().toList() override fun retrieveLog(id: Long): E? 
= loggingEventRepository.findById(id).orElse(null) override fun addLog(loggingEvent: E): LoggingEventLogEntity { loggingEvent.properties.forEach { it.loggingEvent = loggingEvent logPropertyRepository.save(it) } loggingEvent.messageContentLogging?.let { logMessageContentRepository.save(it) } return loggingEventRepository.save(loggingEvent) } } @Service @Transactional class LoggingEventLogServiceJPA: LoggingEventBaseServiceJPA, LoggingEventLogService @Service @Transactional class LoggingEventExceptionServiceJPA: LoggingEventBaseServiceJPA, LoggingEventLogService ``` I'm running into a couple of issues here: 1. In the `LoggingEventBaseServiceJPA` constructor `val loggingEventRepository: R` shows an error `Could not autowire. No beans of 'R' type found` 2. `override fun retrieveLogs(): E = loggingEventRepository.findAll().toList()` shows an error `Return type is 'E', which is not subtype of overwridden` 3. The same error is show on `override fun retrieveLog(id: Long): E? = loggingEventRepository.findById(id).orElse(null)` 4. Because I subclass `LoggingEventBaseServiceJPA`, the parameters should be passed to this superclass. This does make sense, but this should be solved using autowiring in some way. Does anyone know what would be a proper approach in these situations, where the Entity is subclassed, so the Repositories and Services need to have subclasses too? The goal in this case is to keep everything as DRY and maintainable as possible. It might be that I took a complete wrong approach from the start, so please let me know if you think that is the case.
2018/03/20
954
3,345
<issue_start>username_0: I have this code: <https://pastebin.com/zgJdYhzN> in Javascript. It's supposed to fade in text when the scrolling function reaches a certain point and while this does work, there will be several pages using it and I'd like to avoid having to create several instances of this function. It would be better if I could just create a function and for every element that had the ".split" class, this would act upon it. ``` //EXECUTES ONLY ONCE function once(fn, context) { var result; return function() { if(fn) { result = fn.apply(context || this, arguments); fn = null; } return result; }; } // Usage var split1 = once(function() { fadeInText(".split1"); }); var pl = once(function() { fadeInText(".pl"); }); var pl1 = once(function() { fadeInText(".pl1"); }); var smallp = once(function() { fadeInText(".smallp"); }); var smallp1 = once(function() { fadeInText(".smallp1"); }); var smallp2 = once(function() { fadeInText(".smallp2"); }); var smallp3 = once(function() { fadeInText(".smallp3"); }); var head0 = once(function() { fadeInText(".head0"); }); $(window).scroll(function() { if( $(this).scrollTop() + $(window).height() > $(".split1").offset().top) { split1(); } if( $(this).scrollTop() + $(window).height() > $(".pl").offset().top) { pl(); } if( $(this).scrollTop() + $(window).height() > $(".pl1").offset().top) { pl1(); } if( $(this).scrollTop() + $(window).height() > $(".smallp").offset().top) { smallp(); } if( $(this).scrollTop() + $(window).height() > $(".smallp1").offset().top) { smallp1(); } if( $(this).scrollTop() + $(window).height() > $(".smallp2").offset().top) { smallp2(); } if( $(this).scrollTop() + $(window).height() > $(".smallp3").offset().top) { smallp3(); } if( $(this).scrollTop() + $(window).height() > $(".head0").offset().top) { head0(); } }); ```<issue_comment>username_1: Just generate the functions for all elements using a loop: ``` const handlers = [".split1", ".pl" /*...*/] .map(s => ({ el: $(s), show: once(() => fadeInText(s)) 
})); $(window).scroll(function() { for(const {el, show} of handlers) { if( $(this).scrollTop() + $(window).height() > el.offset().top) show(); } }); ``` You could also generate the handlers for all elements of a class: ``` const handlers = $(".split").toArray() .map(s => ({ el: $(s), show: once(() => fadeInText(s)) })); ``` Upvotes: 0 <issue_comment>username_2: Not sure if I'm missing why you need the `once` method. Is there a reason you couldn't do something like this: ``` var selectors = ['.one', '.two', '.three']; var elements = {}; selectors.forEach(function(selector){ elements[selector] = $(selector); }); function elementOnScreen(selector) { if(!elements[selector]){ return false; } return $(window).scrollTop() + $(window).height() > elements[selector].offset().top } $(window).scroll(function() { selectors.forEach(function(selector) { if(elementOnScreen(selector)){ fadeInText(selector); delete elements[selector]; } if(Object.keys(elements).length === 0){ $(window).off('scroll'); } }); }); ``` Upvotes: 1
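The registry idea in the answers can be reduced to a small framework-free helper. A sketch (function names are mine, not from the thread): `once` memoizes the first call, and `makeRevealers` builds one wrapped handler per selector, so adding a new fade-in target becomes a one-line change.

```javascript
// Run fn at most once; later calls return the first result.
function once(fn) {
  let called = false;
  let result;
  return function (...args) {
    if (!called) {
      called = true;
      result = fn.apply(this, args);
    }
    return result;
  };
}

// Build one single-shot reveal handler per selector.
function makeRevealers(selectors, reveal) {
  return selectors.map(sel => ({ sel, show: once(() => reveal(sel)) }));
}
```

Inside the scroll handler you would then loop over the revealers and call `show()` for every selector whose element has scrolled into view, exactly as in the answers above.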
2018/03/20
764
2,436
<issue_start>username_0: I am working on a project and got stuck. I have a background image with a 16:9 ratio. ``` body { margin: 0px; padding: 0px; background-image: url("background.jpg"); background-size: 100%; position: absolute; background-repeat: no-repeat; } /* for example: */ .inventory { background-color: rgba(30, 30, 30, 0.5); top: 28%; left: 18%; font-size: 30px; text-align: center; color: white; width: 19%; height: 55.6%; overflow: auto; position:fixed } ``` What I want is that when I decrease the size of the window, everything stays in position and keeps its relative height and width. The problem I am facing is that everything I position with a percentage is relative to the size of the window, but I want it to be relative to the size of the image. Hope you can help me out!<issue_comment>username_1: If you need a locked aspect ratio for something, `padding` allows for this because it's always relative to an element's width. In this example, you'd set your `background-image` to replace `background: green;` and place all of your content inside that `container-inner` div ```css .container { width: 100%; padding-bottom: 56.25%; height: 0; font-size: 0; position: relative; } .container-inner { font-size: 1rem; position: absolute; top: 0; left: 0; width: 100%; height: 100%; background: green; } ``` ```html <div class="container"><div class="container-inner"></div></div> ``` Upvotes: 2 [selected_answer]<issue_comment>username_2: To use `background-image` you will need to set the `height` of that element. If you resize the window, the `width` will change with it but the `height` stays fixed, so the background image scales up and you can't control the `top`, `left`, `right` and `bottom` position values...
So you will need to use an `img` element instead of a `background-image` here, and use `position:absolute` with `%` values (not `px`) to place the items over the image. Remember to set `position:relative` on its parent container. ```css .parent { position: relative; } .parent img { max-width: 100%; } .parent .spot { position: absolute; width: 20px; height: 20px; background: red; border-radius: 50%; border: 2px solid #fff; top: 30%; left: 20%; } ``` ```html <div class="parent"> <img src="http://via.placeholder.com/1600x900"> <div class="spot"></div> </div> ``` Try to edit the **[Fiddle](https://fiddle.jshell.net/bhuwanb9/8hjLm4qs/1/)** here Upvotes: 0
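For completeness: newer browsers can express the same 16:9 lock directly, without the padding trick. This is an alternative to the answers above, not what they used, and it requires a browser that supports the `aspect-ratio` property:

```css
.container {
  width: 100%;
  aspect-ratio: 16 / 9; /* height follows width at a fixed 16:9 ratio */
  background-image: url("background.jpg");
  background-size: cover;
  position: relative; /* children positioned with % are relative to the image box */
}
```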
2018/03/20
693
2,629
<issue_start>username_0: I'm learning about vectors in `Accelerated C++` by <NAME> and <NAME>. Can anyone explain the difference between these? ``` vector<string> x; ``` and ``` vector<string> x(const string& s) { ... } ``` Does the second one define a function `x` whose return type must be a `vector<string>` and whose return value is stored somehow in x? The exact code from the book starts like this: ``` map<string, vector<int> > xref(istream& in, vector<string> find_words(const string&) = split) { ```
``` #include <map> #include <string> #include <vector> #include <iostream> using namespace std; vector<string> split( const string & ); map<string, vector<int> > xref( istream& in, vector<string> find_words(const string&) = split ); map<string, vector<int> > xref( istream& in, vector<string> ( *find_words )(const string&) ); int main() { return 0; } ``` I only declared the function `split` without providing its definition because it is not required for this simple demonstrative program. So you can call the function `xref` with one argument, in which case it will use the function `split` as the second argument by default, or you can explicitly pass the name of your own function as the second argument. Upvotes: 1
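A compilable sketch of the same idea (the word-splitting body is mine, not from the book): the second parameter is declared with a function type, the compiler adjusts it to a function pointer, and `split` serves as its default argument, so callers can omit it or pass a different tokenizer.

```cpp
#include <cstddef>
#include <sstream>
#include <string>
#include <vector>

// Stand-in for the book's split(): break a string on whitespace.
std::vector<std::string> split(const std::string& s) {
    std::istringstream in(s);
    std::vector<std::string> words;
    for (std::string w; in >> w; )
        words.push_back(w);
    return words;
}

// Function-type parameter with a default argument, as in xref().
std::size_t count_words(const std::string& s,
                        std::vector<std::string> find_words(const std::string&) = split) {
    return find_words(s).size();
}
```

`count_words("a b c")` uses `split` by default; `count_words(s, my_tokenizer)` swaps the tokenizer out.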
2018/03/20
976
3,862
<issue_start>username_0: I'm trying to deserialize the following JSON: ``` { "oxide":{ "Al2O3":"0.3", "CaO":"0.3", "FeO":"0.3", "MgO":"0.3", "MnO":"0.3", "SiO2":"0.3" }, "temperature": "1800" } ``` When I convert it this way, everything works: ``` Oxides oxides = new Oxides(); string oxidequery = req.Query["oxide"]; string temperature = req.Query["temperature"]; string requestBody = new StreamReader(req.Body).ReadToEnd(); dynamic data = JsonConvert.DeserializeObject(requestBody); oxide.Al2O3 = data?.oxide.Al2O3; oxide.CaO = data?.oxide.CaO; oxide.FeO = data?.oxide.FeO; oxide.MgO = data?.oxide.MgO; oxide.MnO = data?.oxide.MnO; oxide.SiO2 = data?.oxide.SiO2; double tempDouble = temperature ?? data?.temperature; ``` But when I do this, it does not work: ``` Oxides oxides = new Oxides(); string oxidequery = req.Query["oxide"]; string temperature = req.Query["temperature"]; string requestBody = new StreamReader(req.Body).ReadToEnd(); dynamic data = JsonConvert.DeserializeObject(requestBody); oxide = (Oxides)data?.oxide; double tempDouble = temperature ?? data?.temperature; ``` In the second approach I get the error > Cannot convert type 'Newtonsoft.Json.Linq.JObject' to 'Oxides'. What am I missing? I already searched and found that I have to explicitly convert data.oxide, but I'm already doing this.<issue_comment>username_1: Try this: ``` dynamic data = JsonConvert.DeserializeObject(requestBody); oxide = ((JObject)data?.oxide).ToObject<Oxides>(); ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: I believe your issue is where you do the `Stream` to read into a dynamic. When I encountered a similar issue, I returned as an object.
``` public object DownloadFromApi(string url) { if (WebRequest.Create(url) is HttpWebRequest request) { request.Method = "GET"; request.ContentType = "application/json"; try { using (Stream response = request.GetResponse().GetResponseStream()) if (response != null) using (var reader = new StreamReader(response)) return JsonConvert.DeserializeObject(reader.ReadToEnd()); } catch (HttpRequestException exception) { ApplicationProvider.Log.Error($"An unhandled HTTP error has occurred.{Environment.NewLine}{exception.Message}"); throw new Exception(exception.Message); } } ApplicationProvider.Log.Error($"Web request failed due to null value. {url}"); throw new Exception($"A null parameter or response has occurred for {url}"); } ``` This conversion appeared to solve my issue with the object being incorrectly deserialized as incorrect types. So I was able to do the following to read into my object. ``` private static IEnumerable GetAllDataForAsset(AssetModel asset) { dynamic apiResponse = new ApiRequestUtility().DownloadFromApi(optiAssetDataWithinRangeUrl); for (var index = 0; index != apiResponse.Items.Count; index++) yield return new AssetDataModel { Id = apiResponse["Items"][index].id, ResourceId = apiResponse["Items"][index].value[0].resourceId, Timestamp = apiResponse["Items"][index].timeValue, Value = apiResponse["Items"][index].value[0].value }; } ``` That allowed the correct conversion when I iterated through the response to build my data collection. So some of the odd trying to deserialize to float rather than decimal seemed to alleviate with our laboratory data. Also would like to point out, return as an object but still have return type as dynamic to be determined at runtime. Upvotes: 0
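If the JSON shape is known up front, both answers can be sidestepped by deserializing straight into typed classes and never touching `dynamic`. A sketch only — the envelope class name here is an assumption, not a type from the question (Json.NET matches property names case-insensitively, so `oxide` binds to `Oxide`):

```csharp
using Newtonsoft.Json;

public class Oxides
{
    public string Al2O3 { get; set; }
    public string CaO { get; set; }
    public string FeO { get; set; }
    public string MgO { get; set; }
    public string MnO { get; set; }
    public string SiO2 { get; set; }
}

// Hypothetical envelope matching the request body in the question.
public class OxideRequest
{
    public Oxides Oxide { get; set; }
    public double Temperature { get; set; } // Json.NET coerces "1800" to 1800.0
}

public static class OxideParser
{
    public static OxideRequest Parse(string requestBody) =>
        JsonConvert.DeserializeObject<OxideRequest>(requestBody);
}
```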
2018/03/20
532
2,037
<issue_start>username_0: I'm trying to disable positional tracking on the Oculus, but I can't find any documentation on it. I've searched for positional tracking, but found nothing in A-Frame: <https://github.com/aframevr/aframe/search?utf8=%E2%9C%93&q=tracking&type=> Maybe with THREE.js?<issue_comment>username_1: This is not possible using any built-in A-Frame components, as of 0.8.0. What you're probably looking for is the `look-controls` component, which handles mouse and headset rotation/position. [look-controls documentation](https://aframe.io/docs/0.8.0/components/look-controls.html) There is an option to disable HMDs entirely, but none for just turning off position. You could request the feature or create your own version of `look-controls`, but I would be hesitant about that — it will make VR experiences much less comfortable for HMD users. Upvotes: 2 [selected_answer]<issue_comment>username_2: Quick hack :) This code is not tested, but here's the idea :) ``` // html <a-entity class="Camera__parent"> <a-camera class="Camera"></a-camera> </a-entity> //js const cameraParent = document.querySelector('.Camera__parent') const camera = document.querySelector('.Camera') function update() { requestAnimationFrame(update) cameraParent.object3D.position.y = -camera.object3D.position.y } if(AFRAME.utils.device.checkHasPositionalTracking()) update() ``` Upvotes: 0 <issue_comment>username_3: I would have to agree with Don that limiting positional tracking on any 6 DOF device makes for a bad user experience, as certain types of movement are expected; but there is always room for experimentation! Though not built into A-Frame, you could look at creating a component to reset position (and only position - if you mess with the matrix transforms they will overwrite all transforms - scale, transform, rotation - in the THREEjs layer underneath). Looks like there is an example below that might work.
You could also look here for a start [glitch.com/edit/#!/aframe-parent-constraint](http://glitch.com/edit/#!/aframe-parent-constraint) though I must admit I had difficulties updating individual transforms at the end. Upvotes: 0
2018/03/20
346
961
<issue_start>username_0: I've got such list: ``` List<String[]> rows = new CsvParser(settings).parseAll(new File("Z:/FV 18001325.csv"), "UTF-8"); ``` What's the simplest way to print them to console? I've tried ``` System.out.println(Arrays.toString(rows)); ``` And also: ``` String joined = String.join(",", rows); System.out.println(joined); ``` But no use...<issue_comment>username_1: ``` System.out.println(Arrays.toString(rows.toArray())); ``` Upvotes: 1 <issue_comment>username_2: The code of the other answer would print something like: ``` [[Ljava.lang.String;@4c873330, [Ljava.lang.String;@119d7047] ``` What you need is either: ``` rows.forEach(arr -> System.out.println(Arrays.toString(arr))); ``` which would print the output like this: ``` [a, b, c] [d, e, f] ``` or ``` System.out.println(Arrays.deepToString(rows.toArray())); ``` which would print the output like this: ``` [[a, b, c], [d, e, f]] ``` Upvotes: 4 [selected_answer]
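A runnable sketch of a stream-based variant (not shown in the answers, class name is mine): join each `String[]` row into a line yourself, which also gives control over the delimiter.

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class RowPrinter {
    // Join each String[] row with '|' and put one row per line.
    public static String format(List<String[]> rows) {
        return rows.stream()
                .map(row -> String.join("|", row))
                .collect(Collectors.joining("\n"));
    }

    public static void main(String[] args) {
        List<String[]> rows = Arrays.asList(
                new String[] {"a", "b", "c"},
                new String[] {"d", "e", "f"});
        System.out.println(format(rows));
    }
}
```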
2018/03/20
437
1,429
<issue_start>username_0: I currently use vim and tmux to edit my code. During debugging, I often have to: 1. Save my changes in vim 2. Head over to the terminal pane 3. Run `python -i script.py` 4. Do some testing (e.g. print some variables, check out error messages, etc), then head back to my vim pane. However this becomes cumbersome when after several edits, as every time, I would switch over to the terminal pane, quit the python -i session, and rerun python. Worst of all, I lose all my python history when I restart the session! I thought of binding something like `:!python -i` in my .vimrc, but that would not solve the problem as I can't edit the script while testing it at the same time (as one would do with an IDE, and also the reason I got tmux). Running `python -i` seems to crash vim anyway. What should I do?
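One widely used workaround (an assumption on my part, not from an answer in this thread) is to leave `python -i` running in its tmux pane and push the edited file into it from vim with `tmux send-keys`, so the session and its history survive. The mapping below is a sketch for `~/.vimrc`; the pane target `1` is an assumption about your tmux layout. The `vim-slime` plugin packages the same idea more robustly.

```vim
" <leader>r: save, then re-exec the current file inside the live REPL pane
" (pane target '1' is an assumption -- adjust to your tmux layout).
nnoremap <leader>r :w<CR>:call system("tmux send-keys -t 1 "
      \ . shellescape("exec(open('" . expand('%:p') . "').read())")
      \ . " Enter")<CR>
```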
2018/03/20
1,044
4,465
<issue_start>username_0: I am creating a .net-core2 web-api, which allows users from an Azure-AD to consume it. The API is multi-tenant, so users from multiple Azure-AD's should be able to authorize. However, it is also possible to create an account for users who do not have a corporate Azure-AD account. These users are stored in a database (local users). Because it is a web-api, I implemented a custom token provider, so that the local users can get a token to consume the protected web-api. However, I cannot add two separate 'Bearer' authentications to the web-api: ``` services.AddAuthentication(options => { options.DefaultScheme = JwtBearerDefaults.AuthenticationScheme; }) .AddAzureAdBearer(options => Configuration.Bind("AzureAd", options)) .AddJwtBearer(options => new JwtBearerOptions { TokenValidationParameters = tokenValidationParameters }); ``` This throws an error: > > System.InvalidOperationException: Scheme already exists: Bearer > > > Which I totally understand. But how I can implement both authentication mechanisms in parallel?<issue_comment>username_1: You have to specify a different identifier. Both are using the "Bearer" identifier at the moment. For example, you can specify a different one for JWT Bearer by: ``` .AddJwtBearer("CustomJwt", options => { }); ``` This solves the issue with the identifier clash, but in order to support two authentication schemes in parallel, you will need to do additional modifications. One way in 2.0 is something suggested by David Fowler: <https://github.com/aspnet/Security/issues/1469> ``` app.UseAuthentication(); app.Use(async (context, next) => { // Write some code that determines the scheme based on the incoming request var scheme = GetSchemeForRequest(context); var result = await context.AuthenticateAsync(scheme); if (result.Succeeded) { context.User = result.Principal; } await next(); }); ``` In your case you could all the Bearer (Azure AD) scheme if there is no user on the context when you hit the middleware. 
In ASP.NET Core 2.1 we will get "virtual authentication schemes", which allow this scenario in a more first-class way: <https://github.com/aspnet/Security/pull/1550> Upvotes: 2 [selected_answer]<issue_comment>username_2: Thanks to username_1 I found a working solution. What I did: In Startup.cs ConfigureServices I added both authentication schemes: ```cs services.AddAuthentication(options => { options.DefaultScheme = JwtBearerDefaults.AuthenticationScheme; }) .AddAzureAdBearer(options => Configuration.Bind("AzureAd", options)) .AddJwtBearer("JWTBearer", options => { options.TokenValidationParameters = tokenValidationParameters; }); ``` Then make sure in the Authorization you enable both schemes: ```cs services.AddAuthorization(config => { config.AddPolicy(PolicyNames.RequireKeyUser, policy => { policy.AddRequirements(new KeyUserRequirement()); policy.RequireAuthenticatedUser(); policy.AddAuthenticationSchemes("JWTBearer", JwtBearerDefaults.AuthenticationScheme); }); }); ``` And write some logic in Configure to determine the auth scheme on runtime: ```cs app.Use(async (context, next) => { // Write some code that determines the scheme based on the incoming request var scheme = GetSchemeForRequest(context); if (!String.IsNullOrEmpty(scheme)) { var result = await context.AuthenticateAsync(scheme); if (result.Succeeded) { context.User = result.Principal; } } await next(); }); ``` I decided to use an additional header 'Authorization-Type' to define my custom JWT authorization and use the default 'Bearer' prefix in the 'Authorization' header. 
So my GetSchemeForRequest function: ```cs private string GetSchemeForRequest(HttpContext context) { var scheme = ""; try { if (!String.IsNullOrEmpty(context.Request.Headers["Authorization"].ToString())) { string authHeader = context.Request.Headers["Authorization-Type"].ToString(); if (authHeader == "JWT") { scheme = "JWTBearer"; } else { scheme = "Bearer"; } } } catch (Exception ex) { // Use your own logging mechanism } return scheme; } ``` Upvotes: 0
2018/03/20
919
3,404
<issue_start>username_0: I wanted to convert a text file to a dataframe using a case class, and below is my code. It works up to the map/split step — I can see the values using `rdd_metadata_schema.take(1).foreach(arr => print(arr.toList))` — but the dataframe is empty when I check it. ``` case class metadata_schema( field_name:String, field_pos:String, field_dataType:String ) val rdd_metadata = Spark.sparkSession.sparkContext.textFile("textfile") val rdd_metadata_schema = rdd_metadata.map(row => row.split('|')).map(field => metadata_schema(field(0), field(1), field(2))) val df_metadata = Spark.sparkSession.createDataFrame(rdd_metadata_schema,classOf[metadata_schema]) **textfile:** field1|1-2|String field2|3|String ``` I want to create the df using only the case class; the other approach already works for me. This is to refresh my skills.<issue_comment>username_1: If you are using Spark 2, there is an easier way using Dataset: ``` import spark.implicits._ val revenues = spark .read .format("csv") .option("delimiter", "|") .option("header", "true") .load("textfile") .as[metadata_schema] .toDF() ``` Upvotes: 1 <issue_comment>username_2: When a `case class` is used in an `rdd` (as you have done), the `schema` is already created using reflection on the rdd, so you don't have to use `sqlContext` and a `schema` to create the dataframe. Just do ``` import sqlContext.implicits._ rdd_metadata_schema.toDF().show(false) ``` and you should get ``` +----------+---------+--------------+ |field_name|field_pos|field_dataType| +----------+---------+--------------+ |field1 |1-2 |String | |field2 |3 |String | +----------+---------+--------------+ ``` The [official documentation](https://spark.apache.org/docs/latest/sql-programming-guide.html) says so too: > The Scala interface for Spark SQL supports automatically converting an RDD containing case classes to a DataFrame. The case class defines the schema of the table. The names of the arguments to the case class are read using reflection and become the names of the columns. ...
You can even create a `Dataset` as ``` import sqlContext.implicits._ rdd_metadata_schema.toDS().show(false) ``` I hope the answer is helpful. Upvotes: 1 <issue_comment>username_3: This is how you do it: ``` case class metadata_schema( field_name:String, field_pos:String, field_dataType:String ) ``` Make sure the case class is outside the object. ``` val spark = SparkSession.builder().appName("test").master("local").getOrCreate() import spark.implicits._ val rdd_metadata = spark.sparkContext.textFile("file path") val rdd_metadata_schema = rdd_metadata.map(row => row.split('|')).map(field => metadata_schema(field(0), field(1), field(2))) //you don't need to create using createDataFrame(); toDF is sufficient rdd_metadata_schema.toDF().show() ``` Output: ``` +----------+---------+--------------+ |field_name|field_pos|field_dataType| +----------+---------+--------------+ | field1| 1-2| String| | field2| 3| String| +----------+---------+--------------+ ``` Upvotes: 3 [selected_answer]
2018/03/20
448
1,417
<issue_start>username_0: I've created a calendar timeline in html. I want to create a vertical line that progresses from left to right over time; the line needs to overlay all the html elements. Similar to Google calendar's way of letting you know what time of day you are at. An example below of what I want: [![timeline](https://i.stack.imgur.com/lnjRP.png)](https://i.stack.imgur.com/lnjRP.png) The black line going vertically across is what I want to create in javascript. I want it to be on top of all elements. I'm trying to create this for a university project and have no clue where to start. So far I have a progress bar element (using Twitter Bootstrap) and javascript that updates the width % of the progress bar by calculating the % value of currentTime/(end-start).<issue_comment>username_1: CSS: ``` position: fixed; width: 2px; height: 100vh; background-color: #222; left: 0; top: 0; bottom: 0; z-index: 100 ``` Then animate the CSS width, or drive the left value from javascript. Upvotes: 1 <issue_comment>username_2: This is a rough example of where to start; you'll want to refine it for efficiency. ```js var vline=$('#vline'); setInterval(function(){ vline.css('left', parseInt(vline.css('left')) + 1); }, 50); ``` ```css #vline{ position: fixed; height: 100%; width: 0px; border: 1px solid red; top: 0; left: 0; z-index: 1000000 } ``` ```html <div id="vline"></div> ``` Upvotes: 4 [selected_answer]
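The remaining piece is mapping clock time to the line's horizontal position. A sketch (function name is mine): compute the fraction of the displayed day that has elapsed and use it as a `left` percentage, clamped so the line never leaves the timeline.

```javascript
// Percentage of the way from start to end that `now` represents, clamped to [0, 100].
function progressPercent(now, start, end) {
  const clamped = Math.min(Math.max(now, start), end);
  return ((clamped - start) / (end - start)) * 100;
}

// In the page (assumes a #vline element styled as in the answers above):
// setInterval(function () {
//   document.getElementById('vline').style.left =
//     progressPercent(Date.now(), dayStart.getTime(), dayEnd.getTime()) + '%';
// }, 60 * 1000);
```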
2018/03/20
511
1,382
<issue_start>username_0: i have 3 different PC models, I want to deploy a script to those 3 models. the script run wmic query and check the PC model and when it match to one of those it go to a path and install application. I am testing the below script, But could not make it work ``` @for /f "tokens=*" %%m in ('wmic computersystem get model /value ^| find "="') do set %%m @echo %Model% If "%Model%" == "HP EliteBook 840 G3" then Goto:840G3 If "%Model%" == "HP EliteBook 840 G4" then Goto:840G4 If "%Model%" == "HP EliteBook 850 G4" then Goto:850G4 Goto WrongModel :840G3 ping google.com Goto END :840G4 ping yahoo.com Goto END :850G4 timeout /t 100 Goto END :END ```
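For reference, a corrected sketch of the script (untested here, and the CR-stripping trick is my addition, not from the question): batch `if` has no `then` keyword, and the value `wmic` prints ends in a stray carriage return, so the model string should be cleaned before comparing.

```batch
@echo off
for /f "tokens=2 delims==" %%m in ('wmic computersystem get model /value ^| find "="') do set "Model=%%m"
:: wmic output carries a trailing carriage return; pass through for /f to strip it
for /f "delims=" %%m in ("%Model%") do set "Model=%%m"
echo Detected model: "%Model%"

if "%Model%"=="HP EliteBook 840 G3" goto 840G3
if "%Model%"=="HP EliteBook 840 G4" goto 840G4
if "%Model%"=="HP EliteBook 850 G4" goto 850G4
goto WrongModel

:840G3
ping google.com
goto END

:840G4
ping yahoo.com
goto END

:850G4
timeout /t 100
goto END

:WrongModel
echo Unsupported model

:END
```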
2018/03/20
572
1,712
<issue_start>username_0: I am trying to use Travis stages in a matrix but it seem that script used is the default one (`npm run test`) except for one. Here the travis.yml ```html if: tag IS blank git: depth: 1 sudo: false matrix: fast_finish: true cache: yarn language: node_js node_js: - "node" - "lts/*" env: - COMPONENT=@emmanuelgautier/lerna-example-hapi - COMPONENT=@emmanuelgautier/lerna-example-react-app - COMPONENT=@emmanuelgautier/lerna-example-validators stages: - test - build - name: publish if: branch = master jobs: include: - script: yarn bootstrap --scope=$COMPONENT && yarn lerna run --scope=$COMPONENT test:ci - stage: build script: yarn lerna --scope=$COMPONENT build - stage: publish env: - COMPONENT=all script: yarn publish - stage: deploy script: skip ``` Here an example issue : <https://travis-ci.org/emmanuelgautier/lerna-example/builds/355884540> Do you think the error is from the travis file or because this feature is in beta mode ?
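A likely cause (my reading of Travis's build-stages behavior, not confirmed in this thread): the top-level `env`/`node_js` matrix still expands into its own jobs, and those matrix jobs run the default `npm test` script; jobs listed under `jobs.include` do not inherit the `env` matrix, and an `include` entry without a `stage:` key joins the first stage. One way to make the per-component scripts stick is to give every included job an explicit stage and its own `env`, and drop the top-level `env` matrix:

```yaml
jobs:
  include:
    - stage: test
      env: COMPONENT=@emmanuelgautier/lerna-example-hapi
      script: yarn bootstrap --scope=$COMPONENT && yarn lerna run --scope=$COMPONENT test:ci
    - stage: build
      env: COMPONENT=@emmanuelgautier/lerna-example-hapi
      script: yarn lerna --scope=$COMPONENT build
    # ...one test/build pair per component...
    - stage: publish
      if: branch = master
      script: yarn publish
```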
2018/03/20
1,350
4,101
<issue_start>username_0: I have a sentence for which i need to identify the Person names alone: For example: ``` sentence = "<NAME> is an American business magnate and computer scientist who is the co-founder of Google, alongside <NAME>" ``` I have used the below code to identify the NERs. ``` from nltk import word_tokenize, pos_tag, ne_chunk print(ne_chunk(pos_tag(word_tokenize(sentence)))) ``` The output i received was: ``` (S (PERSON Larry/NNP) (ORGANIZATION Page/NNP) is/VBZ an/DT (GPE American/JJ) business/NN magnate/NN and/CC computer/NN scientist/NN who/WP is/VBZ the/DT co-founder/NN of/IN (GPE Google/NNP) ,/, alongside/RB (PERSON Sergey/NNP Brin/NNP)) ``` I want to extract all the person names, such as ``` <NAME> <NAME> ``` In order to achieve this, I refereed this [link](https://stackoverflow.com/questions/30664677/extract-list-of-persons-and-organizations-using-stanford-ner-tagger-in-nltk) and tried this. ``` from nltk.tag.stanford import StanfordNERTagger st = StanfordNERTagger('/usr/share/stanford-ner/classifiers/english.all.3class.distsim.crf.ser.gz','/usr/share/stanford-ner/stanford-ner.jar') ``` However i continue to get this error: ``` LookupError: Could not find stanford-ner.jar jar file at /usr/share/stanford-ner/stanford-ner.jar ``` Where can i download this file? As informed above, the result that i am expecting in the form of list or dictionary is : ``` <NAME> <NAME> ```<issue_comment>username_1: In Long ======= Please read these **carefully**: * <https://stackoverflow.com/a/49345866/610569> * [Extract list of Persons and Organizations using Stanford NER Tagger in NLTK](https://stackoverflow.com/questions/30664677/extract-list-of-persons-and-organizations-using-stanford-ner-tagger-in-nltk) Understand the solution, don't just copy and paste. 
--- TL;DR ===== In terminal: ``` pip install -U nltk wget http://nlp.stanford.edu/software/stanford-corenlp-full-2016-10-31.zip unzip stanford-corenlp-full-2016-10-31.zip && cd stanford-corenlp-full-2016-10-31 java -mx4g -cp "*" edu.stanford.nlp.pipeline.StanfordCoreNLPServer \ -preload tokenize,ssplit,pos,lemma,parse,depparse \ -status_port 9000 -port 9000 -timeout 15000 ``` In Python ``` from nltk.tag.stanford import CoreNLPNERTagger def get_continuous_chunks(tagged_sent): continuous_chunk = [] current_chunk = [] for token, tag in tagged_sent: if tag != "O": current_chunk.append((token, tag)) else: if current_chunk: # if the current chunk is not empty continuous_chunk.append(current_chunk) current_chunk = [] # Flush the final current_chunk into the continuous_chunk, if any. if current_chunk: continuous_chunk.append(current_chunk) return continuous_chunk stner = CoreNLPNERTagger() tagged_sent = stner.tag('<NAME> is studying at Stony Brook University in NY'.split()) named_entities = get_continuous_chunks(tagged_sent) named_entities_str_tag = [(" ".join([token for token, tag in ne]), ne[0][1]) for ne in named_entities] print(named_entities_str_tag) ``` [out]: ``` [('<NAME>', 'PERSON'), ('Stony Brook University', 'ORGANIZATION'), ('NY', 'LOCATION')] ``` You might find this help too: [Unpacking a list / tuple of pairs into two lists / tuples](https://stackoverflow.com/questions/7558908/unpacking-a-list-tuple-of-pairs-into-two-lists-tuples) Upvotes: 5 [selected_answer]<issue_comment>username_2: In the first place you need to download the jar files and the rest of the necessary files. Follow the link : <https://gist.github.com/troyane/c9355a3103ea08679baf>. Run the code to download the files(except the last few line). Once done with the downloading part you are now ready to do the extraction part. 
``` from nltk.tag.stanford import StanfordNERTagger st = StanfordNERTagger('/home/saheli/Downloads/my_project/stanford-ner/english.all.3class.distsim.crf.ser.gz', '/home/saheli/Downloads/my_project/stanford-ner/stanford-ner.jar') ``` Upvotes: 0
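Whichever tagger produces the `(token, tag)` pairs, collapsing consecutive `PERSON` tokens into full names is plain Python and needs no server. A sketch (function name is mine):

```python
def extract_person_names(tagged_sent):
    """Group runs of consecutive PERSON-tagged tokens into full names."""
    names, current = [], []
    for token, tag in tagged_sent:
        if tag == "PERSON":
            current.append(token)
        else:
            if current:
                names.append(" ".join(current))
                current = []
    if current:
        names.append(" ".join(current))
    return names
```

Run on tagger output for the example sentence, this yields the list of person names, provided the tagger labels every token of each name as `PERSON` (which NLTK's default chunker above did not do for "Page").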
2018/03/20
1,165
3,720
<issue_start>username_0: I want to Export data table to JSON file i tried these code lines ``` import { ErrorHandler } from '@angular/core'; import { HttpClient } from '@angular/common/http'; import { Component, Input } from '@angular/core'; import { Injectable } from '@angular/core'; import { Http, Response, Headers, RequestOptions } from '@angular/http'; // tslint:disable-next-line:import-blacklist import {Observable} from 'rxjs/Rx'; import 'rxjs/add/operator/map'; import 'rxjs/add/operator/catch'; @Injectable() export class CarService { handleError: any; constructor(private http: HttpClient) { } getOrderSummary(): Observable { // get users from api return this.http.get('assets/ordersummary.json')// , options) .map((response: Response) => { console.log('mock data' + response.json()); return response.json(); } ) .catch(this.handleError); } } ``` I put these in appcomponent.ts ``` exportToJSON() { this.carService.getOrderSummary(); } ``` and this line in appcomponent.html ``` Export to JSON ``` but it says that failed- no file after downloading the file , what is the reason? THANKS :))
2018/03/20
2,151
5,098
<issue_start>username_0: I want to download a number of .txt-files. I have a data frame'"New\_test in which the urls are under 'url' and the dest. names under 'code "New\_test.txt" ``` "url" "code" "1" "http://documents.worldbank.org/curated/en/704931468739539459/text/multi-page.txt" "704931468739539459.txt" "2" "http://documents.worldbank.org/curated/en/239491468743788559/text/multi-page.txt" "239491468743788559.txt" "3" "http://documents.worldbank.org/curated/en/489381468771867920/text/multi-page.txt" "489381468771867920.txt" "4" "http://documents.worldbank.org/curated/en/663271468778456388/text/multi-page.txt" "663271468778456388.txt" "5" "http://documents.worldbank.org/curated/en/330661468742793711/text/multi-page.txt" "330661468742793711.txt" "6" "http://documents.worldbank.org/curated/en/120441468766519490/text/multi-page.txt" "120441468766519490.txt" "7" "http://documents.worldbank.org/curated/en/901481468770727038/text/multi-page.txt" "901481468770727038.txt" "8" "http://documents.worldbank.org/curated/en/172351468740162422/text/multi-page.txt" "172351468740162422.txt" "9" "http://documents.worldbank.org/curated/en/980401468740176249/text/multi-page.txt" "980401468740176249.txt" "10" "http://documents.worldbank.org/curated/en/166921468759906515/text/multi-page.txt" "166921468759906515.txt" "11" "http://documents.worldbank.org/curated/en/681071468781809792/text/DRD169.txt" "681071468781809792.txt" "12" "http://documents.worldbank.org/curated/en/358291468739333041/text/multi-page.txt" "358291468739333041.txt" "13" "http://documents.worldbank.org/curated/en/716041468759870921/text/multi0page.txt" "716041468759870921.txt" "14" "http://documents.worldbank.org/curated/en/961101468763752879/text/34896.txt" "961101468763752879.txt"` ``` this is the script ``` rm(list=ls()) require(quanteda) library(stringr) workingdir <-setwd("~/Study/Master/Thesis/Mining/R/WorldBankDownl") test <- read.csv(paste0(workingdir,"/New_test.txt"), header = TRUE, stringsAsFactors = 
FALSE, sep="\t") #Loop through every url in test_df and download in target directory with name = code for (url in test) { print(head(url)) print(head(test$code)) destfile <- paste0('~/Study/Master/Thesis/Mining/R/WorldBankDownl/Sources/', test$code) download.file(test$url, destfile, method = "wget", quiet=TRUE) } ``` And this is the error I get ``` Error in download.file(test$url, destfile, method = "wget", quiet = TRUE) : 'url' must be a length-one character vector ```<issue_comment>username_1: In Long ======= Please read these **carefully**: * <https://stackoverflow.com/a/49345866/610569> * [Extract list of Persons and Organizations using Stanford NER Tagger in NLTK](https://stackoverflow.com/questions/30664677/extract-list-of-persons-and-organizations-using-stanford-ner-tagger-in-nltk) Understand the solution, don't just copy and paste.
if current_chunk: continuous_chunk.append(current_chunk) return continuous_chunk stner = CoreNLPNERTagger() tagged_sent = stner.tag('<NAME> is studying at Stony Brook University in NY'.split()) named_entities = get_continuous_chunks(tagged_sent) named_entities_str_tag = [(" ".join([token for token, tag in ne]), ne[0][1]) for ne in named_entities] print(named_entities_str_tag) ``` [out]: ``` [('<NAME>', 'PERSON'), ('Stony Brook University', 'ORGANIZATION'), ('NY', 'LOCATION')] ``` You might find this help too: [Unpacking a list / tuple of pairs into two lists / tuples](https://stackoverflow.com/questions/7558908/unpacking-a-list-tuple-of-pairs-into-two-lists-tuples) Upvotes: 5 [selected_answer]<issue_comment>username_2: In the first place you need to download the jar files and the rest of the necessary files. Follow the link : <https://gist.github.com/troyane/c9355a3103ea08679baf>. Run the code to download the files(except the last few line). Once done with the downloading part you are now ready to do the extraction part. ``` from nltk.tag.stanford import StanfordNERTagger st = StanfordNERTagger('/home/saheli/Downloads/my_project/stanford-ner/english.all.3class.distsim.crf.ser.gz', '/home/saheli/Downloads/my_project/stanford-ner/stanford-ner.jar') ``` Upvotes: 0
2018/03/20
409
1,654
<issue_start>username_0: If I use CoreLocation by itself with the highest accuracy possible, I get an update once a second or 1Hz. However, if I have an external bluetooth connected GPS unit with a 10Hz refresh rate, I still only get a response from CoreLocation at 1Hz. Is there something I'm doing wrong? Or will CoreLocation return a result at 1Hz no matter what? Would I be required to connect to the bluetooth GPS unit directly using a bluetooth framework to get the 10Hz refresh rate data?<issue_comment>username_1: As far as I know, CoreLocation does not work with any external accessories whatsoever. Thus the answer is yes, if you want to get location updates from external bluetooth GPS accessory , you'll need to work through bluetooth protocol and APIs... Upvotes: 0 <issue_comment>username_2: There are external GPS units, such as those from [Bad Elf](https://bad-elf.com), that will be used by iOS instead of the internal GPS receiver, but the abstraction layer of Core Location hides their details; you just get more accurate location. If you want faster updates then you will need to [integrate directly with the GPS](https://bad-elf.com/pages/sdk) receiver and not use Core Location. > > The GPS SDK provides developers direct access to configure Bad Elf GPS accessories for a wide variety of use cases. The common configurations are: > > > * Configuration of high resolution location data reporting (2-10Hz) mode > * Fix quality indicators: number of satellites, WAAS status, raw HDOP/PDOP/VDOP values > * UTC date and time > * Satellites in view with elevation, azimuth, and raw SNR values > > > Upvotes: 2 [selected_answer]
2018/03/20
441
1,514
<issue_start>username_0: I have the following table created with fixed column width as follows, ``` Table headerTable = new Table(new float[]{5,5,5}); headerTable.setWidthPercent(100); headerTable.addCell(new Cell().add(new Paragraph("Student Name : <NAME>(xxxx-xxx-xxx-xxx)")).setFontSize(10).setTextAlignment(TextAlignment.LEFT)); headerTable.addCell(new Cell().add(new Paragraph("Admission Date : 2012-05-01")).setFontSize(10).setTextAlignment(TextAlignment.CENTER)); headerTable.addCell(new Cell().add(new Paragraph("Current Standard : Eigth Standard - 'B' Section")).setFontSize(10).setTextAlignment(TextAlignment.RIGHT)); ``` But when I see the output format in my PDF file , the column width is uneven. [![enter image description here](https://i.stack.imgur.com/KRxNr.png)](https://i.stack.imgur.com/KRxNr.png) Am I missing something in that code snippet ?<issue_comment>username_1: Please upgrade to the latest version of `iText` - 7.1.x line - and use the code below to create a table with columns of even width: ``` Table headerTable = new Table(UnitValue.createPercentArray(new float[]{5,5,5})); headerTable.setWidth(UnitValue.createPercentValue(100)); ``` Upvotes: 4 [selected_answer]<issue_comment>username_2: To fix column width we can use `setFixedLayout()`, in iText7. And it worked for me ``` Table content = new Table(UnitValue.createPercentArray(new float[]{3,5,10})); content.setWidth(UnitValue.createPercentValue(100)); content.setFixedLayout(); ``` Upvotes: 4
2018/03/20
238
1,000
<issue_start>username_0: My team has an app that was developed with react-native. We are now contemplating creating a new version with native iOS technology and language (Swift). I want to make sure before we start developing, that Apple will accept an app that is completely new in every way, but that has the same bundle Id. Is this the case?<issue_comment>username_1: > > Is this the case? > > > Yes. Apple doesn’t know or care how the source code changes. Language, frameworks, architectures, interface, functionality — obviously all such choices must be free to evolve. Upvotes: 0 <issue_comment>username_2: We have done this with our app: we simply replaced the old app with the new one, and Apple accepted it. Upvotes: 3 [selected_answer]<issue_comment>username_3: No problem at all. However, when you submit that app for review, you should make sure to set the checkbox indicating that this version contains major changes. Otherwise you run the risk of Apple rejecting that version. Upvotes: 1
2018/03/20
830
2,298
<issue_start>username_0: I am trying to count the number of duplicates that appears on a table for example: ``` First| Last | ADDR1 | City | ST | Zip -----+-------+-----------------+-----------+----+------ John | Smith | 1234 Fake St. | Hollywood | CA | 12345 John | Smith | 1234 Fake St. | Hollywood | CA | 12345 John | Smith | 1234 Fake St. | Hollywood | CA | 12345 John | Smith | 1234 Fake St. | Hollywood | CA | 12345 Jane | Smith | 1111 Junkertown | Phoenix | AR | 22222 Jane | Smith | 1111 Junkertown | Phoenix | AR | 22222 Jane | Smith | 1111 Junkertown | Phoenix | AR | 22222 ``` Here is my select statement however it is not liking my where statement. I only want to return rows with counts > 1 ``` select distinct t.first_name, t.last_name, t.addr_line_1, t.city, t.state_cd, t.zip, count(*) as numberofdupes from name_addr t where numberofdupes > 1 group by t.first_name, t.last_name, t.addr_line_1, t.city, t.state_cd, t.zip ``` If anyone can point me in the right direction. Please and thank you.<issue_comment>username_1: ``` select distinct t.first_name, t.last_name, t.addr_line_1, t.city, t.state_cd, t.zip, count(*) as numberofdupes from name_addr t group by t.first_name, t.last_name, t.addr_line_1, t.city, t.state_cd, t.zip HAVING COUNT(*) > 1 ``` Upvotes: 1 <issue_comment>username_2: If you want to filter by the results of the grouping, then you need to put your filters in the `HAVING` (after the `GROUP BY`) instead of the `WHERE`. ``` select t.first_name, t.last_name, t.addr_line_1, t.city, t.state_cd, t.zip, count(*) as numberofdupes from name_addr t group by t.first_name, t.last_name, t.addr_line_1, t.city, t.state_cd, t.zip HAVING count(*) > 1 ``` You can also remove the `DISTINCT` if you are already grouping by all the columns. Upvotes: 2 <issue_comment>username_3: Skip the `WHERE` clause, use `HAVING` for aggregate function conditions. No need to do `SELECT DISTINCT`, the `GROUP BY` returns no duplicate rows. 
``` select t.first_name, t.last_name, t.addr_line_1, t.city, t.state_cd, t.zip, count(*) as numberofdupes from name_addr t group by t.first_name, t.last_name, t.addr_line_1, t.city, t.state_cd, t.zip having count(*) > 1 ``` Upvotes: 4 [selected_answer]
2018/03/20
710
1,905
<issue_start>username_0: I have a dataframe(df1) as below: ``` clust longitude latitude 1 77.62279999 12.95248389 1 77.62517676 12.95027966 2 77.62753442 12.93745478 2 77.62753442 12.93745478 3 77.62217671 12.93353553 3 77.62217671 12.93353553 ``` I have a another dataframe with set of longitude/latitude(df2). I want to loop over all these points and for each point iterate over all points of df1 and assign the cluster value based on minimum distance. I can do a for looping and compute distance and but how to assign the cluster with minimum distance. Is looping the best method for such problem or there can be better method like knn or Kd-tree?<issue_comment>username_1: ``` select distinct t.first_name, t.last_name, t.addr_line_1, t.city, t.state_cd, t.zip, count(*) as numberofdupes from name_addr t group by t.first_name, t.last_name, t.addr_line_1, t.city, t.state_cd, t.zip HAVING COUNT(*) > 1 ``` Upvotes: 1 <issue_comment>username_2: If you want to filter by the results of the grouping, then you need to put your filters in the `HAVING` (after the `GROUP BY`) instead of the `WHERE`. ``` select t.first_name, t.last_name, t.addr_line_1, t.city, t.state_cd, t.zip, count(*) as numberofdupes from name_addr t group by t.first_name, t.last_name, t.addr_line_1, t.city, t.state_cd, t.zip HAVING count(*) > 1 ``` You can also remove the `DISTINCT` if you are already grouping by all the columns. Upvotes: 2 <issue_comment>username_3: Skip the `WHERE` clause, use `HAVING` for aggregate function conditions. No need to do `SELECT DISTINCT`, the `GROUP BY` returns no duplicate rows. ``` select t.first_name, t.last_name, t.addr_line_1, t.city, t.state_cd, t.zip, count(*) as numberofdupes from name_addr t group by t.first_name, t.last_name, t.addr_line_1, t.city, t.state_cd, t.zip having count(*) > 1 ``` Upvotes: 4 [selected_answer]
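A brute-force sketch of the minimum-distance assignment asked about above, in plain Python with made-up df2 points. Euclidean distance on lon/lat is only an approximation (use haversine for geographic accuracy), and for large tables a KD-tree such as scipy.spatial.cKDTree avoids the O(n*m) loop:

```python
import math

# df1: labeled points (clust, longitude, latitude), values taken from the question
df1 = [
    (1, 77.62279999, 12.95248389),
    (2, 77.62753442, 12.93745478),
    (3, 77.62217671, 12.93353553),
]

# df2: unlabeled points to assign (made up for this sketch)
df2 = [(77.6251, 12.9500), (77.6222, 12.9336)]

def nearest_cluster(lon, lat, labeled):
    """Cluster id of the labeled point with the smallest Euclidean distance."""
    best = min(labeled, key=lambda p: math.hypot(p[1] - lon, p[2] - lat))
    return best[0]

assigned = [nearest_cluster(lon, lat, df1) for lon, lat in df2]
print(assigned)  # [1, 3]
```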
2018/03/20
538
1,552
<issue_start>username_0: My code in HTML ``` ``` My component ``` this.today = Date.now(); this.title = 'Añadir'; this.alumno = new Alumno('','','','','','',null,'','','Alta'); ``` In the array of alumno the value of fechainscripcion is null. I want to see value "{{today | date:'dd/MM/yyyy'}}" in the input.<issue_comment>username_1: ``` select distinct t.first_name, t.last_name, t.addr_line_1, t.city, t.state_cd, t.zip, count(*) as numberofdupes from name_addr t group by t.first_name, t.last_name, t.addr_line_1, t.city, t.state_cd, t.zip HAVING COUNT(*) > 1 ``` Upvotes: 1 <issue_comment>username_2: If you want to filter by the results of the grouping, then you need to put your filters in the `HAVING` (after the `GROUP BY`) instead of the `WHERE`. ``` select t.first_name, t.last_name, t.addr_line_1, t.city, t.state_cd, t.zip, count(*) as numberofdupes from name_addr t group by t.first_name, t.last_name, t.addr_line_1, t.city, t.state_cd, t.zip HAVING count(*) > 1 ``` You can also remove the `DISTINCT` if you are already grouping by all the columns. Upvotes: 2 <issue_comment>username_3: Skip the `WHERE` clause, use `HAVING` for aggregate function conditions. No need to do `SELECT DISTINCT`, the `GROUP BY` returns no duplicate rows. ``` select t.first_name, t.last_name, t.addr_line_1, t.city, t.state_cd, t.zip, count(*) as numberofdupes from name_addr t group by t.first_name, t.last_name, t.addr_line_1, t.city, t.state_cd, t.zip having count(*) > 1 ``` Upvotes: 4 [selected_answer]
2018/03/20
699
2,339
<issue_start>username_0: I am running a code that navigates through folders and finally arrives at a destination and downloads a file by just clicking on it. For example, to find a folder I am using: ``` find_element_by_link_text("Pricing and Catalogs") ``` The problem is that the folder sometimes isn't written exactly like that. For example, it has double spaces. So my question is: is it possible to find an element by text that contains certain words? For example something like this (I know it's not correct, I just want you to understand me): ``` find_element_by_link_text(containing "Pricing" and "Catalogs") ``` I searched for the answer but couldn't find what I was looking for. If this is duplicated I apologize and ask to be pointed in the right direction. Thanks in advance! EDIT: Using `find_element_by_partial_link_text` won't do it since there are other folders with either **Pricing** or **Catalogs** in their names. ``` [Pricing and Catalogues](javascript:_spNavigateHierarchy(this,'','TAKES YOU TO SOME PAGE',false,'FolderNode', '') "Pricing and Catalogues") | ```<issue_comment>username_1: You could try XPath: Example: ``` find_element_by_xpath("//a[contains(text(), 'Pricing') and contains(text(), 'Catalogues')]") ``` **EDIT** You've misspelled link text. Code has 'Catalogs'. But HTML has 'Catalogues'. Corrected xpath. 
Upvotes: 2 [selected_answer]<issue_comment>username_2: Also, replacing text with node sometimes works better: Example of usage: ``` find_element_by_xpath("//a[contains(node(), 'Pricing') and contains(node(), 'Catalogs')]") ``` Upvotes: 1 <issue_comment>username_3: To click on the intendeed element containing the texts **Pricing** and **Catalogs** you can use either of the following lines of code : * `LINK_TEXT` : ``` WebDriverWait(driver, 30).until(EC.element_to_be_clickable((By.LINK_TEXT, "Pricing and Catalogues"))).click() ``` * `XPATH` : ``` WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, "//a[contains(.,'Pricing') and contains(.,'Catalogs')]"))).click() ``` * `CSS_SELECTOR` : ``` WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.CSS_SELECTOR, "a.ctl00_PlaceHolderLeftNavBar_ctl02_WebTreeView_0.ms-tv-item.ctl00_PlaceHolderLeftNavBar_ctl02_WebTreeView_1[title='Pricing and Catalogues']"))).click() ``` Upvotes: 0
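The "contains both words" rule from the XPath answers can be sanity-checked without a browser; a pure-Python sketch with invented link texts showing why a single partial match (as in `find_element_by_partial_link_text`) is not enough:

```python
# Mimic the XPath condition contains(., 'Pricing') and contains(., 'Catalogs')
link_texts = [
    "Pricing Reports",          # matches only one word
    "Product Catalogs",         # matches only one word
    "Pricing  and  Catalogs",   # double spaces, as described in the question
]

def matches(text, words=("Pricing", "Catalogs")):
    # partial_link_text takes a single substring; requiring *all* words
    # is what the and-ed contains() predicates express
    return all(w in text for w in words)

hits = [t for t in link_texts if matches(t)]
print(hits)  # only the entry containing both words
```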
2018/03/20
1,061
3,381
<issue_start>username_0: In C# I can do the following: ``` public delegate void Callback(); void f1() { } void f2() { } Callback c = f1; c+=f2; ``` And then when I call `c()` I will `f1` and `f2` get called. How can I achieve the same result in javascript? I mean I can do in javascript the following: ``` var c; function f1() { } function f2() { } c = f1; ``` But I can not add `f2` to `c`.<issue_comment>username_1: ``` var t = function(callback){ // do stuff callback(); } function callbackFuction(){ // this is your callback fucntion } function diffrentFuntion(){ t(callbackFuction); } ``` hope this anwsers your question Upvotes: 1 <issue_comment>username_2: It's called a multicast delegate and I'm not sure there's a direct way to represent this in JavaScript. But I found this [article on implementing multicast delegates in JavaScript](http://www.delphicdigital.com/blog/experiments-in-javascript-multicast-delegate). Upvotes: -1 <issue_comment>username_3: No, you can't. JavaScript does not have the concept of C# delegates. Using jQuery you can achieve something similar, but not the same: ```js var c = $({}); function a() { console.log('a called!'); } function b() { console.log('b called!'); } c.on('fire', a); c.on('fire', b); c.trigger('fire'); // calls both event handlers ``` If it suits you ... 
Or you can implement it yourself, it is easy: ```js function Delegate() { this.callbacks = []; } Delegate.prototype.add = function(fn) { if (typeof fn === 'function' && this.callbacks.indexOf(fn) === -1) { this.callbacks.push(fn); } }; Delegate.prototype.remove = function(fn) { var index = this.callbacks.indexOf(fn); this.callbacks.splice(index, 1); }; Delegate.prototype.trigger = function() { var args = arguments; this.callbacks.forEach(function(fn) { fn.apply(null, args); }); }; // so var d = new Delegate(); var f1 = function(arg) { console.log('f1', arg); }; var f2 = function(arg) { console.log('f2', arg); }; var f3 = function(arg) { console.log('f3', arg); }; d.add(f1); d.add(f2); d.add(f3); d.trigger('param'); // executes the three callbacks passing them the parameter // remove one of the callbacks; d.remove(f2); // add a repeated callback d.add(f1); d.trigger('again'); // only f1 and f3 are fired. ``` this Delegate class acts as similarly to C# delegates as it is possible Upvotes: 1 <issue_comment>username_4: Here's another example. This may or may not meet your needs, but maybe you can modify it to do what you want. Basically, you're creating an object that holds an array of function references. When you call `.apply()` on the object, it will call all the functions in the order you added them. ```js function delegate(fn){ this.callbacks = []; this.add(fn); } delegate.prototype.add = function(fn){ if(typeof fn === "function"){ this.callbacks.push(fn); } } delegate.prototype.apply = function(){ for(var i in this.callbacks){ this.callbacks[i].apply(); } } // example // define callbacks function cb1() { console.log('callback 1'); } function cb2() { console.log('callback 2'); } // create delegate with first callback var callback = new delegate(cb1); // add second callback callback.add(cb2); // call delegates callback.apply(); ``` Upvotes: -1 [selected_answer]
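For comparison, the same add/remove/trigger pattern as the JavaScript `Delegate` above, sketched in Python (all names are my own):

```python
class Delegate:
    """Minimal multicast-delegate analogue: stores callbacks, fires them in order."""

    def __init__(self):
        self._callbacks = []

    def add(self, fn):
        if callable(fn) and fn not in self._callbacks:
            self._callbacks.append(fn)
        return self

    def remove(self, fn):
        if fn in self._callbacks:
            self._callbacks.remove(fn)
        return self

    def __call__(self, *args):
        for fn in list(self._callbacks):  # copy, so callbacks may modify the list
            fn(*args)

calls = []

def f1(x):
    calls.append(("f1", x))

def f2(x):
    calls.append(("f2", x))

d = Delegate()
d.add(f1).add(f2)
d("param")    # fires f1 and f2
d.remove(f2)
d("again")    # fires only f1
print(calls)  # [('f1', 'param'), ('f2', 'param'), ('f1', 'again')]
```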
2018/03/20
539
2,274
<issue_start>username_0: I have a public dictionary within a public class as follows: ``` namespace ApiAssembly { public static class TypeStore { /// /// Initializes static members of the class. /// static TypeStore() { Store = new Dictionary<string, Type>(); } /// /// Gets the store. /// public static Dictionary<string, Type> Store { get; } public void AddTypes() { // This should be allowed TypeStore.Store.Add("object", typeof(object)); } } } ``` I want to prevent adding new elements to this dictionary except internally (managed through the API). What is the best way to achieve this? ``` namespace ClientAssembly { using ApiAssembly; public class Client { public void AddTypes() { // How to prevent this call? TypeStore.Store.Add("object", typeof(object)); } } } ``` The content of the dictionary must be publicly accessible, so just flipping the access modifier is not an option<issue_comment>username_1: Expose it as: ``` IReadOnlyDictionary<string, Type> dictionary = new Dictionary<string, Type>(); ``` Or additionally use the `ReadOnlyDictionary` wrapper to prevent casting back to `Dictionary`. Full example: ``` public static class TypeStore { private static Dictionary<string, Type> store; private static ReadOnlyDictionary<string, Type> storeReadOnly; /// /// Initializes static members of the class. /// static TypeStore() { store = new Dictionary<string, Type>(); storeReadOnly = new ReadOnlyDictionary<string, Type>(store); } /// /// Gets the store. /// public static IReadOnlyDictionary<string, Type> Store => storeReadOnly; public static void AddTypes() { // This should be allowed TypeStore.store.Add("object", typeof(object)); } } ``` Upvotes: 4 [selected_answer]<issue_comment>username_2: You should separate the actual storage dictionary and the one you use for the outside world. An easy way would be: ``` private static Dictionary<string, Type> Storage { get; } = new Dictionary<string, Type>(); public static ReadOnlyDictionary<string, Type> Store => new ReadOnlyDictionary<string, Type>(Storage); ``` Where `Storage` is the actual backing dictionary you can edit.
Or even better, expose the methods you want to have available through your class (it serves as a proxy), where you never grant external classes access to the dictionary itself. Upvotes: 3
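As a side-by-side, Python ships the same idea (a private mutable store behind a public, live read-only view) as `types.MappingProxyType`; a small sketch for comparison, not C# code:

```python
from types import MappingProxyType

class TypeStore:
    _store = {}                       # private, mutable backing dict
    store = MappingProxyType(_store)  # public, live read-only view of it

    @classmethod
    def add_type(cls, name, tp):
        cls._store[name] = tp         # mutation only through the class API

TypeStore.add_type("object", object)
print(TypeStore.store["object"])      # reads go through the proxy

blocked = False
try:
    TypeStore.store["int"] = int      # outside writes are rejected
except TypeError:
    blocked = True
print("write blocked:", blocked)      # write blocked: True
```

Because the proxy is a live view, additions made through `add_type` are immediately visible to readers, matching the `ReadOnlyDictionary` wrapper semantics above.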
2018/03/20
391
1,550
<issue_start>username_0: Hi I have a date field and few of the records are null. my date field is called LSF.EarliestFrom when I use this code ``` isnull(LSF.EarliestFrom,'') as LSFDateAppliesFrom ``` the output comes as 1900-01-01. Any help will be appreciated.<issue_comment>username_1: Expose it as: ``` IReadOnlyDictionary dictionary = new Dictionary(); ``` Or additionally use `ReadOnlyDictionary` wrapper to prevent casting back to `Dictionary`. Full example: ``` public static class TypeStore { private static Dictionary store; private static ReadOnlyDictionary storeReadOnly ; /// /// Initializes static members of the class. /// static TypeStore() { store = new Dictionary(); storeReadOnly = new ReadOnlyDictionary(store); } /// /// Gets the store. /// public static IReadOnlyDictionary Store => storeReadOnly; public static void AddTypes() { // This should be allowed TypeStore.store.Add("object", typeof(object)); } } ``` Upvotes: 4 [selected_answer]<issue_comment>username_2: You should separate the actual storage dictionary and the one you use for the outside world. An easy way would be: ``` private static Dictionary Storage { get; } = new Dictionary(); public static ReadOnlyDictionary Store => new ReadOnlyDictionary(Storage); ``` Where `Storage` is the actual backing dictionary you can edit. Or even better, expose the methods you want to have available through your class (it serves as a proxy), where you never grant external classes access to the dictionary itself. Upvotes: 3
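The usual explanation is that T-SQL's `ISNULL(LSF.EarliestFrom, '')` keeps the datetime type of its first argument, so `''` is cast to the datetime 1900-01-01; converting to varchar first (e.g. `ISNULL(CONVERT(varchar(10), LSF.EarliestFrom, 120), '')`) avoids that. A sketch of the idea with Python's built-in sqlite3, where dates are already text:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE lsf (id INTEGER, earliest_from TEXT)")
conn.executemany(
    "INSERT INTO lsf VALUES (?, ?)",
    [(1, "2018-03-20"), (2, None)],
)

# COALESCE on a text column substitutes '' with no datetime coercion,
# so NULL rows come back as '' instead of 1900-01-01.
rows = conn.execute(
    "SELECT id, COALESCE(earliest_from, '') FROM lsf ORDER BY id"
).fetchall()
print(rows)  # [(1, '2018-03-20'), (2, '')]
```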
2018/03/20
825
2,616
<issue_start>username_0: ``` select m.messageid, m.message, m.orig, m.recip, d.company as orig_company, d.department as orig_department, d.office as orig_office, d.country as orig_country from department d join messages m on m.originator = d.address select m.messageid, m.message, m.orig, m.recip, d.company as recip_company, d.department as recip_department, d.office as recip_office, d.country as recip_country from department d join messages m on m.recip = d.address ``` I would like to make one selection that will have the information: > > messageid, message, orig, recip, orig\_company, orig\_office, > orig\_country, orig\_office, recip\_company, recip\_office, recip\_country, > recip\_office > > > How can be done? Thank you in advance.<issue_comment>username_1: You can use `UNION ALL`. The important part is that both results must have the same amount of columns and data types. ``` select m.messageid, m.message, m.orig, m.recip, d.company as orig_company, d.department as orig_department, d.office as orig_office, d.country as orig_country, CONVERT(VARCHAR(200), NULL) AS recip_company, CONVERT(VARCHAR(200), NULL) AS recip_department, CONVERT(VARCHAR(200), NULL) AS recip_office, CONVERT(VARCHAR(200), NULL) AS recip_country from department d join messages m on m.originator = d.address UNION ALL select m.messageid, m.message, m.orig, m.recip, NULL as orig_company, NULL as orig_department, NULL as orig_office, NULL as orig_country d.company as recip_company, d.department as recip_department, d.office as recip_office, d.country as recip_country from department d join messages m on m.recip = d.address ``` Please check if the data type `VARCHAR(200)` is OK for your case. The first query result from the `UNION` will datermine the resulting data type, that's why you need to explicitly say which type your hard-coded `NULL` will have. Upvotes: 3 [selected_answer]<issue_comment>username_2: `LEFT JOIN` department table twice. 
First time for *orig* values, second time for *recip* values: ``` select m.messageid, m.message, m.orig, m.recip, d1.company as orig_company, d1.department as orig_department, d1.office as orig_office, d1.country as orig_country, d2.company as recip_company, d2.department as recip_department, d2.office as recip_office, d2.country as recip_country from messages m left join department d1 on m.originator = d1.address left join department d2 on m.recip = d2.address ``` Upvotes: 2
2018/03/20
1,050
3,717
<issue_start>username_0: Spring Noob: OK. I start with a STS Spring Starter Project / Maven / Java 8 / Spring Boot 2.0, and select the Web and Actuator dependencies. It builds and runs fine, and reponds to <http://localhost:8080/actuator/health>. I add an "Endpoint" to the main application class, so that it looks like this. ``` package com.thumbsup; import org.springframework.boot.SpringApplication; import org.springframework.boot.actuate.endpoint.annotation.Endpoint; import org.springframework.boot.actuate.endpoint.annotation.ReadOperation; import org.springframework.boot.actuate.endpoint.annotation.Selector; import org.springframework.boot.autoconfigure.SpringBootApplication; @SpringBootApplication public class YourStash11Application { public static void main(String[] args) { SpringApplication.run(YourStash11Application.class, args); } @Endpoint(id="mypoint") public class CustomPoint { @ReadOperation public String getHello(){ return "Hello" ; } } } ``` I try to enable everything in application.properties: ``` management.endpoints.enabled-by-default=true management.endpoint.conditions.enabled=true management.endpoint.mypoint.enabled=true management.endpoints.web.exposure.include=* ``` But when it builds, there's no reference to mapping /actuator/mypoint, and <http://localhost:8080/actuator/mypoint> and <http://localhost:8080/application/mypoint> both return 404 errors. What am I missing? Thanks!<issue_comment>username_1: The initial partly problem was that the code did not add an endpoint with no "selector" [Source](https://spring.io/blog/2017/08/22/introducing-actuator-endpoints-in-spring-boot-2-0) ``` @Endpoint(id = "loggers") @Component public class LoggersEndpoint { @ReadOperation public Map loggers() { ... } @ReadOperation public LoggerLevels loggerLevels(@Selector String name) { ... } @WriteOperation public void configureLogLevel(@Selector String name, LogLevel configuredLevel) { ... 
} } ``` > > This endpoint exposes three operations: > > > GET on /application/loggers: the configuration of all loggers (as it > has no "selector" parameter): > > > GET on /application/loggers/{name}: the configuration of a named > logger (using the name @Selector). > > > ... > > > The edited question led to the conclusion [that it should be a bean](https://docs.spring.io/spring-boot/docs/current/reference/html/production-ready-endpoints.html#production-ready-endpoints-custom) > > If you add a @Bean annotated with @Endpoint, any methods annotated > with @ReadOperation, @WriteOperation, or @DeleteOperation are > automatically exposed over JMX and, in a web application, over HTTP as > well. Endpoints can be exposed over HTTP using Jersey, Spring MVC, or > Spring WebFlux. > > > Or see [comments in the announcement of the feature](https://spring.io/blog/2017/08/22/introducing-actuator-endpoints-in-spring-boot-2-0#comment-3481606304) Upvotes: 0 <issue_comment>username_2: OK, solved: ``` @Endpoint(id="mypoint") @Component public class myPointEndPoint { @ReadOperation public String mypoint(){ return "Hello" ; } } ``` What was missing was the "@Component" annotation. But, where is this in the docs? Upvotes: 4 <issue_comment>username_3: Maybe this will help somebody. Looks like `@ReadOperation` doesn't support return type `Void`. You should return at least an empty string on your `invoke` method. spring-boot 2.0.3.RELEASE ``` @Component @Endpoint(id = "heartbeat") public class HeartbeatEndpoint { @ReadOperation public String invoke() { return ""; } } ``` Upvotes: 0
2018/03/20
424
1,407
<issue_start>username_0: I am using Vue.js 2.0 and I am trying to emit an event from a `child component` to a `parent component`, but it's not working. You can see my code below:

**child component:**

```
<template>
  <button @click="confirmSendMessage">Send</button>
</template>

<script>
export default {
  methods: {
    confirmSendMessage () {
      this.$emit('confirmed')
    }
  }
}
</script>
```

**parent component:**

```
import ConfirmMessage from './ConfirmMessage'

export default {
  events: {
    confirmed() {
      console.log('confirmed')
    }
  },
  components: {
    ConfirmMessage
  }
}
```

When I click on the button, nothing appears on the Chrome console. I don't know why. Can anybody help me? I am new to Vue JS.<issue_comment>username_1: You forgot to listen for the emitted event. Use [v-on](https://v2.vuejs.org/v2/guide/events.html#Listening-to-Events) in the parent template to listen to the event (with `confirmed` defined under `methods` rather than `events`):

```
<confirm-message v-on:confirmed="confirmed"></confirm-message>
```

Upvotes: 4 [selected_answer]<issue_comment>username_2: You have to listen for the events. If you look at the [Emit Documentation](https://v2.vuejs.org/v2/api/#vm-emit), it expects as its first argument the name of the event as a string, and then the values (if you have any) you want to pass to the listener. So it will be:

```
<child-component v-on:nameOfEvent="functionToExecuteWhenTheEventIsEmitted"></child-component>
```

the method will be:

```
functionToExecuteWhenTheEventIsEmitted(someValue) {
  // do whatever with someValue
}
```

and on the child:

```
this.$emit('nameOfEvent', someValue)
```

**In your case** you aren't passing values, so `this.$emit('confirmed')` will be enough. Upvotes: 2
2018/03/20
598
2,127
<issue_start>username_0: I'm scanning a CSV file with the following code:

```
public void scanFile() {
    boolean isNumber = false;
    String test;
    try {
        sc = new Scanner(Gui.selectedFile);
        sc.useDelimiter("[;\"]");
        while (sc.hasNext() && isNumber == false) {
            test = sc.next();
            if (test.equals("{9}")) {
                System.out.println("success");
            }
            System.out.println();
            if (sc.hasNextInt()) {
                isNumber = true;
            }
        }
        sc.close();
    } catch (Exception e) {
        System.out.println("error");
    }
}
```

Now I need a way to create a String for EACH entry in the CSV. There are around 60 entries in my CSV. I need to use the read data further in the program.<issue_comment>username_1: You can do it the following way with just 3 lines of code:

```
List<List<String>> data = new ArrayList<>();
List<String> lines = Files.lines(Paths.get(file)).collect(Collectors.toList());
lines.stream().forEach(s -> data.add(Arrays.asList(s.split(","))));
```

Here's what it does. The second line above reads all the lines from the CSV file. In the third line, we stream through all the lines (one line at a time) and split the line with a comma as a separator. This split gives us an array of cells in that line. We just convert the array to a list and add it to our data structure `data`, which is a list of lists. Later, say, if you want to access the value of the 7th cell in the 4th row of the CSV, all you have to do is the following: `String cell = data.get(3).get(6);` Upvotes: 1 <issue_comment>username_2:

```
public ArrayList<String> scanFile() {
    boolean isNumber = false;
    String test;
    ArrayList<String> output = new ArrayList<>();
    try {
        sc = new Scanner(Gui.selectedFile);
        sc.useDelimiter("[;\"]");
        while (sc.hasNext() && isNumber == false) {
            test = sc.next();
            output.add(test);
            if (test.equals("{9}")) {
                System.out.println("success");
            }
            System.out.println();
            if (sc.hasNextInt()) {
                isNumber = true;
            }
        }
        sc.close();
    } catch (Exception e) {
        System.out.println("error");
    }
    return output;
}
```

Upvotes: 0
2018/03/20
657
2,146
<issue_start>username_0: It might be a stupid question, but I can't get my wrap my head around it. How to execute a jar created with `maven package` that brings quite a few dependencies with it (the resulting jar is 100MB)? I don't have a mainClass but I wish to run it depending on the modules I want to execute. Things I tried: 1. ``` scala my_app_2.0.1-jar-with-dependencies.jar App1 java.lang.NullPointerException at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:348) at scala.reflect.internal.util.ScalaClassLoader.$anonfun$tryClass$1(ScalaClassLoader.scala:44) at scala.util.control.Exception$Catch.$anonfun$opt$1(Exception.scala:242) at scala.util.control.Exception$Catch.apply(Exception.scala:224) at scala.util.control.Exception$Catch.opt(Exception.scala:242) at scala.reflect.internal.util.ScalaClassLoader.tryClass(ScalaClassLoader.scala:44) ``` 2. ``` scala -classpath my_app_2.0.1-jar-with-dependencies.jar package.path.App1 java.lang.NoSuchMethodError: scala.Predef$.refArrayOps([Ljava/lang/Object;)Lscala/collection/mutable/ArrayOps; at org.rogach.scallop.ScallopConf.performOptionNameGuessing(ScallopConf.scala:17) at org.rogach.scallop.ScallopConfBase.verifyConf(ScallopConfBase.scala:686) at org.rogach.scallop.ScallopConfBase.verify(ScallopConfBase.scala:698) at iit.cnr.it.socialpipeline.utils.ArgConf.(ArgConf.scala:19) ... ```<issue_comment>username_1: ANSWERING TO MYSELF, for future reference. Well since you created the jar with `maven` that works with java and since you used `scala-plugin`, and this is inside the dependencies you just need to use the command: `java -classpath my_app_2.0.1-jar-with-dependencies.jar package.path.App1` easy as that, you noob - no offense (well, I am answering to myself. So I think it's pretty fair to be rude to myself :D ). 
Upvotes: 2 [selected_answer]<issue_comment>username_2: Since it can be run as a normal Java program, you can use the [Exec Maven Plugin](http://www.mojohaus.org/exec-maven-plugin/usage.html). I can confirm that it is viable. Upvotes: 0
2018/03/20
1,102
4,590
<issue_start>username_0: Is it possible to have a 2-way data flow using Subjects in a service? Suppose for example that I want some component to retrieve information and then post it through the service Subject for another component to consume. The consuming component then makes some changes to this information and then re-posts it so that the original component can retrieve the changes. Is this possible using the Observer pattern? Also, if I wanted to watch this data for changes (let's say that the data came in through an array), would I have to use a [proxy](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Proxy) to accomplish this?<issue_comment>username_1: That's totally possible: 1. Figure out what type of subject you need: a normal Subject, BehaviorSubject or ReplaySubject. Each has its own use case; I'd recommend taking a look at this question for a clear and concise explanation: [Angular 2 special Observables (Subject / Behaviour subject / ReplaySubject)](https://stackoverflow.com/questions/43118769/angular-2-special-observables-subject-behaviour-subject-replaysubject). 2. Declare your subject in a service. 3. Call `next()` with a value in your main component, which your second component will listen to. 4. You can then subscribe to that subject from your secondary component and modify it. 5. From here you can either call `next()` on the same subject using the modified data, or create a separate subject in your service and pass your modified data into that. In either case you can subscribe in your main component to get that data. I'd recommend the latter option though, as I'm assuming that if you modify the data you'll change the object, and it's good to strictly type your subjects to catch errors. Hope this helps. Upvotes: 1 <issue_comment>username_2: When passing data between components, I find the RxJS `BehaviorSubject` very useful.
You can also use a regular RxJS `Subject` for sharing data via a service, but here's why I prefer a BehaviorSubject. 1. It will always return the current value on subscription - there is no need to call onNext(). 2. It has a getValue() function to extract the last value as raw data. 3. It ensures that the component always receives the most recent data. 4. You can get an observable from the behavior subject using its `asObservable()` method. 5. [Refer to this for more](https://stackoverflow.com/questions/43348463/what-is-the-difference-between-subject-and-behaviorsubject) **Example** In a service, we will create a private BehaviorSubject that will hold the current value of the message. We define a currentMessage variable to handle this data stream as an observable that will be used by other components. Lastly, we create the function that calls next on the `BehaviorSubject` to change its value. The parent, child, and sibling components all receive the same treatment. We inject the SharedService in the components, then subscribe to the currentMessage observable and set its value equal to the message variable. Now if we create a function in any one of these components that changes the value of the message, the updated value is automatically broadcast to all other components.
**shared.service.ts**

```
import { Injectable } from '@angular/core';
import { BehaviorSubject } from 'rxjs';

@Injectable()
export class SharedService {

  private messageSource = new BehaviorSubject<string>("default message");
  currentMessage = this.messageSource.asObservable();

  constructor() { }

  changeMessage(message: string) {
    this.messageSource.next(message)
  }
}
```

**parent.component.ts**

```
import { Component, OnInit } from '@angular/core';
import { SharedService } from "../shared.service";

@Component({
  selector: 'app-parent',
  template: `<p>{{message}}</p>`,
  styleUrls: ['./parent.component.css']
})
export class ParentComponent implements OnInit {

  message: string;

  constructor(private service: SharedService) { }

  ngOnInit() {
    this.service.currentMessage.subscribe(message => this.message = message)
  }
}
```

**sibling.component.ts**

```
import { Component, OnInit } from '@angular/core';
import { SharedService } from "../shared.service";

@Component({
  selector: 'app-sibling',
  template: `<p>{{message}}</p>
    <button (click)="newMessage()">New Message</button>`,
  styleUrls: ['./sibling.component.css']
})
export class SiblingComponent implements OnInit {

  message: string;

  constructor(private service: SharedService) { }

  ngOnInit() { }

  newMessage() {
    this.service.changeMessage("Hello from Sibling");
  }
}
```

Upvotes: 4 [selected_answer]
2018/03/20
1,857
5,590
<issue_start>username_0: I have a sensor whose output data consists of one attribute (a single value). An example of a bunch of sequenced data is as follows: sample: 199 200 205 209 217 224 239 498 573 583 583 590 591 594 703 710 711 717 719 721 836 840 845 849 855 855 856 857 858 858 928 935 936 936 942 943 964 977 You can see the data in the first image **input**. [![input](https://i.stack.imgur.com/HwvOn.png)](https://i.stack.imgur.com/HwvOn.png) The data is divided into levels. The number of levels is given to me (5 levels in this example). However, the number of samples for each level is unknown, and the distances between the levels are also unknown. I need to exclude the outliers and define the center of each level (look at the second image **output**. [![output](https://i.stack.imgur.com/8T88u.png)](https://i.stack.imgur.com/8T88u.png) The red samples represent the outliers and the yellow represent the centers of the levels). Is there any algorithm, mathematical formula, or C++ code that may help me achieve this requirement? I tried KMeans (with K = 5 in this example) and I got bad results because of the random initial K centroids. Most times some initial centroids share the same level, which lets that level become two clusters, whereas two other levels belong to one cluster. If I set the initial centroids manually by selecting one centroid from each level I get very good results.<issue_comment>username_1: If the difference between two successive data points is greater than a particular value (consider this as Delta), then the second point belongs to a different cluster. For this data set: 199 200 205 209 217 224 239 498 573 583 583 590 591 594 703 710 711 717 719 721 836 840 845 849 855 855 856 857 858 858 928 935 936 936 942 943 964 977 assume Delta to be 15 (fine-tune this based on the sensor). If the difference between successive data points is not greater than 15, then they belong to the same cluster. You could find the center point by finding the mid value of the cluster.
If a point is separated from its nearby points by Delta or more, it could be considered an outlier. Another option is to vary Delta based on the current values of the data set. Upvotes: 2 <issue_comment>username_2: This is an extension of the answer of @KarthikeyanMV. +1. Yes, you need to be able to determine a value for Delta. Here is a process that will do that. I am writing my code in R, but I think that the process will be clear. Presumably, the gaps between groups are bigger than the gaps within any group, so just look at the difference between successive points and ask where the big gaps are. Since you believe that there should be 5 groups, there should be 4 big gaps, so look at the 4th biggest difference.

```
## Your data
dat = c(199, 200, 205, 209, 217, 224, 239, 498, 573, 583, 583, 590,
    591, 594, 703, 710, 711, 717, 719, 721, 836, 840, 845, 849, 855,
    855, 856, 857, 858, 858, 928, 935, 936, 936, 942, 943, 964, 977)

(Delta = sort(diff(dat), decreasing=TRUE)[4])
[1] 75
```

This *looks like* Delta should be 75, but we failed to account for the outliers. Are there any points that are more than Delta from both the next point above *and* below? Yes.

```
BigGaps = diff(dat) >= Delta
(Outliers = which(c(BigGaps, T) & c(T, BigGaps)))
[1] 8
```

Point 8 is too far away to belong to either the group above or below. So let's remove it and try again.

```
dat = dat[-Outliers]
(Delta = sort(diff(dat), decreasing=TRUE)[4])
[1] 70
BigGaps = diff(dat) >= Delta
(Outliers = which(c(BigGaps, T) & c(T, BigGaps)))
integer(0)
```

After we remove point 8 the new Delta is 70. We check for outliers using the new Delta (70) and find none. So let's cluster using Delta = 70.
```
Cluster = cumsum(c(1, diff(dat)>=Delta))
plot(dat, pch=20, col=Cluster+1)
```

[![Clustered data](https://i.stack.imgur.com/ICyGE.png)](https://i.stack.imgur.com/ICyGE.png)

This mostly found the clusters that you want *except* that it included the last two points in the highest cluster rather than declaring them to be outliers. I do not see why they should be outliers instead of part of this group. Maybe you could elaborate on why you think that they should not be included. I hope that this helps. Upvotes: 3 [selected_answer]<issue_comment>username_3: I'd suggest DBSCAN instead of K-Means. It is a density-based clustering algorithm that groups data points that are in proximity to each other, without having to define an initial k or centroids like K-Means. In DBSCAN, distance and k-neighbors are user defined. If you know that Index has a consistent interval, DBSCAN might be suitable to solve your problem. Upvotes: 0 <issue_comment>username_4: I notice that those levels look somewhat like lines. You could do something like this:

```
1. sort the points
2. take the first two unprocessed points into an ordered set called the current line
3. lay a line between the first and last point of the set
4. test whether the line formed by the first point and the next unprocessed point forms a line that has an angle lower than some threshold to the other line
5. If yes, add the point and go to 3
6. If no, store the current line somewhere and start again at 2
```

You could also start by checking whether the first two points of such a line have an angle to the x-axis that is above another threshold and, if so, store the first point as something singular: an outlier. Another version would be to go only by the angle of the connection of two points to the x-axis. On a level change, there will be a far bigger angle (incline, slope) than between two points on a level.
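The Delta-based procedure described in the answers above (pick the 4th-biggest gap as Delta for 5 groups, drop points that sit at least Delta from both neighbours, then split at the remaining big gaps) can be sketched in Python; the function name `cluster_1d` and the exact loop structure are my own, not from the thread:

```python
# A sketch of the Delta heuristic from the answers above; it assumes the
# levels are separated by gaps larger than any within-level gap.
def cluster_1d(data, n_levels):
    """Group 1-D samples into n_levels clusters at the largest gaps,
    discarding points that sit at least Delta away from both neighbours."""
    data = sorted(data)
    while True:
        gaps = [b - a for a, b in zip(data, data[1:])]
        # Delta is the (n_levels - 1)-th biggest gap (1-based), i.e. the
        # smallest of the gaps that should separate the levels
        delta = sorted(gaps, reverse=True)[n_levels - 2]
        # a point is an outlier if it is far from the points before AND after
        outliers = [
            i for i in range(len(data))
            if (i == 0 or data[i] - data[i - 1] >= delta)
            and (i == len(data) - 1 or data[i + 1] - data[i] >= delta)
        ]
        if not outliers:
            break
        data = [x for i, x in enumerate(data) if i not in outliers]
    # split the remaining points wherever the gap reaches Delta
    clusters, current = [], [data[0]]
    for prev, x in zip(data, data[1:]):
        if x - prev >= delta:
            clusters.append(current)
            current = []
        current.append(x)
    clusters.append(current)
    centers = [sum(c) / len(c) for c in clusters]
    return clusters, centers
```

On the question's 38 samples with 5 levels this drops the 498 outlier and splits the rest into the five level groups, matching the R walkthrough (including the last two points ending up in the top group).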
2018/03/20
1,801
6,192
<issue_start>username_0: I have a team sheet and I need to report the forecast for the next week every Friday via Email. [![ExcelSheet](https://i.stack.imgur.com/4nv4o.png)](https://i.stack.imgur.com/4nv4o.png) I built a macro which is creating an email. ```vb Sub SendMail() Dim rng As Range Dim OutApp As Object Dim OutMail As Object Dim VBAWeekNum As Integer Set rng = Nothing On Error Resume Next 'Only the visible cells in the selection Set rng = Sheets("Availability List").Range("A1:C7, D1:J7").SpecialCells(xlCellTypeVisible) On Error GoTo 0 If rng Is Nothing Then MsgBox "The selection is not a range or the sheet is protected" & _ vbNewLine & "please correct and try again.", vbOKOnly Exit Sub End If With Application .EnableEvents = False .ScreenUpdating = False End With Set OutApp = CreateObject("Outlook.Application") Set OutMail = OutApp.CreateItem(0) On Error Resume Next With OutMail .To = "<EMAIL>" .CC = "" .BCC = "" .Subject = "X" .HTMLBody = "Guten Tag Herr X," & vbCrLf & "anbei wie besprochen die Übersicht für die kommende Woche." & vbCrLf & "Vielen Dank im Voraus." 
& vbCrLf & "Mit freundlichen Grüßen X" & RangetoHTML(rng) .Display 'or use .sent End With On Error GoTo 0 With Application .EnableEvents = True .ScreenUpdating = True End With Set OutMail = Nothing Set OutApp = Nothing End Sub Function RangetoHTML(rng As Range) Dim fso As Object Dim ts As Object Dim TempFile As String Dim TempWB As Workbook TempFile = Environ$("temp") & "\" & Format(Now, "dd-mm-yy h-mm-ss") & ".htm" 'Copy the range and create a new workbook to past the data in rng.Copy Set TempWB = Workbooks.Add(1) With TempWB.Sheets(1) .Cells(1).PasteSpecial Paste:=8 .Cells(1).PasteSpecial xlPasteValues, , False, False .Cells(1).PasteSpecial xlPasteFormats, , False, False .Cells(1).Select Application.CutCopyMode = False On Error Resume Next .DrawingObjects.Visible = True .DrawingObjects.Delete On Error GoTo 0 End With 'Publish the sheet to a htm file With TempWB.PublishObjects.Add( _ SourceType:=xlSourceRange, _ Filename:=TempFile, _ Sheet:=TempWB.Sheets(1).Name, _ Source:=TempWB.Sheets(1).UsedRange.Address, _ HtmlType:=xlHtmlStatic) .Publish (True) End With 'Read all data from the htm file into RangetoHTML Set fso = CreateObject("Scripting.FileSystemObject") Set ts = fso.GetFile(TempFile).OpenAsTextStream(1, -2) RangetoHTML = ts.readall ts.Close RangetoHTML = Replace(RangetoHTML, "align=center x:publishsource=", _ "align=left x:publishsource=") 'Close TempWB TempWB.Close savechanges:=False 'Delete the htm file we used in this function Kill TempFile Set ts = Nothing Set fso = Nothing Set TempWB = Nothing End Function ``` Now I would like to automate the whole process. Therefore the range ``` Set rng = Sheets("Availability List").Range("A1:C7,D1:J7").SpecialCells(xlCellTypeVisible) ``` is not correctly defined. I would like that the second part `D1:J7").SpecialCells(xlCellTypeVisible)` is moving according to the actual calendar week. E.g. this week it should select the CW13 (means K1:Q7).<issue_comment>username_1: Are the week number cells merged cells? 
If so, use `OFFSET`, `MATCH` and `WEEKNUM`; base it around `OFFSET(MATCH(WEEKNUM(TODAY())` Upvotes: 0 <issue_comment>username_2: Rather than using `Range("A1:C7, D1:J7")`, define separate range variables in which the second range (`r2`) can be offset as needed according to the value of i=0,1,2... The code would look something like this: ``` dim r as range, r1 as range, r2 as range, i as integer set r1=range("A1:C7") set r2=range("D1:J7") set r = range(r1,r2.offset(0,7*i)) ``` Upvotes: 1 <issue_comment>username_3: First define a weeknumber variable. You can do this with =WEEKNUM(TODAY()) Let's say this variable is called x. Then I would continue like this: ``` Set rng1 = Sheets("Availability List").Range(Cells(1,1),Cells(7,3)).SpecialCells(xlCellTypeVisible) Set rng2 = Sheets("Availability List").Range(Cells(1,x),Cells(7,x+6)).SpecialCells(xlCellTypeVisible) ``` Upvotes: 0 <issue_comment>username_4: You could try: ``` With Sheets("Availability List") Set rng = Union(.Range("A1:C7"), _ .Rows(2).Find(what:=WorksheetFunction.WeekNum(Date), LookIn:=xlValues, lookat:=xlWhole).Offset(-1).Resize(7)). _ SpecialCells(xlCellTypeVisible) End With ``` Upvotes: 0 <issue_comment>username_5: The following line will pick up your range based on the week number: ``` Set Rng = [2:2].Find(Application.WorksheetFunction.WeekNum(Date)).Resize(1, 7) ``` It works by searching Row 2 for the week number and then resizing to 7 columns to capture the whole week's range. The only word of caution I would give is to ensure that the return of the week number matches your definition of week number, but this can be altered using the argument as detailed [here](https://support.office.com/en-ie/article/weeknum-function-e5c43a03-b4ab-426c-b411-b18c13c75340). Also I would change `[2:2]` to a more robust reference to the row based on the sheets in question.
If you need to move past this year, e.g. into week 53, then I would also add in an offset based on the year, although I would assume that you would not have more than 52 weeks of data in the one sheet. Upvotes: 0 <issue_comment>username_6: Use Intersect on the rows over the merge area columns. ``` Dim wn As Long, rng As Range wn = 13 With Sheets("Availability List") Set rng = Union(.Range("A1:C7"), _ Intersect(.Rows("1:7"), .Cells(2, Application.Match(wn, .Rows(2), 0)).MergeArea.EntireColumn)) Debug.Print rng.SpecialCells(xlCellTypeVisible).Address End With ``` I'm unclear on why SpecialCells is necessary. Your helper function may have to be adjusted to work with areas. Upvotes: 0
2018/03/20
847
2,819
<issue_start>username_0: I was looking at one of my makefiles and saw the below command. ``` g++-5 -std=c++11 ``` From what I understand, the std option represents this - GCC supports different dialects of C++, corresponding to the multiple published ISO standards. Which standard it implements can be selected using the -std= command-line option. What does the -5 in the g++ command signify?<issue_comment>username_1: The 5 almost certainly means the version of GCC, 5.something in this case. Having names like this makes it easier to mix different versions on the same system. You can confirm this by running ``` g++ --version ``` and you should get something like > > g++ (GCC) 5.2.0 > > > You are correct about the -std option. In addition to released versions, there is often support for upcoming versions. You can find a summary of the support [here](https://gcc.gnu.org/projects/cxx-status.html) and the options such as `-std=c++2a` for features planned for C++20 [here](https://gcc.gnu.org/onlinedocs/gcc/C-Dialect-Options.html). Upvotes: 0 <issue_comment>username_2: The `-n` suffix next to `g++` indicates the GNU C++ compiler version you want to compile your code with. Upvotes: 0 <issue_comment>username_3: The `-5` is part of the actual g++ executable's name. The exact choice of name is essentially up to the maintainers of your operating system or distribution. Naming the executable this way allows multiple GCC versions to be installed in parallel, a feature that is often useful, for example if you want to offer your software to a variety of users and want to make sure that different compiler versions can handle your code.
For example, on one of my (slightly older) boxes I currently see ``` $ ls -l /usr/bin/g++* lrwxrwxrwx 1 root root 7 Apr 8 2014 /usr/bin/g++ -> g++-4.8 -rwxr-xr-x 1 root root 259176 Mar 20 2014 /usr/bin/g++-4.4 -rwxr-xr-x 1 root root 775888 May 7 2016 /usr/bin/g++-4.8 $ /usr/bin/g++-4.4 --version g++-4.4 (Ubuntu/Linaro 4.4.7-8ubuntu1) 4.4.7 Copyright (C) 2010 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. $ /usr/bin/g++-4.8 --version g++-4.8 (Ubuntu 4.8.4-2ubuntu1~14.04.3) 4.8.4 Copyright (C) 2013 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. ``` Different compiler versions not only have differing support regarding C++ standard features, they may also have different bugs or quirks. Regarding GCC's support for different versions of the C++ standard, see <https://gcc.gnu.org/projects/cxx-status.html>. For example, some C++17 language features were only introduced in GCC 5.1. Upvotes: 1
2018/03/20
1,100
4,585
<issue_start>username_0: Is it possible to pass a function to a `stencilJs` component? Something like:

```
@Prop() okFunc: () => void;
```

I have a modal and want to dynamically call a passed function when the `Ok` button in the modal footer is clicked, like an `onClick` on a normal HTML button.<issue_comment>username_1: You can just add a `@Prop() someFunc: Function` to any component and pass it from the outside like `<any-component someFunc={() => console.log('coming from the outside')} />` Within anyComponent just check `if (this.someFunc) { this.someFunc() }` Upvotes: 0 <issue_comment>username_2: Yes, you can. It is just a normal `@Prop()` declaration and plays very nicely in TSX. However... As noted in another answer and comments, if you are consuming a Stencil component in plain ol' HTML, you will not be able to use Stencil's attribute-prop binding to pass a function (or any non-scalar value) to a prop via an HTML attribute.

Use the DOM API
---------------

This means that *you have to interact with the DOM* if you want to attach events or pass function props to Stencil components. Once your component makes it into the DOM, there is really nothing special about it compared to any other Custom Element. Without JSX or another template DSL (e.g. Angular), you will need to attach events and set your function- or object-reference props with the JavaScript DOM API:

```
const componentInstance = document.querySelector('my-stencil-component')

// This works and will trigger a rerender (as expected for a prop change)
componentInstance.someFunc = (...) => { ... }

// This works for any event, including custom events fired from Stencil's EventEmitter
componentInstance.addEventListener('myCustomEvent', (event: MyCustomEvent) => { ... })
```

If you absolutely **must** do this in your HTML document for some reason:

```
...
var componentInstance = document.currentScript.previousElementSibling
componentInstance.someFunc = function(...)
It's important to realize that `Properties ≠ Attributes`. Props are [JavaScript Properties](https://developer.mozilla.org/en-US/docs/Glossary/property/JavaScript), in this case, properties of the DOM object representing an element. [Attributes](https://developer.mozilla.org/en-US/docs/Glossary/Attribute) are XML attributes, although HTML attributes have some unique characteristics and behave slightly differently than typical XML. Stencil will automatically "bind" HTML attributes to properties for you where possible - in particular, for scalar values (`boolean`, `number`, `string`). Object references, and therefore functions, cannot be used as the value of an attribute. Technically, only `string`s can be attribute values, but Stencil is smart enough to convert `string` to other another scalar type (`boolean` or `number`) when you specify the type of your `@Prop()`. --- Other Solutions --------------- I developed a solution for my team to bind attributes containing JSON to Stencil properties using a `MutationObserver`. It basically watches for a special attribute `bind-json`, then maps attributes starting with `json-` onto the corresponding camelCase DOM properties. Usage looks like this: ``` ``` With the `MutationObserver` in place, this is identical to: ``` const componentInstance = document.querySelector('my-stencil-component') componentInstance.propName = JSON.parse(componentInstance.getAttribute('json-prop-name')) ``` However, there really is not a satisfying solution for binding *functions* in plain HTML. It really just can't be done without some sort of ugly hack like `eval` described in another comment. Not only does that pollute your component's API, it's problematic for [all kinds of other reasons I won't get into here](https://stackoverflow.com/questions/197769/when-is-javascripts-eval-not-evil) and its use will automatically fail your app in practically any modern security check. 
In our Storybook stories we bind callback function definitions to the `window`, and use `<script>` tags and `document.currentScript` or `querySelector` to pass the function props and event bindings to the component instance:

```
const MyStory = ({ ... }) => {
  window.myStoryFunc = () => { ... }
  window.myStoryClickHandler = () => { ... }
  return `
    <my-stencil-component ... >...</my-stencil-component>
    <script>
      const componentInstance = document.currentScript.previousElementSibling
      componentInstance.someFunc = window.myStoryFunc
      componentInstance.addEventListener('click', window.myStoryClickHandler)
    </script>`
}
```

Upvotes: 3
2018/03/20
1,142
4,667
<issue_start>username_0: I use TYPO3 8.7. I want to include contact form on site via typoscript. This contact form inserted in gridelement with id=29. In typoscript I write ``` lib.contactForm = RECORDS lib.contactForm { source = 29 tables = tt_content dontChekPid = 1 } ``` When I logged into back-end I see the form. When I logout from back-end I don't see my form.<issue_comment>username_1: You can just add a `@Prop() someFunc: Function` to any component and pass it from the outside like `console.log('coming from the outside')} />` Within anyComponent just check `if (this.someFunc) { this.someFunc() }` Upvotes: 0 <issue_comment>username_2: Yes, you can. It is just a normal `@Prop()` declaration and plays very nicely in TSX. However... As noted in another answer and comments, if you are consuming a Stencil component in plain ol' HTML, you will not be able to use Stencil's attribute-prop binding to pass a function (or any non-scalar value) to a prop via an HTML attribute. Use the DOM API --------------- This means that *you have to interact with the DOM* if you want to attach events or pass function props to Stencil components. Once your component makes it into the DOM, there is really nothing special about it compared to any other Custom Element. Without JSX or another template DSL (e.g. Angular), you will need to attach events and set your function- or object-reference props with the JavaScript DOM API: ``` const componentInstance = document.querySelector('my-stencil-component') // This works and will trigger a rerender (as expected for a prop change) componentInstance.someFunc = (...) => { ... } // This works for any event, including custom events fired from Stencil's EventEmitter componentInstance.addEventListener('myCustomEvent', (event: MyCustomEvent) => { ... }) ``` If you absolutely **must** do this in your HTML document for some reason: ``` ... var componentInstance = document.currentScript.previousElementSibling componentInstance.someFunc = function(...) 
{ ... } ``` --- ### Why do I have to do this? It's important to realize that `Properties ≠ Attributes`. Props are [JavaScript Properties](https://developer.mozilla.org/en-US/docs/Glossary/property/JavaScript), in this case, properties of the DOM object representing an element. [Attributes](https://developer.mozilla.org/en-US/docs/Glossary/Attribute) are XML attributes, although HTML attributes have some unique characteristics and behave slightly differently than typical XML. Stencil will automatically "bind" HTML attributes to properties for you where possible - in particular, for scalar values (`boolean`, `number`, `string`). Object references, and therefore functions, cannot be used as the value of an attribute. Technically, only `string`s can be attribute values, but Stencil is smart enough to convert `string` to other another scalar type (`boolean` or `number`) when you specify the type of your `@Prop()`. --- Other Solutions --------------- I developed a solution for my team to bind attributes containing JSON to Stencil properties using a `MutationObserver`. It basically watches for a special attribute `bind-json`, then maps attributes starting with `json-` onto the corresponding camelCase DOM properties. Usage looks like this: ``` ``` With the `MutationObserver` in place, this is identical to: ``` const componentInstance = document.querySelector('my-stencil-component') componentInstance.propName = JSON.parse(componentInstance.getAttribute('json-prop-name')) ``` However, there really is not a satisfying solution for binding *functions* in plain HTML. It really just can't be done without some sort of ugly hack like `eval` described in another comment. Not only does that pollute your component's API, it's problematic for [all kinds of other reasons I won't get into here](https://stackoverflow.com/questions/197769/when-is-javascripts-eval-not-evil) and its use will automatically fail your app in practically any modern security check. 
In our Storybook stories we bind callback function definitions to the `window`, and use `<script>` tags and `document.currentScript` or `querySelector` to pass the function props and event bindings to the component instance:

```
const MyStory = ({ ... }) => {
  window.myStoryFunc = () => { ... }
  window.myStoryClickHandler = () => { ... }
  return `
    <my-stencil-component ... >...</my-stencil-component>
    <script>
      const componentInstance = document.currentScript.previousElementSibling
      componentInstance.someFunc = window.myStoryFunc
      componentInstance.addEventListener('click', window.myStoryClickHandler)
    </script>
  `
}
```

Upvotes: 3
2018/03/20
1,551
3,428
<issue_start>username_0: I have a dict like this:

```
SHOPS_AND_ORDERNUM = {
    'Shop - Produce - 20180212.xlsx': 1334,
    'Shop - Organic - 20180223.xlsx': 8893,
    'Shop - Fresh - 20180226.xlsx': 5557,
    'Shop - Dairy - 20180227.xlsx': 3870
}
```

I want to extract the dates from the dict above into the form DD-MM-YYYY. I am new to regular expressions, and my attempts keep failing. I have started something like this:

```
for i, j in DATA_FILES_AND_SO.items():
    m = re.search(some_logic, i)
    if m:
        found = m.group(1)
```

Any help would be much appreciated!<issue_comment>username_1: If the format does not change, you can use this (no need of RegEx):

```
SHOPS_AND_ORDERNUM = {
    'Shop - Produce - 20180212.xlsx': 1334,
    'Shop - Organic - 20180223.xlsx': 8893,
    'Shop - Fresh - 20180226.xlsx': 5557,
    'Shop - Dairy - 20180227.xlsx': 3870
}

for item in SHOPS_AND_ORDERNUM:
    date = item.split('.xlsx')[0][-8:]
    print(date)
```

Output:

```
20180212
20180223
20180226
20180227
```

Now, to get the date in the format you want, you can use the [`datetime`](https://docs.python.org/3/library/datetime.html) module, like this:

```
for item in SHOPS_AND_ORDERNUM:
    date = datetime.datetime.strptime(item.split('.xlsx')[0][-8:], '%Y%m%d').strftime('%d-%m-%Y')
    print(date)
```

Output:

```
12-02-2018
23-02-2018
26-02-2018
27-02-2018
```

Upvotes: 2 [selected_answer]<issue_comment>username_2: You can use regex:

```
import re

SHOPS_AND_ORDERNUM = {
    'Shop - Produce - 20180212.xlsx': 1334,
    'Shop - Organic - 20180223.xlsx': 8893,
    'Shop - Fresh - 20180226.xlsx': 5557,
    'Shop - Dairy - 20180227.xlsx': 3870
}

new_data = {(lambda x: (x[6:8], x[4:6], x[:4]))(re.findall(r'\d+', a)[0]): b
            for a, b in SHOPS_AND_ORDERNUM.items()}
```

Output:

```
{('12', '02', '2018'): 1334, ('23', '02', '2018'): 8893, ('26', '02', '2018'): 5557, ('27', '02', '2018'): 3870}
```

Or, instead of tuples:

```
new_data = {'{}-{}-{}'.format(*a[::-1]): b for a, b in new_data.items()}
```

Output:

```
{'2018-02-12': 1334, '2018-02-27': 3870, '2018-02-23': 8893, '2018-02-26': 5557}
```

Upvotes: 2 <issue_comment>username_3: You can use the `datetime` module to get your required date format

**Ex:**

```
# -*- coding: utf-8 -*-
import datetime

SHOPS_AND_ORDERNUM = {
    'Shop - Produce - 20180212.xlsx': 1334,
    'Shop - Organic - 20180223.xlsx': 8893,
    'Shop - Fresh - 20180226.xlsx': 5557,
    'Shop - Dairy - 20180227.xlsx': 3870
}

for k, v in SHOPS_AND_ORDERNUM.items():
    print datetime.datetime.strptime(k.split("-")[-1].rstrip(".xlsx").strip(), "%Y%m%d").strftime("%d-%m-%Y")
```

**Output:**

```
27-02-2018
26-02-2018
23-02-2018
12-02-2018
```

**MoreInfo**

1. `k.split("-")[-1].rstrip(".xlsx").strip()` #to get the date string from key. ex: `20180212`
2. `datetime.datetime.strptime` #to convert datetime to your required format. `"%d-%m-%Y"`

Upvotes: 2 <issue_comment>username_4:

```
import datetime

dates = []
for i, j in DATA_FILES_AND_SO.items():
    date = i[-13:-5]
    dates.append(datetime.datetime.strptime(date, '%Y%m%d'))
```

Upvotes: 2 <issue_comment>username_5: The basic regex you're looking for is ([0-9]+)(?=.)\g. You can play around with it on <https://regex101.com/>. For date string conversion, you can use the Carbon library, such as `$newDateString = \Carbon::parse('20180212')->format('DD-MM-YYYY');` See also the [Carbon Docs](https://carbon.nesbot.com/docs/). Upvotes: 1
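Completing the `re.search` skeleton from the question with the `datetime` formatting shown in the answers gives a compact end-to-end version. This is only an illustrative sketch; the pattern and the `dates` list are not from the thread:

```python
import datetime
import re

SHOPS_AND_ORDERNUM = {
    'Shop - Produce - 20180212.xlsx': 1334,
    'Shop - Organic - 20180223.xlsx': 8893,
}

dates = []
for name in SHOPS_AND_ORDERNUM:
    # Capture the 8 digits sitting directly before the .xlsx extension.
    m = re.search(r'(\d{8})\.xlsx$', name)
    if m:
        parsed = datetime.datetime.strptime(m.group(1), '%Y%m%d')
        dates.append(parsed.strftime('%d-%m-%Y'))

print(sorted(dates))  # ['12-02-2018', '23-02-2018']
```

Anchoring the pattern to the extension keeps unrelated digits elsewhere in the filename from matching.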
2018/03/20
1,322
5,353
<issue_start>username_0: I got a null object reference exception when I call setAdapter on a RecyclerView in the onCreateView method of a Fragment. Please tell me what I am doing wrong, or how I can do this. I use getActivity() instead of "this". I need to create a TabbedActivity. Can I do this without a fragment? One.java ``` public class One extends Fragment { @BindView(R.id.recycler_view) RecyclerView recyclerView; @BindView(R.id.button2) Button button; private WebSitesViewModel mWebSitesViewModel; int i = 0; MyRecViewAdapter adapter; @Override public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) { ButterKnife.bind(getActivity()); String url = getActivity().getIntent().getStringExtra(Intent.EXTRA_TEXT); if (url != null) { if (isNetworkConnected(getActivity())) { One.JsoupAsyncTask jsoupAsyncTask = new One.JsoupAsyncTask(this); jsoupAsyncTask.execute(url); Toast.makeText(getActivity(), R.string.bookmarks_added, Toast.LENGTH_LONG).show(); } else { Toast.makeText(getActivity(), "Network not connected!", Toast.LENGTH_LONG).show(); } } mWebSitesViewModel = ViewModelProviders.of(this).get(WebSitesViewModel.class); adapter = new MyRecViewAdapter(getActivity(), mWebSitesViewModel ); recyclerView.setAdapter(adapter); recyclerView.setLayoutManager(new LinearLayoutManager(getActivity())); mWebSitesViewModel.getAllWebSites().observe(this, new Observer<List<WebSites>>() { @Override public void onChanged(@Nullable final List<WebSites> webSites) { // Update the cached copy of the words in the adapter.
adapter.setWebSites(webSites); } }); initSwipe(); return inflater.inflate(R.layout.fragment\_container, container, false); } public static boolean isNetworkConnected(Context c) { ConnectivityManager connectivityManager = (ConnectivityManager) c.getSystemService(Context.CONNECTIVITY\_SERVICE); NetworkInfo activeNetworkInfo = connectivityManager.getActiveNetworkInfo(); return activeNetworkInfo != null && activeNetworkInfo.isConnected(); } private static class JsoupAsyncTask extends AsyncTask { private final WeakReference activityWeakReference; JsoupAsyncTask(One context) { activityWeakReference = new WeakReference<>(context); } @Override protected void onPreExecute() { super.onPreExecute(); } @Override protected Void doInBackground(String... url) { try { Document document = Jsoup.connect(url[0]).get(); Element img = document.select("img").first(); String imgSrc = img.absUrl("src"); addData(document.title(), url[0], imgSrc); } catch (IOException e ) { e.printStackTrace(); } return null; } private void addData(String head, String url, String imgSrc) { WebSites webSites = new WebSites(head, url, imgSrc, 0); activityWeakReference.get().mWebSitesViewModel.insert(webSites); } } private void initSwipe() { ItemTouchHelper.SimpleCallback simpleItemTouchCallback = new ItemTouchHelper.SimpleCallback(0, ItemTouchHelper.LEFT | RIGHT) { @Override public int getMovementFlags(RecyclerView recyclerView, RecyclerView.ViewHolder viewHolder) { return makeMovementFlags(0, RIGHT); } @Override public boolean onMove(RecyclerView recyclerView, RecyclerView.ViewHolder viewHolder, RecyclerView.ViewHolder target) { return false; } @Override public void onSwiped(RecyclerView.ViewHolder viewHolder, int direction) { final int position = viewHolder.getAdapterPosition(); if (direction == RIGHT) { mWebSitesViewModel.delete(adapter.getmWebSites().get(position)); adapter.notifyItemRemoved(position); } } }; ItemTouchHelper itemTouchHelper = new ItemTouchHelper(simpleItemTouchCallback); 
itemTouchHelper.attachToRecyclerView(recyclerView); } } ``` fragment\_container.xml ``` ``` Exception ``` java.lang.NullPointerException: Attempt to invoke virtual method 'void android.support.v7.widget.RecyclerView.setAdapter(android.support.v7.widget.RecyclerView$Adapter)' on a null object reference at dev.zca.mybookmarks.selectfragment.One.onCreateView(One.java:77) ```<issue_comment>username_1: You need to bind your views in a different manner when using fragments. Instead of using: ``` ButterKnife.bind(getActivity()); ``` You need to first inflate your view and then feed it into Butterknife: ``` @Override public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) { View view = inflater.inflate(R.layout.fragment_container, container, false); ButterKnife.bind(this, view); ... return view; } ``` There's an example on the [Butterknife homepage](http://jakewharton.github.io/butterknife/) under *Non-Activity Binding*. Upvotes: 3 [selected_answer]<issue_comment>username_2: You should inflate your fragment view BEFORE everything else, NOT after. Also, your ButterKnife bind method is completely wrong. ``` @Nullable @Override public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) { // BEFORE EVERYTHING ELSE View view = inflater.inflate(R.layout.fragment_container, container, false); ButterKnife.bind(this, view); ... // RETURN IT AT THE END OF METHOD return view; } ``` Upvotes: 0
2018/03/20
888
2,413
<issue_start>username_0: I am working with a project in ASP MVC and Bootstrap where I have the need to show the regulation of a transformer, this is represented in the following way: [![enter image description here](https://i.stack.imgur.com/VzVcA.png)](https://i.stack.imgur.com/VzVcA.png) I was trying to do it in the following way but I did not achieve the result that I intend: ```html | | | | --- | --- | | 132 ± | 12x1,5 % | | 8x1.5% | ```<issue_comment>username_1: Here is something you can start with: ```css .txt{ height: 40px; display: table-cell; vertical-align: middle; } .supsub { display: inline-block; } .supsub sub { top: .3em !important; } .op { font-size: 36px } .supsub sup, .supsub sub { position: relative; display: block; font-size: .8em !important; line-height: 1.2; } ``` ```html ( 132 + 12x1,5 % − 8x1.5% )/ 13,86 KV ``` Upvotes: 2 [selected_answer]<issue_comment>username_2: You can try something similar to what I've posted. If you only need to do something like this once or twice this will get you by. If you need to use math/science notations frequently it probably isn't practical. ```css * { padding:0; margin:0; box-sizing:border-box; text-align:center; } .container { width:250px; height:75px; display:flex; flex-direction:row; align-items:center; } .left { display:flex; flex:1; height:100%; align-items:center; justify-content:center; } .top-bottom { display:flex; flex:1; height:100%; flex-direction:column; align-items:center; justify-content:center; } .top { display:flex; align-items:center; justify-content:center; width:100%; height:50%; } .bottom { display:flex; align-items:center; justify-content:center; width:100%; height:50%; border-top:1px solid green; } .right { display:flex; align-items:center; justify-content:center; flex:1; height:100%; } ``` ```html 132 ± 12 × 1.5% 8 × 1.5% / 13.86kV ``` Upvotes: 1 <issue_comment>username_3: You're on the right track of using `rowspan` with table. 
I completed it with the example below, and it's better to use `monospace` fonts. ```css table { font-family: monospace; } td[rowspan] { font-size: 1.5em; } ``` ```html | | | | | --- | --- | --- | | (132 | + 12x1,5 % | )/13,86kV | | - 8x1.5 % | ``` Upvotes: 2
2018/03/20
846
2,385
<issue_start>username_0: I feel like this should be so simple. I need to validate a decimal value with a range of (0 - 99.999999) OR I need to be able to mark the value with "TBD". I have no way to change that requirement or I may have done it another way. Is there a way to handle multiple validation types through data annotations? I suck at regex. If this can be done that way, can anyone point me in the right direction?
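For the validation question above, one pattern can cover both accepted shapes: the literal `TBD`, or a decimal from 0 to 99.999999 (at most two integer digits and six fractional digits). The sketch below is in Python purely to demonstrate the regex; the same pattern string could plausibly back a `RegularExpression` data annotation in ASP.NET MVC, but that wiring is an assumption and is not shown here:

```python
import re

# "TBD" or a decimal in [0, 99.999999]: up to two integer digits and an
# optional fraction of up to six digits. Drop re.IGNORECASE if "TBD"
# must match exactly.
PATTERN = re.compile(r'^(?:TBD|\d{1,2}(?:\.\d{1,6})?)$', re.IGNORECASE)

def is_valid(value: str) -> bool:
    return PATTERN.match(value) is not None

print(is_valid('TBD'), is_valid('99.999999'), is_valid('100'))  # True True False
```

The anchors `^` and `$` are what reject out-of-range values like `100` or a seven-digit fraction, since no partial match is allowed.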
2018/03/20
790
2,148
<issue_start>username_0: I'm an I.T. student. I need help with how Java can create PDF files, but without using "pdfbox". Or can we do this in the first place? Thanks for helping
2018/03/20
1,038
3,285
<issue_start>username_0: Quite often I am navigating a two-dimensional image matrix of pixels or a three-dimensional volumetric data set. Typically the underlying data structure is a one-dimensional array of some primitive type like `Double` or `Int`. The multi-dimensional nature is navigated by nested for-loops as illustrated below. Given that Kotlin does not currently support the typical C- and Java-like for-loop structure, some flexibility seems lost. I'm wondering if there is a clever iterator or Kotlin syntax to make this more elegant. In the Java below, we have the ability to capture the whole iteration concisely by embedding the master index `i` into the for-loops and involving only implicit increment operations, which are presumably computationally efficient (compared to division and modulo)...

```
public static void main(String[] args) {
    int nrow = 1000, ncol = 1000;
    int size = nrow * ncol;
    double[] data = new double[size];
    for (int i = 0, r = 0; r < nrow; r++) {
        for (int c = 0; c < ncol; c++, i++) {
            data[i] = r * c;
        }
    }
}
```

In Kotlin, as follows, I find a solution where the index `i` is forced to have scope outside the loop block. Also the `i++` line is somewhat buried and separated from the loop structure. I'm missing in this situation the elegant Kotlin syntax such as 'builders', no semi-colons, etc. I humbly submit that this is because the for-loop flow-control structure is less expressive in Kotlin. Admittedly, this is not critical, but more of a disappointment.
```
fun main(args: Array<String>) {
    val nrow = 1000
    val ncol = 1000
    val size = nrow * ncol
    val data = DoubleArray(size)
    var i = 0
    for (r in 0 until nrow) {
        for (c in 0 until ncol) {
            data[i] = (r * c).toDouble()
            i++
        }
    }
}
```<issue_comment>username_1: This is one way of creating your array in Kotlin, without using `i` (or `for` loops):

```
val data = (0 until nrow).flatMap { r -> (0 until ncol).map { c -> (r * c).toDouble() } }.toDoubleArray()
```

Upvotes: 1 <issue_comment>username_2:

The simple way
==============

You can calculate the index based on the offsets...

```
fun main(args: Array<String>) {
    val nrow = 1000
    val ncol = 1000
    val size = nrow * ncol
    val data = DoubleArray(size)
    for (r in 0 until nrow) {
        for (c in 0 until ncol) {
            data[(ncol * r) + c] = (r * c).toDouble()
        }
    }
}
```

The wrapper way
===============

You can wrap the array, simplifying access...

```
class ArrayWrapper<T>(val height: Int, val width: Int, val default: T) {
    private val data: Array<Any?> = Array(height * width) { default }

    operator fun get(x: Int, y: Int) = data[(width * y) + x] as T

    operator fun set(x: Int, y: Int, value: T) {
        data[(width * y) + x] = value
    }

    val rowIndices = (0 until width)
    val columnIndices = (0 until height)
}

fun main(args: Array<String>) {
    val nrow = 1000
    val ncol = 1000
    val data = ArrayWrapper(nrow, ncol, 0.0)

    for (r in data.rowIndices) {
        for (c in data.columnIndices) {
            data[r, c] = (r * c).toDouble()
        }
    }
}
```

Upvotes: 2 <issue_comment>username_3: Here is the answer I mentioned involving division and modulo

```
fun main(args: Array<String>) {
    val nrow = 1000
    val ncol = 1000
    val size = nrow * ncol
    val data = DoubleArray(size) { ((it / ncol) * (it % ncol)).toDouble() }
}
```

Upvotes: 0
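The index arithmetic in the answers above is language-independent: walking row by row, the flat index is `i = r * ncol + c`, and the inverse is `r = i / ncol`, `c = i % ncol` (integer division). A quick sketch, written in Python only to keep the check short, confirms the two mappings agree with the manually incremented counter:

```python
nrow, ncol = 4, 3
data = [0.0] * (nrow * ncol)

# Fill using the offset form i = r * ncol + c, which replaces the
# manually incremented counter from the question.
for r in range(nrow):
    for c in range(ncol):
        data[r * ncol + c] = float(r * c)

# Recover (r, c) from the flat index with integer division and modulo.
for i, value in enumerate(data):
    r, c = divmod(i, ncol)
    assert value == float(r * c)

print(data[2 * ncol + 1])  # row 2, column 1 -> 2.0
```

The same check also illustrates the trade-off mentioned in the question: filling uses only additions, while the inverse mapping pays for a division and a modulo per element.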
2018/03/20
6,444
15,007
<issue_start>username_0: The markers on a Leaflet map are not centered when I open <https://waarismijnstemlokaal.nl/s/Losser> in Internet Explorer 11, see the image below: [![Markers are not centered](https://i.stack.imgur.com/JjJYF.png)](https://i.stack.imgur.com/JjJYF.png) But if I press F5 or click the refresh button (see the red arrow in the image below) the markers are somehow centered: [![Markers are centered after refresh using refresh button](https://i.stack.imgur.com/QnMLn.png)](https://i.stack.imgur.com/QnMLn.png) If I click on the URL in the address bar and press enter I am shown the non-centered map again. The map is behaving fine in other browsers (also IE9, IE10 and Edge). I have no clue what is going on, does anybody have an idea why this is happening and how it can be fixed? The Leaflet JavaScript for this map can be found on [GitHub](https://github.com/openstate/stembureaus/blob/master/app/templates/map.html#L68).<issue_comment>username_1: I faced the same issue in one of my past Leaflet projects. Here is the behavior you can observe:

1. As you mentioned, when you press F5 or click the refresh button (IE 11) the map loads fine and the markers are centered correctly.
2. But when you click on the URL in the address bar and press enter, you get the non-centered map again.

Let's follow the 2nd step (IE 11). You will get non-centered markers. Now open **Inspect Element** or press the `F12` key. **Your markers will get centered automatically.** You will also get a warning in the console: `DOM7011: The code on this page disabled back and forward caching.`

**So to solve your problem, try to put both `preventDefault()` and `stopPropagation()` on the click handler of the child HTML node.**

Hope this helps you Upvotes: 0 <issue_comment>username_2: If I put all your map code in the `$(document).ready()` function it loads and centers consistently for me. However, I'm not sure why F5 works and hitting enter in the address bar doesn't.
``` $(document).ready(function() { var StembureausApp = window.StembureausApp || {stembureaus: [],links_external: false}; StembureausApp.stembureaus = [{"Akoestiek": "", "CBS buurtnummer": "BU01680003", "CBS gemeentecode": "GM0168", "CBS wijknummer": "WK016800", "Extra adresaanduiding": "", "Gemeente": "Losser", "Huisnummer": "4", "Huisnummertoevoeging": "", "Invalidenparkeerplaatsen": "", "Latitude": 52.2625519442, "Longitude": 7.00713625176, "Mindervalide toilet aanwezig": "", "Mindervaliden toegankelijk": "Y", "Naam stembureau": "<NAME>", "Nummer stembureau": 1, "Openingstijden": "2018-03-21T07:30:00 tot 2018-03-21T21:00:00", "Plaats": "Losser", "Postcode": "7581AG", "Straatnaam": "Raadhuisplein", "UUID": "adaae5609dfc4c87acf862fb314a1368", "_id": 1, "elections": ["Referendum over de Wet op de inlichtingen- en veiligheidsdiensten", "Gemeenteraadsverkiezingen 2018"]}, {"Akoestiek": "", "CBS buurtnummer": "BU01680001", "CBS gemeentecode": "GM0168", "CBS wijknummer": "WK016800", "Extra adresaanduiding": "Ingang aan de linker zij-ingang", "Gemeente": "Losser", "Huisnummer": "81", "Huisnummertoevoeging": "", "Invalidenparkeerplaatsen": "", "Latitude": 52.2590554131, "Longitude": 7.00253546685, "Mindervalide toilet aanwezig": "", "Mindervaliden toegankelijk": "Y", "Naam stembureau": "KBS De Verrekijker", "Nummer stembureau": 2, "Openingstijden": "2018-03-21T07:30:00 tot 2018-03-21T21:00:00", "Plaats": "Losser", "Postcode": "7582CB", "Straatnaam": "Hogeweg", "UUID": "38fd8afc1a314354bf661816bc7d3e0b", "_id": 3, "elections": ["Referendum over de Wet op de inlichtingen- en veiligheidsdiensten", "Gemeenteraadsverkiezingen 2018"]}, {"Akoestiek": "", "CBS buurtnummer": "BU01680003", "CBS gemeentecode": "GM0168", "CBS wijknummer": "WK016800", "Extra adresaanduiding": "", "Gemeente": "Losser", "Huisnummer": "1", "Huisnummertoevoeging": "", "Invalidenparkeerplaatsen": "", "Latitude": 52.2569840479, "Longitude": 7.01045325858, "Mindervalide toilet aanwezig": "", "Mindervaliden 
toegankelijk": "Y", "Naam stembureau": "RK School De Wegwijzer", "Nummer stembureau": 3, "Openingstijden": "2018-03-21T07:30:00 tot 2018-03-21T21:00:00", "Plaats": "Losser", "Postcode": "7581EP", "Straatnaam": "Sperwerstraat", "UUID": "8e80ddbdf5c44b5a9e273e9417dadd02", "_id": 4, "elections": ["Referendum over de Wet op de inlichtingen- en veiligheidsdiensten", "Gemeenteraadsverkiezingen 2018"]}, {"Akoestiek": "", "CBS buurtnummer": "BU01680001", "CBS gemeentecode": "GM0168", "CBS wijknummer": "WK016800", "Extra adresaanduiding": "", "Gemeente": "Losser", "Huisnummer": "2", "Huisnummertoevoeging": "", "Invalidenparkeerplaatsen": "", "Latitude": 52.2677807736, "Longitude": 6.99684508373, "Mindervalide toilet aanwezig": "", "Mindervaliden toegankelijk": "Y", "Naam stembureau": "Basisschool De Saller", "Nummer stembureau": 4, "Openingstijden": "2018-03-21T07:30:00 tot 2018-03-21T21:00:00", "Plaats": "Losser", "Postcode": "7582AS", "Straatnaam": "Vlasakker", "UUID": "ad92cfe2cd20471c8de57938a0352d20", "_id": 5, "elections": ["Referendum over de Wet op de inlichtingen- en veiligheidsdiensten", "Gemeenteraadsverkiezingen 2018"]}, {"Akoestiek": "", "CBS buurtnummer": "BU01680003", "CBS gemeentecode": "GM0168", "CBS wijknummer": "WK016800", "Extra adresaanduiding": "", "Gemeente": "Losser", "Huisnummer": "1", "Huisnummertoevoeging": "", "Invalidenparkeerplaatsen": "", "Latitude": 52.2642907387, "Longitude": 7.00589841214, "Mindervalide toilet aanwezig": "Y", "Mindervaliden toegankelijk": "Y", "Naam stembureau": "Maartens-Stede", "Nummer stembureau": 5, "Openingstijden": "2018-03-21T07:30:00 tot 2018-03-21T21:00:00", "Plaats": "Losser", "Postcode": "7581XZ", "Straatnaam": "Vicarystraat", "UUID": "cf2f2a8a2a9541f1b7456fe824b75912", "_id": 7, "elections": ["Referendum over de Wet op de inlichtingen- en veiligheidsdiensten", "Gemeenteraadsverkiezingen 2018"]}, {"Akoestiek": "", "CBS buurtnummer": "BU01680003", "CBS gemeentecode": "GM0168", "CBS wijknummer": "WK016800", "Extra 
adresaanduiding": "", "Gemeente": "Losser", "Huisnummer": "19", "Huisnummertoevoeging": "", "Invalidenparkeerplaatsen": "", "Latitude": 52.2683150503, "Longitude": 7.01398755325, "Mindervalide toilet aanwezig": "", "Mindervaliden toegankelijk": "Y", "Naam stembureau": "Clubgebouw Schuttersvereniging \u0027De Schuttersput\u0027", "Nummer stembureau": 6, "Openingstijden": "2018-03-21T07:30:00 tot 2018-03-21T21:00:00", "Plaats": "Losser", "Postcode": "7581BB", "Straatnaam": "Bookholtlaan", "UUID": "c575345f63aa4b819bb680f1dd94052e", "_id": 20, "elections": ["Referendum over de Wet op de inlichtingen- en veiligheidsdiensten", "Gemeenteraadsverkiezingen 2018"]}, {"Akoestiek": "", "CBS buurtnummer": "BU01680201", "CBS gemeentecode": "GM0168", "CBS wijknummer": "WK016802", "Extra adresaanduiding": "Ingang aan de Kerkhofweg", "Gemeente": "Losser", "Huisnummer": "59", "Huisnummertoevoeging": "", "Invalidenparkeerplaatsen": "", "Latitude": 52.2412110828, "Longitude": 7.0323857378, "Mindervalide toilet aanwezig": "Y", "Mindervaliden toegankelijk": "Y", "Naam stembureau": "Woonzorgcentrum De Driehoek", "Nummer stembureau": 7, "Openingstijden": "2018-03-21T07:30:00 tot 2018-03-21T21:00:00", "Plaats": "Overdinkel", "Postcode": "7586BL", "Straatnaam": "Hoofdstraat", "UUID": "a7b3134eadc14d24ae239bbff228a835", "_id": 23, "elections": ["Referendum over de Wet op de inlichtingen- en veiligheidsdiensten", "Gemeenteraadsverkiezingen 2018"]}, {"Akoestiek": "", "CBS buurtnummer": "BU01680201", "CBS gemeentecode": "GM0168", "CBS wijknummer": "WK016802", "Extra adresaanduiding": "", "Gemeente": "Losser", "Huisnummer": "1", "Huisnummertoevoeging": "", "Invalidenparkeerplaatsen": "", "Latitude": 52.235374652, "Longitude": 7.03445575788, "Mindervalide toilet aanwezig": "Y", "Mindervaliden toegankelijk": "Y", "Naam stembureau": "Tiekerhook", "Nummer stembureau": 8, "Openingstijden": "2018-03-21T07:30:00 tot 2018-03-21T21:00:00", "Plaats": "Overdinkel", "Postcode": "7586EX", "Straatnaam": 
"Schaepmanstraat", "UUID": "3b1cc7e2bd3242d5a2dbd01837ca3e8d", "_id": 33, "elections": ["Referendum over de Wet op de inlichtingen- en veiligheidsdiensten", "Gemeenteraadsverkiezingen 2018"]}, {"Akoestiek": "", "CBS buurtnummer": "BU01680101", "CBS gemeentecode": "GM0168", "CBS wijknummer": "WK016801", "Extra adresaanduiding": "", "Gemeente": "Losser", "Huisnummer": "330", "Huisnummertoevoeging": "", "Invalidenparkeerplaatsen": "", "Latitude": 52.2317955365, "Longitude": 7.0028685204, "Mindervalide toilet aanwezig": "", "Mindervaliden toegankelijk": "Y", "Naam stembureau": "<NAME>", "Nummer stembureau": 9, "Openingstijden": "2018-03-21T07:30:00 tot 2018-03-21T21:00:00", "Plaats": "Glane", "Postcode": "7585PE", "Straatnaam": "Gronausestraat", "UUID": "d3ffef93452b4839965b7f1a94df3499", "_id": 39, "elections": ["Referendum over de Wet op de inlichtingen- en veiligheidsdiensten", "Gemeenteraadsverkiezingen 2018"]}, {"Akoestiek": "", "CBS buurtnummer": "BU01680301", "CBS gemeentecode": "GM0168", "CBS wijknummer": "WK016803", "Extra adresaanduiding": "", "Gemeente": "Losser", "Huisnummer": "7", "Huisnummertoevoeging": "", "Invalidenparkeerplaatsen": "", "Latitude": 52.3148502543, "Longitude": 6.98836190824, "Mindervalide toilet aanwezig": "", "Mindervaliden toegankelijk": "Y", "Naam stembureau": "De Vereeniging", "Nummer stembureau": 10, "Openingstijden": "2018-03-21T07:30:00 tot 2018-03-21T21:00:00", "Plaats": "de Lutte", "Postcode": "7587AA", "Straatnaam": "Dorpstraat", "UUID": "b616ae9b73db4fb88aefcc2c97debb79", "_id": 50, "elections": ["Referendum over de Wet op de inlichtingen- en veiligheidsdiensten", "Gemeenteraadsverkiezingen 2018"]}, {"Akoestiek": "", "CBS buurtnummer": "BU01680301", "CBS gemeentecode": "GM0168", "CBS wijknummer": "WK016803", "Extra adresaanduiding": "", "Gemeente": "Losser", "Huisnummer": "14", "Huisnummertoevoeging": "", "Invalidenparkeerplaatsen": "", "Latitude": 52.3166104915, "Longitude": 6.986468149, "Mindervalide toilet aanwezig": "", 
"Mindervaliden toegankelijk": "Y", "Naam stembureau": "<NAME>", "Nummer stembureau": 11, "Openingstijden": "2018-03-21T07:30:00 tot 2018-03-21T21:00:00", "Plaats": "de Lutte", "Postcode": "7587AM", "Straatnaam": "Plechelmusstraat", "UUID": "817184d3e3c143b69ec03e3fecb4e613", "_id": 94, "elections": ["Referendum over de Wet op de inlichtingen- en veiligheidsdiensten", "Gemeenteraadsverkiezingen 2018"]}, {"Akoestiek": "", "CBS buurtnummer": "BU01680401", "CBS gemeentecode": "GM0168", "CBS wijknummer": "WK016804", "Extra adresaanduiding": "", "Gemeente": "Losser", "Huisnummer": "71", "Huisnummertoevoeging": "", "Invalidenparkeerplaatsen": "", "Latitude": 52.3587679684, "Longitude": 6.99845691193, "Mindervalide toilet aanwezig": "", "Mindervaliden toegankelijk": "Y", "Naam stembureau": "Cafe \u0027t Sterrebos", "Nummer stembureau": 12, "Openingstijden": "2018-03-21T07:30:00 tot 2018-03-21T21:00:00", "Plaats": "Beuningen", "Postcode": "7588RG", "Straatnaam": "Beuningerstraat", "UUID": "0a4828d3c7754fa7aba12867f7a83343", "_id": 102, "elections": ["Referendum over de Wet op de inlichtingen- en veiligheidsdiensten", "Gemeenteraadsverkiezingen 2018"]}, {"Akoestiek": "", "CBS buurtnummer": "BU01680009", "CBS gemeentecode": "GM0168", "CBS wijknummer": "WK016800", "Extra adresaanduiding": "", "Gemeente": "Losser", "Huisnummer": "134", "Huisnummertoevoeging": "", "Invalidenparkeerplaatsen": "", "Latitude": 52.2813806025, "Longitude": 6.97477270796, "Mindervalide toilet aanwezig": "Y", "Mindervaliden toegankelijk": "Y", "Naam stembureau": "De Losserhof", "Nummer stembureau": 13, "Openingstijden": "2018-03-21T07:30:00 tot 2018-03-21T21:00:00", "Plaats": "Losser", "Postcode": "7581PW", "Straatnaam": "Oldenzaalsestraat", "UUID": "92dd91269d384f24b7c057a4585ae85d", "_id": 162, "elections": ["Referendum over de Wet op de inlichtingen- en veiligheidsdiensten", "Gemeenteraadsverkiezingen 2018"]}]; StembureausApp.stembureaus_markers = []; //console.log(StembureausApp.links_external ? 
'external links for markers' : 'internal links for markers'); StembureausApp.getPopup = function(s) { var opinfo = StembureausApp.stembureaus[i]['Openingstijden'].split(' tot '); var target = StembureausApp.links_external ? ' target="_blank"' : ''; output = "[" + StembureausApp.stembureaus[i]['Naam stembureau'] + "](\"/s/") "; if (StembureausApp.stembureaus[i]['Straatnaam']) { output += StembureausApp.stembureaus[i]['Straatnaam']; } if (StembureausApp.stembureaus[i]['Huisnummer']) { output += ' ' + StembureausApp.stembureaus[i]['Huisnummer']; } if (StembureausApp.stembureaus[i]['Huisnummertoevoeging']) { output += '-' + StembureausApp.stembureaus[i]['Huisnummertoevoeging']; } if (StembureausApp.stembureaus[i]['Plaats']) { output += " " + StembureausApp.stembureaus[i]['Plaats'] + " "; } else { output += "*Gemeente " + StembureausApp.stembureaus[i]['Gemeente'] + "* "; } output += '**Open:** ' + opinfo[0].split('T')[1].slice(0, 5) + ' ‐ ' + opinfo[1].split('T')[1].slice(0, 5) + ' '; if (StembureausApp.stembureaus[i]["Mindervaliden toegankelijk"] == 'Y') { output += ''; } output += ' '; return output; }; for(var i=0; i < StembureausApp.stembureaus.length; i++) { StembureausApp.stembureaus_markers.push( L.marker( [StembureausApp.stembureaus[i].Latitude, StembureausApp.stembureaus[i].Longitude] ).bindPopup(StembureausApp.getPopup(StembureausApp.stembureaus[i])) ); } StembureausApp.map = L.map('map').setView([52.2, 5.592], 6); StembureausApp.map._layersMaxZoom = 19; StembureausApp.clustermarkers = L.markerClusterGroup({maxClusterRadius: 50}); for(var i=0; i < StembureausApp.stembureaus_markers.length; i++) { StembureausApp.stembureaus_markers[i].addTo(StembureausApp.clustermarkers); } if (StembureausApp.stembureaus_markers.length > 50) { StembureausApp.map.addLayer(StembureausApp.clustermarkers); } else { StembureausApp.map.addLayer(L.layerGroup(StembureausApp.stembureaus_markers)); } StembureausApp.group = L.featureGroup(StembureausApp.stembureaus_markers.filter( 
function (s) { return (StembureausApp.stembureaus_markers.length <= 50) || (s._latlng.lng > 0); })); StembureausApp.map.fitBounds(StembureausApp.group.getBounds(), {maxZoom: 16}); L.tileLayer('https://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png', { attribution: '© <a href="http://osm.org/copyright">OpenStreetMap</a> contributors | <a href="https://waarismijnstemlokaal.nl/">Waar is mijn stemlokaal</a>' }).addTo(StembureausApp.map); StembureausApp.show(StembureausApp.stembureaus); $('#model-embed').on('shown.bs.modal', function () { $('textarea:visible').select().focus(); }); $('#form-embed-search-checkbox').on('click', function (e) { $('#form-embed-search').toggle(); $('#form-embed-no-search').toggle(); $('textarea:visible').select().focus(); }); }); ``` Upvotes: 2 [selected_answer]
2018/03/20
1,695
4,399
<issue_start>username_0: I would like to compute the 1-year rolling average for each row in this DataFrame `test`: ``` index id date variation 2313 7034 2018-03-14 4.139148e-06 2314 7034 2018-03-13 4.953194e-07 2315 7034 2018-03-12 2.854749e-06 2316 7034 2018-03-09 3.907458e-06 2317 7034 2018-03-08 1.662412e-06 2318 7034 2018-03-07 1.346433e-06 2319 7034 2018-03-06 8.731700e-06 2320 7034 2018-03-05 7.145597e-06 2321 7034 2018-03-02 4.893283e-06 ... ``` For example, I would need to calculate: * mean of variation of id `7034` between 2018-03-14 and 2017-08-14 * mean of variation of id `7034` between 2018-03-13 and 2017-08-13 * etc. I tried: ``` test.groupby(['id','date'])['variation'].rolling(window=1,freq='Y',on='date').mean() ``` but I got the error message: ``` ValueError: invalid on specified as date, must be a column (if DataFrame) or None ``` How can I use the pandas `rolling()` function in this case? --- **[EDIT 1]** [thanks to Sacul] I tested: ``` df['date'] = pd.to_datetime(df['date']) df.set_index('date').groupby('id').rolling(window=1, freq='Y').mean()['variation'] ``` But `freq='Y'` doesn't work (I got: `ValueError: Invalid frequency: Y`). Then I used `window = 365, freq = 'D'`. But there is another issue: because there are never 365 consecutive dates for each combined `id-date`, the result is always empty. Even if there are missing dates, I would like to ignore them and consider all dates between the current date and the (current date - 365) to compute the rolling mean. For instance, imagine I have: ``` index id date variation 2313 7034 2018-03-14 4.139148e-06 2314 7034 2018-03-13 4.953194e-07 2315 7034 2017-03-13 2.854749e-06 ``` Then, * for 7034 2018-03-14: I would like to compute MEAN(4.139148e-06, 4.953194e-07, 2.854749e-06) * for 7034 2018-03-13: I would also like to compute MEAN(4.139148e-06, 4.953194e-07, 2.854749e-06) How can I do that?
--- **[EDIT 2]** Finally I used the formula below to calculate rolling median, averages and standard deviation on 1 Year by ignoring missing values: ``` pd.rolling_median(df.set_index('date').groupby('id')['variation'],window=365, freq='D',min_periods=1) pd.rolling_mean(df.set_index('date').groupby('id')['variation'],window=365, freq='D',min_periods=1) pd.rolling_std(df.set_index('date').groupby('id')['variation'],window=365, freq='D',min_periods=1) ```<issue_comment>username_1: I believe this should work for you: ``` # First make sure that `date` is a datetime object: df['date'] = pd.to_datetime(df['date']) df.set_index('date').groupby('id').rolling(window=1, freq='A').mean()['variation'] ``` using `pd.DataFrame.rolling` with datetime works well when the `date` is the index, which is why I used `df.set_index('date')` (as can be seen in one of the [documentation's examples](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.rolling.html)) I can't really test if it works on the year's average on your example dataframe, as there is only one year and only one ID, but it should work. Arguably Better Solution: ========================= **[EDIT]** As pointed out by <NAME>, `freq` is now a deprecated argument. Here is an alternative (and probably more future-proof) way to do what you're looking for: ``` df.set_index('date').groupby('id')['variation'].resample('A').mean() ``` You can take a look at the [`resample`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.resample.html#pandas.Series.resample) documentation for more details on how this works, and [this link](http://pandas.pydata.org/pandas-docs/stable/timeseries.html#offset-aliases) regarding the frequency arguments. 
Upvotes: 2 <issue_comment>username_2: First you need to set your date as the index: ``` df['date'] = pd.to_datetime(df['date']) df.index = df['date'] ``` Then, the easiest way to define your window is to use a string with one of the [offset aliases](https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#offset-aliases): ``` df.groupby('id').rolling('365D')['variation'].mean() ``` Alternatively, for more flexibility, you can also use a timedelta: ``` from datetime import timedelta one_year = timedelta(days=365) df.groupby('id').rolling(one_year)['variation'].mean() ``` Upvotes: 0
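To make the calendar-day windowing in the answers above concrete without depending on a particular pandas version, here is a stand-alone Python sketch of the same idea. Everything in it is invented for this illustration (the name `trailing_mean`, and the choice of a closed window of `window_days` calendar days ending at each row's date) — it is not pandas behaviour, just the underlying arithmetic:

```python
from datetime import date, timedelta

def trailing_mean(rows, window_days=365):
    """rows: list of (datetime.date, float) pairs for one id.
    For each date, average every value whose date falls inside the
    closed window [date - window_days + 1, date]. Gaps between dates
    are simply ignored, mirroring min_periods=1 semantics."""
    out = {}
    for day, _ in rows:
        lo = day - timedelta(days=window_days - 1)
        vals = [v for d, v in rows if lo <= d <= day]
        out[day] = sum(vals) / len(vals)
    return out

rows = [
    (date(2018, 3, 14), 4.139148e-06),
    (date(2018, 3, 13), 4.953194e-07),
    (date(2017, 3, 13), 2.854749e-06),
]
means = trailing_mean(rows)
```

Note that with a 365-day window the 2017-03-13 row falls just outside the window for 2018-03-14; widening `window_days` to 367 reproduces the fully inclusive behaviour asked for in the edit above.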
2018/03/20
419
1,612
<issue_start>username_0: A few weeks ago, WebUSB was able to communicate with my custom HID device on both the Mac and Windows platforms. Now it seems that the latest Chrome update (65.0.3325.162) has removed this support on the Windows platform. <https://chromium-review.googlesource.com/c/chromium/src/+/951635> Can someone please confirm this? What would be the solution for a custom HID device to work with WebUSB then?<issue_comment>username_1: That is correct. On all platforms, a device advertising the HID protocol will be claimed by the system HID driver and should not be claimed by a site using the WebUSB API. If you want to connect to an HID interface, you will need to modify the device so that the interface is no longer marked as HID. See [this thread](https://groups.google.com/a/chromium.org/forum/#!topic/blink-dev/LZXocaeCwDw) for more information on this policy. Upvotes: 3 [selected_answer]<issue_comment>username_2: As a "workaround" on Windows, you can tell the OS to use the WinUSB driver instead of HIDUSB, as suggested in <http://gerritniezen.com/2017/10/connecting-to-usb-devices-with-your-browser/> I used the Zadig tool to force the use of the WinUSB driver and it works for me. I can communicate with my HID device from Chrome and WebUSB. On my laptop, this HID device will not be accessed by the HIDUSB driver anymore as long as the driver switch has not been reverted (so some applications cannot communicate with it anymore). But on another laptop, it still works as a standard USB HID device. I am still looking for a way/workaround to allow Chrome to claim the HID device directly... Upvotes: 0
2018/03/20
351
1,282
<issue_start>username_0: Does anyone know a reason why the body of the map function is not being called? Other than the API not working, which I have checked and it is working. ``` this.httpClient.get(this.url) .map( (data) => { console.log('here'); console.log(data); return data; } ) ```<issue_comment>username_1: An `Observable` is cold, meaning that you have to subscribe to it for it to be called and produce a value. ``` this.httpClient.get(this.url) .map( (data) => { console.log('here'); console.log(data); return data; } ).subscribe(data => console.log('GOT DATA', data)); ``` Just add a subscription and the pipeline will be called. Upvotes: 2 <issue_comment>username_2: You need to mention the type of `data`, i.e. `Response`. Apart from this, the `map` operator is used to parse your response into the desired format, and it returns an observable which must be subscribed to: ``` this.httpClient.get(this.url) .map(response => response.json()) .subscribe(data=>console.log(data)); ``` I suppose you are using Angular >=4.2.3, which means you don't require the `map` operator, because the new `HttpClient` by default formats the response as JSON, so we no longer need to parse it using `response.json()`. Upvotes: 0
2018/03/20
486
1,647
<issue_start>username_0: I am trying to collect prices for films on Vudu. However, when I try to pull data from the relevant div container, it returns as empty. ``` from requests import get from bs4 import BeautifulSoup url = "https://www.vudu.com/content/movies/details/title/835625" response = get(url) html_soup = BeautifulSoup(response.text, 'html.parser') type(html_soup) price_container = html_soup.find_all('div', class_ = 'row nr-p-0 nr-mb-10') ``` Result: ``` In [43]: price_container Out[43]: [] ``` As you can see here, the price information is contained in the div class I specified: [![enter image description here](https://i.stack.imgur.com/PasBB.jpg)](https://i.stack.imgur.com/PasBB.jpg)
2018/03/20
341
1,234
<issue_start>username_0: I need to get the component that I clicked and see its target property. I try to get it, but the evt param is undefined. ``` getcomponent(evt){ console.log(evt.target) //... } //... render() { return (this.getcomponent()}>); } ```
2018/03/20
870
3,595
<issue_start>username_0: I have a repository with a LaTeX project that includes the .pdf compiled and generated from the .tex file. I want the history and the source files to be private but the .pdf should be public and with a fixed URL. GitHub itself provides a fixed URL for the single file but in order to make it publicly available I need to set the repository public and this exposes also the history and all other files to the public. Do you think there is a way where I can have GitHub (or BitBucket, or ...) to push the single .pdf file somewhere else so that it has a fixed unique and public URL? I thought I could somehow push it to AWS's S3 or have a Lambda receiving a HTTP call and going to fetch the single file but there could be a far easier way I don't know.<issue_comment>username_1: Continuing off of Max's answer, one solution could be to maintain separate private and public repos. I like this option because you would not have to introduce another service. Your workflow might look something like this: 1. Compile your PDF locally in your private repo 2. Commit the changes to your private repo 3. Have a post-commit git hook to copy the compiled PDF to your local public repo 4. Push from your local public repo to Github For the post-commit hook, create a file called `post-commit` (no extension) in the git `.git/hooks` directory. Then simply put a bash command inside that file that copies the PDF from the private repo to the public repo, e.g: `cp ~/private-repo/document.pdf ~/public-repo/document.pdf`. Upvotes: 2 <issue_comment>username_2: You could use GitHub pages on a private repository to achieve this. From [the documentation](https://help.github.com/articles/what-is-github-pages/): > > GitHub Pages sites are publicly available on the internet, even if their repositories are private. > > > You could commit code to the `master` branch and have a CI tool run the build steps, force remove the source files and commit the PDF all to the `gh-pages` branch. 
If this is a user page (e.g. a username.github.io repository), then the same applies, but GitHub Pages is built from the `master` branch, so you'd need to use something like a `release` branch for your build tool to listen to. I use a [similar workflow](https://github.com/username_2/cv) to build a PDF from a .tex file using Travis. The source is public, but the workflow would work for a private repository. The only problem with this is that the public version of Travis cannot work on private repositories, but [Travis Pro](https://travis-ci.com/) and other tools like [CircleCI](https://circleci.com/) can do this. Upvotes: 2 <issue_comment>username_3: Ok, here's how I worked it out (many thanks to @username_2, who pointed me in the right direction). I created a very simple index.html file in the repository with the TeX project: ``` $(document).ready(function() { var link = document.createElement('a'); link.href = "EXISTING_FILE_NAME"; link.download = "NEW_FILENAME"; link.click(); }); ``` I then set up the repository to serve the `master` branch for the GitHub page. I then used my own domain and created a subdomain on CloudFlare, which owns the name servers of my domain, pointing it to my GitHub page, and set the same subdomain on the repository as my custom domain address. What happens now is this: a user goes to the custom subdomain and gets the index.html page, which automatically downloads the most recent version of the file, as that is the one served. Yet the repository is private, and even the name of the file gets changed, so nobody sees it. Upvotes: 2 [selected_answer]
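username_1's post-commit hook above is a one-line `cp`; if a more portable hook is wanted (for example on Windows), the same copy step can be sketched in Python. The function name and the `document.pdf` filename are placeholders for this illustration, not anything prescribed by the answers:

```python
import shutil
from pathlib import Path

def publish_pdf(private_repo, public_repo, name="document.pdf"):
    """Copy the compiled PDF from the private working copy into the
    public repo's working copy, creating the target directory if needed."""
    src = Path(private_repo) / name
    dst = Path(public_repo) / name
    dst.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(src, dst)  # copy2 also preserves the modification time
    return dst
```

Saved as `.git/hooks/post-commit` with a `#!/usr/bin/env python3` shebang and the executable bit set, this runs after every commit in the private repo; committing and pushing the public repo is still a separate step.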
2018/03/20
749
2,988
<issue_start>username_0: How do I Show columns for specific fields . ``` SHOW COLUMNS FROM core_banking_mpesa WHERE FIELD= 'id' , FIELD ='LineNo' , FIELD ='Comments' ```
2018/03/20
500
2,016
<issue_start>username_0: We have a native Android app that uses WebRTC, and we need to find out what video codecs are supported by the host device. (VP8 is always supported but H.264 is subject to the device having a compatible chipset.) The idea is to create an offer and get the supported video codecs from the SDP. We can do this in a web app as follows: ``` const pc = new RTCPeerConnection(); if (pc.addTransceiver) { pc.addTransceiver('video'); pc.addTransceiver('audio'); } pc.createOffer(...); ``` Is there a way to do something similar on Android? It's important that we don't need to request camera access to create the offer.<issue_comment>username_1: I think this is what you are looking for: ``` private static void codecs() { MediaCodecInfo[] codecInfos = new MediaCodecList(MediaCodecList.ALL_CODECS).getCodecInfos(); for(MediaCodecInfo codecInfo : codecInfos) { Log.i("Codec", codecInfo.getName()); for(String supportedType : codecInfo.getSupportedTypes()){ Log.i("Codec", supportedType); } } } ``` You can check example on <https://developer.android.com/reference/android/media/MediaCodecInfo.html> Upvotes: 0 <issue_comment>username_2: Create a VideoEncoderFactory object and call getSupportedCodecs(). This will return a list of codecs that can be used. Be sure to create the PeerConnectionFactory first. ``` PeerConnectionFactory.InitializationOptions initializationOptions = PeerConnectionFactory.InitializationOptions.builder(this) .setEnableVideoHwAcceleration(true) .createInitializationOptions(); PeerConnectionFactory.initialize(initializationOptions); VideoEncoderFactory videoEncoderFactory = new DefaultVideoEncoderFactory(eglBase.getEglBaseContext() , true, true); for (int i = 0; i < videoEncoderFactory.getSupportedCodecs().length; i++) { Log.d("Codecs", "Supported codecs: " + videoEncoderFactory.getSupportedCodecs()[i].name); } ``` Upvotes: 2
2018/03/20
1,237
3,326
<issue_start>username_0: I have a dataframe `df` defined like so: ``` A B C D E F 0 a z l 1 qqq True 1 a z l 2 qqq True 2 a z l 3 qqq False 3 a z r 1 www True 4 a z r 2 www False 5 a z r 2 www False 6 s x 7 2 eee True 7 s x 7 3 eee False 8 s x 7 4 eee True 9 s x 5 1 eee True 10 d c l 1 rrr True 11 d c l 2 rrr False 12 d c r 1 fff False 13 d c r 2 fff True 14 d c r 3 fff True ``` My goal is to create a table based on the unique values of columns `A`, `B` and `C` so that I am able to count the number of elements of column `D` and the number of unique elements in column `E`. The output looks like this: ``` D E A B a z 6 2 d c 5 2 s x 4 2 ``` Where, for example, the 6 is how many elements are present in column `A` having value `a`, and 2 indicates the number of unique elements in column `E` (`qqq`, `www`). I was able to achieve this goal by using the following lines of code: ``` # Define dataframe df = pd.DataFrame({'A':['a','a','a','a','a','a','s','s','s','s','d','d','d','d','d'], 'B': ['z','z','z','z','z','z','x','x','x','x','c','c','c','c','c'], 'C': ['l','l','l','r','r','r','7','7','7','5','l','l','r','r','r'], 'D': ['1','2','3','1','2','2','2','3','4','1','1','2','1','2','3'], 'E': ['qqq','qqq','qqq','www','www','www','eee','eee','eee','eee','rrr','rrr','fff','fff','fff'], 'F': [True,True,False,True,False,False,True,False,True,True,True,False,False,True,True]}) # My code so far a = df.pivot_table(index=['A','B','C'], aggfunc={'E':'nunique', 'D':'count'}).sort_values(by='E') a = a.pivot_table(index=['A','B'], aggfunc='sum').sort_values(by='E') ``` **The Problem**: Now I would also like to count the number of `True` or `False` values present in the dataframe, with the same criteria presented before, so that the result looks like this: ``` D E True False A B a z 6 2 3 3 d c 5 2 3 2 s x 4 2 3 1 ``` As you can see, the number of `True` values where `A`=`a` is 3, and the number of `False` values is 3 as well.
What is a smart and elegant way to achieve my final goal?<issue_comment>username_1: You just need two steps ``` pd.concat([df.groupby(['A','B','C']).agg({'E': 'nunique', 'D':'size'}).sum(level=[0,1]) ,df.groupby(['A','B']).F.value_counts().unstack()],1) Out[702]: E D False True A B a z 2 6 3 3 d c 2 5 2 3 s x 2 4 1 3 ``` Using `value_counts` ``` df.groupby(['A','B']).F.value_counts().unstack() ``` Upvotes: 2 <issue_comment>username_2: Using your code, you could extend like this: ``` # My code so far a = df.pivot_table(index=['A','B','C'], aggfunc={'E':'nunique', 'D':'count','F':sum}).sort_values(by='E').rename(columns={'F':'F_True'}) a = a.pivot_table(index=['A','B'], aggfunc='sum').sort_values(by='E').eval('F_False = D - F_True') ``` OUtput: ``` D E F_True F_False A B a z 6 2 3.0 3.0 d c 5 2 3.0 2.0 s x 4 2 3.0 1.0 ``` Upvotes: 4 [selected_answer]
2018/03/20
557
1,887
<issue_start>username_0: I want to style a table like [this](https://gyazo.com/15cbc922ca5dd560584e2212d8adab3b), I mean I want to style a column specifying width based on the data, color based on the data as well. ``` Price of buy orders(BTC) | Amount of buy orders(<%=@currency\_id%>) | Date of buy orders | <%if @buy\_orders.present?%> <%@buy\_orders.each do |buy|%> | <%=buy.price%> | <%=buy.amount%> | <%=buy.created\_at%> | <%end%> <%end%> ``` I tried the code below but it didn't work.<issue_comment>username_1: not sure what you exactly wanna do and what language are you using with HTML. Anyway the %=buy.amount% content should be inside the tag which is styled, something like this: ``` <%=buy.amount%> ... | ``` Upvotes: 0 <issue_comment>username_2: You cannot color a percentage of an element, so you need to create another element to overlay and size that element. I'm not a CSS guru, but this will do it. * `z-index: -1;` will place the element behind the default elements * `position: absolute;` will allow you to position the element in relation to its parent (container) * `position: relative;` creates the parent element that the child need to access for its absolute positioning I've used a simple condition to decide which color class to use. The second one you will have to calculate yourself from whatever values you are displaying. (Don't miss the '`%`' at the end of the `width:` in the `style`...) ``` .colorMeBlue { background-color: blue; } .colorMeGreen { background-color: green; } .parent { position: relative; } .overlay { position: absolute; top: 0; left: 0; z-index: -1; } " style="width: <%= buy.amount%>%">  <%= buy.amount%> | ``` Just alter the condition you want to check for where I have used `buy.amount > 100` and `"width: <%= buy.amount%>%"` as examples. Upvotes: 2 [selected_answer]
2018/03/20
1,714
5,178
<issue_start>username_0: I've been reading this article: <https://www.red-gate.com/simple-talk/sql/database-administration/manipulating-xml-data-in-sql-server/> I found what I need, but the only difference in my situation is that I need to use `FOR XML PATH(...)` **and not** `ROOT(...)` instead of `AUTO` mode. I have an XML file that uses columns of the table as elements. The questions that I'm trying to resolve: 1) how to produce a single XML file for each record in the select statement; 2) how to save each XML file, with a unique name based on rowId, on a shared server. **Note**: hardcoding values for each XML row is not an option, as my output contains thousands of rows. The problem here is that FOR XML and the XML data type are not allowed in a CURSOR statement. ``` CREATE TABLE #T1 ( ID INT NOT NULL, LName NVARCHAR(30) NULL, FName NVARCHAR(30) NULL, Comments NVARCHAR(MAX) NULL ); GO INSERT INTO #T1 (ID, LName, FName, Comments) VALUES (1, 'JONOTHAN', 'SMITH', 'is the best friend ever'), (2, 'ROGER', 'SHU`LTS', 'is the boss!'), (3, 'Jeremy', 'Deimer', 'is the a good drama actor'), (4, 'Alexandra', 'Norusis', 'is the smart feminist'); GO SELECT t.ID, t.LName, t.FName, t.Comments FROM #T1 t FOR XML PATH(''), ROOT ('body'); ``` **Below is the current output of the XML file format:** ``` <body><ID>1</ID><LName>JONOTHAN</LName><FName>SMITH</FName><Comments>is the best friend ever</Comments><ID>2</ID><LName>ROGER</LName><FName>SHU`LTS</FName><Comments>is the boss!</Comments><ID>3</ID><LName>Jeremy</LName><FName>Deimer</FName><Comments>is the a good drama actor</Comments><ID>4</ID><LName>Alexandra</LName><FName>Norusis</FName><Comments>is the smart feminist</Comments></body> ``` **Below is the desired output (based on a record with ID = 1):** ``` <body><ID>1</ID><LName>JONOTHAN</LName><FName>SMITH</FName><Comments>is the best friend ever</Comments></body> ```<issue_comment>username_1:
3 Jeremy Deimer is the a good drama actor 4 Alexandra Norusis is the smart feminist ``` Upvotes: 1 <issue_comment>username_2: I think it's this what you need: ``` DECLARE @tbl TABLE ( ID INT NOT NULL, LName NVARCHAR(30) NULL, FName NVARCHAR(30) NULL, Comments NVARCHAR(MAX) NULL ); INSERT INTO @tbl (ID, LName, FName, Comments) VALUES (1, 'JONOTHAN', 'SMITH', 'is the best friend ever'), (2, 'ROGER', 'SHU`LTS', 'is the boss!'), (3, 'Jeremy', 'Deimer', 'is the a good drama actor'), (4, 'Alexandra', 'Norusis', 'is the smart feminist'); ``` --use this for your `CURSOR` ``` SELECT ID ,(SELECT t1.* FOR XML PATH('body'),TYPE) AS TheRowAsXml FROM @tbl AS t1 ``` The result ``` ID TheRowAsXml 1 1JONOTHANSMITHis the best friend ever 2 2ROGERSHU`LTSis the boss! 3 3JeremyDeimeris the a good drama actor 4 4AlexandraNorusisis the smart feminist ``` UPDATE Your example using a CURSOR ---------------------------------- ``` DECLARE @tbl TABLE ( ID INT NOT NULL, LName NVARCHAR(30) NULL, FName NVARCHAR(30) NULL, Comments NVARCHAR(MAX) NULL ); INSERT INTO @tbl (ID, LName, FName, Comments) VALUES (1, 'JONOTHAN', 'SMITH', 'is the best friend ever'), (2, 'ROGER', 'SHU`LTS', 'is the boss!'), (3, 'Jeremy', 'Deimer', 'is the a good drama actor'), (4, 'Alexandra', 'Norusis', 'is the smart feminist'); DECLARE @ID INT; DECLARE @xml XML; DECLARE @BusinessCursor as CURSOR; SET @BusinessCursor = CURSOR FOR SELECT ID ,( SELECT t1.* FOR XML PATH(''),ROOT('body'),TYPE ) AS TheXml FROM @tbl AS t1 OPEN @BusinessCursor; FETCH NEXT FROM @BusinessCursor INTO @ID, @xml; WHILE @@FETCH_STATUS = 0 BEGIN --Do something with the values PRINT 'ID: ' + CAST(@ID AS VARCHAR(10)); PRINT 'XML: ' + CAST(@xml AS NVARCHAR(MAX)); --Here you can build your BCP command. 
--Use the ID or any other information to build the file's name and save the XML out (use -w -T) FETCH NEXT FROM @BusinessCursor INTO @ID, @xml; END CLOSE @BusinessCursor; DEALLOCATE @BusinessCursor; ``` Upvotes: 3 [selected_answer]<issue_comment>username_3: ``` DECLARE @tbl TABLE ( ID INT NOT NULL, LName NVARCHAR(30) NULL, FName NVARCHAR(30) NULL, Comments NVARCHAR(MAX) NULL ); INSERT INTO @tbl (ID, LName, FName, Comments) VALUES (1, 'JONOTHAN', 'SMITH', 'is the best friend ever'), (2, 'ROGER', 'SHU`LTS', 'is the boss!'), (3, 'Jeremy', 'Deimer', 'is the a good drama actor'), (4, 'Alexandra', 'Norusis', 'is the smart feminist'); DECLARE @xml XML DECLARE @BusinessCursor as CURSOR; SET @BusinessCursor = CURSOR FOR SELECT -- ID -- , ( SELECT * FROM @tbl AS t2 -- WHERE t1.ID=t2.ID FOR XML PATH(''),ROOT('body'),TYPE ) AS TheRowAsXml FROM @tbl AS t1 OPEN @BusinessCursor; FETCH NEXT FROM @BusinessCursor INTO @xml; WHILE @@FETCH_STATUS = 0 BEGIN SELECT ( SELECT * FROM @tbl AS t2 WHERE t1.ID=t2.ID FOR XML PATH(''),ROOT('body'), TYPE ) AS TheRowAsXml FROM @tbl AS t1 FETCH NEXT FROM @BusinessCursor INTO @xml; END CLOSE @BusinessCursor; DEALLOCATE @BusinessCursor; ``` Upvotes: 0
2018/03/20
1,332
4,751
<issue_start>username_0: Is there a way to revert to a certain commit without changing the remote history, basically undoing all the changes since that commit in a new commit, just like git revert? For example, I have 3 commits, A -> B -> C, and my HEAD is currently at C. I now want to create another commit D which will have the same code as A, so that I can push that commit to the remote branch without changing history.<issue_comment>username_1: ### Be careful with the word "revert" When people say "I want to revert" in Git they sometimes mean what `git revert` does, which is more of a *back out* operation, and sometimes mean what you do, which is to *restore the source base from an earlier version*. To illustrate, suppose we have a commit that has just one file, `README`, and three commits: ``` A <-B <-C <-- master (HEAD) ``` The version of `README` in revision A says "I am a README file", and is just one line long. The version of `README` in revision B says "I am a README file." as before, but has a second line added, "This file is five lines long." The version of `README` in revision C is corrected in that its second line says "This file is two lines long." Git's `git revert` can *undo* a change, so that, right now, running `git revert <hash-of-B>` will attempt to *remove the added line*. This will fail since the line doesn't match up any more (and we can run `git revert --abort` to give up). Similarly, running `git revert <hash-of-C>` will attempt to *undo the correction*. This will succeed, effectively reverting *to* revision `B`! This question, [Undo a particular commit in Git that's been pushed to remote repos](https://stackoverflow.com/q/2318777/1256452), is all about the backing-out kind of reverting. While that sometimes results in the reverting-to kind of reverting, it's not the same. What you want, according to your question, is more: "make me a new commit `D` that has the same *source code* as commit `A`". You want to *revert to* version `A`.
### Git does not have a user command to *revert to*, but it's easy This question, [How to revert Git repository to a previous commit?](https://stackoverflow.com/q/4114095/1256452), is full of answers talking about using `git reset --hard`, which does the job—but does it by lopping off history. The [accepted answer](https://stackoverflow.com/a/4114122/1256452), though, includes one of the keys, specifically this: > > > ``` > git checkout 0d1d7fc32 . > > ``` > > This command tells Git to extract, from the given commit `0d1d7fc32`, *all the files* that are in that snapshot and in the current directory (`.`). If your current directory is the top of the work-tree, that will extract the files from all directories, since `.` includes, recursively, sub-directory files. The one problem with this is that, yes, it extracts all the files, but it doesn't *remove* (from the index and work-tree) any files that you have that you don't want. To illustrate, let's go back to our three-commit repository and add a fourth commit: ``` $ echo new file > newfile $ git add newfile $ git commit -m 'add new file' ``` Now we have four commits: ``` A <-B <-C <-D <-- master (HEAD) ``` where commit `D` has the correct two-line `README`, *and* the new file `newfile`. If we do: ``` $ git checkout <hash-of-A> -- . ``` we'll overwrite the index and work-tree version of `README` with the version from commit `A`. We'll be back to the one-line `README`. But we will still have, in our index and work-tree, the file `newfile`. To fix that, instead of just checking out all files from the commit, we should start by *removing* all files that are in the index: ``` $ git rm -r -- . ``` Then it's safe to re-fill the index and work-tree from commit `A`: ``` $ git checkout <hash-of-A> -- . ``` (I try to use the `--` here automatically, in case the path name I want resembles an option or branch name or some such; it makes this work even if I just want to check out the file or directory named `-f`, for instance).
Once you have done these two steps, it's safe to `git commit` the result.

### Minor: a shortcut

Since Git actually just makes commits from the index, all you have to do is copy the desired commit into the index. The `git read-tree` command does this. You can have it update the work-tree at the same time, so:

```
$ git read-tree -u <hash-of-A>
```

suffices instead of remove-and-checkout. (You must still make a new commit as usual.)

Upvotes: 6 [selected_answer]<issue_comment>username_2: Before starting, remember to `git stash` or backup all uncommitted files because they will be lost after these commands.

1. `git reset --hard <hash-of-A>`
2. `git reset --soft <hash-of-C>`
3. `git add . && git commit -m "Commit D"`

Now it's at D, and the previous commits A, B and C have all been kept.

Upvotes: 2
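To see the remove-and-checkout recipe end-to-end, here is a self-contained sketch run against a throwaway repository (the file contents and commit messages mirror the README story above; the temp directory and identity settings are only for the demo):

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"

echo "I am a README file" > README
git add README && git commit -qm "A"
hash_a=$(git rev-parse HEAD)            # capture A's hash instead of typing it

printf '%s\n%s\n' "I am a README file." "This file is five lines long." > README
git commit -qam "B"

printf '%s\n%s\n' "I am a README file." "This file is two lines long." > README
git commit -qam "C"

# Revert *to* A: empty the index, refill it from A, then commit the result.
git rm -r -q -- .
git checkout "$hash_a" -- .
git commit -qm "D: same tree as A"

cat README          # back to the one-line README from commit A
git log --oneline   # A, B, C and D all remain in the history
```

Because `D` is an ordinary new commit, it can be pushed without any history rewrite.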
2018/03/20
1,485
4,721
<issue_start>username_0: I'm teaching myself Shiny and I am stuck on my ggplot2 graph not being able to use the reactive dateRangeInput as my x-axis. I have a few questions:

1. Is there a way to use my data frame to grab the min, max values for date range input instead of having to hardcode them in, so that when I add more tweets to the data frame I don't have to hardcode the values each time?
2. I am getting the error: `Aesthetics must be either length 1 or the same as the data (33108): x, y` when I try to use input$date as my aes(x = input$date...

```r
library(shiny)
library(tidyr)
library(ggplot2)

tweets <- read.csv(file.choose())
colnames(tweets)[1] <- "Content"
tweets <- separate(tweets, created_at, c("Date", "Time"), sep = " ")
tweets$Date <- as.Date(tweets$Date, "%m/%d/%Y")

ui <- fluidPage(
  dateRangeInput(inputId = "date", strong("Date Range"),
                 start = "2009-05-04", end = "2018-02-28",
                 min = "2009-05-04", max = "2018-02-28"),
  plotOutput("Graph")
)

server <- function(input, output) {
  output$Graph <- renderPlot({
    ggplot(tweets, aes(x = input$date, y = count)) +
      geom_bar(stat = "identity", position = "stack") +
      #scale_y_continuous(name = "Retweet Count", limits = c(0,370000), breaks=seq(0,370000,10000)) +
      theme(panel.background = element_rect(fill = "white", colour = "grey50"))
  })
}

shinyApp(ui = ui, server = server)
```<issue_comment>username_1: For the first question you can use `updateDateRangeInput`, see [here](https://shiny.rstudio.com/reference/shiny/0.14/updateDateRangeInput.html). So you would find your min and max dates in tweets outside of the `server` function, then pass them to the input.
Make sure to add `session` to your function:

```r
server <- function(input, output, session) {
  observe({
    updateDateRangeInput(session, "date", min = myMinDate, max = myMaxDate)
  })
}
```

For the second question you need to use `aes_string` to pass variables to ggplot, see [here](https://stackoverflow.com/questions/22309285/how-to-use-a-variable-to-specify-column-name-in-ggplot) or [here](https://stackoverflow.com/questions/34057280/on-aes-string-factor-and-rstudios-shiny-package).

Upvotes: 2 <issue_comment>username_2: @username_1's answer summarizes the use of `updateDateRangeInput` well; for further information you can refer to [this part](https://shiny.rstudio.com/reference/shiny/0.14/updateDateRangeInput.html) of the shiny documentation.

About your second problem: `input$date` will return a vector of length 2, with the first element being the lower and the second being the upper part of the selected range. You will most likely not use this directly as x-aesthetics, but rather subset your data with it and then plot the newly subsetted data. You can e.g. write

```r
library(dplyr) # alternatively library(tidyverse)

newtweets <- reactive({
  filter(tweets, between(date, input$date[1], input$date[2]))
})
```

then, in your ggplot, use `newtweets()` as your data.

**Update** The functions `filter` and `between()` (which is a shortcut for x is greater than ... and lesser than ...) come from the package `dplyr`, which is great for working with dataframes and part of a collection of packages that play very nicely with each other, called `tidyverse` (see [here](https://www.tidyverse.org/)). When you refer to the newly created reactive object `newtweets()`, make sure to not forget the parentheses, because it is now a function call, which enables shiny to update the dataframe should the input change.
**Update** A full working example in which I create some artificial data:

```r
library(shiny)
library(tidyverse)
library(lubridate)

# tweets <- read.csv(file.choose())
st <- ymd("2009-05-01")
en <- ymd("2018-02-28")
dates <- seq.Date(from = st, to = en, by = 1)
tweets <- tibble(date = dates, count = rnorm(length(dates), mean = 5, sd = 3))

ui <- fluidPage(
  dateRangeInput(inputId = "date", strong("Date Range"),
                 start = "2009-05-04", end = "2018-02-28",
                 min = "2009-05-04", max = "2018-02-28"),
  plotOutput("Graph")
)

server <- function(input, output) {
  newtweets <- reactive({
    filter(tweets, between(date, input$date[1], input$date[2]))
  })

  output$Graph <- renderPlot({
    ggplot(newtweets(), aes(x = date, y = count)) +
      geom_bar(stat = "identity", position = "stack") +
      #scale_y_continuous(name = "Retweet Count", limits = c(0,370000), breaks=seq(0,370000,10000)) +
      theme(panel.background = element_rect(fill = "white", colour = "grey50"))
  })
}

shinyApp(ui = ui, server = server)
```

Upvotes: 4 [selected_answer]
2018/03/20
281
1,055
<issue_start>username_0: I have a custom annotation:

```
@Target(AnnotationTarget.FUNCTION)
@Retention(AnnotationRetention.RUNTIME)
@MustBeDocumented
annotation class Listener
```

Used like this:

```
@Service
class MyService {
    @Listener
    fun onNewThing(thing: Thing) {
        ...
    }
}
```

In another service, every time something happens, I want to call each function annotated with `@Listener` and with a parameter of type `Thing`. How can I do it without looping through all the beans in the context and checking all the methods?<issue_comment>username_1: You can use org.reflections of java:

```
Set<Method> allMethods = new Reflections().getMethodsAnnotatedWith(yourAnnotation.class);

for (Method m : allMethods) {
    if (m.isAnnotationPresent(yourAnnotation.class)) {
        // YOUR LOGIC
    }
}
```

Upvotes: -1 <issue_comment>username_2: You can use the following:

```
Set<Method> methodsAnnotatedWith = new Reflections("com.example.spring.aop.api", new MethodAnnotationsScanner()).getMethodsAnnotatedWith(BusinessFunction.class);
```

Upvotes: 1
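If you'd rather avoid the extra dependency, the same scan can be sketched with plain JDK reflection, provided you already know which classes to inspect (the class, method, and annotation names below are invented for illustration and are not from the question):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

public class ListenerScan {

    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface Listener {}

    // A stand-in for a bean with one annotated listener method.
    static class MyService {
        @Listener
        public void onNewThing(String thing) {}

        public void notAListener() {}
    }

    /** Collects every @Listener method whose single parameter accepts paramType. */
    static List<Method> findListeners(Class<?> beanClass, Class<?> paramType) {
        List<Method> result = new ArrayList<>();
        for (Method m : beanClass.getDeclaredMethods()) {
            if (m.isAnnotationPresent(Listener.class)
                    && m.getParameterCount() == 1
                    && m.getParameterTypes()[0].isAssignableFrom(paramType)) {
                result.add(m);
            }
        }
        return result;
    }

    public static void main(String[] args) throws Exception {
        List<Method> listeners = findListeners(MyService.class, String.class);
        // Invoke each match on an instance, as the question describes.
        MyService service = new MyService();
        for (Method m : listeners) {
            m.invoke(service, "the thing");
        }
        System.out.println(listeners.size());           // 1
        System.out.println(listeners.get(0).getName()); // onNewThing
    }
}
```

In a Spring context you would feed this the bean classes (or instances) obtained from the `ApplicationContext`; some iteration over candidate beans is unavoidable, the Reflections library just hides it behind classpath scanning.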
2018/03/20
691
2,821
<issue_start>username_0: I have a very simple wpf custom control that defines two constructors:

```
public class SomeControl : System.Windows.Controls.Button
{
    public SomeControl()
    {
    }

    public SomeControl(ISomeService service)
    {
    }
}
```

This control is defined in a class library called *ControlLib*. The ISomeService interface is defined in another class library project called *ServiceContracts* and *ControlLib* has a reference to it. The third project in the solution (called *FrontEnd*) is a simple WPF-project and I place the custom control on the MainWindow like this:

```
<ctrl:SomeControl />
```

Until now, everything works fine and as intended. The project structure looks roughly like this:

[![Diagram that shows the project structure](https://i.stack.imgur.com/HSuPu.png)](https://i.stack.imgur.com/HSuPu.png)

The problem occurs when I give the custom control a name. When I set the **Name** attribute like this (`<ctrl:SomeControl Name="someControl" />`), the project no longer compiles. I get the following error:

> Unknown build error, 'Cannot resolve dependency to assembly 'ServiceContracts, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null' because it has not been preloaded. When using the ReflectionOnly APIs, dependent assemblies must be pre-loaded or loaded on demand through the ReflectionOnlyAssemblyResolve event. Line 8 Position 31.' FrontEnd C:\01_Data\Tmp\SomeSolution\FrontEnd\MainWindow.xaml 8

**My question is:** Why does it break when I add the *Name* attribute, and why does it work in the first place? I know that setting the Name attribute will add a field to the designer-generated \*.g.i.cs file to access the control from code behind, but compilation also breaks when I do the same in a template in some resource dictionary without any designer-generated files.
The following things solved the problem, but I'm not exactly sure why:

* Adding a reference in *FrontEnd* to *ServiceContracts*
* Making the parametrized constructor internal<issue_comment>username_1: I think what is happening with giving it a name is that you get a local member variable of type SomeControl in FrontEnd. This pulls in the dependency. Before that, you just have baml in a resource, and when the baml is deserialized at runtime, the SomeControl type is already loaded in the AppDomain and can be dynamically instantiated using reflection.

Upvotes: 0 <issue_comment>username_2: This is caused by the XAML compiler. Please refer to the following question for more information.

[Cannot resolve dependency to assembly 'PostSharp' because it has not been preloaded](https://stackoverflow.com/questions/19823460/cannot-resolve-dependency-to-assembly-postsharp-because-it-has-not-been-preloa)

The solution is to add a reference to `ServiceContracts.dll` from the WPF application project.

Upvotes: 2 [selected_answer]
2018/03/20
2,818
11,460
<issue_start>username_0: I have two ComboBoxes:

```
final ComboBox<MainCategory> comboBoxMainCategory = new ComboBox<>();
final ComboBox comboBoxSubCategory = new ComboBox<>();
```

Depending on the value chosen in comboBoxMainCategory, the comboBoxSubCategory should be populated with the corresponding enum.

```
public enum MainCategory { // extra enum class
    EUROPE("Europe"),
    USA("USA");
}

public enum SubCategoryEurope { // extra enum class
    GERMANY("Germany"),
    FRANCE("France");
}

public enum SubCategoryUSA {
    COLORADO("Colorado"),
    CALIFORNIA("California");
}
```

If "Europe" is chosen for comboBoxMainCategory, comboBoxSubCategory should be populated with SubCategoryEurope. If "USA", with SubCategoryUSA. How do you achieve this? Here's my code:

```
final ComboBox<MainCategory> comboBoxMainCategory = new ComboBox<>();
final ComboBox comboBoxSubCategory = new ComboBox<>();

comboBoxMainCategory.valueProperty().addListener((obs, oldValue, newValue) -> {
    if (newValue == null) { // newValue: Europe || USA
        comboBoxSubCategory.getItems().clear();
        comboBoxSubCategory.setDisable(true);
    } else if (newValue.equals(MainCategory.EUROPE)) {
        comboBoxSubCategory.setItems(FXCollections.observableArrayList(SubCategoryEurope.values()));
        comboBoxSubCategory.setDisable(false);
    } else {
        comboBoxSubCategory.setItems(FXCollections.observableArrayList(SubCategoryUSA.values()));
        comboBoxSubCategory.setDisable(false);
    }
});
```

Problem is, because comboBoxSubCategory is "SubCategory", there is a type error if it is populated with 'SubCategoryEurope' or 'SubCategoryUSA'. What is the best way to solve this? Sorry if it's a silly question, I'm new to JavaFX. Thanks a lot!<issue_comment>username_1: Create a generic placeholder interface

```
public interface EnumPlaceHolder<E extends Enum<E>> {
    public abstract String getDisplayValue();
    public abstract E getEnum();
}
```

Create an implementation for all your enums.
For example

```
public class EuropePlaceholder implements EnumPlaceHolder<Europe> {

    private final Europe value;

    public EuropePlaceholder(Europe pValue) {
        value = pValue;
    }

    @Override
    public String getDisplayValue() {
        // here you create a user-friendly version of your enum for display
        return value.toString();
    }

    @Override
    public Europe getEnum() {
        return value;
    }
}
```

Then change the type of your `ComboBox` to `ComboBox<EnumPlaceHolder<?>>` and you can add any of your implemented `EnumPlaceHolder`s to it. When retrieving the selected item you can check which one is contained via an instance check:

```
EnumPlaceHolder<?> selectedItem = ...;
if (selectedItem instanceof EuropePlaceholder) {
    Europe selectedEuropeEnum = (Europe) selectedItem.getEnum();
} else if (....) {
    // check with else if for your other enums
}
```

And to display your enum in your combobox you call the `getDisplayValue()` of the EnumPlaceHolder and show the returned `String` in your cell :)

**EDIT** Though in general I have to agree with username_2's answer. You shouldn't use enums for a construct like this. Rather use a Map<> or a List<> with appropriate content and structure.

Upvotes: 0 <issue_comment>username_2: I wouldn't use enums at all, since this doesn't allow for data manipulation without recompiling.
If you insist on using enums, though, you need to use `Object` or an interface implemented by both subcategory enum types as the parameter type for `comboBoxSubCategory`:

```
comboBoxMainCategory.valueProperty().addListener((obs, oldValue, newValue) -> {
    if (newValue == null) { // newValue: Europe || USA
        comboBoxSubCategory.getItems().clear();
        comboBoxSubCategory.setDisable(true);
    } else {
        comboBoxSubCategory.setDisable(false);
        List<? extends Object> list;
        switch (newValue) {
            case EUROPE:
                list = Arrays.asList(SubCategoryEurope.values());
                break;
            default:
                list = Arrays.asList(SubCategoryUSA.values());
                break;
        }
        comboBoxSubCategory.getItems().setAll(list);
    }
});
```

---

The better approach would be using a `Map<String, List<String>>` to store the data:

```
Map<String, List<String>> data = new HashMap<>();
data.put("EUROPE", Arrays.asList("GERMANY", "FRANCE"));
data.put("USA", Arrays.asList("COLORADO", "CALIFORNIA"));

comboBoxMainCategory.valueProperty().addListener((obs, oldValue, newValue) -> {
    List<String> list = data.get(newValue);
    if (list != null) {
        comboBoxSubCategory.setDisable(false);
        comboBoxSubCategory.getItems().setAll(list);
    } else {
        comboBoxSubCategory.getItems().clear();
        comboBoxSubCategory.setDisable(true);
    }
});
```

Upvotes: 2 [selected_answer]<issue_comment>username_3: Just for fun (and to flesh out my comments): a more versatile approach than those in the other answers is to move the focus of interest away from the concrete nature of the backing data to a more general solution of the use-case at hand. The drawback of letting the UI implement special cases is always the same - you have to do it over and over again for each special UI and each special data type. The way out is always the same, too: implement a Model that takes over the general aspect and re-use that in concrete UI/data contexts.
The general aspects here are:

* there's a list of items, with each having a list of dependent objects (same or different type)
* this (let's call it root) list of items is shown in a control
* from the root list, a single item can be chosen (aka: selected)
* another control should show the dependents of the root

The general approach is to have a Model that

* manages the list of items
* has the notion of one of those items as selected (or current or active or ..)
* manages a list of dependent items that always is the dependent list of the selected item
* its state (root items, current item, dependent items) is exposed as properties

The advantages of such a Model are

* can be formally and rigorously tested, so using code can rely on its proper functioning
* it's re-usable for any data context
* it's re-usable for many controls
* usage is pretty simple by binding

In the example below, the Model is named RelationModel which expects root items of type RelationProvider (which allows access to a list of dependents, it's one option, could just as well use f.i. a Function to build the dependents). It is used once with a plain Map of String/list and once with enums of Continents/Countries, each very simple to implement. Note that the resulting UI is blissfully unaware of the nature of the data, implemented solely against the model. Naturally, not production grade, in particular, not formally tested and the model with just the barest functionality :)

```
public class CombosWithCategories extends Application {

    public interface RelationProvider<T> {

        default ObservableList<T> getRelations() {
            return emptyObservableList();
        }
    }

    /**
     * A model that manages a list of RelationProviders and has the notion
     * of a current relationProvider with relations (it's a kind-of selectionModel).
     *
     * @param <T> the type of elements in the list of relations
     */
    public static class RelationModel<T> {

        /** all relationProviders managed by this model */
        private ListProperty<RelationProvider<T>> relationProviders;

        /**
         * The owner of the relations. Must be contained in the providers managed
         * by this model.
         */
        private ObjectProperty<RelationProvider<T>> relationProvider;

        private ListProperty<T> relations;

        public RelationModel() {
            initProperties();
        }

        /** The RelationProviders managed by the model. */
        public ListProperty<RelationProvider<T>> relationProvidersProperty() {
            return relationProviders;
        }

        /** The RelationProvider that manages the current relations. */
        public ObjectProperty<RelationProvider<T>> relationProviderProperty() {
            return relationProvider;
        }

        public RelationProvider<T> getRelationProvider() {
            return relationProviderProperty().get();
        }

        public ListProperty<T> relations() {
            return relations;
        }

        /**
         * Callback from invalidation of current relationProvider.
         * Implemented to update relations.
         */
        protected void relationProviderInvalidated() {
            RelationProvider<T> value = getRelationProvider();
            relations().set(value != null ? value.getRelations() : emptyObservableList());
        }

        /** Creates and wires all properties. */
        private void initProperties() {
            relationProviders = new SimpleListProperty<>(this, "relationProviders", observableArrayList());
            relationProvider = new SimpleObjectProperty<>(this, "relationProvider") {

                @Override
                protected void invalidated() {
                    // todo: don't accept providers that are not in the list
                    relationProviderInvalidated();
                }
            };
            relations = new SimpleListProperty<>(this, "relations");
            relationProviderInvalidated();
        }
    }

    /**
     * Implement the ui against a RelationModel. Here we create
     * the same UI with a model backed by enums or a Map, respectively.
     */
    private Parent createContent() {
        TabPane tabPane = new TabPane(
                new Tab("Enums", createRelationUI(createEnumRelationModel())),
                new Tab("Manual map", createRelationUI(createMapRelationModel()))
        );
        return new BorderPane(tabPane);
    }

    /**
     * Common factory for UI: creates and returns a Parent that
     * contains two combo's configured to use the model.
     */
    protected <T> Parent createRelationUI(RelationModel<T> model) {
        ComboBox<RelationProvider<T>> providers = new ComboBox<>();
        providers.itemsProperty().bind(model.relationProvidersProperty());
        providers.valueProperty().bindBidirectional(model.relationProviderProperty());
        ComboBox<T> relations = new ComboBox<>();
        relations.itemsProperty().bind(model.relations());
        relations.valueProperty().addListener((src, ov, nv) -> {
            LOG.info("relation changed: " + nv);
        });
        return new VBox(10, providers, relations);
    }

    // ------------- manual with maps

    /** On-the-fly creation of a RelationModel using a backing map. */
    protected RelationModel<String> createMapRelationModel() {
        RelationModel<String> model = new RelationModel<>();
        Map<String, ObservableList<String>> data = new HashMap<>();
        data.put("EUROPE", observableArrayList("GERMANY", "FRANCE"));
        data.put("AMERICA", observableArrayList("MEXICO", "USA"));
        for (String key : data.keySet()) {
            model.relationProvidersProperty().add(new RelationProvider<String>() {

                @Override
                public ObservableList<String> getRelations() {
                    return data.get(key);
                }

                @Override
                public String toString() {
                    return key;
                }
            });
        }
        return model;
    }

    // -------------------- enum

    /** RelationModel using Enums. */
    protected RelationModel<Object> createEnumRelationModel() {
        RelationModel<Object> model = new RelationModel<>();
        model.relationProvidersProperty().setAll(Continent.values());
        return model;
    }

    public enum EuropeanCountry {
        FRANCE, GERMANY;
    }

    public enum AmericanCountry {
        MEXICO, CANADA, USA;
    }

    public enum Continent implements RelationProvider<Object> {
        AMERICA(AmericanCountry.values()),
        EUROPE(EuropeanCountry.values());

        ObservableList<Object> subs;

        private Continent(Object[] subs) {
            this.subs = FXCollections.observableArrayList(subs);
        }

        @Override
        public ObservableList<Object> getRelations() {
            return FXCollections.unmodifiableObservableList(subs);
        }
    }

    @Override
    public void start(Stage stage) throws Exception {
        stage.setScene(new Scene(createContent()));
        stage.setTitle(FXUtils.version());
        stage.show();
    }

    public static void main(String[] args) {
        launch(args);
    }

    @SuppressWarnings("unused")
    private static final Logger LOG = Logger
            .getLogger(CombosWithCategories.class.getName());
}
```

Upvotes: 1
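The model idea in the last answer can also be sketched without any JavaFX dependency — a tiny, toolkit-free "selection model" holding root items, a current selection, and listeners that are told the dependents whenever the selection changes. All names here are invented for illustration; this is a minimal sketch of the pattern, not the answer's implementation:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

/** Minimal, toolkit-free sketch of the "selection model" idea. */
public class SelectionModel<K, V> {

    private final Map<K, List<V>> data = new LinkedHashMap<>();
    private final List<Consumer<List<V>>> listeners = new ArrayList<>();
    private K selected;

    /** Registers a root item together with its dependent items. */
    void put(K key, List<V> dependents) {
        data.put(key, dependents);
    }

    /** Changes the current root item; notifies listeners with its dependents. */
    void select(K key) {
        if (!data.containsKey(key)) {
            throw new IllegalArgumentException("unknown root item: " + key);
        }
        selected = key;
        List<V> dependents = data.get(key);
        for (Consumer<List<V>> listener : listeners) {
            listener.accept(dependents);
        }
    }

    /** Dependents of the currently selected root item (empty if none selected). */
    List<V> dependents() {
        return selected == null ? List.of() : data.get(selected);
    }

    /** A UI control would subscribe here instead of binding to a property. */
    void onSelection(Consumer<List<V>> listener) {
        listeners.add(listener);
    }

    public static void main(String[] args) {
        SelectionModel<String, String> model = new SelectionModel<>();
        model.put("EUROPE", List.of("GERMANY", "FRANCE"));
        model.put("USA", List.of("COLORADO", "CALIFORNIA"));
        model.onSelection(d -> System.out.println("dependents now: " + d));
        model.select("EUROPE"); // prints dependents now: [GERMANY, FRANCE]
        model.select("USA");    // prints dependents now: [COLORADO, CALIFORNIA]
    }
}
```

In a JavaFX UI the listener would simply call `comboBoxSubCategory.getItems().setAll(dependents)`, which keeps the combo boxes free of any knowledge about enums, maps, or whatever else backs the data.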
2018/03/20
675
1,971
<issue_start>username_0: I'm trying to put SQL from a pass-through query into a VBA module since the query is giving me problems. I'm encountering an issue in it, however. Here's the error message I get:

[![enter image description here](https://i.stack.imgur.com/43pXq.png)](https://i.stack.imgur.com/43pXq.png)

Here's the section of code that generates the error:

```
Sub Passthrough()
Dim strSQL As String

strSQL = "select spriden_id AS 'UIN', spriden_first_name AS 'First', spriden_last_name AS 'Last', SPBPERS_SSN AS 'SSN', pebempl_ecls_code," & _
"pebempl_term_date, pebempl_last_work_date, ftvvend_term_date," & _
"Case When sfrstcr_pidm is not null Then 'A'" & _
"When sfrstcr_pidm <> ' '" & _
"Then 'A' Else Null End AS 'StudentStatus'," & _
"spbpers_citz_code AS 'Citizenship',Null AS 'Current Student/Employee (Y/N)', Null 'TIN Match (Y/N)'" & _
......

DoCmd.RunSQL strSQL

End Sub
```<issue_comment>username_1: This is a spacing issue with the evaluated string. You can see in the error message that it is trying to run the sql "`... Then 'A'When sfrstcr_pidm <> ' 'Then 'A'...`" where there are no spaces between the 'A' and the next `When` statement, or the ' ' and the next `Then` statement.

Just add a space to the end of the string where you are splitting it onto new lines.

```
Sub Passthrough()
Dim strSQL As String

strSQL = "select spriden_id AS 'UIN', spriden_first_name AS 'First', spriden_last_name AS 'Last', SPBPERS_SSN AS 'SSN', pebempl_ecls_code, " & _
"pebempl_term_date, pebempl_last_work_date, ftvvend_term_date, " & _
"Case When sfrstcr_pidm is not null Then 'A' " & _
"When sfrstcr_pidm <> ' ' " & _
"Then 'A' Else Null End AS 'StudentStatus', " & _
"spbpers_citz_code AS 'Citizenship',Null AS 'Current Student/Employee (Y/N)', Null 'TIN Match (Y/N)' " & _
......

End Sub
```

Upvotes: 3 <issue_comment>username_2: Go back to your query object, as you need a *connection* specified. *RunSQL* is for local queries only.

Upvotes: 2