Columns: qid (int64, 1 to 74.7M), question (string, 0 to 58.3k chars), date (string, 10 chars), metadata (list), response_j (string, 2 to 48.3k chars), response_k (string, 2 to 40.5k chars).
966,457
This is **code-related** as in what the compiler will allow you to do in one language, but not allow you to do in another language (e.g. optional parameters in VB don't exist in C#). Please provide a code example with your answer, if possible. Thank you!
2009/06/08
[ "https://Stackoverflow.com/questions/966457", "https://Stackoverflow.com", "https://Stackoverflow.com/users/114916/" ]
VB allows nonvirtual calls to virtual instance methods (`call` in IL), whereas C# only allows virtual calls (`callvirt` in IL). Consider the following code:

```
Class Base
    Public Overridable Sub Foo()
        Console.WriteLine("Base")
    End Sub

    Public Sub InvokeFoo()
        Me.Foo()
        MyClass.Foo()
    End Sub
End Class

Class Derived : Inherits Base
    Public Overrides Sub Foo()
        Console.WriteLine("Derived")
    End Sub
End Class

Dim d As Base = New Derived()
d.InvokeFoo()
```

The output is:

```
Derived
Base
```

That's not possible in C# (without resorting to `Reflection.Emit`).
In C# you have to assign your variable before you can use it. I think you can turn this off, but it's the default behavior. So something like this: ``` int something; if (something == 10) { ... } ``` Isn't allowed, but the VB equivalent would be.
966,457
This is **code-related** as in what the compiler will allow you to do in one language, but not allow you to do in another language (e.g. optional parameters in VB don't exist in C#). Please provide a code example with your answer, if possible. Thank you!
2009/06/08
[ "https://Stackoverflow.com/questions/966457", "https://Stackoverflow.com", "https://Stackoverflow.com/users/114916/" ]
VB.NET has support for CIL exception filters, C# doesn't:

```
Try
    ...
Catch ex As SomeException When ex.SomeProperty = 1
    ...
End Try
```
Off the top of my head (pre 4.0), VB language "features" not supported in C#:

* Optional parameters
* Late binding
* Case insensitivity

I'm sure there's more. Your question might get better answers if you ask for specific examples of where each language excels. VB is currently better than C# when interacting with COM. This is because COM is much less of a headache when optional parameters are available, and when you don't have to bind to the (often unknown) type at compile time. C#, on the other hand, is preferred by many when writing complex logic because of its type safety (in that you can't bypass static typing) and its conciseness. In the end, the languages are mostly equivalent, since they only differ on the fringes. Functionally, they are equally capable. **EDIT** To be clear, I'm not implying that VB doesn't allow static typing... simply that *C# doesn't* [yet] *allow you to bypass static typing*. This makes C# a more attractive candidate for certain types of architectures. In the 4.0 C# language spec, you can bypass static typing, but you do it by defining a block of dynamic code, not by declaring the entire file "not strict," which makes it more deliberate and targeted.
5,253,732
I want to put some restrictions on the images uploaded by the users, so the script that processes them never runs out of memory. The images that take more memory are the ones with higher resolution; they don't need to have a big size in bytes. For example, a 46 kB image with 4000x2500 resolution and some transparency (PNG) took around 90 MB to resize. Is there a way to precalculate the memory needed accurately? Any ideas?
2011/03/09
[ "https://Stackoverflow.com/questions/5253732", "https://Stackoverflow.com", "https://Stackoverflow.com/users/459537/" ]
Generally, the rule of thumb is width x height x 4 (RGBA) for both the source and destination images, and don't forget about the amount of memory the script itself has already consumed.
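As a rough illustration of that rule of thumb (a back-of-the-envelope sketch only, not an exact model of the image library's allocator; the 800x500 target size is an assumption for the example):

```
# Rough memory estimate for resizing: width * height * 4 bytes (RGBA) for the
# source plus the same for the destination. The real overhead of a resize will
# be somewhat higher than this lower bound.

def estimate_resize_memory(src_w, src_h, dst_w, dst_h, bytes_per_pixel=4):
    """Return an approximate number of bytes needed to hold both bitmaps."""
    return (src_w * src_h + dst_w * dst_h) * bytes_per_pixel

# The 4000x2500 PNG from the question, resized to a hypothetical 800x500:
needed = estimate_resize_memory(4000, 2500, 800, 500)
print(f"~{needed / 1024 / 1024:.1f} MB")  # ~39.7 MB for the pixel buffers alone
```

The observed ~90 MB in the question also includes the script's own footprint and any intermediate copies, which is why this estimate should be treated as a lower bound.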
If you're using the GD library, once the file has been uploaded you can use [getimagesize()](http://www.php.net/manual/en/function.getimagesize.php). You can then check the height and width to determine whether you want to process it or return an error; it also returns the number of bits used per color if you want to take that into account as well. So your validation can be on both file size and dimensions.
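The same pre-flight check is easy to sketch outside PHP as well; here is a minimal Python version using Pillow, purely to illustrate the idea (the limits, file name, and the Pillow dependency are assumptions, not something from the original answer):

```
# Minimal sketch: reject an upload whose dimensions would make resizing too
# expensive, before decoding the full pixel data. The limits are made up.
from PIL import Image

MAX_PIXELS = 3000 * 3000       # assumed cap on width * height
BYTES_PER_PIXEL = 4            # RGBA rule of thumb from the answer above

def validate_upload(path, memory_budget_bytes=64 * 1024 * 1024):
    with Image.open(path) as img:   # reads only the header, not the pixels
        w, h = img.size
    if w * h > MAX_PIXELS:
        return False, f"{w}x{h} exceeds the allowed pixel count"
    if w * h * BYTES_PER_PIXEL * 2 > memory_budget_bytes:  # source + destination
        return False, "estimated resize memory exceeds the budget"
    return True, "ok"

print(validate_upload("upload.png"))  # hypothetical file name
```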
49,320,599
Size of data to get: approx. 20,000 documents. Issue: I am searching Elasticsearch-indexed data using the command below in Python, but not getting any results back.

```
from pyelasticsearch import ElasticSearch

es_repo = ElasticSearch(settings.ES_INDEX_URL)
search_results = es_repo.search(
    query, index=advertiser_name, es_from=_from, size=_size)
```

**If I give a size less than or equal to 10,000 it works fine, but not with 20,000.** Please help me find an optimal solution to this. PS: On digging deeper into ES I found this error message: Result window is too large, from + size must be less than or equal to: [10000] but was [19999]. See the scrolling API for a more efficient way to request large data sets.
2018/03/16
[ "https://Stackoverflow.com/questions/49320599", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2286762/" ]
For real-time use the best solution is to use the [search after query](https://www.elastic.co/guide/en/elasticsearch/reference/master/search-request-search-after.html). You only need a date field and another field that uniquely identifies a doc - an `_id` field or an `_uid` field is enough. Try something like this; in my example I would like to extract all the documents that belong to a single user - in my example the user field has a `keyword` datatype:

```
from elasticsearch import Elasticsearch

es = Elasticsearch()
es_index = "your_index_name"
documento = "your_doc_type"
user = "Francesco Totti"

body2 = {
    "query": {
        "term": {"user": user}
    }
}
res = es.count(index=es_index, doc_type=documento, body=body2)
size = res['count']

body = {
    "size": 10,
    "query": {
        "term": {"user": user}
    },
    "sort": [
        {"date": "asc"},
        {"_uid": "desc"}
    ]
}
result = es.search(index=es_index, doc_type=documento, body=body)
bookmark = [result['hits']['hits'][-1]['sort'][0],
            str(result['hits']['hits'][-1]['sort'][1])]

body1 = {
    "size": 10,
    "query": {
        "term": {"user": user}
    },
    "search_after": bookmark,
    "sort": [
        {"date": "asc"},
        {"_uid": "desc"}
    ]
}

while len(result['hits']['hits']) < size:
    res = es.search(index=es_index, doc_type=documento, body=body1)
    for el in res['hits']['hits']:
        result['hits']['hits'].append(el)
    bookmark = [res['hits']['hits'][-1]['sort'][0],
                str(result['hits']['hits'][-1]['sort'][1])]
    body1 = {
        "size": 10,
        "query": {
            "term": {"user": user}
        },
        "search_after": bookmark,
        "sort": [
            {"date": "asc"},
            {"_uid": "desc"}
        ]
    }
```

Then you will find all the docs appended to the `result` var. If you would like to use a `scroll query` - docs [here](http://elasticsearch-py.readthedocs.io/en/master/helpers.html#scan):

```
from elasticsearch import Elasticsearch, helpers

es = Elasticsearch()
es_index = "your_index_name"
documento = "your_doc_type"
user = "Francesco Totti"

body = {
    "query": {
        "term": {"user": user}
    }
}

res = helpers.scan(
    client=es,
    scroll='2m',
    query=body,
    index=es_index)

for i in res:
    print(i)
```
That is probably the [Elasticsearch](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-request-from-size.html) constraint:

```
index.max_result_window index setting, which defaults to 10,000
```
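If deep paging really is required rather than scrolling, one option is to raise that limit. Here is a minimal, hedged sketch with the official `elasticsearch` Python client (the index name and the 20,000 figure are assumptions for illustration, the exact `put_settings` call shape varies by client version, and raising the window has memory costs on the cluster):

```
# Sketch: raise index.max_result_window so from + size up to 20,000 is accepted.
# Prefer search_after or the scroll API for genuinely large exports.
from elasticsearch import Elasticsearch

es = Elasticsearch()

es.indices.put_settings(
    index="your_index_name",                      # assumed index name
    body={"index": {"max_result_window": 20000}}  # default is 10,000
)

# A subsequent search with from + size up to 20,000 will now be allowed.
res = es.search(index="your_index_name",
                body={"query": {"match_all": {}}, "size": 20000})
print(len(res["hits"]["hits"]))
```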
548,330
I'm trying to better organise a bash script as part of a re-write/re-organisation and am wondering what "best practice" is in the following situation: ``` ymd=`echo "${NOEXT}" | egrep -o -m 1 "\-20[0-9]{2}\-[0-9]{2}\-[0-9]{2}\-"` ``` or ``` ymd=`egrep -o -m 1 "\-20[0-9]{2}\-[0-9]{2}\-[0-9]{2}\-" <<< ${NOEXT}` ``` I believe the first version, due to the pipe, creates a subshell. Does the second version with the here string? I'm figuring it doesn't, but want to validate my assumptions.
2013/02/08
[ "https://superuser.com/questions/548330", "https://superuser.com", "https://superuser.com/users/46739/" ]
The latter doesn't, which you can (roughly) verify by looking (on an idle system) at the increase of the PIDs - when running the former, they will increase by 2; when running the latter, they will only increase by 1. But it's a [Bashism](http://mywiki.wooledge.org/Bashism), and maybe should be replaced with a here document:

```
ymd=`egrep -o -m 1 "\-20[0-9]{2}\-[0-9]{2}\-[0-9]{2}\-" <<EOF
${NOEXT}
EOF`
```
If you can use `bash` and don't require POSIX compatibility, you can do the regular expression match without using the external call to `egrep`.

```
[[ $NOEXT =~ -20[0-9]{2}-[0-9]{2}-[0-9]{2}- ]]
ymd=${BASH_REMATCH[0]}
```
13,455,937
TL;DR: look at the last paragraph. A developer from our partner software company needs to call our WCF (**basic http binding**) service, and he asked us to turn it into asmx for them, because he has trouble calling it from Oracle. The WCF service is being used on different platforms (.NET, Java, PHP) with no errors. His code gives him **Status code: 500 - Internal Server Error**. I assume it's about sending the wrong SOAP format or content. So I learned you should use **utl\_dbws** instead of **utl\_http** (which is what that developer used). OK, this seemed an easy task to me at first: find a working code sample on the internet and send an e-mail like "Hi fellow developer friend, you should use the utl\_dbws package, not utl\_http, and here is the sample code at this link". I'm not the only person in the world that needs to do this, right? Weirdly, I couldn't find any approved, working piece of sample code that accomplishes calling a WCF service from Oracle. Here are some of the links I found about it: <https://forums.oracle.com/forums/thread.jspa?threadID=2354357> <https://forums.oracle.com/forums/thread.jspa?threadID=1071996> <http://steveracanovic.blogspot.com/2008/10/using-utldbws-package-to-call-web.html> <https://forums.oracle.com/forums/thread.jspa?messageID=4205205&tstart=0#4205205> <http://www.oracle-base.com/articles/10g/utl_dbws-10g.php> No one writes any working code example, and no one says that this is not possible. I would appreciate it if anyone had a working code example of calling a WCF service from Oracle.
2012/11/19
[ "https://Stackoverflow.com/questions/13455937", "https://Stackoverflow.com", "https://Stackoverflow.com/users/855509/" ]
This is down to the browser and how it interprets the styles; it is somewhat out of your control. However, with particular CSS and jQuery workarounds you should be able to get around it. For instance, if you do not need the item to be positioned absolutely, then you could remove this or change it to `position:static;`. Have a look at this [question](https://stackoverflow.com/questions/4278148/jquery-cssleft-returns-auto-instead-of-actual-value-in-chrome).

> As to why Chrome and IE return different values: .css() provides a unified gateway to the browsers' computed style functions, but it doesn't unify the way the browsers actually compute the style. It's not uncommon for browsers to decide such edge cases differently.
Just remove the `position` style and you will get `auto` instead of the computed value.

```
div {
    top: auto;
    bottom: 20px;
    right: 20px;
    left: 0px;
}
```

You can test it [here](http://jsfiddle.net/UEyxD/16/).
13,455,937
TL;DR: look at the last paragraph. A developer from our partner software company needs to call our WCF (**basic http binding**) service, and he asked us to turn it into asmx for them, because he has trouble calling it from Oracle. The WCF service is being used on different platforms (.NET, Java, PHP) with no errors. His code gives him **Status code: 500 - Internal Server Error**. I assume it's about sending the wrong SOAP format or content. So I learned you should use **utl\_dbws** instead of **utl\_http** (which is what that developer used). OK, this seemed an easy task to me at first: find a working code sample on the internet and send an e-mail like "Hi fellow developer friend, you should use the utl\_dbws package, not utl\_http, and here is the sample code at this link". I'm not the only person in the world that needs to do this, right? Weirdly, I couldn't find any approved, working piece of sample code that accomplishes calling a WCF service from Oracle. Here are some of the links I found about it: <https://forums.oracle.com/forums/thread.jspa?threadID=2354357> <https://forums.oracle.com/forums/thread.jspa?threadID=1071996> <http://steveracanovic.blogspot.com/2008/10/using-utldbws-package-to-call-web.html> <https://forums.oracle.com/forums/thread.jspa?messageID=4205205&tstart=0#4205205> <http://www.oracle-base.com/articles/10g/utl_dbws-10g.php> No one writes any working code example, and no one says that this is not possible. I would appreciate it if anyone had a working code example of calling a WCF service from Oracle.
2012/11/19
[ "https://Stackoverflow.com/questions/13455937", "https://Stackoverflow.com", "https://Stackoverflow.com/users/855509/" ]
You can get the integer value of `top` when it is set to 'auto' with the code below:

```
$(function(){
    var top = $('div').offset().top;
    alert(top);
});
```

* `offset()` returns the position value even when `top` is set to auto.
Just remove the `position` style and you will get `auto` instead of the computed value.

```
div {
    top: auto;
    bottom: 20px;
    right: 20px;
    left: 0px;
}
```

You can test it [here](http://jsfiddle.net/UEyxD/16/).
13,455,937
TL;DR: look at the last paragraph. A developer from our partner software company needs to call our WCF (**basic http binding**) service, and he asked us to turn it into asmx for them, because he has trouble calling it from Oracle. The WCF service is being used on different platforms (.NET, Java, PHP) with no errors. His code gives him **Status code: 500 - Internal Server Error**. I assume it's about sending the wrong SOAP format or content. So I learned you should use **utl\_dbws** instead of **utl\_http** (which is what that developer used). OK, this seemed an easy task to me at first: find a working code sample on the internet and send an e-mail like "Hi fellow developer friend, you should use the utl\_dbws package, not utl\_http, and here is the sample code at this link". I'm not the only person in the world that needs to do this, right? Weirdly, I couldn't find any approved, working piece of sample code that accomplishes calling a WCF service from Oracle. Here are some of the links I found about it: <https://forums.oracle.com/forums/thread.jspa?threadID=2354357> <https://forums.oracle.com/forums/thread.jspa?threadID=1071996> <http://steveracanovic.blogspot.com/2008/10/using-utldbws-package-to-call-web.html> <https://forums.oracle.com/forums/thread.jspa?messageID=4205205&tstart=0#4205205> <http://www.oracle-base.com/articles/10g/utl_dbws-10g.php> No one writes any working code example, and no one says that this is not possible. I would appreciate it if anyone had a working code example of calling a WCF service from Oracle.
2012/11/19
[ "https://Stackoverflow.com/questions/13455937", "https://Stackoverflow.com", "https://Stackoverflow.com/users/855509/" ]
I also had this annoying problem. Some browsers return the computed position if the element is visible at the moment. The trick is to hide it, read the CSS and then make it visible again (if it was not already hidden). I wrote a convenient function that takes care of this and will return `auto` in Firefox. [**jsFiddle**](http://jsfiddle.net/UEyxD/19/)

```
var getCss = function($elem, prop) {
    var wasVisible = $elem.css('display') !== 'none';
    try {
        return $elem.hide().css(prop);
    } finally {
        if (wasVisible) $elem.show();
    }
};

alert( getCss($('div'), 'top') );
```

The `finally` is just to bring visibility back to the element, just before the function returns. You should use this function only for situations where you expect `auto` to be returned.
Just remove the `position` style and you will get `auto` instead of the computed value.

```
div {
    top: auto;
    bottom: 20px;
    right: 20px;
    left: 0px;
}
```

You can test it [here](http://jsfiddle.net/UEyxD/16/).
71,020,450
This regex: `\p{L}+` matches these characters "ASKJKSDJKDSJÄÖÅüé" of the example string "ASKJKSDJK\_-.;,DSJÄÖÅ!”#€%&/()=?`¨’<>üé" which is great but is the exact opposite of what I want. Which leads me to negating regexes. Goal: ===== I want to match any and all characters that are *not a letter nor a number* in multiple languages. Could a negative regex be a natural direction for this? I should mention one intended use for the regex I'd like to find is to validate passwords for the rule: * that it needs to contain at least one special character, which I *define as not being a number nor a letter*. It would seem defining ranges of special characters should be avoided if possible, because why limit the possibilities? Thus my definition. I assume there could be some problems with such a wide definition, but it is a first step. If you have some suggestions for a better solution I'm giving below or just have some thoughts on the subject, I'm sure I'm not the only one that would like to learn about it. Thanks. Note I'm using double `\\` in the Java code. Platform is Java 11.
2022/02/07
[ "https://Stackoverflow.com/questions/71020450", "https://Stackoverflow.com", "https://Stackoverflow.com/users/427009/" ]
I created a script to solve your issue. In one folder create a .py file with the following:

```
import os
import json

files = os.listdir()

for file in files:
    # Skip anything that is not a .json input file (including this script and
    # any output files from a previous run).
    if not file.endswith(".json") or file.endswith("_wo_duplicates.json"):
        continue
    with open(file, "r") as json_file:
        users = json.load(json_file)
    # Keep only one entry per client_id.
    unique_users = list({user['client_id']: user for user in users}.values())
    with open(f"{file}_wo_duplicates.json", "w") as json_with_no_repetition:
        json_with_no_repetition.write(json.dumps(unique_users, indent=4))
```

Then put all the files in the folder and run the script. I should point out that you should also try it yourself and only ask questions when you are stuck at a specific point.
```
# Suppose here we have some JSON files with names using numbers ranging from
# 1 to 500 and ending with the extension .json, such as 1.json, 2.json, etc.
# Naming is not limited to numbers; I use numbers here only for convenience.
import json
from pathlib import Path


def load_json(filename):
    with open('{0}.json'.format(filename), 'r') as file:
        return json.load(file)


def remove_duplicate(data1, data2):
    return [data for data in data2 if data not in data1]


def overwrite(filename, data):
    with open('{0}.json'.format(filename), 'w') as file:
        json.dump(data, file)


# Start creating variables from the first file.
lists = load_json('1')

# If you also want to remove duplicate data in the first file:
lists = [i for n, i in enumerate(lists) if i not in lists[n + 1:]]

# '1' is the filename.
overwrite('1', lists)

# Let's do it on all files.
json_path = "./"
for file in Path(json_path).glob('*.json'):
    # Get the filename without the extension.
    name = file.name.replace('.json', '')

    # If it is file 1 then skip it.
    if name == '1':
        continue

    # Remove duplicate data.
    data = remove_duplicate(lists, load_json(name))

    # Then overwrite the file.
    overwrite(name, data)

    # Add to lists to filter the next file.
    lists += data
```
72,774
I'm developing an application to print flight paths on a world map. To do this I need a dataset describing the latitude and longitude of various coastlines (and country borders if possible) of the world, so I can plot them. Is there anywhere on the web I can find this kind of data (preferably free/cheap)? Thanks!
2013/09/28
[ "https://gis.stackexchange.com/questions/72774", "https://gis.stackexchange.com", "https://gis.stackexchange.com/users/22430/" ]
Check NOAA's [GSHHG](http://www.ngdc.noaa.gov/mgg/shorelines/gshhs.html) (Global Self-consistent, Hierarchical, High-resolution Geography Database). The database is constantly being updated and maintained (latest update as I'm answering this question: July 1, 2013). GSHHG is:

> a high-resolution geography data set amalgamated from two data bases in the public domain: World Vector Shorelines (WVS) and CIA World Data Bank II (WDBII). The former is our basis for shorelines while the latter is the basis for lakes, although there are instances where differences in coastline representations necessitated adding WDBII islands to GSHHG. The WDBII source also provides all political borders and rivers. GSHHG data have undergone extensive processing and should be free of internal inconsistencies such as erratic points and crossing segments. The shorelines are constructed entirely from hierarchically arranged closed polygons

The data are available in hdf4/3, ESRI Shapefile and binary formats. <http://www.soest.hawaii.edu/pwessel/gshhg/index.html>
GSHHG is definitely your first choice, but depending on the scale you need, also take a look at <http://www.naturalearthdata.com/>.
72,774
I'm developing an application to print flight paths on a world map. To do this I need a dataset describing the latitude and longitude of various coastlines (and country borders if possible) of the world, so I can plot them. Is there anywhere on the web I can find this kind of data (preferably free/cheap)? Thanks!
2013/09/28
[ "https://gis.stackexchange.com/questions/72774", "https://gis.stackexchange.com", "https://gis.stackexchange.com/users/22430/" ]
Check NOAA's [GSHHG](http://www.ngdc.noaa.gov/mgg/shorelines/gshhs.html) (Global Self-consistent, Hierarchical, High-resolution Geography Database). The database is constantly being updated and maintained (latest update as I'm answering this question: July 1, 2013). GSHHG is:

> a high-resolution geography data set amalgamated from two data bases in the public domain: World Vector Shorelines (WVS) and CIA World Data Bank II (WDBII). The former is our basis for shorelines while the latter is the basis for lakes, although there are instances where differences in coastline representations necessitated adding WDBII islands to GSHHG. The WDBII source also provides all political borders and rivers. GSHHG data have undergone extensive processing and should be free of internal inconsistencies such as erratic points and crossing segments. The shorelines are constructed entirely from hierarchically arranged closed polygons

The data are available in hdf4/3, ESRI Shapefile and binary formats. <http://www.soest.hawaii.edu/pwessel/gshhg/index.html>
Check out <https://osmdata.openstreetmap.de/> for both WGS84 and Mercator projections.
72,774
I'm developing an application to print flight paths on a world map. To do this I need a dataset describing the latitude and longitude of various coastlines (and country borders if possible) of the world, so I can plot them. Is there anywhere on the web I can find this kind of data (preferably free/cheap)? Thanks!
2013/09/28
[ "https://gis.stackexchange.com/questions/72774", "https://gis.stackexchange.com", "https://gis.stackexchange.com/users/22430/" ]
Natural Earth would be the place to go. The data is open source and generalised for use at different scales. <http://naturalearthdata.com/downloads/10m-physical-vectors>
Check NOAA's [GSHHG](http://www.ngdc.noaa.gov/mgg/shorelines/gshhs.html) (Global Self-consistent, Hierarchical, High-resolution Geography Database). The database is constantly being updated and maintained (latest update as I'm answering this question: July 1, 2013). GSHHG is:

> a high-resolution geography data set amalgamated from two data bases in the public domain: World Vector Shorelines (WVS) and CIA World Data Bank II (WDBII). The former is our basis for shorelines while the latter is the basis for lakes, although there are instances where differences in coastline representations necessitated adding WDBII islands to GSHHG. The WDBII source also provides all political borders and rivers. GSHHG data have undergone extensive processing and should be free of internal inconsistencies such as erratic points and crossing segments. The shorelines are constructed entirely from hierarchically arranged closed polygons

The data are available in hdf4/3, ESRI Shapefile and binary formats. <http://www.soest.hawaii.edu/pwessel/gshhg/index.html>
72,774
I'm developing an application to print flight paths on a world map. To do this I need a dataset describing the latitude and longitude of various coastlines (and country borders if possible) of the world, so I can plot them. Is there anywhere on the web I can find this kind of data (preferably free/cheap)? Thanks!
2013/09/28
[ "https://gis.stackexchange.com/questions/72774", "https://gis.stackexchange.com", "https://gis.stackexchange.com/users/22430/" ]
Check out <https://osmdata.openstreetmap.de/> from Jochen Topf. The data has been derived from OpenStreetMap ways tagged with natural=coastline. See also the data update.
GSHHG is definitely your first choice, but depending on the scale you need, also take a look at <http://www.naturalearthdata.com/>.
72,774
I'm developing an application to print flight paths on a world map. To do this I need a dataset describing the latitude and longitude of various coastlines (and country borders if possible) of the world, so I can plot them. Is there anywhere on the web I can find this kind of data (preferably free/cheap)? Thanks!
2013/09/28
[ "https://gis.stackexchange.com/questions/72774", "https://gis.stackexchange.com", "https://gis.stackexchange.com/users/22430/" ]
Natural Earth would be the place to go. The data is open source and generalised for use at different scales. <http://naturalearthdata.com/downloads/10m-physical-vectors>
GSHHG is definitely your first choice, but depending on the scale you need, also take a look at <http://www.naturalearthdata.com/>.
72,774
I'm developing an application to print flight paths on a world map. To do this I need a dataset describing the latitude and longitude of various coastlines (and country borders if possible) of the world, so I can plot them. Is there anywhere on the web I can find this kind of data (preferably free/cheap)? Thanks!
2013/09/28
[ "https://gis.stackexchange.com/questions/72774", "https://gis.stackexchange.com", "https://gis.stackexchange.com/users/22430/" ]
Check out <https://osmdata.openstreetmap.de/> from Jochen Topf. The data has been derived from OpenStreetMap ways tagged with natural=coastline. See also the data update.
Check out <https://osmdata.openstreetmap.de/> for both WGS84 and Mercator projections.
72,774
I'm developing an application to print flight paths on a world map. To do this I need a dataset describing the latitude and longitude of various coastlines (and country borders if possible) of the world, so I can plot them. Is there anywhere on the web I can find this kind of data (preferably free/cheap)? Thanks!
2013/09/28
[ "https://gis.stackexchange.com/questions/72774", "https://gis.stackexchange.com", "https://gis.stackexchange.com/users/22430/" ]
Natural Earth would be the place to go. The data is open source and generalised for use at different scales. <http://naturalearthdata.com/downloads/10m-physical-vectors>
Check out <https://osmdata.openstreetmap.de/> from Jochen Topf. The data has been derived from OpenStreetMap ways tagged with natural=coastline. See also the data update.
72,774
I'm developing an application to print flight paths on a world map. To do this I need a dataset describing the latitude and longitude of various coastlines (and country borders if possible) of the world, so I can plot them. Is there anywhere on the web I can find this kind of data (preferably free/cheap)? Thanks!
2013/09/28
[ "https://gis.stackexchange.com/questions/72774", "https://gis.stackexchange.com", "https://gis.stackexchange.com/users/22430/" ]
Natural Earth would be the place to go. The data is open source and generalised for use at different scales. <http://naturalearthdata.com/downloads/10m-physical-vectors>
Check out <https://osmdata.openstreetmap.de/> for both WGS84 and Mercator projections.
23,830,568
I'm trying to make a program that puts the reverse of an array into another array. So I made a function to do the reverse, passing the array to be reversed by value and the array used to store the reverse by reference. But when I run the program it crashes.

```
#include <stdio.h>
#include <conio.h>

#define size 50

void revarr (int num1[], int *num2)
{
    int i;
    for (i = 0; i < size; i++)
    {
        num2[3-1-i] = num1[i];
    }
}

int main()
{
    int num[] = {1,8,1};
    int reverse[3], x;

    revarr(num, reverse);

    for (x = 0; x < 3; ++x)
        printf("%d ", reverse[x]);

    getch();
    return 0;
}
```

How can I fix this?
2014/05/23
[ "https://Stackoverflow.com/questions/23830568", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3417785/" ]
DAX has no NULL; it has only BLANK, and BLANK has very different semantics from a SQL NULL. These [two](https://msdn.microsoft.com/en-us/library/ee634820.aspx) [pages](https://msdn.microsoft.com/en-us/library/gg492146.aspx#Anchor_2) might help with understanding the semantics of a DAX BLANK. That being said, if you want to count something regardless of whether [Status] is BLANK, just use COUNTROWS(). This function, aptly named, will count the number of rows in the table rather than counting the non-BLANK rows in a specific field.
So this is a workaround and not an answer to the question "how to display nulls in PowerPivot using DAX". Instead of doing this in PowerPivot, depending on the complexity of your query it may be easier to replace the nulls in the SQL query first using ISNULL.
23,830,568
I'm trying to make a program that puts the reverse of an array into another array. So I made a function to do the reverse, passing the array to be reversed by value and the array used to store the reverse by reference. But when I run the program it crashes.

```
#include <stdio.h>
#include <conio.h>

#define size 50

void revarr (int num1[], int *num2)
{
    int i;
    for (i = 0; i < size; i++)
    {
        num2[3-1-i] = num1[i];
    }
}

int main()
{
    int num[] = {1,8,1};
    int reverse[3], x;

    revarr(num, reverse);

    for (x = 0; x < 3; ++x)
        printf("%d ", reverse[x]);

    getch();
    return 0;
}
```

How can I fix this?
2014/05/23
[ "https://Stackoverflow.com/questions/23830568", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3417785/" ]
DAX has no NULL; it has only BLANK, and BLANK has very different semantics from a SQL NULL. These [two](https://msdn.microsoft.com/en-us/library/ee634820.aspx) [pages](https://msdn.microsoft.com/en-us/library/gg492146.aspx#Anchor_2) might help with understanding the semantics of a DAX BLANK. That being said, if you want to count something regardless of whether [Status] is BLANK, just use COUNTROWS(). This function, aptly named, will count the number of rows in the table rather than counting the non-BLANK rows in a specific field.
Even if this post is older... I found the following solution helpful: instead of using e.g. AVERAGEX over a set that might contain 0 values, I use SUMX()/COUNTROWS(), where COUNTROWS counts the rows in the table to which I'd like to aggregate. Then 0 values are considered. While this works fine to calculate an average, I am still struggling to calculate a standard deviation.
23,830,568
I'm trying to make a program that puts the reverse of an array into another array. So I made a function to do the reverse, passing the array to be reversed by value and the array used to store the reverse by reference. But when I run the program it crashes.

```
#include <stdio.h>
#include <conio.h>

#define size 50

void revarr (int num1[], int *num2)
{
    int i;
    for (i = 0; i < size; i++)
    {
        num2[3-1-i] = num1[i];
    }
}

int main()
{
    int num[] = {1,8,1};
    int reverse[3], x;

    revarr(num, reverse);

    for (x = 0; x < 3; ++x)
        printf("%d ", reverse[x]);

    getch();
    return 0;
}
```

How can I fix this?
2014/05/23
[ "https://Stackoverflow.com/questions/23830568", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3417785/" ]
DAX has no NULL; it has only BLANK, and BLANK has very different semantics from a SQL NULL. These [two](https://msdn.microsoft.com/en-us/library/ee634820.aspx) [pages](https://msdn.microsoft.com/en-us/library/gg492146.aspx#Anchor_2) might help with understanding the semantics of a DAX BLANK. That being said, if you want to count something regardless of whether [Status] is BLANK, just use COUNTROWS(). This function, aptly named, will count the number of rows in the table rather than counting the non-BLANK rows in a specific field.
Just add 0 (+ 0) to the expression in the formula bar.
231,661
Could you tell me what's wrong with this phrase: "In this database, there are failed copies of the file." Is it grammatically correct to write "In this database" at the beginning of a sentence? Thank you.
2015/03/04
[ "https://english.stackexchange.com/questions/231661", "https://english.stackexchange.com", "https://english.stackexchange.com/users/112554/" ]
You can keep the original:

* In this database there are failed copies of the file.
* In this database, there are failed copies of the file.

or rephrase it to:

* The failed copies of the file are in this database.
I think it's fine. Maybe you can drop that comma if you decide to say: "There are failed copies of the file in this database." Otherwise, I believe it's grammatically correct.
138,489
Every once in a while on a computer I'm ssh'd into, I will accidentally type "cat largefile.txt" and my screen will start rushing with text for the next 10 minutes. I'm always working in a screen session, so my current solution is to just log out and then log back in, and since it can go 100X faster when I'm logged out, it'll finish in the short time it takes me to type my password in again. Is there a better way? Either involving the fact I'm in a screen session? Or a way to do this within SSH? **What doesn't work:** * detaching from the screen session (doesn't respond until file is done outputting) * trying command to move to a different window in the screen session (also doesn't respond) * typing ctrl+C to kill cat command (also doesn't respond, probably because the command is done and the buffers just have to catch up)
2010/05/06
[ "https://superuser.com/questions/138489", "https://superuser.com", "https://superuser.com/users/5003/" ]
The obvious, easy solution would be to not use `cat`. Your shell isn't a text viewer. Use `less`, which is designed for this.
If you're using Bash, try `Ctrl`-`z`, then `kill %`, then `Enter`. What shell are you using? (`Ctrl`-`c` works for me, by the way.)
138,489
Every once in a while on a computer I'm ssh'd into, I will accidentally type "cat largefile.txt" and my screen will start rushing with text for the next 10 minutes. I'm always working in a screen session, so my current solution is to just log out and then log back in, and since it can go 100X faster when I'm logged out, it'll finish in the short time it takes me to type my password in again. Is there a better way? Either involving the fact I'm in a screen session? Or a way to do this within SSH? **What doesn't work:** * detaching from the screen session (doesn't respond until file is done outputting) * trying command to move to a different window in the screen session (also doesn't respond) * typing ctrl+C to kill cat command (also doesn't respond, probably because the command is done and the buffers just have to catch up)
2010/05/06
[ "https://superuser.com/questions/138489", "https://superuser.com", "https://superuser.com/users/5003/" ]
The obvious, easy solution would be to not use `cat`. Your shell isn't a text viewer. Use `less`, which is designed for this.
If you start a new `screen` window via `^A ^C`, you will, I think, be able to "dodge" the oncoming buffer without having to log out & log back in again. It seems that the bottleneck is transmission of the buffer characters to your local terminal; switching to another window should alleviate the need for this effort. Not exactly a miracle cure, but it could save you some time.
138,489
Every once in a while on a computer I'm ssh'd into, I will accidentally type "cat largefile.txt" and my screen will start rushing with text for the next 10 minutes. I'm always working in a screen session, so my current solution is to just log out and then log back in, and since it can go 100X faster when I'm logged out, it'll finish in the short time it takes me to type my password in again. Is there a better way? Either involving the fact I'm in a screen session? Or a way to do this within SSH? **What doesn't work:** * detaching from the screen session (doesn't respond until file is done outputting) * trying command to move to a different window in the screen session (also doesn't respond) * typing ctrl+C to kill cat command (also doesn't respond, probably because the command is done and the buffers just have to catch up)
2010/05/06
[ "https://superuser.com/questions/138489", "https://superuser.com", "https://superuser.com/users/5003/" ]
If you're using Bash, try `Ctrl`-`z`, then `kill %`, then `Enter`. What shell are you using? (`Ctrl`-`c` works for me, by the way.)
If you start a new `screen` window via `^A ^C`, you will, I think, be able to "dodge" the oncoming buffer without having to log out & log back in again. It seems that the bottleneck is transmission of the buffer characters to your local terminal; switching to another window should alleviate the need for this effort. Not exactly a miracle cure, but it could save you some time.
11,381,168
My friend wants me to develop a website for him for a face-cake bakery. I am supposed to supply a form for users so that they can order cakes and submit the photo to be printed on the cake. Now I need to make a dynamic imaging system that shows a preview of how the cake would look with the image the user submitted. It would be very helpful if you could suggest some libraries to do so.
2012/07/08
[ "https://Stackoverflow.com/questions/11381168", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1509654/" ]
It sounds like you're trying to do two things:

1. Display a user-selected photo on the client side (before it goes to the server).
2. Transform that image so it appears in the correct position on a cake.

Both of these functions traditionally fall in the domain of backend processing, but thanks to the HTML5 and CSS3 draft standards, they are available in at least Chrome and Firefox. The below solutions aren't universal and you **will** have to fall back to a server based solution for non-supporting browsers, but it's worth mentioning:

**Displaying a user-selected image**

When a user selects a file from `<input type=file>`, it dispatches a `change` event. When this occurs, the DOM object has a `FileList` property called `files` allowing you to read the contents of these files. `URL.createObjectURL` lets you convert the file into a URL you can use for an image source:

```
input.addEventListener('change', function () {
    preview.src = URL.createObjectURL(this.files[0]);
});

preview.addEventListener('load', function () {
    URL.revokeObjectURL(this.src);
});
```

See <http://jsfiddle.net/bnickel/yP7Bb/> for a demo.

**Transforming to look like a cake top**

CSS3 3d transforms make it possible to transform a surface to look 3D. I've experimented and found the following very convincing. The 0.7 compresses it vertically and the -0.008 is a rotation to add perspective.

```
-webkit-transform: matrix3d(1, 0, 0, 0, 0, 0.7, 0, -0.008, 0, 0, 1, 0, 0, 0, 0, 1);
transform: matrix3d(1, 0, 0, 0, 0, 0.7, 0, -0.008, 0, 0, 1, 0, 0, 0, 0, 1);
```

You can play with values here: <http://jsfiddle.net/bnickel/v9Jat/>
You can use absolute positioning and CSS 3D transforms, border-radius circles with overflow hidden and no background, and the image as a child element for round cakes. That's about it.
63,906,657
I am new to git and would like to understand what would happen in the following scenario. I have the following branches:

1. Master Branch (currently deployed to Prod)
2. Enhancement #1 - CleanupUntrackedFiles Branch: This was spawned off of Master. I made updates to the gitignore file to not include particular extensions in my repo and I removed those unnecessary files from the repo.
3. Enhancement #2: This was ALSO created off of Master. Code updates are in support of a customer request.

At this point I would like to deliver BOTH Enhancement #1 and #2 into production. Is the best approach to merge these 2 branches together before merging with Master? If that is the case, how would Git know that I would like the files that I removed from Enhancement #1 to stay removed? Would the merge request of Enhancement 2 into Enhancement 1 bring in those old files again, since Enhancement #2 was created off of the Master branch? Thanks in advance for your help.
2020/09/15
[ "https://Stackoverflow.com/questions/63906657", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11659873/" ]
According to the [GitHub documentation](https://docs.github.com/en/developers/apps/scopes-for-oauth-apps#available-scopes), the scope for public repositories is `public_repo`, and for private repositories is `repo`. A token with one of those scopes is the most limited access possible for Git push or pull access; however, that token can access all public (respectively, private) repositories and can also be used for certain API access as well. If that's of concern to you, you can use an SSH key for your personal account, or to restrict it even further, a read-write deploy key for the repo in question.
Although it's been two years since the question was asked, there is now a solution from GitHub. GitHub recently introduced a new feature called "fine-grained personal access tokens". <https://github.blog/2022-10-18-introducing-fine-grained-personal-access-tokens-for-github> > > **Personal access tokens** (classic) are given permissions from a broad set of read and write scopes. **They have access to all of the repositories and organizations that the user could access**, and are allowed to live forever. As an example, the repo scope provides broad access to all data in private repositories the user has access to, in perpetuity. > > > > > **Fine-grained personal access tokens**, by contrast, are given permissions from a set of over 50 granular permissions that control access to GitHub’s organization, user, and repository APIs. Each permission can be granted on a ‘no access’, ‘read’ or ‘read and write’ basis. As an example, you can now create a PAT that can only read issues and do nothing else – not even read the contents of a repository. > > >
57,667,410
I have a Rails application running on a DigitalOcean server with IP xxx.xxx.xxx.xx. The deployment with Capistrano was easy and it is now running without problems. Now I'm thinking of deploying another application to the same server using Capistrano. After a lot of research I'm not finding any proper solutions for my doubts, and I can't find any good tutorials for this. What are the essential steps to take care of before deploying the second application to the server? Which Nginx port should the second application listen on? 80 is the default and the first application is already listening on that. How do I access the second application if it is deployed to the same droplet? Right now I can access the first application using the IP.
2019/08/27
[ "https://Stackoverflow.com/questions/57667410", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10476417/" ]
For each app, you need to make sure whatever server you are using is listening on a different socket. After that, you have to add another server block in the Nginx configuration like below:

```
upstream app_one {
    # Path to server1 SOCK file
}

upstream app_two {
    # Path to server2 SOCK file
}

server {
    listen 80;
    server_name IP;

    # Application root, as defined previously
    root /root/app_one/public;

    try_files $uri/index.html $uri @app;

    location @app {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://app_one;
    }

    error_page 500 502 503 504 /500.html;
    client_max_body_size 4G;
    keepalive_timeout 10;
}

server {
    listen 8080;
    server_name IP;

    # Application root, as defined previously
    root /root/app_two/public;

    try_files $uri/index.html $uri @app;

    location @app {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://app_two;
    }

    error_page 500 502 503 504 /500.html;
    client_max_body_size 4G;
    keepalive_timeout 10;
}
```
For testing purposes or limited-user-count applications you can have as many domains as you wish. You'd simply add to `/etc/hosts` (assuming you have a Linux box):

```
NGINX.IP.ADDR.ESS domain-one.com
NGINX.IP.ADDR.ESS domain-two.com
```

Then use these server names in the corresponding server blocks in your Nginx config file. In this case you can use the same port number. Other users of these applications should make the same changes on their boxes. Moreover, if such users are grouped within the same LAN, you can configure fake zones on your DNS and use that instead of `/etc/hosts`.
141,750
SQL Server, simple problem - ensure that only 1 batch is active at one time. Each batch has an assigned BatchID# - however that ID# might not be unique if a batch is re-run at some point. However, for a given ID# there should only ever be one "active" batch - that is, only one batch with BatchComplete equal to NULL. So the table I have is:

```
CREATE TABLE BatchTable (
    BatchNumber INT IDENTITY(1,1) PRIMARY KEY
    ,BatchID INT NOT NULL
    ,BatchStart DATETIME2(2) NOT NULL DEFAULT GETDATE()
    ,BatchComplete DATETIME2(2) NULL
    ,BatchLastOperation DATETIME2(2) DEFAULT GETDATE()
    ,BatchState INT NOT NULL
);
```

Then I have a constraint:

```
ALTER TABLE [dbo].[BatchTable]
ADD CONSTRAINT bt_OnlyOneBatchNumber UNIQUE (BatchID, BatchComplete)
GO
```

But last night this failed -- two inserts occurred in the same moment and both were inserted - chaos ensued. I am guessing the problem was that `("20160619", NULL) != ("20160619", NULL)` and therefore the rows were seen as unique. So the solution would be to create a non-null "IsComplete" column rather than relying upon the existence of the datetime in the BatchComplete column. So my questions are:

1. Is my guess correct?
2. Is there a better way?

Surely this kind of pattern has to be pretty common (I have done something similar before, though I used triggers rather than a constraint). The BatchState might work rather than IsComplete -- but the constraint would need to be: for a given BatchID, all *other* rows must have a BatchState greater than or equal to 4. Not sure how to make that constraint.
2016/06/20
[ "https://dba.stackexchange.com/questions/141750", "https://dba.stackexchange.com", "https://dba.stackexchange.com/users/98290/" ]
I don't have access to SQL Server so I googled the syntax for generated columns, but there are probably syntax errors in here anyhow. The idea is to use a generated column that has a unique value (BatchNumber) whenever BatchComplete is not null, and a non-unique number (BatchId) otherwise:

```
CREATE TABLE BatchTable (
    BatchNumber INT IDENTITY(1,1) PRIMARY KEY
    , BatchID INT NOT NULL
    , BatchComplete DATETIME2(2) NULL
    , Active AS (CASE WHEN BatchComplete IS NULL THEN BatchID ELSE BatchNumber END) PERSISTED
    , CONSTRAINT ONLY_ONE_ACTIVE_AT_ATIME UNIQUE ( BatchID, Active )
);
```

If BatchComplete IS NULL then UNIQUE (BatchId, BatchId) must hold, i.e. only one active batch at a time. If BatchComplete IS NOT NULL then UNIQUE (BatchId, BatchNumber) must hold, but BatchNumber itself is unique, so this will always be true. As mentioned, I can't test this, but you might be able to use the idea one way or another.
You can use an AJAX request for the check, and CodeIgniter has a built-in `is_unique[database.table_name]` validation rule.
24,015
I am looking at a matrix reconstruction algorithm that, given singular values $\sigma\_i$ and quantum states $|u\_i\rangle$ and $|v\_i\rangle$ that are efficiently prepared on a quantum computer, produces a matrix $M$ such that $M=\sum\_i \sigma\_i |u\_i \rangle\langle v\_i|$. It seems like I can do two things: I) Quantum State Tomography on both $|u\_i\rangle$ and $|v\_i\rangle$, and then classically reconstructing the outer product. This seems to take an exponential number of measurements because of the output problem. There exists <https://arxiv.org/abs/2111.11071> which says that you can measure $|u\_i\rangle$ and $|v\_i\rangle$ with a number of bases that is linear in the number of qubits. II) Since $|u\_i\rangle\langle v\_i|$ is rank-1, there might exist an efficient way to reconstruct it altogether, though I haven't seen any literature that supports an efficient algorithm for doing this. Are there any clues that support efficient reconstruction here? Please note that the vectors $|u\_i\rangle$ and $|v\_i\rangle$ have real entries only.
2022/02/09
[ "https://quantumcomputing.stackexchange.com/questions/24015", "https://quantumcomputing.stackexchange.com", "https://quantumcomputing.stackexchange.com/users/19689/" ]
This outer product is, in general, not Hermitian and so does not correspond directly to a physical observable. Taking a lesson from $2\times 2$ matrices (i.e. from polarimetry), we can measure the two Hermitian observables $$X=|a\rangle\langle b|+|b\rangle\langle a|$$ and $$Y=-i(|a\rangle\langle b|-|b\rangle\langle a|),$$ then use post-processing to say that the expectation values should satisfy $$\big\langle |a\rangle \langle b|\big\rangle=\tfrac{1}{2}\big(\big\langle X\big\rangle+i\big\langle Y\big\rangle\big).$$
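As a quick numerical sanity check of that decomposition (a small sketch that is not part of the original answer; the random states and dimension are just placeholders):

```
# Verify |a><b| = (X + iY) / 2 for X = |a><b| + |b><a| and Y = -i(|a><b| - |b><a|).
import numpy as np

rng = np.random.default_rng(0)
dim = 4

def random_state(d):
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    return v / np.linalg.norm(v)

a, b = random_state(dim), random_state(dim)
outer = np.outer(a, b.conj())          # |a><b|

X = outer + outer.conj().T             # Hermitian
Y = -1j * (outer - outer.conj().T)     # Hermitian

# Reconstruct the outer product from the two Hermitian observables.
reconstructed = (X + 1j * Y) / 2
print(np.allclose(reconstructed, outer))   # True
```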
If you know $|u\_i\rangle\langle v\_i|$ then you can deduce $|u\_i\rangle$ and $|v\_i\rangle$ as well (up to a global phase). So reconstructing the product can't be easier than reconstructing the states. But since $|u\_i\rangle$, $|v\_i\rangle$ are real it's enough to use measurements in the standard basis to reconstruct them.
44,077,177
I can't find this in the docs or by searching, maybe someone has some tips. I'm trying to check how many connections are on a presence channel on the backend. I can check fine on the front-end with Echo like so: ``` Echo.join('chat') .here((users) => { // users.length is the proper count of connections }) ``` But is there a way I can get that same number of connections, but in the backend code somewhere within Laravel?
2017/05/19
[ "https://Stackoverflow.com/questions/44077177", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1406734/" ]
If you are using Pusher, the backend can just do the following:

```
$response = $pusher->get( '/channels/presence-channel-name/users' );

if( $response[ 'status'] == 200 ) {
    // convert to associative array for easier consumption
    $users = json_decode( $response[ 'body' ], true )[ 'users' ];
}

$userCount = count($users);
```

You can read more about it in the pusher [documentation](https://pusher.com/docs/server_api_guide/interact_rest_api#presence-users). The [pusher-http-php](https://pusher.com/docs/server_api_guide/interact_rest_api#presence-users) sdk also has some documentation for this.

> A list of users present on a presence channel can be retrieved by querying the `/channels/[channel_name]/users` resource where the `channel_name` is replaced with a valid presence channel name.

This is explicitly *only* for presence channels. Additionally, you can keep track of users in channels through [webhooks](https://pusher.com/docs/webhooks#presence).

> Notify your application whenever a user subscribes to or unsubscribes from a Presence channel. For example, this allows you to synchronise channel presence state on your server as well as all your application clients.

Pusher will hit your server with information in the following form:

```
{
    "name": "member_added", // or "member_removed"
    "channel": "presence-your_channel_name",
    "user_id": "a_user_id"
}
```

This data could potentially be stored in a table in your database or alternatively in redis.
I don't think it's possible. The channels are between the client (website with JS) and the WebSocket server (your own NodeJS server or the Pusher servers). Laravel just pushes events to them but never pulls. To find a solution we have to know which driver you are using (Redis or Pusher). It may be possible to ask the Pusher server with curl how many users are on the server. For Pusher this looks interesting: <https://support.pusher.com/hc/en-us/articles/204113596-Showing-who-s-online-with-a-large-number-of-users> and <https://pusher.com/docs/rest_api#method-get-channels> For Redis you could implement some logic inside the NodeJS server to listen to the channels and fire a request to Laravel. The best driver-independent solution is to fire a request from each client to update the counter in your database:

```
Echo.join('chat')
    .here((users) => {
        // request to the Laravel API with users.length
    })
```

The disadvantage of this method is that it only updates the value when a user connects to the channel.
44,077,177
I can't find this in the docs or by searching, maybe someone has some tips. I'm trying to check how many connections are on a presence channel on the backend. I can check fine on the front-end with Echo like so: ``` Echo.join('chat') .here((users) => { // users.length is the proper count of connections }) ``` But is there a way I can get that same number of connections, but in the backend code somewhere within Laravel?
2017/05/19
[ "https://Stackoverflow.com/questions/44077177", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1406734/" ]
I don't think it's possible. The channels are between the client (website with JS) and the WebSocket server (your own NodeJS server or the Pusher servers). Laravel just pushes events to them but never pulls. To find a solution we have to know which driver you are using (Redis or Pusher). It may be possible to ask the Pusher server with curl how many users are on the server. For Pusher this looks interesting: <https://support.pusher.com/hc/en-us/articles/204113596-Showing-who-s-online-with-a-large-number-of-users> and <https://pusher.com/docs/rest_api#method-get-channels> For Redis you could implement some logic inside the NodeJS server to listen to the channels and fire a request to Laravel. The best driver-independent solution is to fire a request from each client to update the counter in your database:

```
Echo.join('chat')
    .here((users) => {
        // request to the Laravel API with users.length
    })
```

The disadvantage of this method is that it only updates the value when a user connects to the channel.
Or the response could look like this:

```
{
    "user_id": "a_user_id",
    "name": "member_added", // or "member_removed"
    "channel": "presence-your_channel_name"
}
```
44,077,177
I can't find this in the docs or by searching, maybe someone has some tips. I'm trying to check how many connections are on a presence channel on the backend. I can check fine on the front-end with Echo like so: ``` Echo.join('chat') .here((users) => { // users.length is the proper count of connections }) ``` But is there a way I can get that same number of connections, but in the backend code somewhere within Laravel?
2017/05/19
[ "https://Stackoverflow.com/questions/44077177", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1406734/" ]
If you are using Pusher, the backend can just do the following:

```
$response = $pusher->get( '/channels/presence-channel-name/users' );

if( $response[ 'status'] == 200 ) {
    // convert to associative array for easier consumption
    $users = json_decode( $response[ 'body' ], true )[ 'users' ];
}

$userCount = count($users);
```

You can read more about it in the pusher [documentation](https://pusher.com/docs/server_api_guide/interact_rest_api#presence-users). The [pusher-http-php](https://pusher.com/docs/server_api_guide/interact_rest_api#presence-users) sdk also has some documentation for this.

> A list of users present on a presence channel can be retrieved by querying the `/channels/[channel_name]/users` resource where the `channel_name` is replaced with a valid presence channel name.

This is explicitly *only* for presence channels. Additionally, you can keep track of users in channels through [webhooks](https://pusher.com/docs/webhooks#presence).

> Notify your application whenever a user subscribes to or unsubscribes from a Presence channel. For example, this allows you to synchronise channel presence state on your server as well as all your application clients.

Pusher will hit your server with information in the following form:

```
{
    "name": "member_added", // or "member_removed"
    "channel": "presence-your_channel_name",
    "user_id": "a_user_id"
}
```

This data could potentially be stored in a table in your database or alternatively in redis.
Or the response could look like this:

```
{
    "user_id": "a_user_id",
    "name": "member_added", // or "member_removed"
    "channel": "presence-your_channel_name"
}
```
20,318,289
I'm trying to use a downloaded custom language for Google's Tesseract OCR engine. It has the following files but no traineddata file. Please see my code below, which gives me an error only when I try to use the new language. Can someone help?

Files: cp27.DangAmbigs cp27.freq-dawg cp27.inttemp cp27.normproto cp27.pffmtablecp cp27.unicharset cp27.user-words cp27.word-dawg

Download link: <http://www.sendspace.com/file/gd7j4i>

Error message: Failed to initialise tesseract engine

```
Pix test = PixConverter.ToPix(image);
try
{
    using (var engine = new TesseractEngine(@"tessdata", @"cp27", EngineMode.Default))
    {
        engine.SetVariable("tessedit_char_whitelist", charset);
        using (var page = engine.Process(test))
        {
            return(page.GetText());
        }
    }
}
catch
{
}
```
2013/12/01
[ "https://Stackoverflow.com/questions/20318289", "https://Stackoverflow.com", "https://Stackoverflow.com/users/987642/" ]
I would probably bind to keyup, then check for space, and if it is space I would create a div above the textarea that has a clickable 'X'. Something like this (obviously you need some CSS to make it look nice, and you need a container div above your textarea with height auto to append your tags to instead of the body tag):

```
$("#foo").bind("keyup", function(event) {
    if (event.which == 32) {
        var thisTagText = $('#foo').val();
        $('body').append('<div class="myTag">' + thisTagText + '<div onclick="removeTag();">X</div></div>');
    }
});
```
I'm almost sure I saw such a plugin. Google "jQuery plugin" and you will find that it is already implemented for you...
20,318,289
I'm trying to use a downloaded custom language for Google's Tesseract OCR engine. It has the following files but no traineddata file. Please see my code below, which gives me an error only when I try to use the new language. Can someone help?

files: cp27.DangAmbigs cp27.freq-dawg cp27.inttemp cp27.normproto cp27.pffmtablecp cp27.unicharset cp27.user-words cp27.word-dawg

Download Link: <http://www.sendspace.com/file/gd7j4i>

error message: Failed to initialise tesseract engine

```
Pix test = PixConverter.ToPix(image);
try
{
    using (var engine = new TesseractEngine(@"tessdata", @"cp27", EngineMode.Default))
    {
        engine.SetVariable("tessedit_char_whitelist", charset);
        using (var page = engine.Process(test))
        {
            return (page.GetText());
        }
    }
}
catch { }
```
2013/12/01
[ "https://Stackoverflow.com/questions/20318289", "https://Stackoverflow.com", "https://Stackoverflow.com/users/987642/" ]
I'm almost sure I saw such a plugin. Google "jQuery plugin" and you will find that it is already implemented for you...
Did your answer get cut off? Anyway, use something like this:

HTML:

```
<div id="tags"></div><input type="text" id="tags-input"/>
```

JavaScript:

```
var tagsInput = $("#tags-input"),
    tags = $("#tags");

tagsInput.keydown(function(e){
    if (e.keyCode == 32) { // when space bar is pressed
        var tag = $("<span>", {html: this.value, class: "tag"});
        var removeTag = $("<span>", {class: "tag-delete"});
        tag.append(removeTag);
        tags.append(tag); // add span to div
        removeTag.click(function(){
            tag.remove(); // delete tag
        });
        this.value = ""; // clear value
    }
});
```

I'll leave it up to you to style it.
20,318,289
I'm trying to use a downloaded custom language for Google's Tesseract OCR engine. It has the following files but no traineddata file. Please see my code below, which gives me an error only when I try to use the new language. Can someone help?

files: cp27.DangAmbigs cp27.freq-dawg cp27.inttemp cp27.normproto cp27.pffmtablecp cp27.unicharset cp27.user-words cp27.word-dawg

Download Link: <http://www.sendspace.com/file/gd7j4i>

error message: Failed to initialise tesseract engine

```
Pix test = PixConverter.ToPix(image);
try
{
    using (var engine = new TesseractEngine(@"tessdata", @"cp27", EngineMode.Default))
    {
        engine.SetVariable("tessedit_char_whitelist", charset);
        using (var page = engine.Process(test))
        {
            return (page.GetText());
        }
    }
}
catch { }
```
2013/12/01
[ "https://Stackoverflow.com/questions/20318289", "https://Stackoverflow.com", "https://Stackoverflow.com/users/987642/" ]
I would probably bind to keyup, then check for space, and if it is space I would create a div above the textarea that has a clickable 'X'. Something like this (obviously you need some CSS to make it look nice, and you need a container div above your textarea with height auto to append your tags to instead of the body tag):

```
$("#foo").bind("keyup", function(event) {
    if (event.which == 32) {
        var thisTagText = $('#foo').val();
        $('body').append('<div class="myTag">' + thisTagText + '<div onclick="removeTag();">X</div></div>');
    }
});
```
Did your answer get cut off? Anyway, use something like this:

HTML:

```
<div id="tags"></div><input type="text" id="tags-input"/>
```

JavaScript:

```
var tagsInput = $("#tags-input"),
    tags = $("#tags");

tagsInput.keydown(function(e){
    if (e.keyCode == 32) { // when space bar is pressed
        var tag = $("<span>", {html: this.value, class: "tag"});
        var removeTag = $("<span>", {class: "tag-delete"});
        tag.append(removeTag);
        tags.append(tag); // add span to div
        removeTag.click(function(){
            tag.remove(); // delete tag
        });
        this.value = ""; // clear value
    }
});
```

I'll leave it up to you to style it.
44,630
I read the Wikipedia article on [DNA methylation](https://en.wikipedia.org/wiki/DNA_methylation) Let's say I want to extract and then stock my current DNA methylation marks somewhere so that I can use it safely 20 years in the future for a medical procedure that doesn't exist yet. What method should I use?
2016/03/26
[ "https://biology.stackexchange.com/questions/44630", "https://biology.stackexchange.com", "https://biology.stackexchange.com/users/22779/" ]
To record the current methylation state of your DNA, you can use [bisulfite sequencing](https://en.wikipedia.org/wiki/Bisulfite_sequencing). Basically, you take half of your DNA sample and treat with bisulfite, which deaminates cytosines (C->U) , so they read as T instead of C. Methylated cytosines are protected, so they still read as C. You run two sequencing reactions, one with bisulfite-treated and the other with untreated DNA, so you can tell where the true Cs are in the genome. See the below diagram: [![bisulphite sequencing image](https://i.stack.imgur.com/8UPQt.png)](https://i.stack.imgur.com/8UPQt.png) [Wikimedia commons](https://upload.wikimedia.org/wikipedia/en/c/c9/Wiki_Bisulfite_sequencing_Figure_1_small.png) Note that if you're thinking of doing this from a longevity point of view, that there are [hugely different methylation patterns in different cell types](http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3759728/); I'm not sure which cell type you'd want to pick. Also, methylation is not the only covalent modification of DNA that exists, there's also [hydroxymethylation](http://www.ks.uiuc.edu/Research/methylation/), and I'd bet there are more that we haven't discovered yet. There certainly are in bacteria and phage ([glucosyl-hydroxymethylcytosine](http://www.ncbi.nlm.nih.gov/pubmed/26081634), for one). The hard thing about saving things for the future is that we really don't understand the full picture of what's happening now epigenetically, and as such, we don't have assays to detect things we don't know about.
I believe the DNA is going to be as stable in native form as it should be after bisulfite treatment. You can use commercially available methods of storing DNA samples and safely assume it will be intact in 20 years time.
13,801,819
I need to inspect javascript execution (webview widget) in an android application while debugging, through the SDK & USB cable and/or http/websockets, from a desktop computer (e.g. chrome running on the desktop).

Webkit's sources include a DebuggerServer implementation (platform_external_webkit\Source\WebKit\android\wds\DebugServer.cpp) accessible at the cpp level, and bound if the WDS flag is enabled (at build time?):

Source\WebKit\android\jni\WebCoreFrameBridge.cpp:#if ENABLE(WDS)
Source\WebKit\android\jni\WebCoreFrameBridge.cpp: WDS::server()->addFrame(frame);

The default port for the server is 9999. The sources show that all is implemented (at the cpp level) to enable the feature, but searching the web I have not found any reference to experiences with live debugging at the javascript level on android devices automating webkit's inspector interface.

1.- Is the feature present, in binary form, on actual android devices? (Has android's distribution of webkit been built without the WDS flag enabled? :-( )

2.- Can the remote debug feature be enabled/used from javascript or from the application (at the java level), e.g. at startup of the app?

3.- In case it is possible to enable the webkit inspector/debugger feature, how can I make it possible to interact with it from a remote application? (e.g. from another javascript app using websockets, or chrome on a desktop computer).

Some paragraphs explaining the mechanics like <https://developers.google.com/chrome-developer-tools/docs/remote-debugging#remote> would be nice!

Thanks in advance for any information or references about this topic. I consider it important to enable remote debugging (on the device) at the javascript level to make modern development of HTML5 applications and a happy debugging experience possible.

cheers, Ale.
2012/12/10
[ "https://Stackoverflow.com/questions/13801819", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1717056/" ]
You can do remote debugging targeting the Android Browser with the [weinre](http://people.apache.org/~pmuellr/weinre/docs/latest/) project. Or use [Chrome remote debugging](https://developers.google.com/chrome-developer-tools/docs/remote-debugging) with the Chrome for Android browser. As far as I know, you can't target a WebView directly, but targeting the Android Browser should get you close.
It cannot be done, because the debugger backend code is not there. The source code you are referencing is a copy of the webkit source in android, but it is not compiled into the android release bits.
13,801,819
I need to inspect javascript execution (webview widget) in an android application while debugging, through the SDK & USB cable and/or http/websockets, from a desktop computer (e.g. chrome running on the desktop).

Webkit's sources include a DebuggerServer implementation (platform_external_webkit\Source\WebKit\android\wds\DebugServer.cpp) accessible at the cpp level, and bound if the WDS flag is enabled (at build time?):

Source\WebKit\android\jni\WebCoreFrameBridge.cpp:#if ENABLE(WDS)
Source\WebKit\android\jni\WebCoreFrameBridge.cpp: WDS::server()->addFrame(frame);

The default port for the server is 9999. The sources show that all is implemented (at the cpp level) to enable the feature, but searching the web I have not found any reference to experiences with live debugging at the javascript level on android devices automating webkit's inspector interface.

1.- Is the feature present, in binary form, on actual android devices? (Has android's distribution of webkit been built without the WDS flag enabled? :-( )

2.- Can the remote debug feature be enabled/used from javascript or from the application (at the java level), e.g. at startup of the app?

3.- In case it is possible to enable the webkit inspector/debugger feature, how can I make it possible to interact with it from a remote application? (e.g. from another javascript app using websockets, or chrome on a desktop computer).

Some paragraphs explaining the mechanics like <https://developers.google.com/chrome-developer-tools/docs/remote-debugging#remote> would be nice!

Thanks in advance for any information or references about this topic. I consider it important to enable remote debugging (on the device) at the javascript level to make modern development of HTML5 applications and a happy debugging experience possible.

cheers, Ale.
2012/12/10
[ "https://Stackoverflow.com/questions/13801819", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1717056/" ]
Now it is easy with Android 4.4. See <https://developers.google.com/chrome-developer-tools/docs/remote-debugging#debugging-webviews>
It cannot be done, because the debugger backend code is not there. The source code you are referencing is a copy of the webkit source in android, but it is not compiled into the android release bits.
13,801,819
I need to inspect javascript execution (webview widget) in an android application while debugging, through the SDK & USB cable and/or http/websockets, from a desktop computer (e.g. chrome running on the desktop).

Webkit's sources include a DebuggerServer implementation (platform_external_webkit\Source\WebKit\android\wds\DebugServer.cpp) accessible at the cpp level, and bound if the WDS flag is enabled (at build time?):

Source\WebKit\android\jni\WebCoreFrameBridge.cpp:#if ENABLE(WDS)
Source\WebKit\android\jni\WebCoreFrameBridge.cpp: WDS::server()->addFrame(frame);

The default port for the server is 9999. The sources show that all is implemented (at the cpp level) to enable the feature, but searching the web I have not found any reference to experiences with live debugging at the javascript level on android devices automating webkit's inspector interface.

1.- Is the feature present, in binary form, on actual android devices? (Has android's distribution of webkit been built without the WDS flag enabled? :-( )

2.- Can the remote debug feature be enabled/used from javascript or from the application (at the java level), e.g. at startup of the app?

3.- In case it is possible to enable the webkit inspector/debugger feature, how can I make it possible to interact with it from a remote application? (e.g. from another javascript app using websockets, or chrome on a desktop computer).

Some paragraphs explaining the mechanics like <https://developers.google.com/chrome-developer-tools/docs/remote-debugging#remote> would be nice!

Thanks in advance for any information or references about this topic. I consider it important to enable remote debugging (on the device) at the javascript level to make modern development of HTML5 applications and a happy debugging experience possible.

cheers, Ale.
2012/12/10
[ "https://Stackoverflow.com/questions/13801819", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1717056/" ]
> **For android >= 4.4 (kitkat)**

See [Remote debugging on Android with Chrome](https://developer.chrome.com/devtools/docs/remote-debugging)

> **For android < 4.4 (Lower versions)**

Use a very good open source tool: [weinre](http://people.apache.org/~pmuellr/weinre/docs/latest/Installing.html). See [this](https://www.youtube.com/watch?v=HEqwnpLYnI0&feature=youtu.be) video for help using it. If you are familiar with grunt then you can use [grunt-weinre](https://www.npmjs.com/package/grunt-weinre)

For a quick overview:

1. Install weinre using npm.
2. Do the [configuration](https://github.com/ChrisWren/grunt-weinre#recommended-usage) in your gruntfile.
3. Run the weinre grunt task.
4. Use this script to inject the weinre target code into your web page.
5. Open <http://localhost:8082> in your browser and you will find the devices running the above script. You can debug all these devices.

**NOTE:** if you want to debug the webview/browser on your mobile device then you need to replace localhost with the IP of the machine running weinre. And yes, all the devices should be on the same network.
It cannot be done, because the debugger backend code is not there. The source code you are referencing is a copy of the webkit source in android, but it is not compiled into the android release bits.
13,801,819
I need to inspect javascript execution (webview widget) in an android application while debugging, through the SDK & USB cable and/or http/websockets, from a desktop computer (e.g. chrome running on the desktop).

Webkit's sources include a DebuggerServer implementation (platform_external_webkit\Source\WebKit\android\wds\DebugServer.cpp) accessible at the cpp level, and bound if the WDS flag is enabled (at build time?):

Source\WebKit\android\jni\WebCoreFrameBridge.cpp:#if ENABLE(WDS)
Source\WebKit\android\jni\WebCoreFrameBridge.cpp: WDS::server()->addFrame(frame);

The default port for the server is 9999. The sources show that all is implemented (at the cpp level) to enable the feature, but searching the web I have not found any reference to experiences with live debugging at the javascript level on android devices automating webkit's inspector interface.

1.- Is the feature present, in binary form, on actual android devices? (Has android's distribution of webkit been built without the WDS flag enabled? :-( )

2.- Can the remote debug feature be enabled/used from javascript or from the application (at the java level), e.g. at startup of the app?

3.- In case it is possible to enable the webkit inspector/debugger feature, how can I make it possible to interact with it from a remote application? (e.g. from another javascript app using websockets, or chrome on a desktop computer).

Some paragraphs explaining the mechanics like <https://developers.google.com/chrome-developer-tools/docs/remote-debugging#remote> would be nice!

Thanks in advance for any information or references about this topic. I consider it important to enable remote debugging (on the device) at the javascript level to make modern development of HTML5 applications and a happy debugging experience possible.

cheers, Ale.
2012/12/10
[ "https://Stackoverflow.com/questions/13801819", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1717056/" ]
Now it is easy with Android 4.4. See <https://developers.google.com/chrome-developer-tools/docs/remote-debugging#debugging-webviews>
You can do remote debugging targeting the Android Browser with the [weinre](http://people.apache.org/~pmuellr/weinre/docs/latest/) project. Or use [Chrome remote debugging](https://developers.google.com/chrome-developer-tools/docs/remote-debugging) with the Chrome for Android browser. As far as I know, you can't target a WebView directly, but targeting the Android Browser should get you close.
13,801,819
I need to inspect javascript execution (webview widget) in an android application while debugging, through the SDK & USB cable and/or http/websockets, from a desktop computer (e.g. chrome running on the desktop).

Webkit's sources include a DebuggerServer implementation (platform_external_webkit\Source\WebKit\android\wds\DebugServer.cpp) accessible at the cpp level, and bound if the WDS flag is enabled (at build time?):

Source\WebKit\android\jni\WebCoreFrameBridge.cpp:#if ENABLE(WDS)
Source\WebKit\android\jni\WebCoreFrameBridge.cpp: WDS::server()->addFrame(frame);

The default port for the server is 9999. The sources show that all is implemented (at the cpp level) to enable the feature, but searching the web I have not found any reference to experiences with live debugging at the javascript level on android devices automating webkit's inspector interface.

1.- Is the feature present, in binary form, on actual android devices? (Has android's distribution of webkit been built without the WDS flag enabled? :-( )

2.- Can the remote debug feature be enabled/used from javascript or from the application (at the java level), e.g. at startup of the app?

3.- In case it is possible to enable the webkit inspector/debugger feature, how can I make it possible to interact with it from a remote application? (e.g. from another javascript app using websockets, or chrome on a desktop computer).

Some paragraphs explaining the mechanics like <https://developers.google.com/chrome-developer-tools/docs/remote-debugging#remote> would be nice!

Thanks in advance for any information or references about this topic. I consider it important to enable remote debugging (on the device) at the javascript level to make modern development of HTML5 applications and a happy debugging experience possible.

cheers, Ale.
2012/12/10
[ "https://Stackoverflow.com/questions/13801819", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1717056/" ]
> **For android >= 4.4 (kitkat)**

See [Remote debugging on Android with Chrome](https://developer.chrome.com/devtools/docs/remote-debugging)

> **For android < 4.4 (Lower versions)**

Use a very good open source tool: [weinre](http://people.apache.org/~pmuellr/weinre/docs/latest/Installing.html). See [this](https://www.youtube.com/watch?v=HEqwnpLYnI0&feature=youtu.be) video for help using it. If you are familiar with grunt then you can use [grunt-weinre](https://www.npmjs.com/package/grunt-weinre)

For a quick overview:

1. Install weinre using npm.
2. Do the [configuration](https://github.com/ChrisWren/grunt-weinre#recommended-usage) in your gruntfile.
3. Run the weinre grunt task.
4. Use this script to inject the weinre target code into your web page.
5. Open <http://localhost:8082> in your browser and you will find the devices running the above script. You can debug all these devices.

**NOTE:** if you want to debug the webview/browser on your mobile device then you need to replace localhost with the IP of the machine running weinre. And yes, all the devices should be on the same network.
You can do remote debugging targeting the Android Browser with the [weinre](http://people.apache.org/~pmuellr/weinre/docs/latest/) project. Or use [Chrome remote debugging](https://developers.google.com/chrome-developer-tools/docs/remote-debugging) with the Chrome for Android browser. As far as I know, you can't target a WebView directly, but targeting the Android Browser should get you close.
13,801,819
I need to inspect javascript execution (webview widget) in an android application while debugging, through the SDK & USB cable and/or http/websockets, from a desktop computer (e.g. chrome running on the desktop).

Webkit's sources include a DebuggerServer implementation (platform_external_webkit\Source\WebKit\android\wds\DebugServer.cpp) accessible at the cpp level, and bound if the WDS flag is enabled (at build time?):

Source\WebKit\android\jni\WebCoreFrameBridge.cpp:#if ENABLE(WDS)
Source\WebKit\android\jni\WebCoreFrameBridge.cpp: WDS::server()->addFrame(frame);

The default port for the server is 9999. The sources show that all is implemented (at the cpp level) to enable the feature, but searching the web I have not found any reference to experiences with live debugging at the javascript level on android devices automating webkit's inspector interface.

1.- Is the feature present, in binary form, on actual android devices? (Has android's distribution of webkit been built without the WDS flag enabled? :-( )

2.- Can the remote debug feature be enabled/used from javascript or from the application (at the java level), e.g. at startup of the app?

3.- In case it is possible to enable the webkit inspector/debugger feature, how can I make it possible to interact with it from a remote application? (e.g. from another javascript app using websockets, or chrome on a desktop computer).

Some paragraphs explaining the mechanics like <https://developers.google.com/chrome-developer-tools/docs/remote-debugging#remote> would be nice!

Thanks in advance for any information or references about this topic. I consider it important to enable remote debugging (on the device) at the javascript level to make modern development of HTML5 applications and a happy debugging experience possible.

cheers, Ale.
2012/12/10
[ "https://Stackoverflow.com/questions/13801819", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1717056/" ]
You can do remote debugging targeting the Android Browser with the [weinre](http://people.apache.org/~pmuellr/weinre/docs/latest/) project. Or use [Chrome remote debugging](https://developers.google.com/chrome-developer-tools/docs/remote-debugging) with the Chrome for Android browser. As far as I know, you can't target a WebView directly, but targeting the Android Browser should get you close.
Pre-KitKat, [jsHyBugger](https://www.jshybugger.com/#!/) works well (trial version and annual single-user license €29). (I have no connection to the developers / I have purchased a license.)
13,801,819
I need to inspect javascript execution (webview widget) in an android application while debugging, through the SDK & USB cable and/or http/websockets, from a desktop computer (e.g. chrome running on the desktop).

Webkit's sources include a DebuggerServer implementation (platform_external_webkit\Source\WebKit\android\wds\DebugServer.cpp) accessible at the cpp level, and bound if the WDS flag is enabled (at build time?):

Source\WebKit\android\jni\WebCoreFrameBridge.cpp:#if ENABLE(WDS)
Source\WebKit\android\jni\WebCoreFrameBridge.cpp: WDS::server()->addFrame(frame);

The default port for the server is 9999. The sources show that all is implemented (at the cpp level) to enable the feature, but searching the web I have not found any reference to experiences with live debugging at the javascript level on android devices automating webkit's inspector interface.

1.- Is the feature present, in binary form, on actual android devices? (Has android's distribution of webkit been built without the WDS flag enabled? :-( )

2.- Can the remote debug feature be enabled/used from javascript or from the application (at the java level), e.g. at startup of the app?

3.- In case it is possible to enable the webkit inspector/debugger feature, how can I make it possible to interact with it from a remote application? (e.g. from another javascript app using websockets, or chrome on a desktop computer).

Some paragraphs explaining the mechanics like <https://developers.google.com/chrome-developer-tools/docs/remote-debugging#remote> would be nice!

Thanks in advance for any information or references about this topic. I consider it important to enable remote debugging (on the device) at the javascript level to make modern development of HTML5 applications and a happy debugging experience possible.

cheers, Ale.
2012/12/10
[ "https://Stackoverflow.com/questions/13801819", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1717056/" ]
Now it is easy with Android 4.4. See <https://developers.google.com/chrome-developer-tools/docs/remote-debugging#debugging-webviews>
Pre-KitKat, [jsHyBugger](https://www.jshybugger.com/#!/) works well (trial version and annual single-user license €29). (I have no connection to the developers / I have purchased a license.)
13,801,819
I need to inspect javascript execution (webview widget) in an android application while debugging, through the SDK & USB cable and/or http/websockets, from a desktop computer (e.g. chrome running on the desktop).

Webkit's sources include a DebuggerServer implementation (platform_external_webkit\Source\WebKit\android\wds\DebugServer.cpp) accessible at the cpp level, and bound if the WDS flag is enabled (at build time?):

Source\WebKit\android\jni\WebCoreFrameBridge.cpp:#if ENABLE(WDS)
Source\WebKit\android\jni\WebCoreFrameBridge.cpp: WDS::server()->addFrame(frame);

The default port for the server is 9999. The sources show that all is implemented (at the cpp level) to enable the feature, but searching the web I have not found any reference to experiences with live debugging at the javascript level on android devices automating webkit's inspector interface.

1.- Is the feature present, in binary form, on actual android devices? (Has android's distribution of webkit been built without the WDS flag enabled? :-( )

2.- Can the remote debug feature be enabled/used from javascript or from the application (at the java level), e.g. at startup of the app?

3.- In case it is possible to enable the webkit inspector/debugger feature, how can I make it possible to interact with it from a remote application? (e.g. from another javascript app using websockets, or chrome on a desktop computer).

Some paragraphs explaining the mechanics like <https://developers.google.com/chrome-developer-tools/docs/remote-debugging#remote> would be nice!

Thanks in advance for any information or references about this topic. I consider it important to enable remote debugging (on the device) at the javascript level to make modern development of HTML5 applications and a happy debugging experience possible.

cheers, Ale.
2012/12/10
[ "https://Stackoverflow.com/questions/13801819", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1717056/" ]
> **For android >= 4.4 (kitkat)**

See [Remote debugging on Android with Chrome](https://developer.chrome.com/devtools/docs/remote-debugging)

> **For android < 4.4 (Lower versions)**

Use a very good open source tool: [weinre](http://people.apache.org/~pmuellr/weinre/docs/latest/Installing.html). See [this](https://www.youtube.com/watch?v=HEqwnpLYnI0&feature=youtu.be) video for help using it. If you are familiar with grunt then you can use [grunt-weinre](https://www.npmjs.com/package/grunt-weinre)

For a quick overview:

1. Install weinre using npm.
2. Do the [configuration](https://github.com/ChrisWren/grunt-weinre#recommended-usage) in your gruntfile.
3. Run the weinre grunt task.
4. Use this script to inject the weinre target code into your web page.
5. Open <http://localhost:8082> in your browser and you will find the devices running the above script. You can debug all these devices.

**NOTE:** if you want to debug the webview/browser on your mobile device then you need to replace localhost with the IP of the machine running weinre. And yes, all the devices should be on the same network.
Pre-KitKat, [jsHyBugger](https://www.jshybugger.com/#!/) works well (trial version and annual single-user license €29). (I have no connection to the developers / I have purchased a license.)
29,901,913
Honestly it is really hard to explain what I am trying to do because I can't think of words that would help me describe it. Anyway, the problem is that I want to sort a list of lists:

```
[['a', [10, 6, 5]], ['b', [7, 4, 2]], ['c', [10, 6, 4]], ['d', [7, 3, 2]]]
```

This is an example of the list I am trying to sort. The strings are pupil names and the lists with three integers in them are the scores. I want to sort this list highest to lowest using the scores, but if two lists have the same first integer (e.g.):

```
[7, 4, 2] and [7, 3, 2]
```

then it should decide which one should be higher by the next integer. I have tried many different methods, but none of what I have attempted works. It's also been hard trying to find an answer as I do not know exactly how to explain my issue, although I hope what I have said above has given enough of an explanation of my problem. My code if you need to look at it: [MY CODE](http://pastebin.com/bLXFUQtJ)
2015/04/27
[ "https://Stackoverflow.com/questions/29901913", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4838670/" ]
You can use the `key` argument of the `sorted` function, and sort by the second element of each sublist. This will sort from element 0 to element N for each sublist. And the `reverse` argument is so you can sort highest to lowest (instead of the default lowest to highest) ``` >>> l = [['a', [10, 6, 5]], ['b', [7, 4, 2]], ['c', [10, 6, 4]], ['d', [7, 3, 2]]] >>> sorted(l, key = lambda i: i[1], reverse=True) [['a', [10, 6, 5]], ['c', [10, 6, 4]], ['b', [7, 4, 2]], ['d', [7, 3, 2]]] ``` You can also replace the `lambda` with `operator.itemgetter(1)` for the key argument as @ThiefMaster pointed out.
You simply specify a key function that returns all the keys you need. In your case the second element of each list (e.g. `[10, 6, 5]`) does the job since comparing lists compares them element by element. ``` from operator import itemgetter sorted(whatever, key=itemgetter(1), reverse=True) ``` `itemgetter(1)` is the same as `lambda x: x[1]` but more performant (since it's implemented in Python's C code instead of in Python code).
29,901,913
Honestly it is really hard to explain what I am trying to do because I can't think of words that would help me describe it. Anyway, the problem is that I want to sort a list of lists:

```
[['a', [10, 6, 5]], ['b', [7, 4, 2]], ['c', [10, 6, 4]], ['d', [7, 3, 2]]]
```

This is an example of the list I am trying to sort. The strings are pupil names and the lists with three integers in them are the scores. I want to sort this list highest to lowest using the scores, but if two lists have the same first integer (e.g.):

```
[7, 4, 2] and [7, 3, 2]
```

then it should decide which one should be higher by the next integer. I have tried many different methods, but none of what I have attempted works. It's also been hard trying to find an answer as I do not know exactly how to explain my issue, although I hope what I have said above has given enough of an explanation of my problem. My code if you need to look at it: [MY CODE](http://pastebin.com/bLXFUQtJ)
2015/04/27
[ "https://Stackoverflow.com/questions/29901913", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4838670/" ]
You can use the `key` argument of the `sorted` function, and sort by the second element of each sublist. This will sort from element 0 to element N for each sublist. And the `reverse` argument is so you can sort highest to lowest (instead of the default lowest to highest) ``` >>> l = [['a', [10, 6, 5]], ['b', [7, 4, 2]], ['c', [10, 6, 4]], ['d', [7, 3, 2]]] >>> sorted(l, key = lambda i: i[1], reverse=True) [['a', [10, 6, 5]], ['c', [10, 6, 4]], ['b', [7, 4, 2]], ['d', [7, 3, 2]]] ``` You can also replace the `lambda` with `operator.itemgetter(1)` for the key argument as @ThiefMaster pointed out.
* Sorting in reverse order is supported by the Python built-in `sorted` through the keyword parameter `reverse`.
* The other problem, where the data should be sorted by the score rather than the name (which is not the natural order of the list items), can be solved by specifying a custom key, which is supported through the keyword argument `key`.
* The final problem, breaking ties between scores element by element, can be left to the natural ordering of Python lists.

**Implementation**

```
>>> from operator import itemgetter
>>> sorted(seq, key = itemgetter(1), reverse = True)
```

**Output**

[['a', [10, 6, 5]], ['c', [10, 6, 4]], ['b', [7, 4, 2]], ['d', [7, 3, 2]]]
17,204,632
I would like to achieve 100% coverage on a module. My problem is that there is a variable (called data) within a method, and I am trying to inject data into it to test my exception handling. Can this be done with mocking? If not, how can I fully test my exception handling?

```
module CSV
  module Extractor
    class ConversionError < RuntimeError; end
    class MalformedCSVError < RuntimeError; end
    class GenericParseError < RuntimeError; end
    class DemoModeError < RuntimeError; end

    def self.open(path)
      data = `.\\csv2text.exe #{path} -f xml --xml_output_styles 2>&1`
      case data
      when /Error: Wrong input filename or path:/
        raise MalformedCSVError, "the CSV path with filename '#{path}' is malformed"
      when /Error: A valid password is required to open/
        raise ConversionError, "Wrong password: '#{path}'"
      when /CSVTron CSV2Text: This page is skipped when running in the demo mode./
        raise DemoModeError, "CSV2TEXT.exe in demo mode"
      when /Error:/
        raise GenericParseError, "Generic Error Catch while reading input file"
      else
        begin
          csvObj = CSV::Extractor::Document.new(data)
        rescue
          csvObj = nil
        end
        return csvObj
      end
    end
  end
end
```

Let me know what you think! Thanks

===================== EDIT ========================

I have modified my methods to the design pattern you suggested. The method "open(path)" is responsible for trapping and raising errors; get_data(path) just returns data. That's it! But unfortunately in the rspec I am getting "exception was expected to be raised but nothing was raised." I thought maybe we have to call the open method from your stub too? This is what I tried doing but still no error was raised:

```
it 'should catch wrong path mode' do
  obj = double(CSV::Extractor)
  obj.stub!(:get_data).and_return("Error: Wrong input filename or path:")
  obj.stub!(:open)
  expect {obj.open("some fake path")}.to raise_error CSV::Extractor::MalformedCSVError
end
```
2013/06/20
[ "https://Stackoverflow.com/questions/17204632", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1438150/" ]
Extract the code that returns the data to a separate method. Then when you test `open` you can stub out that method to return various strings that will exercise the different branches of the `case` statement. Roughly like this for the setup:

```
def self.get_data(path)
  `.\\csv2text.exe #{path} -f xml --xml_output_styles 2>&1`
end

def self.open(path)
  data = get_data(path)
  ...
```

And I assume you know how to [stub methods in rspec](https://www.relishapp.com/rspec/rspec-mocks/v/2-14/docs/method-stubs), but the general idea is like this:

```
foo = ...
foo.stub(:get_data).and_return("Error: Wrong input filename or path:")
expect { foo.open(path) }.to raise_error MalformedCSVError
```

Note that the `expect` block has to call `open`, not the stubbed `get_data`, since it is `open` that raises. Also see the Rspec documentation on [testing for exceptions](https://www.relishapp.com/rspec/rspec-expectations/v/2-14/docs/built-in-matchers/raise-error-matcher).
The problem with testing your module lies in the way you have designed your code. Think about splitting the extractor into two classes (or modules, it's a matter of taste -- I'd go with classes as they are a bit easier to test), one of which would read the data from the external system call, and the second of which would expect this data to be passed as an argument. This way you can easily mock what you currently have in the `data` variable, as it would simply be passed as an argument (no need to think about implementation details here!). For easier usage you can later provide some wrapper call that would create both objects and pass one as an argument to the other. Please note that this behavior can also be easily tested.
16,773,728
I have some PHP code used with jsCarousel 2

```
http://www.egrappler.com/jquery-contentthumbnail-slder-v2-0-jscarousel-v2-0/
```

to display items from the database as a carousel, with links like this

```
<a id="addtocart" product="<?php echo $productID; ?>" href="#addDiv" >add</a>
```

and a hidden div

```
<div style="display:none">
    <div id="addDiv" style="width:300px; height:250px; background-color:#969;">test</div>
</div>
```

and another link just for testing, which is not inside the carousel, unlike the previous one.

The problem is: links inside the carousel don't show the fancybox, while the other link outside the carousel shows the fancybox. I've tried this:

```
$(document).ready(function(e) {
    $("a#addtocart").fancybox({
        'transitionIn'  : 'elastic',
        'transitionOut' : 'elastic'
    });
    $(document).on("click", "a#addtocart", function(){
        $(this).fancybox({
            'transitionIn'  : 'elastic',
            'transitionOut' : 'elastic'
        });
    });
});
```

Any help please?
2013/05/27
[ "https://Stackoverflow.com/questions/16773728", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1151715/" ]
What exactly do you want to do? I think the problem is that you are working with IDs. Maybe you should attach the "click" event you are using to a class instead. Every ID may appear in an HTML document just once, so you can only have one element with the ID addtocart.

```
<a href="#addDiv" class="fancybox" product="<?php echo $productID; ?>">add</a>
```

Try this jQuery:

```
$(document).ready(function(e) {
    $("a.fancybox").fancybox({
        'transitionIn'  : 'elastic',
        'transitionOut' : 'elastic'
    });
    $(document).on("click", "a.fancybox", function() {
        $(this).fancybox({
            'transitionIn'  : 'elastic',
            'transitionOut' : 'elastic'
        });
    });
});
```

When you use the dot (.) instead of the hash (#) you select by class instead of by id. If that does not help you, you can find an implementation of jCarousel and Fancybox right here: <http://www.mccran.co.uk/examples/jcarousel/>
The solution: I just used this code at the end of the document.ready

```
$("a.add_to_cart").fancybox({
    'transitionIn'  : 'elastic',
    'transitionOut' : 'elastic'
});
```

and it worked normally, but placing it after document.ready does not work.
22,337,536
I wanted to know the best practice for storing sensitive fields like email and phone number in the database. Let's say you want to search by email and phone number, and the application sends emails and SMS to its users as well. Because this data is sensitive you need to encrypt it. Hashing is not an option because you can't unhash it. Encryption standards like Rijndael or AES make the data secure, but you cannot search the db by it because the encrypted string produced for the same input is always different. So in a case like this do I need to store both the hash as well as the encrypted field in the table? Or is there some other strong encryption technique deployed for fields like these?
2014/03/11
[ "https://Stackoverflow.com/questions/22337536", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1109467/" ]
**Actually**, encrypting the same message twice with **AES** with the same key and the same initialization vector (IV) will always produce the same output.

**However**, using the same key and the same IV would leak information about the encrypted data. Because **AES** encrypts in blocks of 16 bytes, two email addresses starting with the same 16 bytes and encrypted with the same key and the same IV would also share the same first 16 bytes of ciphertext, leaking the information that these two emails start with the same characters. One of the purposes of the IV is to counter this.

**A secure search field** can be created by encrypting (with the same key and the same IV) a [one-way-hash](https://stackoverflow.com/questions/3624648/php-different-one-way-hashes-for-password-security) of the value. The one-way-hash ensures that the encryption doesn't leak data.

Only using a one-way-hash would not be enough for e.g. telephone numbers, as you can easily brute force the one-way-hashes of all valid phone numbers.
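To make the search-field idea a bit more concrete, here is a minimal sketch of a deterministic, keyed one-way digest that could be stored next to the encrypted value and matched with equality in queries; it is written in JavaScript (Node's built-in crypto) purely for illustration, the key and column names are made up, and a keyed HMAC is used here in place of the plain hash plus encryption described above:

```js
const crypto = require('crypto');

// Secret key dedicated to the search index; keep it out of the database.
const INDEX_KEY = process.env.SEARCH_INDEX_KEY;

// Deterministic digest: the same normalized email always yields the same value,
// so it can be stored in an indexed column and looked up with equality.
function emailSearchDigest(email) {
  const normalized = email.trim().toLowerCase();
  return crypto.createHmac('sha256', INDEX_KEY).update(normalized).digest('hex');
}

// Store emailSearchDigest(email) alongside the AES-encrypted email, then search
// with something like: SELECT ... WHERE email_digest = ?   (hypothetical column)
```

Because the digest is keyed, an attacker with only a copy of the database cannot brute force phone numbers or emails without also obtaining the key.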
If you want to encrypt your data, place the table on an encrypted filesystem or use a database that provides a facility for encrypted tables. Encrypting data in the database itself would lead to very poor performance for a number of reasons, the most obvious being that a simple table scan (let's say you're looking for a user by email address) would require a decryption of the whole recordset. Also, your application shouldn't deal with encryption/decryption of data: if it is compromised, then all of your data is too. Moreover, this question probably shouldn't be tagged as a 'PHP' question.
22,337,536
I wanted to know the best practice for storing sensitive fields like email and phone number in the database. Let's say you want to search by email and phone number, and the application sends emails and SMS to its users as well. Because this data is sensitive you need to encrypt it. Hashing is not an option because you can't unhash it. Encryption standards like Rijndael or AES make the data secure, but you cannot search the db by it because the encrypted string produced for the same input is always different. So in a case like this do I need to store both the hash as well as the encrypted field in the table? Or is there some other strong encryption technique deployed for fields like these?
2014/03/11
[ "https://Stackoverflow.com/questions/22337536", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1109467/" ]
Check out [CipherSweet](https://github.com/paragonie/ciphersweet/). It's a very permissively-licensed open source library that provides searchable encryption in PHP. Its implementation is *similar* to [Ebbe's answer](https://stackoverflow.com/a/22338530/2224584), but with a lot more caveats:

1. CipherSweet automatically handles key splitting, [through a well-defined protocol](https://ciphersweet.paragonie.com/internals/key-hierarchy).
2. CipherSweet supports multiple functional blind indexes (truncated hashes of transformations of the plaintext) to facilitate advanced searching.
   * More about the security implications of its design are available [here](https://ciphersweet.paragonie.com/security).

Furthermore, the API is relatively straightforward:

```
<?php
use ParagonIE\CipherSweet\BlindIndex;
use ParagonIE\CipherSweet\CipherSweet;
use ParagonIE\CipherSweet\CompoundIndex;
use ParagonIE\CipherSweet\EncryptedRow;
use ParagonIE\CipherSweet\Transformation\LastFourDigits;

/** @var CipherSweet $engine */

// Define two fields (one text, one boolean) that will be encrypted
$encryptedRow = (new EncryptedRow($engine, 'contacts'))
    ->addTextField('ssn')
    ->addBooleanField('hivstatus');

// Add a normal Blind Index on one field:
$encryptedRow->addBlindIndex(
    'ssn',
    new BlindIndex(
        'contact_ssn_last_four',
        [new LastFourDigits()],
        32 // 32 bits = 4 bytes
    )
);

// Create/add a compound blind index on multiple fields:
$encryptedRow->addCompoundIndex(
    (
        new CompoundIndex(
            'contact_ssnlast4_hivstatus',
            ['ssn', 'hivstatus'],
            32, // 32 bits = 4 bytes
            true // fast hash
        )
    )->addTransform('ssn', new LastFourDigits())
);
```

Once you have your object instantiated and configured, you can insert rows like so:

```
<?php
/* continuing from previous snippet... */
list($encrypted, $indexes) = $encryptedRow->prepareRowForStorage([
    'extraneous' => true,
    'ssn' => '123-45-6789',
    'hivstatus' => false
]);

$encrypted['contact_ssnlast4_hivstatus'] = $indexes['contact_ssnlast4_hivstatus'];

$dbh->insert('contacts', $encrypted);
```

Then retrieving rows from the database is as simple as using the blind index in a SELECT query:

```
<?php
/* continuing from previous snippet... */
$lookup = $encryptedRow->getBlindIndex(
    'contact_ssnlast4_hivstatus',
    ['ssn' => '123-45-6789', 'hivstatus' => true]
);
$results = $dbh->search('contacts', ['contact_ssnlast4_hivstatus' => $lookup]);

foreach ($results as $result) {
    $decrypted = $encryptedRow->decrypt($result);
}
```

CipherSweet is currently implemented in [PHP](https://ciphersweet.paragonie.com/php) and [Node.js](https://ciphersweet.paragonie.com/node.js), with additional Java, C#, Rust, and Python implementations coming soon.
If you want to encrypt your data, place the table on an encrypted filesystem or use a database that provides a facility for encrypted tables. Encrypting data in the database itself would lead to very poor performance for a number of reasons, the most obvious being that a simple table scan (let's say you're looking for a user by email address) would require a decryption of the whole recordset. Also, your application shouldn't deal with encryption/decryption of data: if it is compromised, then all of your data is too. Moreover, this question probably shouldn't be tagged as a 'PHP' question.
33,416
I have a category with 2 levels. And I want to be able to let the user do a search based on those. The problem is that it needs to be `AND` between all level 1 categories, and `OR` within child categories. So let's say I have a category structure like this: ``` - Elevator - Air condition - View - Ocean - Mountains - Garden ``` Now when a user selects `elevator`, `ocean` and `mountains` I want to return all entries that have `elevator AND (ocean OR mountains)` I tried this: `$entries->categories(['and', 695, ['or', 698, 697]]);`, but that doesn't work, as it seems to just skip the `['or', 698, 697]` part. **Edit for bounty:** In addition to the reason given above, there is also a change in the requirement since we added multiple categories. So for one category group, the operator needs to be `OR`, and some groups should have `AND`.
2020/01/15
[ "https://craftcms.stackexchange.com/questions/33416", "https://craftcms.stackexchange.com", "https://craftcms.stackexchange.com/users/968/" ]
If I understand properly, you should be able to do this:

```php
$entries = \craft\elements\Entry::find()
    ->section('yourSection')
    ->category(['and', 695])
    ->relatedTo([698, 697])->all();
```

Assuming `category` is the name of your relational field, you can use that to pass the `and` condition (though with one element only it wouldn't be needed); `relatedTo` is then an `or` by default ([docs](https://docs.craftcms.com/v3/relations.html#simple-relationships)).

And as per your edit, pass the `and` condition to your custom field and the `or` conditions to your `relatedTo`. It would be pretty much the same in Twig.
Not able to comment, so trying an answer. Did you try to build a custom hierarchy by going through the content of each level in a custom object?
3,694
How much contact should I expect to have with my literary agent (from a sizeable London firm, representing my first novel)? Are months of silence to be expected/endured? If so...how many?
2011/08/22
[ "https://writers.stackexchange.com/questions/3694", "https://writers.stackexchange.com", "https://writers.stackexchange.com/users/2503/" ]
I wasn't going to post an answer for this, but I've got to disagree with Joshin. You're in a business relationship with your agent. You're entitled to respectful business communication. There's a vast difference between you needing your hand held and you needing to know what your business partner is up to. Months of silence seem out of line. That is, if you've sent a polite, business-like inquiry, you should have been answered. If you've been nagging him/her daily, then maybe s/he has shut down. But you haven't said anything to suggest that. Rather than Joshin's manufacturer/salesperson analogy, I would try home owner/real estate agent. (You notice the word 'agent' - it has significance). If you allow someone else to try to sell your valuable, unique property, you are entitled to regular updates as to their progress. If you've been reasonable, and they refuse to share this information with you, there's a serious problem with the relationship. There are a lot of great agents out there, and there are, unfortunately, a lot of rookies and/or shysters. If you have absolute confidence (through checking at Editors & Preditors and other relevant sites) that you've got a great agent, you might want to forgive the poor communication. But if you're less than confident, you need communication to know what's going on.
If you are really good they'll be happy if you just send them a check on a regular basis. Think of a clerk in a store. Their job is to make sure that when a buyer comes into the store they find what they need. Rarely (if ever) do they contact the guy in China who is making the tools. If you are making tools in China you want the clerk to sell them in the US of A. If you need hand-holding use part of your royalties to hire someone with hands you like.
1,070,154
I bought a new laptop (Lenovo Thinkpad) which came with Windows 10 installed. I threw out the hard disk, replaced it with an SSD and installed Linux on it. Now I have put the old hard disk into a USB disk case and tried to boot from it, but Windows throws an error while booting (the USB disk was recognized properly and the Windows partition was also detected). The message says that the computer has to be restarted. It gives the following error code: `INACCESSIBLE_BOOT_DEVICE`

What do I have to do so that I can boot the existing Windows 10 installation from USB? I would like to avoid swapping the SSD again, so solutions which keep the HD in the USB case are preferred.
2016/04/26
[ "https://superuser.com/questions/1070154", "https://superuser.com", "https://superuser.com/users/3052/" ]
Ah, `INACCESSIBLE BOOT DEVICE` seems to be a Windows 10 favourite... The first possible solution is outlined below, with an extra twist in case you need it. It seems to work for most people, but as your circumstances are slightly different we will have to see:

1. At the blue screen where the error appears, click Advanced Options
2. Click Troubleshoot
3. Look for an "Advanced Startup" or "Startup Options" menu
4. A list of boot options is displayed
5. Click Restart
6. Upon restarting you'll be taken to the boot options you previously saw in #4
7. Boot into Safe Mode by pressing the appropriate key
8. Once you're back at your desktop in Safe Mode, reboot your PC and everything should be back to normal.

If this doesn't work, during the restart that you trigger in step 5 enter the BIOS and change your SATA controller mode to IDE from AHCI, or vice versa, and try Safe Mode again. If that doesn't work it's time to look at setting up a USB recovery drive to tackle the problem.

The issue often seems to arise from a changed SATA controller mode, which is usually what happens when changing from an HDD to an SSD, but as I said things might be a little different here. Hope it helps.

---

Additions based on comments: I am aware that booting Windows from a USB device is not natively supported, however I have myself achieved this in the past using a program which overcomes this limitation: [WintoUSB](http://www.easyuefi.com/wintousb/). I have only ever used this to create "new" installs of Windows, but it may be possible to leverage the same ability to boot an existing Windows installation.
Windows does not support booting from a USB disk. Instead of booting directly from the external disk, you could use VirtualBox to build a virtual machine that boots from the external USB disk. Here is one reference among the many that can be found: [Using a Physical Hard Drive with a VirtualBox VM](http://www.serverwatch.com/server-tutorials/using-a-physical-hard-drive-with-a-virtualbox-vm.html).
1,070,154
I bought a new laptop (Lenovo Thinkpad) which came with Windows 10 installed. I threw out the hard disk, replaced it with an SSD and installed Linux on it. Now I have put the old hard disk into a USB disk case and tried to boot from it, but Windows throws an error while booting (the USB disk was recognized properly and the Windows partition was also detected). The message says that the computer has to be restarted. It gives the following error code: `INACCESSIBLE_BOOT_DEVICE`

What do I have to do so that I can boot the existing Windows 10 installation from USB? I would like to avoid swapping the SSD again, so solutions which keep the HD in the USB case are preferred.
2016/04/26
[ "https://superuser.com/questions/1070154", "https://superuser.com", "https://superuser.com/users/3052/" ]
Ah, `INACCESSIBLE BOOT DEVICE` seems to be a Windows 10 favourite... The first possible solution is outlined below, with an extra twist in case you need it. It seems to work for most people, but as your circumstances are slightly different we will have to see:

1. At the blue screen where the error appears, click Advanced Options
2. Click Troubleshoot
3. Look for an "Advanced Startup" or "Startup Options" menu
4. A list of boot options is displayed
5. Click Restart
6. Upon restarting you'll be taken to the boot options you previously saw in #4
7. Boot into Safe Mode by pressing the appropriate key
8. Once you're back at your desktop in Safe Mode, reboot your PC and everything should be back to normal.

If this doesn't work, during the restart that you trigger in step 5 enter the BIOS and change your SATA controller mode to IDE from AHCI, or vice versa, and try Safe Mode again. If that doesn't work it's time to look at setting up a USB recovery drive to tackle the problem.

The issue often seems to arise from a changed SATA controller mode, which is usually what happens when changing from an HDD to an SSD, but as I said things might be a little different here. Hope it helps.

---

Additions based on comments: I am aware that booting Windows from a USB device is not natively supported, however I have myself achieved this in the past using a program which overcomes this limitation: [WintoUSB](http://www.easyuefi.com/wintousb/). I have only ever used this to create "new" installs of Windows, but it may be possible to leverage the same ability to boot an existing Windows installation.
I stumbled over this article when I needed a copy of one of my physical machines. It had a hardware failure and I raided the disk, but before that I imaged it. When the replacement machine had a problem and I had to RMA it, I wound up with a problem and decided to try and resurrect the original machine. Unfortunately, with the repurposing of the existing C drive, I only had a disk image left, only an NVMe drive to put it on, and only a USB adapter to put that in.

After a bit of searching I found the WinToUSB app and used the Windows To Go conversion on my existing disk from my laptop. Once done I tried to boot it, which wound up with Inaccessible Boot Device. Now this is something I'm well used to with my virtual machine conversions; it is simply a matter of the USB driver not being set to start at boot (start type of 0) in the registry. I also knew something else from my prior experiences with Windows To Go: if you install to USB 2.0, USB 3.0 will not boot.

I took the drive out of the USB 3.0 slot and put it into USB 2.0. I then booted into Windows and logged in, opened up Regedit and navigated to Computer\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services. I then took everything which looked like it was required to boot from USB 3.0 and set the Start value for each entry to 0. I shut down the PC, swapped the drive from USB 2.0 to USB 3.0 and it booted fine. One Windows To Go drive from the existing install.

If you only have USB 3.0 ports on the machine (some of mine do), you can plug the drive into another computer and use the article [linked here](https://4sysops.com/archives/regedit-as-offline-registry-editor/), then take it out again and reboot it. If you want to get all fancy and have VMware Workstation, you can connect it as a physical drive and boot into Windows to do the same.

Best, if you want to take an existing OS, is to change the start value before imaging the machine to USB. Hope this helps and is not too late.
63,679,919
I am using React Router and I have the following code:

```
let history = useHistory();
let goToReviewPage = () => history.push(`review/${productId}`);
```

My current url is `/foo/bar`, and calling `goToReviewPage()` will redirect me to `/foo/review/${productId}` instead of `/foo/bar/review/${productId}`. I am not sure how to set the base path while pushing to the history.
2020/09/01
[ "https://Stackoverflow.com/questions/63679919", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1834787/" ]
You can use React Router's `match.url`. Example: ``` const Component = ({ productId }) => { const match = useRouteMatch(); const history = useHistory(); const handleClick = () => { history.push(`${match.url}/review/${productId}`); }; return ( <button onClick={handleClick}>Click me</button> ); }; ```
One way is to use `window.location` to obtain the current path. For example: ``` history.push(window.location.pathname + '/' + `review/${productId}`); ``` [**window.location api**](https://www.w3schools.com/js/js_window_location.asp)
15,090,419
May I know is there any solution to get the result without ordering in Oracle? It is because when I execute the query as follows, it seems to automatically helps me to sort it by ID field. ``` SELECT ID FROM USER WHERE ID IN (5004, 5003, 5005, 5002, 5008); Actual results Expected results ---5002 ---5004 ---5003 ---5003 ---5004 ---5005 ---5005 ---5002 ---5008 ---5008 ``` Million thanks if you guys have solutions on this.
2013/02/26
[ "https://Stackoverflow.com/questions/15090419", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2111420/" ]
`SELECT` statements return the rows of their result sets in an *unpredictable* order unless you give an `ORDER BY` clause. Certain DBMS products give the illusion that their result sets are in a predictable order. But if you rely on that you're bound to be disappointed.
Another solution that I found here.

```
select ID
from USER
where ID in (5004, 5003, 5005, 5002, 5008)
order by decode(ID, 5004, 1, 5003, 2, 5005, 3, 5002, 4, 5008, 5);
```

order by decode(COLUMN NAME, VALUE, POSITION) \*Note: only the VALUE and POSITION pairs need to be repeated. And yeah, thanks for all the responses! I really appreciate it.
15,090,419
May I know is there any solution to get the result without ordering in Oracle? It is because when I execute the query as follows, it seems to automatically helps me to sort it by ID field. ``` SELECT ID FROM USER WHERE ID IN (5004, 5003, 5005, 5002, 5008); Actual results Expected results ---5002 ---5004 ---5003 ---5003 ---5004 ---5005 ---5005 ---5002 ---5008 ---5008 ``` Million thanks if you guys have solutions on this.
2013/02/26
[ "https://Stackoverflow.com/questions/15090419", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2111420/" ]
if you want to keep the order as your in list, you can do something like this: ``` SQL> create type user_va as varray(1000) of number; 2 / Type created. SQL> with users as (select /*+ cardinality(a, 10) */ rownum r, a.column_value user_id 2 from table(user_va(11, 0, 19, 5)) a) 3 select d.user_id, d.username 4 from dba_users d 5 inner join users u 6 on u.user_id = d.user_id 7 order by u.r 8 / USER_ID USERNAME ---------- ------------------------------ 11 OUTLN 0 SYS 19 DIP 5 SYSTEM ``` i.e we put the elements into a varray and assign a rownum prior to merging the set. we can then order by that `r` to maintain the order of our in list. The `cardinality` hint just tells the optimizer how many rows are in the array (doesn't have to be dead on, just in the ballpark..as without this, it will assume 8k rows and may prefer a full scan over an index approach) if you don't have privs to create a type and this is just some adhoc thing, there's a few public ones: ``` select owner, type_name, upper_bound max_elements, length max_size, elem_type_name from all_Coll_types where coll_type = 'VARYING ARRAY' and elem_type_name in ('INTEGER', 'NUMBER'); ```
If your question is about *why the ordering occurs* then the answer is: Do you have an index or primary key defined on the column ID? If yes the database responds to your query with an index scan. That is: it looks up the IDs in the IN clause not in the table itself but in the index defined on your ID-column. Within the index the values are ordered. To get more information about the execution of your query try Oracle's explain plan feature. To get the values in a certain order you have to add an ORDER BY clause. One way of doing this would be ``` select ID from USER where ID in (5004, 5003, 5005, 5002, 5008) order by case ID when 5004 then 1 when 5003 then 2 ... end; ``` A more general way would be to add an ORDERING column to your table: ``` select ID from USER where ID in (5004, 5003, 5005, 5002, 5008) order by ORDERING; ```
15,090,419
May I know is there any solution to get the result without ordering in Oracle? It is because when I execute the query as follows, it seems to automatically helps me to sort it by ID field. ``` SELECT ID FROM USER WHERE ID IN (5004, 5003, 5005, 5002, 5008); Actual results Expected results ---5002 ---5004 ---5003 ---5003 ---5004 ---5005 ---5005 ---5002 ---5008 ---5008 ``` Million thanks if you guys have solutions on this.
2013/02/26
[ "https://Stackoverflow.com/questions/15090419", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2111420/" ]
This is one way I've seen in the past using `INSTR`: ``` SELECT * FROM YourTable WHERE ID IN (5004, 5003, 5005, 5002, 5008) ORDER BY INSTR ('5004,5003,5005,5002,5008', id) ``` [SQL Fiddle Demo](http://sqlfiddle.com/#!4/0c283/1) I've also seen use of `CASE` like this: ``` ORDER BY CASE ID WHEN 5004 THEN 1 WHEN 5003 THEN 2 WHEN 5005 THEN 3 WHEN 5002 THEN 4 WHEN 5008 THEN 5 END ```
If your question is about *why the ordering occurs* then the answer is: Do you have an index or primary key defined on the column ID? If yes the database responds to your query with an index scan. That is: it looks up the IDs in the IN clause not in the table itself but in the index defined on your ID-column. Within the index the values are ordered. To get more information about the execution of your query try Oracle's explain plan feature. To get the values in a certain order you have to add an ORDER BY clause. One way of doing this would be ``` select ID from USER where ID in (5004, 5003, 5005, 5002, 5008) order by case ID when 5004 then 1 when 5003 then 2 ... end; ``` A more general way would be to add an ORDERING column to your table: ``` select ID from USER where ID in (5004, 5003, 5005, 5002, 5008) order by ORDERING; ```
15,090,419
May I know is there any solution to get the result without ordering in Oracle? It is because when I execute the query as follows, it seems to automatically helps me to sort it by ID field. ``` SELECT ID FROM USER WHERE ID IN (5004, 5003, 5005, 5002, 5008); Actual results Expected results ---5002 ---5004 ---5003 ---5003 ---5004 ---5005 ---5005 ---5002 ---5008 ---5008 ``` Million thanks if you guys have solutions on this.
2013/02/26
[ "https://Stackoverflow.com/questions/15090419", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2111420/" ]
This is one way I've seen in the past using `INSTR`: ``` SELECT * FROM YourTable WHERE ID IN (5004, 5003, 5005, 5002, 5008) ORDER BY INSTR ('5004,5003,5005,5002,5008', id) ``` [SQL Fiddle Demo](http://sqlfiddle.com/#!4/0c283/1) I've also seen use of `CASE` like this: ``` ORDER BY CASE ID WHEN 5004 THEN 1 WHEN 5003 THEN 2 WHEN 5005 THEN 3 WHEN 5002 THEN 4 WHEN 5008 THEN 5 END ```
There is no guarantee of sort order without an ORDER BY clause.
15,090,419
May I know is there any solution to get the result without ordering in Oracle? It is because when I execute the query as follows, it seems to automatically helps me to sort it by ID field. ``` SELECT ID FROM USER WHERE ID IN (5004, 5003, 5005, 5002, 5008); Actual results Expected results ---5002 ---5004 ---5003 ---5003 ---5004 ---5005 ---5005 ---5002 ---5008 ---5008 ``` Million thanks if you guys have solutions on this.
2013/02/26
[ "https://Stackoverflow.com/questions/15090419", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2111420/" ]
`SELECT` statements return the rows of their result sets in an *unpredictable* order unless you give an `ORDER BY` clause. Certain DBMS products give the illusion that their result sets are in a predictable order. But if you rely on that you're bound to be disappointed.
if you want to keep the order as your in list, you can do something like this: ``` SQL> create type user_va as varray(1000) of number; 2 / Type created. SQL> with users as (select /*+ cardinality(a, 10) */ rownum r, a.column_value user_id 2 from table(user_va(11, 0, 19, 5)) a) 3 select d.user_id, d.username 4 from dba_users d 5 inner join users u 6 on u.user_id = d.user_id 7 order by u.r 8 / USER_ID USERNAME ---------- ------------------------------ 11 OUTLN 0 SYS 19 DIP 5 SYSTEM ``` i.e we put the elements into a varray and assign a rownum prior to merging the set. we can then order by that `r` to maintain the order of our in list. The `cardinality` hint just tells the optimizer how many rows are in the array (doesn't have to be dead on, just in the ballpark..as without this, it will assume 8k rows and may prefer a full scan over an index approach) if you don't have privs to create a type and this is just some adhoc thing, there's a few public ones: ``` select owner, type_name, upper_bound max_elements, length max_size, elem_type_name from all_Coll_types where coll_type = 'VARYING ARRAY' and elem_type_name in ('INTEGER', 'NUMBER'); ```
15,090,419
May I know is there any solution to get the result without ordering in Oracle? It is because when I execute the query as follows, it seems to automatically helps me to sort it by ID field. ``` SELECT ID FROM USER WHERE ID IN (5004, 5003, 5005, 5002, 5008); Actual results Expected results ---5002 ---5004 ---5003 ---5003 ---5004 ---5005 ---5005 ---5002 ---5008 ---5008 ``` Million thanks if you guys have solutions on this.
2013/02/26
[ "https://Stackoverflow.com/questions/15090419", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2111420/" ]
if you want to keep the order as your in list, you can do something like this: ``` SQL> create type user_va as varray(1000) of number; 2 / Type created. SQL> with users as (select /*+ cardinality(a, 10) */ rownum r, a.column_value user_id 2 from table(user_va(11, 0, 19, 5)) a) 3 select d.user_id, d.username 4 from dba_users d 5 inner join users u 6 on u.user_id = d.user_id 7 order by u.r 8 / USER_ID USERNAME ---------- ------------------------------ 11 OUTLN 0 SYS 19 DIP 5 SYSTEM ``` i.e we put the elements into a varray and assign a rownum prior to merging the set. we can then order by that `r` to maintain the order of our in list. The `cardinality` hint just tells the optimizer how many rows are in the array (doesn't have to be dead on, just in the ballpark..as without this, it will assume 8k rows and may prefer a full scan over an index approach) if you don't have privs to create a type and this is just some adhoc thing, there's a few public ones: ``` select owner, type_name, upper_bound max_elements, length max_size, elem_type_name from all_Coll_types where coll_type = 'VARYING ARRAY' and elem_type_name in ('INTEGER', 'NUMBER'); ```
There is no guarantee of sort order without an ORDER BY clause.
15,090,419
May I know is there any solution to get the result without ordering in Oracle? It is because when I execute the query as follows, it seems to automatically helps me to sort it by ID field. ``` SELECT ID FROM USER WHERE ID IN (5004, 5003, 5005, 5002, 5008); Actual results Expected results ---5002 ---5004 ---5003 ---5003 ---5004 ---5005 ---5005 ---5002 ---5008 ---5008 ``` Million thanks if you guys have solutions on this.
2013/02/26
[ "https://Stackoverflow.com/questions/15090419", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2111420/" ]
`SELECT` statements return the rows of their result sets in an *unpredictable* order unless you give an `ORDER BY` clause. Certain DBMS products give the illusion that their result sets are in a predictable order. But if you rely on that you're bound to be disappointed.
If your question is about *why the ordering occurs* then the answer is: Do you have an index or primary key defined on the column ID? If yes the database responds to your query with an index scan. That is: it looks up the IDs in the IN clause not in the table itself but in the index defined on your ID-column. Within the index the values are ordered. To get more information about the execution of your query try Oracle's explain plan feature. To get the values in a certain order you have to add an ORDER BY clause. One way of doing this would be ``` select ID from USER where ID in (5004, 5003, 5005, 5002, 5008) order by case ID when 5004 then 1 when 5003 then 2 ... end; ``` A more general way would be to add an ORDERING column to your table: ``` select ID from USER where ID in (5004, 5003, 5005, 5002, 5008) order by ORDERING; ```
15,090,419
May I know is there any solution to get the result without ordering in Oracle? It is because when I execute the query as follows, it seems to automatically helps me to sort it by ID field. ``` SELECT ID FROM USER WHERE ID IN (5004, 5003, 5005, 5002, 5008); Actual results Expected results ---5002 ---5004 ---5003 ---5003 ---5004 ---5005 ---5005 ---5002 ---5008 ---5008 ``` Million thanks if you guys have solutions on this.
2013/02/26
[ "https://Stackoverflow.com/questions/15090419", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2111420/" ]
This is one way I've seen in the past using `INSTR`: ``` SELECT * FROM YourTable WHERE ID IN (5004, 5003, 5005, 5002, 5008) ORDER BY INSTR ('5004,5003,5005,5002,5008', id) ``` [SQL Fiddle Demo](http://sqlfiddle.com/#!4/0c283/1) I've also seen use of `CASE` like this: ``` ORDER BY CASE ID WHEN 5004 THEN 1 WHEN 5003 THEN 2 WHEN 5005 THEN 3 WHEN 5002 THEN 4 WHEN 5008 THEN 5 END ```
Another solution that I found here.

```
select ID
from USER
where ID in (5004, 5003, 5005, 5002, 5008)
order by decode(ID, 5004, 1, 5003, 2, 5005, 3, 5002, 4, 5008, 5);
```

order by decode(COLUMN NAME, VALUE, POSITION) \*Note: only the VALUE and POSITION pairs need to be repeated. And yeah, thanks for all the responses! I really appreciate it.
15,090,419
May I know is there any solution to get the result without ordering in Oracle? It is because when I execute the query as follows, it seems to automatically helps me to sort it by ID field. ``` SELECT ID FROM USER WHERE ID IN (5004, 5003, 5005, 5002, 5008); Actual results Expected results ---5002 ---5004 ---5003 ---5003 ---5004 ---5005 ---5005 ---5002 ---5008 ---5008 ``` Million thanks if you guys have solutions on this.
2013/02/26
[ "https://Stackoverflow.com/questions/15090419", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2111420/" ]
This is one way I've seen in the past using `INSTR`: ``` SELECT * FROM YourTable WHERE ID IN (5004, 5003, 5005, 5002, 5008) ORDER BY INSTR ('5004,5003,5005,5002,5008', id) ``` [SQL Fiddle Demo](http://sqlfiddle.com/#!4/0c283/1) I've also seen use of `CASE` like this: ``` ORDER BY CASE ID WHEN 5004 THEN 1 WHEN 5003 THEN 2 WHEN 5005 THEN 3 WHEN 5002 THEN 4 WHEN 5008 THEN 5 END ```
if you want to keep the order as your in list, you can do something like this: ``` SQL> create type user_va as varray(1000) of number; 2 / Type created. SQL> with users as (select /*+ cardinality(a, 10) */ rownum r, a.column_value user_id 2 from table(user_va(11, 0, 19, 5)) a) 3 select d.user_id, d.username 4 from dba_users d 5 inner join users u 6 on u.user_id = d.user_id 7 order by u.r 8 / USER_ID USERNAME ---------- ------------------------------ 11 OUTLN 0 SYS 19 DIP 5 SYSTEM ``` i.e we put the elements into a varray and assign a rownum prior to merging the set. we can then order by that `r` to maintain the order of our in list. The `cardinality` hint just tells the optimizer how many rows are in the array (doesn't have to be dead on, just in the ballpark..as without this, it will assume 8k rows and may prefer a full scan over an index approach) if you don't have privs to create a type and this is just some adhoc thing, there's a few public ones: ``` select owner, type_name, upper_bound max_elements, length max_size, elem_type_name from all_Coll_types where coll_type = 'VARYING ARRAY' and elem_type_name in ('INTEGER', 'NUMBER'); ```
15,090,419
May I know is there any solution to get the result without ordering in Oracle? It is because when I execute the query as follows, it seems to automatically helps me to sort it by ID field. ``` SELECT ID FROM USER WHERE ID IN (5004, 5003, 5005, 5002, 5008); Actual results Expected results ---5002 ---5004 ---5003 ---5003 ---5004 ---5005 ---5005 ---5002 ---5008 ---5008 ``` Million thanks if you guys have solutions on this.
2013/02/26
[ "https://Stackoverflow.com/questions/15090419", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2111420/" ]
if you want to keep the order as your in list, you can do something like this: ``` SQL> create type user_va as varray(1000) of number; 2 / Type created. SQL> with users as (select /*+ cardinality(a, 10) */ rownum r, a.column_value user_id 2 from table(user_va(11, 0, 19, 5)) a) 3 select d.user_id, d.username 4 from dba_users d 5 inner join users u 6 on u.user_id = d.user_id 7 order by u.r 8 / USER_ID USERNAME ---------- ------------------------------ 11 OUTLN 0 SYS 19 DIP 5 SYSTEM ``` i.e we put the elements into a varray and assign a rownum prior to merging the set. we can then order by that `r` to maintain the order of our in list. The `cardinality` hint just tells the optimizer how many rows are in the array (doesn't have to be dead on, just in the ballpark..as without this, it will assume 8k rows and may prefer a full scan over an index approach) if you don't have privs to create a type and this is just some adhoc thing, there's a few public ones: ``` select owner, type_name, upper_bound max_elements, length max_size, elem_type_name from all_Coll_types where coll_type = 'VARYING ARRAY' and elem_type_name in ('INTEGER', 'NUMBER'); ```
Another solution that I found here.

```
select ID
from USER
where ID in (5004, 5003, 5005, 5002, 5008)
order by decode(ID, 5004, 1, 5003, 2, 5005, 3, 5002, 4, 5008, 5);
```

order by decode(COLUMN NAME, VALUE, POSITION) \*Note: only the VALUE and POSITION pairs need to be repeated. And yeah, thanks for all the responses! I really appreciate it.
256,359
I read in an American Accent book that there is no break between sibilants adjoining each other between words. For example, this phrase: > > I was starting to worry. > > > The words *was* + *starting* sound like [wə**zs**tɑrt̬ɪŋ] with no break. Am I right? I marked the sibilant sounds with bold. I used the schwa sound in *was* because it's a function word and we usually give stress to content words and reduce the function words. I also used the tapped T in the word started.
2015/07/01
[ "https://english.stackexchange.com/questions/256359", "https://english.stackexchange.com", "https://english.stackexchange.com/users/111955/" ]
Well, there is never a break between words unless you make one. You could say first the word "was", stop making sounds, then say "starting", if you chose to make that break. If there is no period of silence between them, the [zs] pronunciation is just a [z] followed immediately by [s]. If you happen to be an English speaker who customarily devoices word-final obstruents, like /z/, then the last sound of "was" will be voiceless, but it would have been anyway, regardless of whether "starting" followed. Details of how exactly word final /z/ is pronounced in various English dialects are interesting in themselves, but so far as I know, they have nothing to do with whether the next word starts with /s/.
You are right. *Was*, despite ending with an 's', sounds like a 'z' for an American speaker no matter where you are. Because of that, [wəstɑrt̬ɪŋ] isn't correct. That would only work if both the 's' in "was" and the 's' in "starting" were pronounced the same. As for the lack of break, that would be correct as well. Even in a lengthened cadence such as a Texan drawl, it is really hard to separate them out into two distinct words and have it sound right.
62,490
Does Microsoft Virtual PC have any scripting capabilities? I'm trying to automatically launch a web browser inside the VM and have it go to url specified by a parameter.
2009/10/29
[ "https://superuser.com/questions/62490", "https://superuser.com", "https://superuser.com/users/2582/" ]
For "basic" viruses you'll want to boot up in safe mode and delete the executable (you could use Process Explorer, or task manager from Vista onwards, to locate the file), but for more advanced types I always find using Sysinternal's Autoruns to remove their ability to start up (along with any helper applications they have) does it. Anything that lives through that might take a little more poking to fix.
99% CPU you say? Sounds like a poorly coded keylogger. Try checking `msconfig` for suspicious startup entries first and foremost. You can use [Process Explorer](http://technet.microsoft.com/en-us/sysinternals/bb896653.aspx) to find information about the process itself, and if you have the location of the program, kill it with process explorer then delete it. It's wishful thinking to assume something this simple will work, but it's worth a try. If that doesn't work, back up your friends documents and do a clean install, for me there is no safer feeling than a fresh OS install (if it's a possibility). If that isn't an option, download [Avira](http://www.avira.com/en/pages/index.php) and [Malwarebytes anti-malware](http://www.malwarebytes.org/mbam.php) and do a full system scan **in safe mode**, that should take care of it.
62,490
Does Microsoft Virtual PC have any scripting capabilities? I'm trying to automatically launch a web browser inside the VM and have it go to url specified by a parameter.
2009/10/29
[ "https://superuser.com/questions/62490", "https://superuser.com", "https://superuser.com/users/2582/" ]
For "basic" viruses you'll want to boot up in safe mode and delete the executable (you could use Process Explorer, or task manager from Vista onwards, to locate the file), but for more advanced types I always find using Sysinternal's Autoruns to remove their ability to start up (along with any helper applications they have) does it. Anything that lives through that might take a little more poking to fix.
My advice would be not to search on Google directly for the executable, but rather to Google for a reliable anti-malware site (e.g. your antivirus vendor, or the other big ones like Symantec etc), then search within their site to see if it's a known issue.
9,364
This will perhaps look like a very basic and trivial question. But I find it confusing. As an experience, when you are travelling in non-AC car in summer, have people felt if putting on or putting off the windowpane affect how much heat it feels inside to passengers. On one hand, I feel that heat can still come inside thru glass particles (This makes me ask if heat needs media to travel?) and so putting on windowpane should not reduce heat felt inside. But psychologically it feels like by closing the window, you are blocking heat. Also the windowpane will at least block the heated air from coming inside. But does this heated air really brings much heat apart from the heat coming through anyway or it actually in practical acts more like a ventillator thus reducing heat felt inside. I realize that this question may appear strange, but it pops up to my head everytime I travel in non-AC car in summer. I have tried to articulate it as well as I could. Do insulator windowpane glasses reduce heat felt inside? because they will transmit lesser heat inside. How do black shade glasses work?
2011/05/02
[ "https://physics.stackexchange.com/questions/9364", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/1093/" ]
A couple of different effects here. A glass-windowed box will heat up to considerably more than the outside temperature. Radiation from the sun, as visible and near-infrared light, passes easily through the glass, hits the black interior surfaces and heats them up; this in turn heats the air, and the glass stops the warm air escaping. Opening a window will let the hot air escape and the inside cool to approximately the outside air temperature - it can't cool below this without some power source (e.g. AC). Putting your hand out of a moving car window will strongly cool your hand. Water from your hand evaporates, cooling it; the wind blows this warm wet air away quickly, allowing more fresh moisture to leave your hand. This is the wind chill, which, although it can only cool your body to the air temperature, makes you feel very cold as your body tries to maintain its normal temperature.
Keeping your windows closed increases the temperature inside the car because of a greenhouse-like effect. The inside of the car continually receives radiation from the sun, re-emits it to warm the air inside the car but doesn't allow the heated air to leave the car by convection. Hence the car will be considerably hotter inside if the windows are kept closed. See <http://en.wikipedia.org/wiki/Greenhouse_effect#Real_greenhouses> Black shade glasses simply reduce the amount of radiation getting in thus making things slightly cooler.
9,364
This will perhaps look like a very basic and trivial question. But I find it confusing. As an experience, when you are travelling in non-AC car in summer, have people felt if putting on or putting off the windowpane affect how much heat it feels inside to passengers. On one hand, I feel that heat can still come inside thru glass particles (This makes me ask if heat needs media to travel?) and so putting on windowpane should not reduce heat felt inside. But psychologically it feels like by closing the window, you are blocking heat. Also the windowpane will at least block the heated air from coming inside. But does this heated air really brings much heat apart from the heat coming through anyway or it actually in practical acts more like a ventillator thus reducing heat felt inside. I realize that this question may appear strange, but it pops up to my head everytime I travel in non-AC car in summer. I have tried to articulate it as well as I could. Do insulator windowpane glasses reduce heat felt inside? because they will transmit lesser heat inside. How do black shade glasses work?
2011/05/02
[ "https://physics.stackexchange.com/questions/9364", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/1093/" ]
A couple of different effects here. A glass-windowed box will heat up to considerably more than the outside temperature. Radiation from the sun, as visible and near-infrared light, passes easily through the glass, hits the black interior surfaces and heats them up; this in turn heats the air, and the glass stops the warm air escaping. Opening a window will let the hot air escape and the inside cool to approximately the outside air temperature - it can't cool below this without some power source (e.g. AC). Putting your hand out of a moving car window will strongly cool your hand. Water from your hand evaporates, cooling it; the wind blows this warm wet air away quickly, allowing more fresh moisture to leave your hand. This is the wind chill, which, although it can only cool your body to the air temperature, makes you feel very cold as your body tries to maintain its normal temperature.
If the car is in the sun, closed windows will trap heat due to two mechanisms. As dbrane says, the greenhouse effect is named after one of these effects: the radiation comes in at short wavelengths, and the reradiated thermal energy is partially blocked by the glass. But any dark object in direct sunlight will get surprisingly hot (150-200 F is not uncommon), so leaving the windows open a crack allows some airflow, bringing the inside temperature closer to ambient. Also, in dry heat, opening your windows for the first couple of minutes of driving to drive out the superheated air is more efficient than immediately turning on the AC.
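As a rough sanity check on that 150-200 F figure (my own back-of-the-envelope estimate, not part of the original answer): treat the dark interior surface as a black body absorbing about 1000 W/m² of direct sunlight, radiating from one side only, and ignore convective cooling, so this is an upper bound:

$$ \sigma T^4 = S \quad\Rightarrow\quad T = \left(\frac{S}{\sigma}\right)^{1/4} = \left(\frac{1000}{5.67\times 10^{-8}}\right)^{1/4}\ \mathrm{K} \approx 364\ \mathrm{K} \approx 91\ ^\circ\mathrm{C} \approx 196\ ^\circ\mathrm{F}. $$

Conduction and convection pull the real surface temperature below this radiative limit, which is consistent with the 150-200 F range quoted above.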
16,228,509
I'm exploring the boost asio offerings the client sends a header of 1 byte indicating the length of bytes to follow. relevant server code: ``` enum {max_length=1}; void handle_read(const boost::system::error_code & error, const size_t & bytes_transferred){ if (! error){ ++ctr; std::string inc_data_str(this->inc_data.begin(),this->inc_data.end()); std::cout<<"received string: "<<inc_data_str<<" with size "<<inc_data_str.size() <<" bytes_transferred: "<<bytes_transferred<<" ctr: "<<ctr<<std::endl; int size_inc_next = boost::lexical_cast<int>(inc_data_str); int offset = 0; //std::cout<<"incoming integer of size "<<size_inc_next<<" processed from string: "<<inc_data_str<<std::endl; std::vector<char> next_inc_data(size_inc_next+offset); boost::asio::read(this->socket,boost::asio::buffer(next_inc_data),boost::asio::transfer_exactly(size_inc_next+offset)); std::string int_recvd(next_inc_data.begin(),next_inc_data.begin()+size_inc_next); //std::cout<<boost::posix_time::microsec_clock::local_time()<<std::endl; //std::cout<<"received integer: "<<int_recvd<<" from string "<<int_recvd<<" of size "<<int_recvd.size()<<std::endl; this->process_connection(); } // ! error } // handle_read void process_connection(){ boost::asio::async_read(this->socket,boost::asio::buffer(this->inc_data),boost::asio::transfer_exactly(max_length), boost::bind(&Connection::handle_read,shared_from_this(),boost::asio::placeholders::error, boost::asio::placeholders::bytes_transferred)); } ``` relevant client code: ``` void on_write(const boost::system::error_code & error_code){ if (! error_code){ std::string transfer_data("15"); std::vector<char> v_td(transfer_data.begin(),transfer_data.end()); ++ctr; for (std::vector<char>::iterator iter = v_td.begin(); iter != v_td.end(); ++iter) std::cout<<*iter; std::cout<<" ctr: "<<ctr; std::endl(std::cout); boost::asio::async_write(this->socket,boost::asio::buffer(v_td),boost::asio::transfer_exactly(2), boost::bind(&Client::on_write,shared_from_this(), boost::asio::placeholders::error)); } } ``` Expected example printing output for Server process: ``` received string: 1 with size 1 bytes_transferred: 1 ctr: 159685 ``` Expected example printing output for Client process: ``` 15 ctr: 356293 ``` Such expected output is produced for a while, but say after say 356293 client iteration (this ctr number is non-deterministic to the naked eye from repeated trials of the processeses), the server breaks with the following error: ``` received string: with size 1 bytes_transferred: 1 ctr: 159686 terminate called after throwing an instance of 'boost::exception_detail::clone_impl<boost::exception_detail::error_info_injector<boost::bad_lexical_cast> >' what(): bad lexical cast: source type value could not be interpreted as target ``` Aborted (core dumped) Note that the received string is "blank". On occasions it also breaks with the alternative message: ``` received string: X with size 1 bytes_transferred: 1 ctr: 159686 ``` What's going on here and why and how do i sort it out ? 
Further EDIT after strace: Client trace: ``` sendmsg(6, {msg_name(0)=NULL, msg_iov(1)=[{"15", 2}], msg_controllen=0, msg_flags=0}, MSG_NOSIGNAL) = 2 epoll_wait(4, {}, 128, 0) = 0 write(1, "15 ctr: 204441\n", 1515 ctr: 204441) = 15 sendmsg(6, {msg_name(0)=NULL, msg_iov(1)=[{"15", 2}], msg_controllen=0, msg_flags=0}, MSG_NOSIGNAL) = 2 epoll_wait(4, {}, 128, 0) = 0 write(1, "15 ctr: 204442\n", 1515 ctr: 204442) = 15 sendmsg(6, {msg_name(0)=NULL, msg_iov(1)=[{"15", 2}], msg_controllen=0, msg_flags=0}, MSG_NOSIGNAL) = -1 EAGAIN (Resource temporarily \ unavailable) epoll_wait(4, {{EPOLLOUT, {u32=167539936, u64=167539936}}}, 128, -1) = 1 sendmsg(6, {msg_name(0)=NULL, msg_iov(1)=[{"\0\0", 2}], msg_controllen=0, msg_flags=0}, MSG_NOSIGNAL) = 2 write(1, "15 ctr: 204443\n", 1515 ctr: 204443) = 15 sendmsg(6, {msg_name(0)=NULL, msg_iov(1)=[{"15", 2}], msg_controllen=0, msg_flags=0}, MSG_NOSIGNAL) = 2 epoll_wait(4, {}, 128, 0) = 0 write(1, "15 ctr: 204444\n", 1515 ctr: 204444) = 15 sendmsg(6, {msg_name(0)=NULL, msg_iov(1)=[{"15", 2}], msg_controllen=0, msg_flags=0}, MSG_NOSIGNAL) = 2 epoll_wait(4, {}, 128, 0) = 0 write(1, "15 ctr: 204445\n", 1515 ctr: 204445) = 15 ``` Server trace: ``` write(1, "received string: 1 with size 1 b"..., 64received string: 1 with size 1 bytes_transferred: 1 ctr: 204441) = 64 write(1, "incoming integer of size 1 proce"..., 52incoming integer of size 1 processed from string: 1) = 52 recvmsg(7, {msg_name(0)=NULL, msg_iov(1)=[{"5", 1}], msg_controllen=0, msg_flags=0},0) = 1 write(1, "received integer: 5 from string "..., 44received integer: 5 from string 5 of size 1) = 44 recvmsg(7, {msg_name(0)=NULL, msg_iov(1)=[{"1", 1}], msg_controllen=0, msg_flags=0},0) = 1 epoll_wait(4, {}, 128, 0) = 0 write(1, "received string: 1 with size 1 b"..., 64received string: 1 with size 1 bytes_transferred: 1 ctr: 204442) = 64 write(1, "incoming integer of size 1 proce"..., 52incoming integer of size 1 processed from string: 1) = 52 recvmsg(7, {msg_name(0)=NULL, msg_iov(1)=[{"5", 1}], msg_controllen=0, msg_flags=0}, 0) = 1 write(1, "received integer: 5 from string "..., 44received integer: 5 from string 5 of size 1) = 44 recvmsg(7, {msg_name(0)=NULL, msg_iov(1)=[{"\0", 1}], msg_controllen=0, msg_flags=0}, 0) = 1 epoll_wait(4, {}, 128, 0) = 0 write(1, "received string: \0 with size 1 b"..., 64received string: ^@ with size 1 bytes_transferred: 1 ctr: 204443) = 64 futex(0xb76640fc, FUTEX_WAKE_PRIVATE, 2147483647) = 0 write(1, "inc_data_str\n", 13inc_data_str) = 13 ``` For the client process, the epoll\_wait before the erroneous "\0\0" send is different (u32=...., u64=....) than from the other epoll\_wait calls ... don't know what that means though To sum up the baffling part, the strace indicates nulls being transferred, yet the strace the next line indicates a write system call to standard output with the literal "15" which means that's what was in the transfer\_data vector Re-EDIT: Finally I inserted a ``` boost::this_thread::sleep(boost::posix_time::microseconds(200)); ``` just before the write statement in the client on\_write function. With this, no problems were faced. So what sort of concurrency issue could this be with the asio objects ? is it the socket?
2013/04/26
[ "https://Stackoverflow.com/questions/16228509", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1411750/" ]
Your client is broken because of buffer lifetime ``` void on_write( const boost::system::error_code& error_code ) { if ( !error_code ) { std::string transfer_data("15"); std::vector<char> v_td(transfer_data.begin(), transfer_data.end()); // ^ // \------ goes out of scope before async_write() returns boost::asio::async_write( this->socket, boost::asio::buffer(v_td), boost::asio::transfer_exactly(2), boost::bind( &Client::on_write, shared_from_this(), boost::asio::placeholders::error ) ); } } ``` You need to ensure the buffer given to [`async_write()`](http://www.boost.org/doc/libs/release/doc/html/boost_asio/reference/async_write/overload1.html) remains valid until the completion handler is invoked: > > **buffers** One or more buffers containing the data to be written. Although the buffers object may be copied as necessary, ownership of > the underlying memory blocks is retained by the caller, which must > guarantee that they remain valid until the handler is called. handler > > > ---
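A minimal sketch of one common fix, reusing the class and member names from the question (the constructor and connect logic here are assumptions, shown only to make it self-contained): make the outgoing buffer a data member of `Client`, so it lives at least as long as the object that `shared_from_this()` keeps alive.

```
#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/enable_shared_from_this.hpp>
#include <string>
#include <vector>

// The Client must be owned by a boost::shared_ptr for shared_from_this() to work,
// as in the question.
class Client : public boost::enable_shared_from_this<Client>
{
public:
    explicit Client(boost::asio::io_service& io) : socket(io) {}

    void on_write(const boost::system::error_code& error_code)
    {
        if (!error_code)
        {
            const std::string transfer_data("15");
            // v_td is a member now, so it stays valid until the completion
            // handler (this function) runs again -- no dangling buffer.
            v_td.assign(transfer_data.begin(), transfer_data.end());

            boost::asio::async_write(
                socket,
                boost::asio::buffer(v_td),
                boost::asio::transfer_exactly(v_td.size()),
                boost::bind(&Client::on_write, shared_from_this(),
                            boost::asio::placeholders::error));
        }
    }

    boost::asio::ip::tcp::socket socket;   // connect/handshake logic omitted

private:
    std::vector<char> v_td;                // outgoing buffer with a stable lifetime
};
```

Another common pattern is to hold the vector in a `boost::shared_ptr` and bind that pointer as an extra argument of the completion handler, which keeps the buffer alive for exactly that one write.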
Well, the exception message says, there is a bad lexical cast. I see one in your server code: ``` int size_inc_next = boost::lexical_cast<int>(inc_data_str); ``` Maybe you should put a breakpoint there and debug. When you get the message > > received string: X with size 1 bytes\_transferred: 1 ctr: 159686 > > > it's obviously the output of the line before the `lexical_cast`. Then `inc_data_str` seems to be "X" - and casting that to an int should trigger that bad cast exception. Why there is an "X" coming from the other side of the connection I can't tell.
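If you also want the server to survive a malformed length byte instead of aborting, a small guard around the cast helps make the failure visible without terminating the process (a sketch, reusing the variable name from the question):

```
#include <boost/lexical_cast.hpp>
#include <iostream>
#include <string>

// Returns -1 on a malformed length byte instead of letting the
// bad_lexical_cast exception escape and terminate the process.
int parse_length_byte(const std::string& inc_data_str)
{
    try
    {
        return boost::lexical_cast<int>(inc_data_str);
    }
    catch (const boost::bad_lexical_cast&)
    {
        std::cerr << "unexpected length byte: '" << inc_data_str
                  << "' (" << inc_data_str.size() << " chars)" << std::endl;
        return -1;   // caller should close or resynchronise the connection
    }
}
```

Calling this from `handle_read` and dropping the connection when it returns -1 won't fix the underlying framing problem, but it stops the crash and logs exactly what arrived.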
65,102,314
I am trying to connect to Salesforce using node js / jsforce library and use promises. Unfortunately one of the methods is executing prior to getting connection. i have method A : makeconnection which returns the connection i have method B : which loads data from Salesforce based on the connection reference from method A I have method C : which gets dependencies from Salesforce based on connection from method A I would like the following order to be executed A ==> B ==> C Unfortunately C seems to run first followed by A and B so the connection is null and it fails roughly this is the code ``` let jsforce = require("jsforce"); const sfdcSoup = require("sfdc-soup"); const fs = require("fs"); let _ = require("lodash"); let trgarr = []; let clsarr = []; let entityarr = []; function makeConnection() { return new Promise((resolve,reject) => { const conn = new jsforce.Connection({ loginUrl: "https://test.salesforce.com", instanceUrl: "salesforce.com", serverUrl: "xxx", version: "50.0" }); conn.login(username, password, function (err, userInfo) { if (err) { return console.error(err); } // console.log(conn.accessToken); //console.log(conn.instanceUrl); //console.log("User ID: " + userInfo.id); //console.log("Org ID: " + userInfo.organizationId); console.log("logged in"); }); resolve(conn); }); } function loadClasses(conn) { return new Promise((resolve,reject) => { const querystr = "select apiVersion,name,body from apexClass where NamespacePrefix = null"; let query = conn .query(querystr) .on("record", function (rec) { clsarr.push(rec); }) .on("end", function () { console.log("number of class is " + clsarr.length); console.log("loaded all classes"); }); resolve(conn,clsarr); }); } async function getDependencies(conn) { return new Promise((resolve,reject) => { let entryPoint = { name: "xxx", type: "CustomField", id: yyy }; let connection = { token: conn.accessToken, url: "abc.com", apiVersion: "50.0" }; let usageApi = sfdcSoup.usageApi(connection, entryPoint); usageApi.getUsage().then((response) => { console.log(response.stats); console.log(response.csv); }); }); } async function run() { makeConnection().then(conn => loadClasses(conn)).then(conn=>getDependencies(conn)); } run(); ``` I keep getting an error that says **UnhandledPromiseRejectionWarning: Error: Access token and URL are required on the connection object** The reason is connection needs to be obtained from method A and sent to Method C , which is not happening. Can you please guide where i might be wrong? Also why is method C getting executed before A and B. **why does my promise chaining not work as promised**? I am running the code in Vscode and using Node 14
2020/12/02
[ "https://Stackoverflow.com/questions/65102314", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14746367/" ]
Here is your fix:

```
<div>
  <label class="radio" v-for="singleGender in genders">
    <input type="radio" v-model="gender" v-bind:value="singleGender.code">
    {{singleGender.description}}
  </label>
</div>
<div>{{gender}}</div>
```

And here is your data:

```
data: {
  gender: "M",
  genders: [
    { code: "F", description: "Female" },
    { code: "M", description: "Male" }
  ]
}
```

There is no need to use a click event to store the value in the model; `v-model` handles that. Note: maybe in a template selector the HTML can't render in the DOM until you render it manually.
Here is the solution: ``` <template> <div> <label class="radio" v-for="(gender, index) in genders" :key="index"> <input type="radio" :value="gender.code" v-model="selGender" @click="selectedGender(gender.code)" /> {{ gender.description }} </label> </div> </template> <script> export default { name: "App", components: {}, data() { return { selGender: "M", genders: [ { code: "F", description: "Female", }, { code: "M", description: "Male", }, ], }; }, methods: { selectedGender(option) { this.selGender = option; }, }, }; </script> ```
65,102,314
I am trying to connect to Salesforce using node js / jsforce library and use promises. Unfortunately one of the methods is executing prior to getting connection. i have method A : makeconnection which returns the connection i have method B : which loads data from Salesforce based on the connection reference from method A I have method C : which gets dependencies from Salesforce based on connection from method A I would like the following order to be executed A ==> B ==> C Unfortunately C seems to run first followed by A and B so the connection is null and it fails roughly this is the code ``` let jsforce = require("jsforce"); const sfdcSoup = require("sfdc-soup"); const fs = require("fs"); let _ = require("lodash"); let trgarr = []; let clsarr = []; let entityarr = []; function makeConnection() { return new Promise((resolve,reject) => { const conn = new jsforce.Connection({ loginUrl: "https://test.salesforce.com", instanceUrl: "salesforce.com", serverUrl: "xxx", version: "50.0" }); conn.login(username, password, function (err, userInfo) { if (err) { return console.error(err); } // console.log(conn.accessToken); //console.log(conn.instanceUrl); //console.log("User ID: " + userInfo.id); //console.log("Org ID: " + userInfo.organizationId); console.log("logged in"); }); resolve(conn); }); } function loadClasses(conn) { return new Promise((resolve,reject) => { const querystr = "select apiVersion,name,body from apexClass where NamespacePrefix = null"; let query = conn .query(querystr) .on("record", function (rec) { clsarr.push(rec); }) .on("end", function () { console.log("number of class is " + clsarr.length); console.log("loaded all classes"); }); resolve(conn,clsarr); }); } async function getDependencies(conn) { return new Promise((resolve,reject) => { let entryPoint = { name: "xxx", type: "CustomField", id: yyy }; let connection = { token: conn.accessToken, url: "abc.com", apiVersion: "50.0" }; let usageApi = sfdcSoup.usageApi(connection, entryPoint); usageApi.getUsage().then((response) => { console.log(response.stats); console.log(response.csv); }); }); } async function run() { makeConnection().then(conn => loadClasses(conn)).then(conn=>getDependencies(conn)); } run(); ``` I keep getting an error that says **UnhandledPromiseRejectionWarning: Error: Access token and URL are required on the connection object** The reason is connection needs to be obtained from method A and sent to Method C , which is not happening. Can you please guide where i might be wrong? Also why is method C getting executed before A and B. **why does my promise chaining not work as promised**? I am running the code in Vscode and using Node 14
2020/12/02
[ "https://Stackoverflow.com/questions/65102314", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14746367/" ]
Instead of using loops & click event handlers, you can simply use `v-model` to bind your radio options `value` attribute to your vue data's `gender` variable: ``` <div> <label for="male"> <input type="radio" id="male" value="M" v-model="gender">Male </label> <label for="female"> <input type="radio" id="female" value="F" v-model="gender">Female </label> </div> data: { gender: "M", } ```
Here is the solution: ``` <template> <div> <label class="radio" v-for="(gender, index) in genders" :key="index"> <input type="radio" :value="gender.code" v-model="selGender" @click="selectedGender(gender.code)" /> {{ gender.description }} </label> </div> </template> <script> export default { name: "App", components: {}, data() { return { selGender: "M", genders: [ { code: "F", description: "Female", }, { code: "M", description: "Male", }, ], }; }, methods: { selectedGender(option) { this.selGender = option; }, }, }; </script> ```
662,914
I have a very strange issue with PFsense as router running in KVM with CentOS 7. https connections are incredibly slow (10KB/s or less), and uploads over https simply don't work; for example using <https://imgur.com> over https loads, but uploading an image will take minutes, after which it says it failed. I have a dual-wan setup with a 192.168.178.x/24 subnet between the PFsense VM and the 2 ADSL router/modems. The router/modem's NAT functionality can not be turned off, so I've simply put them in the same subnet and connected them to eachother with only 1 DHCP server active, the first router sitting on .1 and the second on .2. The PFsense box sits on .5. The private network behind pfsense is 172.16.x.x/16. The PFsense virtual machine runs on a CentOS 7 KVM hypervisor with 2 intel Gbe NICs, bridged using a linux bridge with the VM network cards, using virtIO drivers, if it makes any difference. I do have a Squidproxy, however it is not enabled for https connections, and https accesses do not appear in Squid's logs, and turning off or removing Squid does not make a difference. Moving myself into the 192.168.178.x/24 subnet before PFsense DOES make a difference however, as suddenly everything runs smoothly again, and any https content loads instantly. Does anyone have a clue what could be going on? Anything I could try to diagnose? I've tried wireshark and aside for the slowness I don't see anything unusual.. Any suggestions are welcome! edit: I'm currently running memtest86+ inside a VM (those shouldn't give errors either right?), and I have 1 error so far, although it seems to be outside the range of memory I've granted the VM so I'm a bit confused.. I will update once I have more info. Might run a full memtest on the host later if I can clear users off the host for a moment.
2015/01/28
[ "https://serverfault.com/questions/662914", "https://serverfault.com", "https://serverfault.com/users/126695/" ]
It's entirely possible that if you're using pfSense 2.2 or later, you're being affected by [this](https://forum.pfsense.org/index.php?topic=88467.0). Symptoms would include: 1. Slowness for other VMs hosted on the KVM platform if they need to access a network resource which is on the other side of one of the router interfaces on the pfSense router 2. Physical machines which need to access something across the router are perfectly fast I am no expert, but my current understanding is that checksums are not correctly calculated for packets that move from one VM to another VM, so either the pfSense router discards them, or the recipient on the other end of the connection discards them, because they believe the packets were mangled in transport (which, I guess, they technically were). There's lots of discussion about it in the thread I linked above, and also in [this](https://forum.pfsense.org/index.php?topic=85797.0) thread. To resolve, you'll need to probably disable at least TX checksum offloading on the virtual NICs of the pfSense VM. I'm not sure of the procedure to do that in KVM, since I'm a Xen man, myself. Happy hunting!
How is the dual-WAN configured? Is it in redundancy mode, load balancing, or distributed? This may be the problem if at some point a switchover happens between the WANs and the secondary/backup one is a lot slower.
662,914
I have a very strange issue with PFsense as router running in KVM with CentOS 7. https connections are incredibly slow (10KB/s or less), and uploads over https simply don't work; for example using <https://imgur.com> over https loads, but uploading an image will take minutes, after which it says it failed. I have a dual-wan setup with a 192.168.178.x/24 subnet between the PFsense VM and the 2 ADSL router/modems. The router/modem's NAT functionality can not be turned off, so I've simply put them in the same subnet and connected them to eachother with only 1 DHCP server active, the first router sitting on .1 and the second on .2. The PFsense box sits on .5. The private network behind pfsense is 172.16.x.x/16. The PFsense virtual machine runs on a CentOS 7 KVM hypervisor with 2 intel Gbe NICs, bridged using a linux bridge with the VM network cards, using virtIO drivers, if it makes any difference. I do have a Squidproxy, however it is not enabled for https connections, and https accesses do not appear in Squid's logs, and turning off or removing Squid does not make a difference. Moving myself into the 192.168.178.x/24 subnet before PFsense DOES make a difference however, as suddenly everything runs smoothly again, and any https content loads instantly. Does anyone have a clue what could be going on? Anything I could try to diagnose? I've tried wireshark and aside for the slowness I don't see anything unusual.. Any suggestions are welcome! edit: I'm currently running memtest86+ inside a VM (those shouldn't give errors either right?), and I have 1 error so far, although it seems to be outside the range of memory I've granted the VM so I'm a bit confused.. I will update once I have more info. Might run a full memtest on the host later if I can clear users off the host for a moment.
2015/01/28
[ "https://serverfault.com/questions/662914", "https://serverfault.com", "https://serverfault.com/users/126695/" ]
It's entirely possible that if you're using pfSense 2.2 or later, you're being affected by [this](https://forum.pfsense.org/index.php?topic=88467.0). Symptoms would include: 1. Slowness for other VMs hosted on the KVM platform if they need to access a network resource which is on the other side of one of the router interfaces on the pfSense router 2. Physical machines which need to access something across the router are perfectly fast I am no expert, but my current understanding is that checksums are not correctly calculated for packets that move from one VM to another VM, so either the pfSense router discards them, or the recipient on the other end of the connection discards them, because they believe the packets were mangled in transport (which, I guess, they technically were). There's lots of discussion about it in the thread I linked above, and also in [this](https://forum.pfsense.org/index.php?topic=85797.0) thread. To resolve, you'll need to probably disable at least TX checksum offloading on the virtual NICs of the pfSense VM. I'm not sure of the procedure to do that in KVM, since I'm a Xen man, myself. Happy hunting!
Have you perhaps enabled a setting that penalises "unknown" or encrypted traffic? This setting is usually intended to punish or make the network unusable for file sharing and p2p users, but perhaps pfsense is seeing that https is encrypted and penalising it accordingly.
662,914
I have a very strange issue with PFsense as router running in KVM with CentOS 7. https connections are incredibly slow (10KB/s or less), and uploads over https simply don't work; for example using <https://imgur.com> over https loads, but uploading an image will take minutes, after which it says it failed. I have a dual-wan setup with a 192.168.178.x/24 subnet between the PFsense VM and the 2 ADSL router/modems. The router/modem's NAT functionality can not be turned off, so I've simply put them in the same subnet and connected them to eachother with only 1 DHCP server active, the first router sitting on .1 and the second on .2. The PFsense box sits on .5. The private network behind pfsense is 172.16.x.x/16. The PFsense virtual machine runs on a CentOS 7 KVM hypervisor with 2 intel Gbe NICs, bridged using a linux bridge with the VM network cards, using virtIO drivers, if it makes any difference. I do have a Squidproxy, however it is not enabled for https connections, and https accesses do not appear in Squid's logs, and turning off or removing Squid does not make a difference. Moving myself into the 192.168.178.x/24 subnet before PFsense DOES make a difference however, as suddenly everything runs smoothly again, and any https content loads instantly. Does anyone have a clue what could be going on? Anything I could try to diagnose? I've tried wireshark and aside for the slowness I don't see anything unusual.. Any suggestions are welcome! edit: I'm currently running memtest86+ inside a VM (those shouldn't give errors either right?), and I have 1 error so far, although it seems to be outside the range of memory I've granted the VM so I'm a bit confused.. I will update once I have more info. Might run a full memtest on the host later if I can clear users off the host for a moment.
2015/01/28
[ "https://serverfault.com/questions/662914", "https://serverfault.com", "https://serverfault.com/users/126695/" ]
It's entirely possible that if you're using pfSense 2.2 or later, you're being affected by [this](https://forum.pfsense.org/index.php?topic=88467.0). Symptoms would include: 1. Slowness for other VMs hosted on the KVM platform if they need to access a network resource which is on the other side of one of the router interfaces on the pfSense router 2. Physical machines which need to access something across the router are perfectly fast I am no expert, but my current understanding is that checksums are not correctly calculated for packets that move from one VM to another VM, so either the pfSense router discards them, or the recipient on the other end of the connection discards them, because they believe the packets were mangled in transport (which, I guess, they technically were). There's lots of discussion about it in the thread I linked above, and also in [this](https://forum.pfsense.org/index.php?topic=85797.0) thread. To resolve, you'll need to probably disable at least TX checksum offloading on the virtual NICs of the pfSense VM. I'm not sure of the procedure to do that in KVM, since I'm a Xen man, myself. Happy hunting!
Please check whether you have an incorrect LDAP/RADIUS server configuration. To test, back up your settings and then remove the server. That was the problem in my case.
57,611
I am amused by the story of Beethoven dueling another composer in Vienna, in which he takes the other composer's music, puts it upside down, and proceeds to humiliate him. I feel as though keyboardists with that kind of talent must exist nowadays, but they are perhaps not as popular as they once were. Do we have such "Rockstar" pianists in the modern age? Who are they?
2017/05/18
[ "https://music.stackexchange.com/questions/57611", "https://music.stackexchange.com", "https://music.stackexchange.com/users/33712/" ]
The type of keyboard music typically composed during the classical age was so much less complex, and the number of professional musicians was so much smaller, that the comparison cannot be made in any reasonable way. A lot of present-day pianists could probably sight-read the music of a second-rate composer from 1800. But that was a time when the music of Liszt, Alkan, Ligeti, Messiaen etc. etc. did not exist, and would have been considered absurd and obviously unplayable. It is impossible to say whether there are living musicians who are as phenomenal by current standards as Beethoven was by *his era's* standards.
[Jordan Rudess](http://www.jordanrudess.com/) (Dream Theater) is an impressive keyboardist and a true wizard, though maybe not quite as famous as Beethoven. He's not only an accomplished piano virtuoso; he's also constantly pushing the boundaries of his instrument, e.g. by controlling and designing sounds in innovative ways such as with the Haken Continuum and his iPad apps.
57,611
I am amused by the story of Beethoven dueling another composer in Vienna, in which he takes the other composer's music, puts it upside down, and proceeds to humiliate him. I feel as though keyboardists with that kind of talent must exist nowadays, but they are perhaps not as popular as they once were. Do we have such "Rockstar" pianists in the modern age? Who are they?
2017/05/18
[ "https://music.stackexchange.com/questions/57611", "https://music.stackexchange.com", "https://music.stackexchange.com/users/33712/" ]
The type of keyboard music typically composed during the classical age was so much less complex, and the number of professional musicians was so much smaller, that the comparison cannot be made in any reasonable way. A lot of present-day pianists could probably sight-read the music of a second-rate composer from 1800. But that was a time when the music of Liszt, Alkan, Ligeti, Messiaen etc. etc. did not exist, and would have been considered absurd and obviously unplayable. It is impossible to say whether there are living musicians who are as phenomenal by current standards as Beethoven was by *his era's* standards.
You should definitely check out keyboardists from progressive rock bands. Prog rock musicians really introduced synthesizers to the popular music/rock scene and elicited really unconventional sounds from those synthesizers. Listen to what they'd done with synthesizers in the 70s, and you'd be amazed. That was 50 years ago! The most famous keyboard giants of prog rock are probably Keith Emerson of the band Emerson, Lake and Palmer, known for his flamboyant live performances and unconventional techniques (e.g. sticking a knife into a keyboard), and Rick Wakeman of the band Yes, who was much less flamboyant but nevertheless a great keyboardist who created many interesting sounds. My favorite, however, is Rick Wright of the venerable Pink Floyd; he wasn't into doing crazy, virtuosic things with his keyboards, but he was really good at creating sounds and conveying moods and atmospheres with his synths.
57,611
I am amused by the story of Beethoven dueling another composer in Vienna, in which he takes the other composer's music, puts it upside down, and proceeds to humiliate him. I feel as though keyboardists with that kind of talent must exist nowadays, but they are perhaps not as popular as they once were. Do we have such "Rockstar" pianists in the modern age? Who are they?
2017/05/18
[ "https://music.stackexchange.com/questions/57611", "https://music.stackexchange.com", "https://music.stackexchange.com/users/33712/" ]
Keep in mind that the Beethoven "duel" is legend. By legend I don't mean that it didn't happen, only that it's one of those famous stories passed down through the generations, and stories like that only come along once or twice in a lifetime. We certainly still have "rock star" pianists today that you might want to check out: * [Lang Lang](https://youtu.be/jYO9gTmCJTE?t=7m46s) * [Denis Matsuev](https://www.youtube.com/watch?v=_tdBFx37Mps) * [Yuja Wang](https://www.youtube.com/watch?v=8alxBofd_eQ) * And one example from an earlier era: [Sviatoslav Richter](https://www.youtube.com/watch?v=GQ-NAgDpRVs) And there's something to be said about someone that uses their encore to play [that piece that all the six year olds learn](https://www.youtube.com/watch?v=xTVF8e5pbNY).
[Jordan Rudess](http://www.jordanrudess.com/) (Dream Theater) is an impressive keyboardist and a true wizard, though maybe not quite as famous as Beethoven. He's not only an accomplished piano virtuoso, he's also constantly pushing the boundaries of his instrument, e.g. by controlling and designing sounds in innovative ways such as with the Haken Continuum and his iPad apps.
57,611
I am amused by the story of Beethoven dueling another composer in Vienna, in which he takes the other composer's music, puts it upside down, and proceeds to humiliate him. I feel as though keyboardists with that kind of talent must exist nowadays, but they are perhaps not as popular as they once were. Do we have such "Rockstar" pianists in the modern age? Who are they?
2017/05/18
[ "https://music.stackexchange.com/questions/57611", "https://music.stackexchange.com", "https://music.stackexchange.com/users/33712/" ]
Keep in mind that the Beethoven "duel" is legend. By legend I don't mean that it didn't happen, only that it's one of those famous stories passed down through the generations, and stories like that only come along once or twice in a lifetime. We certainly still have "rock star" pianists today that you might want to check out: * [Lang Lang](https://youtu.be/jYO9gTmCJTE?t=7m46s) * [Denis Matsuev](https://www.youtube.com/watch?v=_tdBFx37Mps) * [Yuja Wang](https://www.youtube.com/watch?v=8alxBofd_eQ) * And one example from an earlier era: [Sviatoslav Richter](https://www.youtube.com/watch?v=GQ-NAgDpRVs) And there's something to be said about someone that uses their encore to play [that piece that all the six year olds learn](https://www.youtube.com/watch?v=xTVF8e5pbNY).
You should definitely check out keyboardists from progressive rock bands. Prog rock musicians really introduced synthesizers to the popular music/rock scene and elicited really unconventional sounds from those synthesizers. Listen to what they'd done with synthesizers in the 70s, and you'd be amazed. That was 50 years ago! The most famous keyboard giants of prog rock are probably Keith Emerson of the band Emerson, Lake and Palmer, known for his flamboyant live performances and unconventional techniques (e.g. sticking a knife into a keyboard), and Rick Wakeman of the band Yes, who was much less flamboyant but nevertheless a great keyboardist who created many interesting sounds. My favorite, however, is Rick Wright of the venerable Pink Floyd; he wasn't into doing crazy, virtuosic things with his keyboards, but he was really good at creating sounds and conveying moods and atmospheres with his synths.
57,611
I am amused by the story of Beethoven dueling another composer in Vienna, in which he takes the other composer's music, puts it upside down, and proceeds to humiliate him. I feel as though keyboardists with that kind of talent must exist nowadays, but they are perhaps not as popular as they once were. Do we have such "Rockstar" pianists in the modern age? Who are they?
2017/05/18
[ "https://music.stackexchange.com/questions/57611", "https://music.stackexchange.com", "https://music.stackexchange.com/users/33712/" ]
One of my favorite up-and-coming pianist/composers is Aysedeniz Gokcin. Not yet 30 years old, she's someone who combines a marvelous piano technique with impressive compositional skills. A good example is her "Pink Floyd Lisztified," an arrangement of some of Pink Floyd's songs in the manner of Liszt.
[Jordan Rudess](http://www.jordanrudess.com/) (Dream Theater) is an impressive keyboardist and a true wizard, though maybe not quite as famous as Beethoven. He's not only an accomplished piano virtuoso, he's also constantly pushing the boundaries of his instrument, e.g. by controlling and designing sounds in innovative ways such as with the Haken Continuum and his iPad apps.
57,611
I am amused by the story of Beethoven dueling another composer in Vienna, in which he takes the other composer's music, puts it upside down, and proceeds to humiliate him. I feel as though keyboardists with that kind of talent must exist nowadays, but they are perhaps not as popular as they once were. Do we have such "Rockstar" pianists in the modern age? Who are they?
2017/05/18
[ "https://music.stackexchange.com/questions/57611", "https://music.stackexchange.com", "https://music.stackexchange.com/users/33712/" ]
You should definitely check out keyboardists from progressive rock bands. Prog rock musicians really introduced synthesizers to the popular music/rock scene and elicited really unconventional sounds from those synthesizers. Listen to what they'd done with synthesizers in the 70s, and you'd be amazed. That was 50 years ago! The most famous keyboard giants of prog rock are probably Keith Emerson of the band Emerson, Lake and Palmer, known for his flamboyant live performances and unconventional techniques (e.g. sticking a knife into a keyboard), and Rick Wakeman of the band Yes, who was much less flamboyant but nevertheless a great keyboardist who created many interesting sounds. My favorite, however, is Rick Wright of the venerable Pink Floyd; he wasn't into doing crazy, virtuosic things with his keyboards, but he was really good at creating sounds and conveying moods and atmospheres with his synths.
[Jordan Rudess](http://www.jordanrudess.com/) (Dream Theater) is an impressive keyboardist and a true wizard, though maybe not quite as famous as Beethoven. He's not only an accomplished piano virtuoso, he's also constantly pushing the boundaries of his instrument, e.g. by controlling and designing sounds in innovative ways such as with the Haken Continuum and his iPad apps.
57,611
I am amused by the story of Beethoven dueling another composer in Vienna, in which he takes the other composer's music, puts it upside down, and proceeds to humiliate him. I feel as though keyboardists with that kind of talent must exist nowadays, but they are perhaps not as popular as they once were. Do we have such "Rockstar" pianists in the modern age? Who are they?
2017/05/18
[ "https://music.stackexchange.com/questions/57611", "https://music.stackexchange.com", "https://music.stackexchange.com/users/33712/" ]
One of my favorite up-and-coming pianist/composers is Aysedeniz Gokcin. Not yet 30 years old, she's someone who combines a marvelous piano technique with impressive compositional skills. A good example is her "Pink Floyd Lisztified," an arrangement of some of Pink Floyd's songs in the manner of Liszt.
You should definitely check out keyboardists from progressive rock bands. Prog rock musicians really introduced synthesizers to the popular music/rock scene and elicited really unconventional sounds from those synthesizers. Listen to what they'd done with synthesizers in the 70s, and you'd be amazed. That was 50 years ago! The most famous keyboard giants of prog rock are probably Keith Emerson of the band Emerson, Lake and Palmer, known for his flamboyant live performances and unconventional techniques (e.g. sticking a knife into a keyboard), and Rick Wakeman of the band Yes, who was much less flamboyant but nevertheless a great keyboardist who created many interesting sounds. My favorite, however, is Rick Wright of the venerable Pink Floyd; he wasn't into doing crazy, virtuosic things with his keyboards, but he was really good at creating sounds and conveying moods and atmospheres with his synths.
39,764,579
I have an output that looks like this: ``` colA | colB | Value A | a | 46 A | b | 8979 A | C | 684168468 B | a | 68546841 B | b | 456846 B | c | 468468 C | a | 684684 . | . | . . | . | . ``` The list goes on and on. Colb repeats a sequence of a,b,c, and there could be duplicate values, but I guess it doesn't matter, since it will have different values for colB. I want to make it look like this ``` col A | a | b | c A | 46 | 8979 | 684168468 B | 68546841 | 456846 | 468468 C | 684684 ``` I know I can do it with pivot table, but I want to do it in sql server, because I have so many rows. Can anyone tell me how to accomplish this?
2016/09/29
[ "https://Stackoverflow.com/questions/39764579", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1658855/" ]
Simple PIVOT: ``` SELECT * FROM YourTable PIVOT (SUM(Value) FOR colB IN (a,b,c)) P ``` --- **Example** ``` WITH Src AS ( SELECT * FROM (VALUES ('A', 'a', 46 ), ('A', 'b', 8979 ), ('A', 'C', 684168468), ('B', 'a', 68546841 ), ('B', 'b', 456846 ), ('B', 'c', 468468 ), ('C', 'a', 684684 )) T(colA, colB, Value) ) SELECT * FROM Src PIVOT (SUM(Value) FOR colB IN (a,b,c)) P ``` **Result:** ``` colA a b c ---- ----------- ----------- ----------- A 46 8979 684168468 B 68546841 456846 468468 C 684684 NULL NULL ```
It is also possible to improve Pawel's solution by using a [dynamic SQL pivot query](http://www.kodyaz.com/articles/t-sql-dynamic-pivot-table-example-code.aspx) as follows. This takes the task of building the column list from the possible values of the second column off your hands: ``` DECLARE @values varchar(max) SELECT @values = STUFF( ( select distinct ',[' + col2 + ']' from tblData for xml path('') ), 1,1,'') DECLARE @SQL nvarchar(max) SELECT @SQL = N' select * from tblData PIVOT ( sum(val) FOR col2 IN ( ' + @values + ' ) ) PivotTable ' --print @SQL exec sp_executesql @SQL ``` The output is as seen in the screenshot below [![enter image description here](https://i.stack.imgur.com/36agA.png)](https://i.stack.imgur.com/36agA.png) For sample data: ``` create table tblData (col1 varchar(5),col2 varchar(5),val bigint) insert into tblData values ('A', 'a', 46 ), ('A', 'b', 8979 ), ('A', 'C', 684168468), ('B', 'a', 68546841 ), ('B', 'b', 456846 ), ('B', 'c', 468468 ), ('C', 'a', 684684 ) ```
14,394,255
I am reading this article - ~~<http://www.robertsosinski.com/2009/04/28/binding-scope-in-javascript/>~~ - where a custom bind function is made. ``` Function.prototype.bind = function(scope) { var _function = this; return function() { return _function.apply(scope, arguments); } } alice = { name: "alice" } eve = { talk: function(greeting) { console.log(greeting + ", my name is " + this.name); }.bind(alice) // <- bound to "alice" } eve.talk("hello"); // hello, my name is alice ``` My question is this line in particlar ``` return function() { return _function.apply(scope, arguments); } ``` Why is the **return** in \_function.apply(scope, arguments); there? And what is it doing and what is being returned? I removed that return and it still works.
2013/01/18
[ "https://Stackoverflow.com/questions/14394255", "https://Stackoverflow.com", "https://Stackoverflow.com/users/798635/" ]
``` Why is the return in _function.apply(scope, arguments); there? And what is it doing and what is being returned? I removed that return and it still works. ``` This is there in case you want to return a value. Currently your talk function is not returning any value, so you don't need it. If you change your talk function to ``` eve = { talk: function(greeting) { return ( greeting + ", my name is " + this.name) ; }.bind(alice) // <- bound to "alice" } console.log(eve.talk("hello")); ``` you will now see why the return is required.
It returns the result of applying the original function (the one being bound). When you make `_function.apply`, the `_function` will be called with `scope` as the context, so inside the function `this` will always refer to the `scope`. The second parameter `arguments` is there to pass all the arguments to the original function. And the `return` statement is there to make sure that the value returned from the original function call will also be returned from the bound function call.
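To make that concrete, here is a minimal sketch (the helper names and the `sum` function are made up for illustration, not from the original post) showing what is lost when the inner `return` is dropped:

```
// bind variant that forwards the return value to the caller
function bindWithReturn(fn, scope) {
  return function () {
    return fn.apply(scope, arguments); // result flows back out
  };
}

// bind variant that silently discards the return value
function bindWithoutReturn(fn, scope) {
  return function () {
    fn.apply(scope, arguments); // result is thrown away
  };
}

var counter = { offset: 10 };
function sum(a, b) { return this.offset + a + b; }

console.log(bindWithReturn(sum, counter)(1, 2));    // 13
console.log(bindWithoutReturn(sum, counter)(1, 2)); // undefined
```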
14,394,255
I am reading this article - ~~<http://www.robertsosinski.com/2009/04/28/binding-scope-in-javascript/>~~ - where a custom bind function is made. ``` Function.prototype.bind = function(scope) { var _function = this; return function() { return _function.apply(scope, arguments); } } alice = { name: "alice" } eve = { talk: function(greeting) { console.log(greeting + ", my name is " + this.name); }.bind(alice) // <- bound to "alice" } eve.talk("hello"); // hello, my name is alice ``` My question is this line in particlar ``` return function() { return _function.apply(scope, arguments); } ``` Why is the **return** in \_function.apply(scope, arguments); there? And what is it doing and what is being returned? I removed that return and it still works.
2013/01/18
[ "https://Stackoverflow.com/questions/14394255", "https://Stackoverflow.com", "https://Stackoverflow.com/users/798635/" ]
``` Why is the return in _function.apply(scope, arguments); there? And what is it doing and what is being returned? I removed that return and it still works. ``` This is there in case you want to return a value. Currently your talk function is not returning any value, so you don't need it. If you change your talk function to ``` eve = { talk: function(greeting) { return ( greeting + ", my name is " + this.name) ; }.bind(alice) // <- bound to "alice" } console.log(eve.talk("hello")); ``` you will now see why the return is required.
In the return you simply return a new function. You close over `scope` and `_function` in the scope of the returned anonymous function. It's called a closure - all variables visible in the parent function (the one which returns the anonymous one) are visible in the returned function. Here is your example: ``` Function.prototype.bind = function(scope) { var _function = this; return function() { return _function.apply(scope, arguments); } }; function foo() { console.log(this.foobar); } var bar = { foobar: 'baz' }; foo = foo.bind(bar); ``` So now step by step: `foo.bind(bar);` returns the function: ``` function() { return _function.apply(scope, arguments); } ``` `_function` is `foo`, `scope` is the `bind` argument - `bar`. `arguments` is an array-like object (not exactly an array) which contains all the arguments of the call, so when you call `foo()`, `this` will be the scope provided as the first argument of `apply`. If you use `foo(1,2,3)`, `arguments` will contain `1,2,3`. The logged result will be `baz`.
14,394,255
I am reading this article - ~~<http://www.robertsosinski.com/2009/04/28/binding-scope-in-javascript/>~~ - where a custom bind function is made. ``` Function.prototype.bind = function(scope) { var _function = this; return function() { return _function.apply(scope, arguments); } } alice = { name: "alice" } eve = { talk: function(greeting) { console.log(greeting + ", my name is " + this.name); }.bind(alice) // <- bound to "alice" } eve.talk("hello"); // hello, my name is alice ``` My question is this line in particlar ``` return function() { return _function.apply(scope, arguments); } ``` Why is the **return** in \_function.apply(scope, arguments); there? And what is it doing and what is being returned? I removed that return and it still works.
2013/01/18
[ "https://Stackoverflow.com/questions/14394255", "https://Stackoverflow.com", "https://Stackoverflow.com/users/798635/" ]
It returns the result of applying the original function (the one being bound). When you make `_function.apply`, the `_function` will be called with `scope` as the context, so inside the function `this` will always refer to the `scope`. The second parameter `arguments` is there to pass all the arguments to the original function. And the `return` statement is there to make sure that the value returned from the original function call will also be returned from the bound function call.
In the return you simply return a new function. You close over `scope` and `_function` in the scope of the returned anonymous function. It's called a closure - all variables visible in the parent function (the one which returns the anonymous one) are visible in the returned function. Here is your example: ``` Function.prototype.bind = function(scope) { var _function = this; return function() { return _function.apply(scope, arguments); } }; function foo() { console.log(this.foobar); } var bar = { foobar: 'baz' }; foo = foo.bind(bar); ``` So now step by step: `foo.bind(bar);` returns the function: ``` function() { return _function.apply(scope, arguments); } ``` `_function` is `foo`, `scope` is the `bind` argument - `bar`. `arguments` is an array-like object (not exactly an array) which contains all the arguments of the call, so when you call `foo()`, `this` will be the scope provided as the first argument of `apply`. If you use `foo(1,2,3)`, `arguments` will contain `1,2,3`. The logged result will be `baz`.
51,593
Every time I open Windows Explorer it is not maximized so I have to double click on the window header. Is there some way to have it open maximized by default?
2009/10/06
[ "https://superuser.com/questions/51593", "https://superuser.com", "https://superuser.com/users/13336/" ]
**Tip #1 - Maximize the Windows Explorer window from its shortcut** ![alt text](https://i.stack.imgur.com/GJwL6.png) **Tip #2 - Registry setting** A little snooping in the registry turns up the MaximizeApps key. I don’t know if this works on all apps or just Windows Explorer, but it solved my problem. Find this key: ``` HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer ``` Add a new DWORD value named “MaximizeApps” and set its value to 1. **Tip #3 - Use a keyboard shortcut** Open Explorer, maximize it, and close it with `Shift` pressed...
Eusing's **[Auto Window Manager](http://www.eusing.com/WindowManager/WindowManager.htm)** will do that (and a lot more useful things) for you. > > **Automatically maximize** or minimize or normal **all new windows you > specify**. > > > Automatically minimize all new windows > you specify to system tray. > > > Automatically keep all new windows you > specify always on top. > > > Adds several menu items to standard > window system menu. > > > Minimize the current window to system > tray. > > > Apply transparency effect from 0% > (fully transparent) to 100% (solid) > rate to any program in Windows > 2000/XP/2003/Vista. > > > Show window property of application, > such as window handle, class name, > process id etc. > > > Auto Refresh Internet explorer at > custom intervals. > > > *AWM is freeware.*
51,593
Every time I open Windows Explorer it is not maximized so I have to double click on the window header. Is there some way to have it open maximized by default?
2009/10/06
[ "https://superuser.com/questions/51593", "https://superuser.com", "https://superuser.com/users/13336/" ]
**Tip #1 - Maximize the Windows Explorer window from its shortcut** ![alt text](https://i.stack.imgur.com/GJwL6.png) **Tip #2 - Registry setting** A little snooping in the registry turns up the MaximizeApps key. I don’t know if this works on all apps or just Windows Explorer, but it solved my problem. Find this key: ``` HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer ``` Add a new DWORD value named “MaximizeApps” and set its value to 1. **Tip #3 - Use a keyboard shortcut** Open Explorer, maximize it, and close it with `Shift` pressed...
A VBScript that will do tip #3 of @joe automatically (Windows 8+) ``` Dim WshShell Set WshShell = createObject("Wscript.Shell") WshShell.Run "explorer.exe" Wscript.Sleep 2000 WshShell.AppActivate "File explorer" Wscript.Sleep 2000 WshShell.SendKeys "+%{F4}" Set WshShell = Nothing ``` Set `WshShell.AppActivate` to "This PC" if Explorer is set to open there.
51,593
Every time I open Windows Explorer it is not maximized so I have to double click on the window header. Is there some way to have it open maximized by default?
2009/10/06
[ "https://superuser.com/questions/51593", "https://superuser.com", "https://superuser.com/users/13336/" ]
Eusing's **[Auto Window Manager](http://www.eusing.com/WindowManager/WindowManager.htm)** will do that (and a lot more useful things) for you. > > **Automatically maximize** or minimize or normal **all new windows you > specify**. > > > Automatically minimize all new windows > you specify to system tray. > > > Automatically keep all new windows you > specify always on top. > > > Adds several menu items to standard > window system menu. > > > Minimize the current window to system > tray. > > > Apply transparency effect from 0% > (fully transparent) to 100% (solid) > rate to any program in Windows > 2000/XP/2003/Vista. > > > Show window property of application, > such as window handle, class name, > process id etc. > > > Auto Refresh Internet explorer at > custom intervals. > > > *AWM is freeware.*
A VBScript that will do tip #3 of @joe automatically (Windows 8+) ``` Dim WshShell Set WshShell = createObject("Wscript.Shell") WshShell.Run "explorer.exe" Wscript.Sleep 2000 WshShell.AppActivate "File explorer" Wscript.Sleep 2000 WshShell.SendKeys "+%{F4}" Set WshShell = Nothing ``` Set `WshShell.AppActivate` to "This PC" if Explorer is set to open there.
37,744,695
I am trying to get the content of an email I receive using SES. But the response I get is crazy ``` { "notificationType": "Received", "receipt": { "timestamp": "2015-09-11T20:32:33.936Z", "processingTimeMillis": 222, "recipients": [ "[email protected]" ], "spamVerdict": { "status": "PASS" }, "virusVerdict": { "status": "PASS" }, "spfVerdict": { "status": "PASS" }, "dkimVerdict": { "status": "PASS" }, "action": { "type": "SNS", "topicArn": "arn:aws:sns:us-east-1:012345678912:example-topic" } }, "mail": { "timestamp": "2015-09-11T20:32:33.936Z", "source": "[email protected]", "messageId": "d6iitobk75ur44p8kdnnp7g2n800", "destination": [ "[email protected]" ], "headersTruncated": false, "headers": [ { "name": "Return-Path", "value": "<0000014fbe1c09cf-7cb9f704-7531-4e53-89a1-5fa9744f5eb6-000000@amazonses.com>" }, { "name": "Received", "value": "from a9-183.smtp-out.amazonses.com (a9-183.smtp-out.amazonses.com [54.240.9.183]) by inbound-smtp.us-east-1.amazonaws.com with SMTP id d6iitobk75ur44p8kdnnp7g2n800 for [email protected]; Fri, 11 Sep 2015 20:32:33 +0000 (UTC)" }, { "name": "DKIM-Signature", "value": "v=1; a=rsa-sha256; q=dns/txt; c=relaxed/simple; s=ug7nbtf4gccmlpwj322ax3p6ow6yfsug; d=amazonses.com; t=1442003552; h=From:To:Subject:MIME-Version:Content-Type:Content-Transfer-Encoding:Date:Message-ID:Feedback-ID; bh=DWr3IOmYWoXCA9ARqGC/UaODfghffiwFNRIb2Mckyt4=; b=p4ukUDSFqhqiub+zPR0DW1kp7oJZakrzupr6LBe6sUuvqpBkig56UzUwc29rFbJF hlX3Ov7DeYVNoN38stqwsF8ivcajXpQsXRC1cW9z8x875J041rClAjV7EGbLmudVpPX 4hHst1XPyX5wmgdHIhmUuh8oZKpVqGi6bHGzzf7g=" }, { "name": "From", "value": "[email protected]" }, { "name": "To", "value": "[email protected]" }, { "name": "Subject", "value": "Example subject" }, { "name": "MIME-Version", "value": "1.0" }, { "name": "Content-Type", "value": "text/plain; charset=UTF-8" }, { "name": "Content-Transfer-Encoding", "value": "7bit" }, { "name": "Date", "value": "Fri, 11 Sep 2015 20:32:32 +0000" }, { "name": "Message-ID", "value": "<[email protected]>" }, { "name": "X-SES-Outgoing", "value": "2015.09.11-54.240.9.183" }, { "name": "Feedback-ID", "value": "1.us-east-1.Krv2FKpFdWV+KUYw3Qd6wcpPJ4Sv/pOPpEPSHn2u2o4=:AmazonSES" } ], "commonHeaders": { "returnPath": "0000014fbe1c09cf-7cb9f704-7531-4e53-89a1-5fa9744f5eb6-000000@amazonses.com", "from": [ "[email protected]" ], "date": "Fri, 11 Sep 2015 20:32:32 +0000", "to": [ "[email protected]" ], "messageId": "<[email protected]>", "subject": "Example subject" } }, "content": "Return-Path: <[email protected]>\r\nReceived: from a9-183.smtp-out.amazonses.com (a9-183.smtp-out.amazonses.com [54.240.9.183])\r\n by inbound-smtp.us-east-1.amazonaws.com with SMTP id d6iitobk75ur44p8kdnnp7g2n800\r\n for [email protected];\r\n Fri, 11 Sep 2015 20:32:33 +0000 (UTC)\r\nDKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/simple;\r\n\ts=ug7nbtf4gccmlpwj322ax3p6ow6yfsug; d=amazonses.com; t=1442003552;\r\n\th=From:To:Subject:MIME-Version:Content-Type:Content-Transfer-Encoding:Date:Message-ID:Feedback-ID;\r\n\tbh=DWr3IOmYWoXCA9ARqGC/UaODfghffiwFNRIb2Mckyt4=;\r\n\tb=p4ukUDSFqhqiub+zPR0DW1kp7oJZakrzupr6LBe6sUuvqpBkig56UzUwc29rFbJF\r\n\thlX3Ov7DeYVNoN38stqwsF8ivcajXpQsXRC1cW9z8x875J041rClAjV7EGbLmudVpPX\r\n\t4hHst1XPyX5wmgdHIhmUuh8oZKpVqGi6bHGzzf7g=\r\nFrom: [email protected]\r\nTo: [email protected]\r\nSubject: Example subject\r\nMIME-Version: 1.0\r\nContent-Type: text/plain; charset=UTF-8\r\nContent-Transfer-Encoding: 7bit\r\nDate: Fri, 11 Sep 2015 20:32:32 +0000\r\nMessage-ID: <[email protected]>\r\nX-SES-Outgoing: 
2015.09.11-54.240.9.183\r\nFeedback-ID: 1.us-east-1.Krv2FKpFdWV+KUYw3Qd6wcpPJ4Sv/pOPpEPSHn2u2o4=:AmazonSES\r\n\r\nExample content\r\n" } ``` I can easily get the subject, but I can't believe the message content is that hard to extract, the content is `Example content` and the only place I can find it is in the `content` key but it's very hard to parse
2016/06/10
[ "https://Stackoverflow.com/questions/37744695", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1820957/" ]
Yes, it is! As an example, if you use Gmail or another email client and view the raw data or source of a mail, you'll see roughly the same thing (a bit different, as here it's a JSON representation), because email content is based on [RFC2045](https://www.rfc-editor.org/rfc/rfc2045). The [Contents of Notifications for Amazon SES Email Receiving](http://docs.aws.amazon.com/ses/latest/DeveloperGuide/receiving-email-notifications-contents.html) are documented; from the Top-Level JSON Object: > > **content** : String that contains the raw, unmodified email, which is > typically in Multipurpose Internet Mail Extensions (MIME) format. For > more information about MIME format, see RFC 2045. > > > *Note* This field is present only if the notification was triggered by > an SNS action. Notifications triggered by all other actions do not > contain this field. > > >
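If all you need is the plain-text body of a simple single-part message like the example above, a rough Node.js sketch (plain string handling, assuming the notification JSON is already parsed - real mail should go through a proper MIME parser) could look like this:

```
// Naive extraction of the body from a raw RFC 2045 message.
// Works only for simple, unencoded single-part messages.
function extractPlainBody(notification) {
  var raw = notification.content;       // raw MIME string from SES
  var parts = raw.split('\r\n\r\n');    // headers and body are separated by a blank line
  return parts.slice(1).join('\r\n\r\n').trim();
}

// extractPlainBody(parsedSnsMessage) -> "Example content"
```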
This POJO might help: ``` public class SESMessageData { public SESMessageData() { } private String notificationType; private Receipt receipt; private Mail mail; public String getNotificationType() { return notificationType; } public void setNotificationType(String notificationType) { this.notificationType = notificationType; } public Receipt getReceipt() { return receipt; } public void setReceipt(Receipt receipt) { this.receipt = receipt; } public Mail getMail() { return mail; } public void setMail(Mail mail) { this.mail = mail; } public static class Receipt { private Action action; private Verdict dkimVerdict; private String processingTimeMillis; private Verdict spamVerdict; private Verdict spfVerdict; private String timestamp; private Verdict virusVerdict; private List<String> recipients; /** * @return A list of the recipient addresses for this delivery. */ public List<String> getRecipients() { return recipients; } public void setRecipients(List<String> recipients) { this.recipients = recipients; } /** * @return Object that encapsulates information about the action that was executed. */ public Action getAction() { return action; } public void setAction(Action action) { this.action = action; } /** * @return Object that indicates whether the DomainKeys Identified Mail (DKIM) check passed. */ public Verdict getDkimVerdict() { return dkimVerdict; } public void setDkimVerdict(Verdict dkimVerdict) { this.dkimVerdict = dkimVerdict; } /** * @return String that specifies the period, in milliseconds, from the time Amazon SES received the message to the time it triggered the action. */ public String getProcessingTimeMillis() { return processingTimeMillis; } public void setProcessingTimeMillis(String processingTimeMillis) { this.processingTimeMillis = processingTimeMillis; } /** * @return Object that indicates whether the message is spam. */ public Verdict getSpamVerdict() { return spamVerdict; } public void setSpamVerdict(Verdict spamVerdict) { this.spamVerdict = spamVerdict; } /** * @return Object that indicates whether the Sender Policy Framework (SPF) check passed. */ public Verdict getSpfVerdict() { return spfVerdict; } public void setSpfVerdict(Verdict spfVerdict) { this.spfVerdict = spfVerdict; } /** * @return String that specifies when the action was triggered, in ISO8601 format. */ public String getTimestamp() { return timestamp; } public void setTimestamp(String timestamp) { this.timestamp = timestamp; } /** * @return Object that indicates whether the message contains a virus. */ public Verdict getVirusVerdict() { return virusVerdict; } public void setVirusVerdict(Verdict virusVerdict) { this.virusVerdict = virusVerdict; } /** * Object that encapsulates information about the action that was executed. */ public class Action { private String type; private String topicArn; private String bucketName; private String objectKey; private String smtpReplyCode; private String statusCode; private String message; private String sender; private String functionArn; private String invocationType; private String organizationArn; /** * @return String that indicates the type of action that was executed. Possible values are S3, SNS, Bounce, Lambda, Stop, and WorkMail. */ public String getType() { return type; } public void setType(String type) { this.type = type; } /** * @return String that contains the Amazon Resource Name (ARN) of the Amazon SNS topic to which the notification was published. 
*/ public String getTopicArn() { return topicArn; } public void setTopicArn(String topicArn) { this.topicArn = topicArn; } /** * @return String that contains the name of the Amazon S3 bucket to which the message was published. Present only for the S3 action type. */ public String getBucketName() { return bucketName; } public void setBucketName(String bucketName) { this.bucketName = bucketName; } /** * @return String that contains a name that uniquely identifies the email in the Amazon S3 bucket. This is the same as the messageId in the mail object. Present only for the S3 action type. */ public String getObjectKey() { return objectKey; } public void setObjectKey(String objectKey) { this.objectKey = objectKey; } /** * @return String that contains the SMTP reply code, as defined by RFC 5321. Present only for the bounce action type. */ public String getSmtpReplyCode() { return smtpReplyCode; } public void setSmtpReplyCode(String smtpReplyCode) { this.smtpReplyCode = smtpReplyCode; } /** * @return String that contains the SMTP enhanced status code, as defined by RFC 3463. Present only for the bounce action type. */ public String getStatusCode() { return statusCode; } public void setStatusCode(String statusCode) { this.statusCode = statusCode; } public String getMessage() { return message; } public void setMessage(String message) { this.message = message; } public String getSender() { return sender; } public void setSender(String sender) { this.sender = sender; } public String getFunctionArn() { return functionArn; } public void setFunctionArn(String functionArn) { this.functionArn = functionArn; } public String getInvocationType() { return invocationType; } public void setInvocationType(String invocationType) { this.invocationType = invocationType; } public String getOrganizationArn() { return organizationArn; } public void setOrganizationArn(String organizationArn) { this.organizationArn = organizationArn; } @Override public String toString() { return "Action{" + "type='" + type + '\'' + ", topicArn='" + topicArn + '\'' + ", bucketName='" + bucketName + '\'' + ", objectKey='" + objectKey + '\'' + ", smtpReplyCode='" + smtpReplyCode + '\'' + ", statusCode='" + statusCode + '\'' + ", message='" + message + '\'' + ", sender='" + sender + '\'' + ", functionArn='" + functionArn + '\'' + ", invocationType='" + invocationType + '\'' + ", organizationArn='" + organizationArn + '\'' + '}'; } } public class Verdict { private String status; public String getStatus() { return status; } public void setStatus(String status) { this.status = status; } @Override public String toString() { return "Verdict{" + "status='" + status + '\'' + '}'; } } @Override public String toString() { return "Receipt{" + "action=" + action + ", dkimVerdict=" + dkimVerdict + ", processingTimeMillis='" + processingTimeMillis + '\'' + ", spamVerdict=" + spamVerdict + ", spfVerdict=" + spfVerdict + ", timestamp='" + timestamp + '\'' + ", virusVerdict=" + virusVerdict + ", recipients=" + recipients + '}'; } } public static class Mail { private List<String> destination; private String messageId; private String source; private String timestamp; private List<Header> headers; private CommonHeader commonHeaders; private String headersTruncated; /** * @return A list of email addresses that are recipients of the email. 
*/ public List<String> getDestination() { return destination; } public void setDestination(List<String> destination) { this.destination = destination; } public String getMessageId() { return messageId; } public void setMessageId(String messageId) { this.messageId = messageId; } public String getSource() { return source; } public void setSource(String source) { this.source = source; } public String getTimestamp() { return timestamp; } public void setTimestamp(String timestamp) { this.timestamp = timestamp; } public List<Header> getHeaders() { return headers; } public void setHeaders(List<Header> headers) { this.headers = headers; } public CommonHeader getCommonHeaders() { return commonHeaders; } public void setCommonHeaders(CommonHeader commonHeaders) { this.commonHeaders = commonHeaders; } public String getHeadersTruncated() { return headersTruncated; } public void setHeadersTruncated(String headersTruncated) { this.headersTruncated = headersTruncated; } public class Header { private String name; private String value; public String getName() { return name; } public void setName(String name) { this.name = name; } public String getValue() { return value; } public void setValue(String value) { this.value = value; } @Override public String toString() { return "Header{" + "name='" + name + '\'' + ", value='" + value + '\'' + '}'; } } public class CommonHeader { private String returnPath; private List<String> from; private List<String> to; private String messageId; private String subject; public String getReturnPath() { return returnPath; } public void setReturnPath(String returnPath) { this.returnPath = returnPath; } public List<String> getFrom() { return from; } public void setFrom(List<String> from) { this.from = from; } public List<String> getTo() { return to; } public void setTo(List<String> to) { this.to = to; } public String getMessageId() { return messageId; } public void setMessageId(String messageId) { this.messageId = messageId; } public String getSubject() { return subject; } public void setSubject(String subject) { this.subject = subject; } @Override public String toString() { return "CommonHeader{" + "returnPath='" + returnPath + '\'' + ", from=" + from + ", to=" + to + ", messageId='" + messageId + '\'' + ", subject='" + subject + '\'' + '}'; } } @Override public String toString() { return "Mail{" + "destination=" + destination + ", messageId='" + messageId + '\'' + ", source='" + source + '\'' + ", timestamp='" + timestamp + '\'' + ", headers=" + headers + ", commonHeaders=" + commonHeaders + ", headersTruncated='" + headersTruncated + '\'' + '}'; } } @Override public String toString() { return "SESMessageData{" + "notificationType='" + notificationType + '\'' + ", receipt=" + receipt + ", mail=" + mail + '}'; } ``` }
67,701,715
I need to run tests in different environments : `DEV`, `STAGING`, `PRODUCTION`. And needless to say, the environment variables/secrets for the above environments would obviously be different. I quick solution would be to have an env file for each environment like `dev.env`, `staging.env` & `prod.env` But according to the docs of popular dotEnv npm package and 12 Factor app, it is not recommended to have multiple `.env` files in your repo. Please give me a practical solution of managing env vars for multiple environments. * <https://github.com/motdotla/dotenv#should-i-have-multiple-env-files> * <https://12factor.net/config>
2021/05/26
[ "https://Stackoverflow.com/questions/67701715", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13889044/" ]
If I understand correctly what they're writing here: **Should I have multiple .env files?** > > No. We strongly recommend against having a "main" .env file and an "environment" .env file like .env.test. Your config should vary between deploys, and you should not be sharing values between environments. > > > This doesn't mean that you shouldn't have multiple env files, but rather that you shouldn't have one `main.env` file with all the default configuration and additional env files (one per environment) that inherit from `main.env` and override certain values. The reason why it's not recommended is that with such a configuration it's difficult to understand "where a specific value is coming from?" (from which one of the following: main-env-file, specific-env-file, env-variable, code-default and etc). That said, if you create multiple env files without such a "main" this means that you'll need to duplicate many of the values all over the different env files, which is better because of explicitness, but has the downside of duplication/verbosity. Configuration is not trivial IMO and while you have only a small project it doesn't matter much how you choose to implement, but if we're talking about something more critical like a company's product, then there are many solutions available out there, some are open-source and free, some cost money, but it's worth doing your research and figure out which one provides you the benefits that are more meaningful to your use-case. Some of the more popular tools are: [Puppet](https://puppet.com/), [Ansible](https://www.ansible.com/), and [Chef](https://www.chef.io/products/chef-infra).
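As a rough sketch of the "one explicit env file per deploy, no shared main file" idea in Node.js (the file layout and variable names here are assumptions, and this assumes the standard dotenv package):

```
// config.js - load exactly one .env file, chosen by the deploy, with no fallback chain
const path = require('path');
const dotenv = require('dotenv');

// The deploy decides which file to load, e.g. APP_ENV=staging node app.js
const envName = process.env.APP_ENV || 'dev';
dotenv.config({ path: path.resolve(__dirname, `${envName}.env`) });

module.exports = {
  dbUrl: process.env.DB_URL,   // every env file must define its own value
  apiKey: process.env.API_KEY, // nothing is inherited from a "main" file
};
```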
In my opinion, it's better to have multiple .env files in your project. For example, in Symfony we can configure multiple environments for our project, e.g. local, dev, prod. The purpose is cleaner, more readable configuration. I'm not sure about other technologies, but you can read a little bit about .env in this Symfony article (no need to be a Symfony expert): <https://symfony.com/doc/current/configuration.html#config-dot-env>
7,327
Rolled aluminum descending rings like the [SMC Descending ring](http://www.innermountainoutfitters.com/descending-ring/) are light *(11g)* and cheap *($3)* which makes them great for carrying and leaving behind on long descents, but in my mind they are disposable pro, you use them once for retrieving your rope after a rappel and then then leave them to rot. More than once I've seen these rings used with mallions on bolted anchors at high-traffic rappels, but I never use them, some I've replaced with strong 50kN [steel rings](http://www.mec.ca/product/4007-356/fixe-descending-ring/). They're rated for 14kN, which is more than enough for body weight on a nice controlled rappel, but are they suitable for multiple uses? What's the rule of thumb for this style of descender ring? ![SMC Descending rings](https://i.stack.imgur.com/cVegt.jpg)
2014/12/09
[ "https://outdoors.stackexchange.com/questions/7327", "https://outdoors.stackexchange.com", "https://outdoors.stackexchange.com/users/4148/" ]
No == It is ***not*** ok to use that type of descending ring for fixed anchors. SMC Descending Rings are a one-piece aluminum ring which are intended to be placed at the top of a pull down rappel in place of a carabiner in order to facilitate recovery of ropes. SMC [issues the following](http://www.rocknrescue.com/acatalog/SMC81600DescendingRing.pdf) for care, maintenance and retirement schedule needs of their descender rings: > > **CARE, MAINTENANCE and RETIREMENT SCHEDULE NEEDS** > > > Always inspect Descending Rings before each use and periodically while > in storage. The user, depending upon their specific environment and > storage methods, must determine the period between inspections. > Inspect for cracks, warping, deep gouges and worn areas, making sure > that what may appear to be a scratch is not actually a crack. Look for > sharp edges or rough areas that might abrade a rope. After each use, > remove all dirt and allow Descending Rings to dry in a warm place > before storing. SMC Descending Rings will continue to provide > reliable performance only when used safely and properly maintained and > stored. It is also suggested that the user maintain a permanent > record listing the date and results of every usage inspection. > > > *We recommend the regular inspection of all rescue equipment **and > strongly suggest retiring gear when ANY of the following applies:*** > > > 1. Regular inspection reveals warping, cracks, deep gouges or any wear. > 2. It is physically damaged or no longer functions as when new. > 3. It has been subjected to an abnormally high loads, such as in a fall or exposed to heat sufficient to alter its surface appearance. > 4. You are not completely satisfied that it meets the needs of its intended use. > 5. **The history of the gear is unknown or otherwise in question.** > > > Source: [[PDF] SMC Descending Rings #81600](http://www.rocknrescue.com/acatalog/SMC81600DescendingRing.pdf), Seattle Manufacturing Corporation **The history of any SMC descending rings that you may find attached to an anchor is *unknown* and therefore it is strongly suggested by the manufacturer that you retire those rings.** Do your fellow abseilers a favour and remove those rings from any anchors you may find them attached to. Either destroy them, or mark them as retired and use them to hang your hammock. Lightweight aluminium descender rings are designed for quick descents of alpine routes, they are not intended to be used for heavy duty applications. The improper use of *any* piece of equipment is unsafe: ![Bad rap rings](https://i.stack.imgur.com/Trz0d.jpg) **Use the right tool for the right job!** Thin alloy rap rings are *not* intended for heavy-duty use, carefully inspect descender rings and retire if they show any sign of wear.
SMC's site doesn't say anything very helpful, but keep in mind that 14 kN is enough to lift a medium-sized car. In a rappel-only situation, it should last for years. That being said, using 'left gear' is a judgment you should make for yourself. Even if you can't see any defects in a piece of metal, you never know if it has an invisible crack from taking a big fall, freeze-thaw damage (that thing is hollow, right?), or was previously used to pull a car out of a ditch. For myself, I would want a backup.
193,497
My friend recorded some videos on his camcorder, and the one I want to convert is 1.66gb, 1440x1080, 6 audio channels. I want to shrink this down to something like 480x360, and 2 audio channels... or anything small enough that I can upload it to facebook/youtube. I also want to crop out a section of it. I tried ffmpeg, winff, 'any video converter', virtualdub, and picasa. ffmpeg and winff *really* don't like the 6 audio channels, and throw weird errors when I try slicing the video, 'any video converter' did convert the movie but de-synced the audio, virtualdub refuses to open the file (bad file format), and ... well picasa has to crop functionality, but can't crop it until i convert it (plus, I haven't the slightest clue where it outputs the files to). There are about 18 billion video converters out there when I search, and they all have free downloads... but which ones will (a) actually convert my file into a usable format, and (b) not install spamware/only convert the 10 seconds, or something else stupid? I'm running windows 7.
2010/09/28
[ "https://superuser.com/questions/193497", "https://superuser.com", "https://superuser.com/users/5106/" ]
I'm using the heck out of [HandBrake](http://handbrake.fr/). It's my favorite by far, and it will convert MTS. I'm running Windows 8/8.1 now, and I used it all the time on Windows 7 as well.
Give [MediaCoder](http://www.mediacoderhq.com/) a shot. It should do what you want with a few clicks.
125,759
I've carried out a couple of repairs on my iPhone 5 over the last fortnight, including replacing the screen. I detached the home button from its mounting in the original screen, and placed it in the new screen. It works fine, but I need to glue it to the contact that it sits on top of. What glue does Apple use to do this? Where can I buy this, or a suitable alternative?
2014/03/27
[ "https://apple.stackexchange.com/questions/125759", "https://apple.stackexchange.com", "https://apple.stackexchange.com/users/30215/" ]
I realize this is an older question, but this is a common problem for several iPhone models, so it might help someone. The whole assembly is larger than the home button hole. The problem is that the round plastic button can come loose from the assembly; it is glued in place and can break free. As long as everything is functioning correctly, it wouldn't hurt to put a dab of glue on and stick it back in. Be careful not to use too much, as it could cause more problems! Good luck
I can't see any mention of glue on the [iFixIt guide to replace the Home button on the iPhone 5](https://www.ifixit.com/Guide/iPhone+5+Home+Button+Replacement/10594), nor on the comments there, or on the list of products or related products on the replacement button ([plastic](https://www.ifixit.com/Store/Parts/iPhone-5-Home-Button/IF118-008-1) or [hardware](https://www.ifixit.com/Store/iPhone/iPhone-5-Home-Button-Ribbon-Cable/IF118-006-1)). It appears to me from the pictures that the button stays in place because it's bigger than the hole on the screen: ![iPhone 5 Home button - from iFixIt](https://i.stack.imgur.com/o5a37.jpg) So no glue would be needed.
125,759
I've carried out a couple of repairs on my iPhone 5 over the last fortnight, including replacing the screen. I detached the home button from its mounting in the original screen, and placed it in the new screen. It works fine, but I need to glue it to the contact that it sits on top of. What glue does Apple use to do this? Where can I buy this, or a suitable alternative?
2014/03/27
[ "https://apple.stackexchange.com/questions/125759", "https://apple.stackexchange.com", "https://apple.stackexchange.com/users/30215/" ]
I realize this is an older question, but this is a common problem for several iPhone models, so it might help someone. The whole assembly is larger than the home button hole. The problem is that the round plastic button can come loose from the assembly; it is glued in place and can break free. As long as everything is functioning correctly, it wouldn't hurt to put a dab of glue on and stick it back in. Be careful not to use too much, as it could cause more problems! Good luck
I use 0.5 mm thick VHB tape, cut to fit around the edge of the button on each side. This settles the button in its seat and prevents it from wobbling.
43,291,879
I have a `DatagridTemplateColumn` using a `ComboBox`, but the `ItemSource` will not fill with the bound collection. I should mention that the `DataGrid` is being correctly bound and any other collections in the veiwModel is working, it is just this `ComboBox` in the datagrid does not work. This is the MCVE sample code: ``` <UserControl d:DataContext="{d:DesignInstance d:Type=viewModels:StaffInfoDetailViewModel, IsDesignTimeCreatable=False}"> <DataGrid Grid.Column="0" Grid.ColumnSpan="6" AutoGenerateColumns="False" ItemsSource="{Binding SectionStaffMasterDisplay}" Grid.Row="4" Grid.RowSpan="2" AlternationCount="2" CanUserAddRows="True" CanUserDeleteRows="True" GridLinesVisibility="None" VerticalAlignment="Top" CanUserSortColumns="False"> <DataGridTemplateColumn Width="190" Header="資格"> <DataGridTemplateColumn.CellEditingTemplate> <DataTemplate> <ComboBox DisplayMemberPath="ItemName" SelectedValuePath="ItemName" SelectedItem="{Binding Path=Name, UpdateSourceTrigger=LostFocus}" ItemsSource="{Binding Path=LicenceComboBox}" /> </DataTemplate> </DataGridTemplateColumn.CellEditingTemplate> </DataGridTemplateColumn> .....more XAML ``` And the `ViewModel` ``` public class StaffInfoDetailViewModel : CollectionViewModel<StaffInfoDetailWrapper> { public StaffInfoDetailViewModel() { LicenceComboBoxItems(); MasterDataDisplay(); } public void LicenceComboBoxItems() { foreach (var item in DataProvider.StartUpSection) { LicenceComboBox.Add(item); } } private ObservableCollection<Licence> _licenceComboBox = new ObservableCollection<Licence>(); public ObservableCollection<Licence> LicenceComboBox { get { return _licenceComboBox; } set { _licenceComboBox = value; OnPropertyChanged(); } } private string _name; public string Name { get { return _name; } set { _name = value; OnPropertyChanged(); } } ``` The Model class: ``` public partial class Licence { public System.Guid Id { get; set; } // ID (Primary key) public string ItemName { get; set; } // ItemName (length: 50) public string Section { get; set; } // Section (length: 50) public Licence() { InitializePartial(); } partial void InitializePartial(); } ``` The datagrid collection. ``` private ObservableCollectionEx<StaffInfoDetail> _sectionStaffMasterDisplay = new ObservableCollectionEx<StaffInfoDetail>(); public ObservableCollectionEx<StaffInfoDetail> SectionStaffMasterDisplay { get { return _sectionStaffMasterDisplay; } set { if (value != _sectionStaffMasterDisplay) { _sectionStaffMasterDisplay = value; OnPropertyChanged(); } } } ``` The Entity class that the collection is filled by, ``` public partial class StaffInfoDetail { public System.Guid Id { get; set; } // ID (Primary key) public byte[] Image { get; set; } // Image (length: 2147483647) public int? StaffNo { get; set; } // StaffNo public string SecondName { get; set; } // SecondName (length: 50) public string FirstName { get; set; } // FirstName (length: 50) public string Section { get; set; } // Section (length: 50) public string SubSection { get; set; } // SubSection (length: 50) public string Licence { get; set; } // Licence (length: 50) public System.DateTime? StartDate { get; set; } // StartDate public System.DateTime? EndDate { get; set; } // EndDate public long? NightShiftOne { get; set; } // NightShiftOne public long? NightShiftTwo { get; set; } // NightShiftTwo public long? Lunch { get; set; } // Lunch public long? 
Unplesant { get; set; } // Unplesant public string JobType { get; set; } // JobType (length: 50) public bool Kaizen { get; set; } // Kaizen public int KaizenPercentage { get; set; } // KaizenPercentage public bool? CurrentStaff { get; set; } // CurrentStaff public string Notes { get; set; } // Notes (length: 4000) public StaffInfoDetail() { InitializePartial(); } partial void InitializePartial(); } ``` And the method that fills the collection, I added the caller to the original code from `public StaffInfoDetailViewModel()`: ``` public void MasterDataDisplay() { SectionStaffMasterDisplay.AddRange(DataProvider.StaffInfos.Where(p => p.CurrentStaff == true && DataProvider.StartUpSection.Contains(p.Section)).ToObservable()); } ``` I can't see an issue with the `DataContext`,but why won't this bind correctly when every other property is working? And stepping through the code shows `LicenceComboBox` to be correctly filled.
2017/04/08
[ "https://Stackoverflow.com/questions/43291879", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6283475/" ]
I fixed it by doing this: try adding `import { CUSTOM_ELEMENTS_SCHEMA } from '@angular/core';` to app.module.ts, then add it to the NgModule schemas: `@NgModule({ schemas: [ CUSTOM_ELEMENTS_SCHEMA ] })`. That will fix it. Good luck!
Try adding `import {CUSTOM_ELEMENTS_SCHEMA} from '@angular/core';` to app.module.ts
84,537
***EDIT***: I'm settling for a simpler system: no solar thermal panels; just a small 50 liter water heater, AC powered, run through the inverter. (So, practically no hot water in the winter time.) I'm planning to build an off-grid summer house in southern Finland. The plan includes: * electricity: PV panels + inverters + batteries, enough for a 2000 W induction stove (single burner). * water: a well for drinking/kitchen/shower water, an electric pump * heating: a body of water (or a non-toxic propylene glycol mixture) is heated in the heat tank. The water from the well would be heated in a coil of the tank, thus the water we use will never *stay* in the heater. The tank would be heated by solar thermal panels and/or a water heating fireplace. A small pump is needed for the thermal panels and another one the fireplace. The cabin would mostly be used during the summertime (June to August), when the building doesn't need to be heated. For May and September-October, some heating will be required, and for the winter months, a lot of heating. The cabin is 70 m² (750 sq ft). Since there is no external electricity and the outside temperature gets below 0°C (32°F) for several months, some measures must be taken. The lowest temperatures are -30°C (-22°F), which would mean 50 % glycol. Batteries will survive only if they are fully charged for the winter. The tank (and all pipes) must either be emptied for the winter, or it must contain non-freezing glycol mixture. I think I now need to decide between these two options: **1) Fill the tank with water, and empty it for the wintertime. Heat it with solar heating panels.** * Pros: + simple system + cheaper + the fireplace is not connected to the tank, so it’s easier to use it for heating the cabin in the winter (without the risk of overheating the tank) * Cons: + no possibility for water in the wintertime + obligatory autumn maintenance **2) Fill the tank with propylene glycol. Heat it with a water heating fireplace, and possibly solar thermal panels.** * Pros: + less maintenance? + Possibility for occasional hot water even in the wintertime? (Must empty the water pipes every time, though) * Cons: + more expensive + have to add radiators (using the same glycol) to be able to heat the cabin in the wintertime, distributing the heat from the fireplace The second option raises some important questions: * Is 50/50 glycol too thick for the pumps? Can 50/50 transfer enough heat? * How often must the glycol be changed? I need 500 litres (130 gallons), so the expense is non-trivial. * Most important: **is this feasible?** Does anyone have this kind of setup? I found very little information on the subject.
2016/02/16
[ "https://diy.stackexchange.com/questions/84537", "https://diy.stackexchange.com", "https://diy.stackexchange.com/users/50309/" ]
It's quite common and it can be pumped; it's a bit less efficient to pump, particularly when cold, and somewhat poorer at heat transfer than pure water. The primary impact on service life is minimizing oxygen exposure (a sealed system) and choosing a PPG product with a suitable pH/corrosion-control additive package.
Since this is a new-build, I would focus really hard on passive solar design. This is a new concept of building, with different materials and practices - it's not a glue-on afterthought to a conventionally built house. This type of design is likely to be earth sheltered, heavily insulated, have huge thermal mass inside the insulation envelope, and have windows with southern exposure slurping up every bit of sunlight. Snow and drifting have to be thought about. This is hard, but extremely worth it, because the house literally heats itself. Don't even try to use photovoltaic power (solar panels) to make electricity to make heat. **That is a complete thermodynamic "net loss"** - you can't make enough heat with PV electricity to make a difference. Go ahead and spec it out, and you will see. If there's any way you can get natural-gas or propane to the site, even in portable containers, that's the way to go for as-needed use such as cooking or drying. Wind is not a great way to get electricity for heat either, but it's sure better than solar PV. To get heat from the sun, go solar-thermal. Obviously that's not much use for cooking or clothes drying. There is one heating method that's not so bad: heat pumps. Those can transfer 2-3 times as much heat as the electricity they consume. To store heat with solar-thermal, you need **lots** of thermal mass. The heat-transfer fluid (glycol) will obviously be some of that mass. **But it doesn't need to be all of it!** You can fill the storage tank with rocks (note a misconception I'll describe at the bottom), as long as the glycol can still circulate through it. If your home has high thermal mass as part of a passive solar design, that mass counts too. There's nothing wrong with pumping extra heat into the house while the sun is out, especially if you're absent. You will also need ways to keep **snow off the collectors** during your extended absences. Given the northern clime it may be worth putting the panels straight vertical - think about that option when laying out the site - but blowing and sticking snow may still be a concern. And then, you will need to think about coping with potential **long periods of no useful sun** owing to weather. It may be worth thinking about a fuel "backup heater" of some kind, but you will need a very substantial tank if you won't be around to resupply it. --- There is an interesting misconception about thermal mass, which is the term-of-art used within the industry. **The term is actually wrong.** Mass doesn't store temperature - atoms do, at *roughly* the same amount *per atom*. And in solids, *roughly* the same number of atoms fit in the same space... so most solids (volumewise) are *roughly* equivalent in heat storage, ranging from 1.8 to 3.0 J/cc/degK of `volumetric heat capacity`. It's best to work in volume, because people design homes in terms of volume (dimensions) not mass. Given the narrow spread, material X vs material Y won't perform *that* differently, but *cost* certainly varies. And that's always a factor in the capitalist real world. They say the best things in life are free. That'd be plain water at 4.1 J/cc/°K. It is literally the perfect thermal "mass" if you can engineer away the freezing problem (i.e. drain the thermal-storage calandria when you're away). If cost is no object, use 50/50 glycol at 3.8, not only dense but pumpable. If cost is an object, well, that's where common materials come in, even if their performance is as little as half of water.
It's worth minding their values, but unit conversions throw a lot of people (and [crash space probes](http://www.wired.com/2010/11/1110mars-climate-observer-report/)) - stick with volume-based units: **joules per cubic centimeter per °K** and direct SI equivalents (MJ/m³/°C, etc.). Here are some sources: * [Engineering Toolbox](http://www.engineeringtoolbox.com/sensible-heat-storage-d_1217.html) * [Wikipedia](https://en.wikipedia.org/wiki/Heat_capacity#Table_of_specific_heat_capacities) * [Australia government](http://www.yourhome.gov.au/passive-design/thermal-mass) * [Canada government](http://energyeducation.ca/encyclopedia/Thermal_mass) * [US Geological Survey](https://pubs.usgs.gov/of/1988/0441/report.pdf) - see the technical charts starting on page 79.
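To put those volumetric numbers in perspective, here is a rough worked example - the 4 m³ tank and 25 °K swing are made-up illustration figures, not sizing advice:

$$Q = V \cdot c_v \cdot \Delta T = 4{,}000{,}000\ \text{cc} \times 4.1\ \text{J/cc/°K} \times 25\ \text{°K} \approx 410\ \text{MJ} \approx 114\ \text{kWh}$$

The same tank volume filled with a typical masonry/rock mix at roughly 2.1 J/cc/°K holds about 210 MJ (≈ 58 kWh) - around half, which is the "half of water" spread mentioned above.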
11,917,086
I have created a JavaScript function which will return data containing two URLs, and I would like to insert these into the HTML code. E.g. the returned URL values are JavaScript file ``` http://www.domain.com/javascript.js ``` CSS file ``` http://www.domain.com/buttons.css ``` I need to insert these into the code as: ``` <link rel="stylesheet" type="text/css" href="return URL value" /> <script type="text/javascript" src="return URL value"></script> ``` How can I do that? Thanks
2012/08/11
[ "https://Stackoverflow.com/questions/11917086", "https://Stackoverflow.com", "https://Stackoverflow.com/users/511666/" ]
One of the ways you can do it is suggested in the snippet below obtained from <http://www.javascriptkit.com/javatutors/loadjavascriptcss.shtml> --- To load a .js or .css file dynamically, in a nutshell, it means using DOM methods to first create a swanky new "SCRIPT" or "LINK" element, assign it the appropriate attributes, and finally, use element.appendChild() to add the element to the desired location within the document tree. It sounds a lot more fancy than it really is. Let's see how it all comes together: ``` function loadjscssfile(filename, filetype){ if (filetype=="js"){ //if filename is an external JavaScript file var fileref=document.createElement('script') fileref.setAttribute("type","text/javascript") fileref.setAttribute("src", filename) } else if (filetype=="css"){ //if filename is an external CSS file var fileref=document.createElement("link") fileref.setAttribute("rel", "stylesheet") fileref.setAttribute("type", "text/css") fileref.setAttribute("href", filename) } if (typeof fileref!="undefined") document.getElementsByTagName("head")[0].appendChild(fileref) } loadjscssfile("myscript.js", "js") //dynamically load and add this .js file loadjscssfile("javascript.php", "js") //dynamically load "javascript.php" as a JavaScript file loadjscssfile("mystyle.css", "css") //dynamically load and add this .css file ``` --- You will need to adapt this to your own code, obviously, as you haven't provided the actual `js` code that you use. But basically, you will need to call the `loadjscssfile()` function twice when you get the two URLs.
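For instance - assuming your function is called something like `getUrls()` and returns the two addresses in an object (that name and shape are made up here purely for illustration) - wiring it up could look like this:

```
// Hypothetical glue code: adapt the names to whatever your function actually returns.
var urls = getUrls(); // e.g. { css: "http://www.domain.com/buttons.css", js: "http://www.domain.com/javascript.js" }
loadjscssfile(urls.css, "css"); // appends the <link> tag to <head>
loadjscssfile(urls.js, "js");   // appends the <script> tag to <head>
```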
You can easily add nodes to the head element using JavaScript: ``` var link = document.createElement('link'); link.href = your_css_href_function(); link.rel = 'stylesheet'; link.type = 'text/css'; document.getElementsByTagName('head')[0].appendChild(link); ``` And the same for your JavaScript tag: ``` var script = document.createElement('script'); script.src = your_javascript_src_function(); script.type = 'text/javascript'; document.getElementsByTagName('head')[0].appendChild(script); ```
11,917,086
I have created a JavaScript function which will return data containing two URLs, and I would like to insert these into the HTML code. E.g. the returned URL values are JavaScript file ``` http://www.domain.com/javascript.js ``` CSS file ``` http://www.domain.com/buttons.css ``` I need to insert these into the code as: ``` <link rel="stylesheet" type="text/css" href="return URL value" /> <script type="text/javascript" src="return URL value"></script> ``` How can I do that? Thanks
2012/08/11
[ "https://Stackoverflow.com/questions/11917086", "https://Stackoverflow.com", "https://Stackoverflow.com/users/511666/" ]
You can load external CSS (and also JS) files dynamically using JavaScript. Just create the appropriate `<link>` element using JavaScript. ``` var url = computeUrl(); /* Obtain url */ var link = document.createElement('link'); /* Create the link element */ link.setAttribute('rel', 'stylesheet'); /* Set the rel attribute */ link.setAttribute('type', 'text/css'); /* Set the type attribute */ link.setAttribute('href', url); /* Set the href to your url */ ``` At this point, we have just created the element ``` <link rel="stylesheet" type="text/css" href="your url"> ``` and stored it in the variable `link`. We're not done yet: the `<link>` is not part of the `DOM` yet, so we need to append it ``` var head = document.getElementsByTagName('head')[0]; /* Obtain the <head> element */ head.appendChild(link); /* Append link at the end of head */ ``` And it is done. In a very similar way, you can dynamically add an external JavaScript resource. Just use a `<script>` tag instead of a `<link>` tag.
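A minimal sketch of that `<script>` variant, following the same pattern (here `computeJsUrl()` is just a placeholder for whatever your own function returns):

```
var jsUrl = computeJsUrl();                                    /* Obtain the script url (placeholder name) */
var script = document.createElement('script');                 /* Create the script element */
script.setAttribute('type', 'text/javascript');                /* Set the type attribute */
script.setAttribute('src', jsUrl);                             /* Point it at your url */
document.getElementsByTagName('head')[0].appendChild(script);  /* Append it at the end of <head> */
```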
One of the ways you can do it is suggested in the snippet below obtained from <http://www.javascriptkit.com/javatutors/loadjavascriptcss.shtml> --- To load a .js or .css file dynamically, in a nutshell, it means using DOM methods to first create a swanky new "SCRIPT" or "LINK" element, assign it the appropriate attributes, and finally, use element.appendChild() to add the element to the desired location within the document tree. It sounds a lot more fancy than it really is. Let's see how it all comes together: ``` function loadjscssfile(filename, filetype){ if (filetype=="js"){ //if filename is an external JavaScript file var fileref=document.createElement('script') fileref.setAttribute("type","text/javascript") fileref.setAttribute("src", filename) } else if (filetype=="css"){ //if filename is an external CSS file var fileref=document.createElement("link") fileref.setAttribute("rel", "stylesheet") fileref.setAttribute("type", "text/css") fileref.setAttribute("href", filename) } if (typeof fileref!="undefined") document.getElementsByTagName("head")[0].appendChild(fileref) } loadjscssfile("myscript.js", "js") //dynamically load and add this .js file loadjscssfile("javascript.php", "js") //dynamically load "javascript.php" as a JavaScript file loadjscssfile("mystyle.css", "css") //dynamically load and add this .css file ``` --- You will need to adapt this to your own code, obviously, as you haven't provided the actual `js` code that you use. But basically, you will need to call the `loadjscssfile()` function twice when you get the two URLs.