Dataset columns:
- qid: int64 (1 to 74.7M)
- question: string (lengths 0 to 58.3k)
- date: string (length 10)
- metadata: list
- response_j: string (lengths 2 to 48.3k)
- response_k: string (lengths 2 to 40.5k)
48,425
Our new D7 site was running very slowly so I did some profiling with devel/xhprof and found that D7 was doing millions of preg\_grep function calls within drupal\_find\_theme\_functions() in theme.inc. This happened on every page load. It's my understanding that the drupal\_find\_theme\_functions function should only be called when the theme registry is being rebuilt - am I correct in that? I made sure that "rebuild theme registry" was off in devel, and then disabled devel entirely. However, I'm still seeing those millions of function calls to preg\_grep in drupal\_find\_theme\_functions on every page load (I added a piece of code to theme.inc to log when the function was called). We're using a theme based on Omega and if we are rebuilding the theme registry each time I'm not seeing how/why it's happening. We're turning off as many contrib modules as we can in the meantime. Any help/advice would be MUCH appreciated!!
2012/10/22
[ "https://drupal.stackexchange.com/questions/48425", "https://drupal.stackexchange.com", "https://drupal.stackexchange.com/users/10862/" ]
Try this code ($variables is an array in preprocess functions; the term object is available in $variables['term']): ``` /** * Implements hook_preprocess_taxonomy_term(). */ function YOURTHEME_preprocess_taxonomy_term(&$variables) { if ($variables['term']->vid == YOUR_VOCABULARY_ID) { $variables['theme_hook_suggestions'][] = 'taxonomy_term__authors'; } } ```
You can use the [Taxonomy Views Integrator](http://drupal.org/project/tvi), or its working [patched version](http://drupal.org/node/1817570). Navigate to yoursite.com/admin/structure/taxonomy/yourtaxonomyvocabulary/edit and choose your view and view display for every vocabulary you have.
48,425
Our new D7 site was running very slowly so I did some profiling with devel/xhprof and found that D7 was doing millions of preg\_grep function calls within drupal\_find\_theme\_functions() in theme.inc. This happened on every page load. It's my understanding that the drupal\_find\_theme\_functions function should only be called when the theme registry is being rebuilt - am I correct in that? I made sure that "rebuild theme registry" was off in devel, and then disabled devel entirely. However, I'm still seeing those millions of function calls to preg\_grep in drupal\_find\_theme\_functions on every page load (I added a piece of code to theme.inc to log when the function was called). We're using a theme based on Omega and if we are rebuilding the theme registry each time I'm not seeing how/why it's happening. We're turning off as many contrib modules as we can in the meantime. Any help/advice would be MUCH appreciated!!
2012/10/22
[ "https://drupal.stackexchange.com/questions/48425", "https://drupal.stackexchange.com", "https://drupal.stackexchange.com/users/10862/" ]
Try this code ($variables is an array in preprocess functions; the term object is available in $variables['term']): ``` /** * Implements hook_preprocess_taxonomy_term(). */ function YOURTHEME_preprocess_taxonomy_term(&$variables) { if ($variables['term']->vid == YOUR_VOCABULARY_ID) { $variables['theme_hook_suggestions'][] = 'taxonomy_term__authors'; } } ```
Each term has a link of the form taxonomy/term/TERM\_ID. If you want a vocabulary-based theme (customized HTML markup) on term pages, you can use the "Taxonomy Views Integrator" module. On a particular vocabulary's edit page you can select a Views template, and in that template you can write custom HTML so that the pages at taxonomy/term/TERM\_ID get themed.
48,425
Our new D7 site was running very slowly so I did some profiling with devel/xhprof and found that D7 was doing millions of preg\_grep function calls within drupal\_find\_theme\_functions() in theme.inc. This happened on every page load. It's my understanding that the drupal\_find\_theme\_functions function should only be called when the theme registry is being rebuilt - am I correct in that? I made sure that "rebuild theme registry" was off in devel, and then disabled devel entirely. However, I'm still seeing those millions of function calls to preg\_grep in drupal\_find\_theme\_functions on every page load (I added a piece of code to theme.inc to log when the function was called). We're using a theme based on Omega and if we are rebuilding the theme registry each time I'm not seeing how/why it's happening. We're turning off as many contrib modules as we can in the meantime. Any help/advice would be MUCH appreciated!!
2012/10/22
[ "https://drupal.stackexchange.com/questions/48425", "https://drupal.stackexchange.com", "https://drupal.stackexchange.com/users/10862/" ]
You can have full control over the taxonomy/term/ID page by intercepting the menu call with hook\_menu\_alter() and letting the content be generated by a custom function. This allows you to completely customize the taxonomy term page. You can check for a certain vocabulary ("Authors") and have a custom-generated page for that vocab and leave the others untouched. You can have fine-grained control over how results are displayed using view modes and templates. No contributed modules are needed. Here is an example to be used in a custom module. The custom content is generated using Drupal's [EntityFieldQuery](https://api.drupal.org/api/drupal/includes!entity.inc/class/EntityFieldQuery/7) because it is a lazy and robust way of generating a list of content, but anything goes here. ``` /** * Implements hook_menu_alter(). */ function YOURMODULE_menu_alter(&$menu) { if (isset($menu['taxonomy/term/%taxonomy_term'])) { $menu['taxonomy/term/%taxonomy_term']['page callback'] = 'YOURMODULE_taxonomy_term_page'; $menu['taxonomy/term/%taxonomy_term']['access arguments'] = array(2); } } /** * Callback function for taxonomy/term/%taxonomy_term. * * @param $term object * @return * Themed page for a taxonomy term, specific to the term's vocabulary. */ function YOURMODULE_taxonomy_term_page($term) { $voc = taxonomy_vocabulary_load($term->vid); switch ($voc->machine_name) { case 'SOMEVOCABULARY': // Here you generate the actual content of the page. // It could be done e.g. with an EntityFieldQuery as follows. $query = new EntityFieldQuery(); $query->entityCondition('entity_type', 'node') ->fieldCondition('field_gallery', 'tid', $term->tid) ->propertyOrderBy('sticky', 'DESC') ->propertyCondition('status', 1); $result = $query->execute(); if (!empty($result['node'])) { $build['content']['nodes'] = node_view_multiple(node_load_multiple(array_keys($result['node'])), 'teaser'); // Output the node teasers. You can control the markup of the 'teaser' view mode with a template in the theme folder. } else { $build['content']['status']['#markup'] = t('No results found for term ID !tid.', array('!tid' => $term->tid)); } return $build; // If the term page is for another vocabulary, use Drupal's default taxonomy page. default: module_load_include('inc', 'taxonomy', 'taxonomy.pages'); return taxonomy_term_page($term); } } ```
Try this code ($variables is an array in preprocess functions; the term object is available in $variables['term']): ``` /** * Implements hook_preprocess_taxonomy_term(). */ function YOURTHEME_preprocess_taxonomy_term(&$variables) { if ($variables['term']->vid == YOUR_VOCABULARY_ID) { $variables['theme_hook_suggestions'][] = 'taxonomy_term__authors'; } } ```
48,425
Our new D7 site was running very slowly so I did some profiling with devel/xhprof and found that D7 was doing millions of preg\_grep function calls within drupal\_find\_theme\_functions() in theme.inc. This happened on every page load. It's my understanding that the drupal\_find\_theme\_functions function should only be called when the theme registry is being rebuilt - am I correct in that? I made sure that "rebuild theme registry" was off in devel, and then disabled devel entirely. However, I'm still seeing those millions of function calls to preg\_grep in drupal\_find\_theme\_functions on every page load (I added a piece of code to theme.inc to log when the function was called). We're using a theme based on Omega and if we are rebuilding the theme registry each time I'm not seeing how/why it's happening. We're turning off as many contrib modules as we can in the meantime. Any help/advice would be MUCH appreciated!!
2012/10/22
[ "https://drupal.stackexchange.com/questions/48425", "https://drupal.stackexchange.com", "https://drupal.stackexchange.com/users/10862/" ]
You can have full control over the taxonomy/term/ID page by intercepting the menu call with hook\_menu\_alter() and letting the content be generated by a custom function. This allows you to completely customize the taxonomy term page. You can check for a certain vocabulary ("Authors") and have a custom-generated page for that vocab and leave the others untouched. You can have fine-grained control over how results are displayed using view modes and templates. No contributed modules are needed. Here is an example to be used in a custom module. The custom content is generated using Drupal's [EntityFieldQuery](https://api.drupal.org/api/drupal/includes!entity.inc/class/EntityFieldQuery/7) because it is a lazy and robust way of generating a list of content, but anything goes here. ``` /** * Implements hook_menu_alter(). */ function YOURMODULE_menu_alter(&$menu) { if (isset($menu['taxonomy/term/%taxonomy_term'])) { $menu['taxonomy/term/%taxonomy_term']['page callback'] = 'YOURMODULE_taxonomy_term_page'; $menu['taxonomy/term/%taxonomy_term']['access arguments'] = array(2); } } /** * Callback function for taxonomy/term/%taxonomy_term. * * @param $term object * @return * Themed page for a taxonomy term, specific to the term's vocabulary. */ function YOURMODULE_taxonomy_term_page($term) { $voc = taxonomy_vocabulary_load($term->vid); switch ($voc->machine_name) { case 'SOMEVOCABULARY': // Here you generate the actual content of the page. // It could be done e.g. with an EntityFieldQuery as follows. $query = new EntityFieldQuery(); $query->entityCondition('entity_type', 'node') ->fieldCondition('field_gallery', 'tid', $term->tid) ->propertyOrderBy('sticky', 'DESC') ->propertyCondition('status', 1); $result = $query->execute(); if (!empty($result['node'])) { $build['content']['nodes'] = node_view_multiple(node_load_multiple(array_keys($result['node'])), 'teaser'); // Output the node teasers. You can control the markup of the 'teaser' view mode with a template in the theme folder. } else { $build['content']['status']['#markup'] = t('No results found for term ID !tid.', array('!tid' => $term->tid)); } return $build; // If the term page is for another vocabulary, use Drupal's default taxonomy page. default: module_load_include('inc', 'taxonomy', 'taxonomy.pages'); return taxonomy_term_page($term); } } ```
You can use the [Taxonomy Views Integrator](http://drupal.org/project/tvi), or its working [patched version](http://drupal.org/node/1817570). Navigate to yoursite.com/admin/structure/taxonomy/yourtaxonomyvocabulary/edit and choose your view and view display for every vocabulary you have.
48,425
Our new D7 site was running very slowly so I did some profiling with devel/xhprof and found that D7 was doing millions of preg\_grep function calls within drupal\_find\_theme\_functions() in theme.inc. This happened on every page load. It's my understanding that the drupal\_find\_theme\_functions function should only be called when the theme registry is being rebuilt - am I correct in that? I made sure that "rebuild theme registry" was off in devel, and then disabled devel entirely. However, I'm still seeing those millions of function calls to preg\_grep in drupal\_find\_theme\_functions on every page load (I added a piece of code to theme.inc to log when the function was called). We're using a theme based on Omega and if we are rebuilding the theme registry each time I'm not seeing how/why it's happening. We're turning off as many contrib modules as we can in the meantime. Any help/advice would be MUCH appreciated!!
2012/10/22
[ "https://drupal.stackexchange.com/questions/48425", "https://drupal.stackexchange.com", "https://drupal.stackexchange.com/users/10862/" ]
You can have full control over the taxonomy/term/ID page by intercepting the menu call with hook\_menu\_alter() and letting the content be generated by a custom function. This allows you to completely customize the taxonomy term page. You can check for a certain vocabulary ("Authors") and have a custom-generated page for that vocab and leave the others untouched. You can have fine-grained control over how results are displayed using view modes and templates. No contributed modules are needed. Here is an example to be used in a custom module. The custom content is generated using Drupal's [EntityFieldQuery](https://api.drupal.org/api/drupal/includes!entity.inc/class/EntityFieldQuery/7) because it is a lazy and robust way of generating a list of content, but anything goes here. ``` /** * Implements hook_menu_alter(). */ function YOURMODULE_menu_alter(&$menu) { if (isset($menu['taxonomy/term/%taxonomy_term'])) { $menu['taxonomy/term/%taxonomy_term']['page callback'] = 'YOURMODULE_taxonomy_term_page'; $menu['taxonomy/term/%taxonomy_term']['access arguments'] = array(2); } } /** * Callback function for taxonomy/term/%taxonomy_term. * * @param $term object * @return * Themed page for a taxonomy term, specific to the term's vocabulary. */ function YOURMODULE_taxonomy_term_page($term) { $voc = taxonomy_vocabulary_load($term->vid); switch ($voc->machine_name) { case 'SOMEVOCABULARY': // Here you generate the actual content of the page. // It could be done e.g. with an EntityFieldQuery as follows. $query = new EntityFieldQuery(); $query->entityCondition('entity_type', 'node') ->fieldCondition('field_gallery', 'tid', $term->tid) ->propertyOrderBy('sticky', 'DESC') ->propertyCondition('status', 1); $result = $query->execute(); if (!empty($result['node'])) { $build['content']['nodes'] = node_view_multiple(node_load_multiple(array_keys($result['node'])), 'teaser'); // Output the node teasers. You can control the markup of the 'teaser' view mode with a template in the theme folder. } else { $build['content']['status']['#markup'] = t('No results found for term ID !tid.', array('!tid' => $term->tid)); } return $build; // If the term page is for another vocabulary, use Drupal's default taxonomy page. default: module_load_include('inc', 'taxonomy', 'taxonomy.pages'); return taxonomy_term_page($term); } } ```
Each term has a link of the form taxonomy/term/TERM\_ID. If you want a vocabulary-based theme (customized HTML markup) on term pages, you can use the "Taxonomy Views Integrator" module. On a particular vocabulary's edit page you can select a Views template, and in that template you can write custom HTML so that the pages at taxonomy/term/TERM\_ID get themed.
5,882,957
> > **Possible Duplicate:** > > [Java tree data-structure?](https://stackoverflow.com/questions/3522454/java-tree-data-structure) > > > Is there any Java data structure implementation similar to tree and graph?
2011/05/04
[ "https://Stackoverflow.com/questions/5882957", "https://Stackoverflow.com", "https://Stackoverflow.com/users/236501/" ]
Not in the `java.util` Collections API. You can use the [DefaultTreeModel](http://download.oracle.com/javase/1.4.2/docs/api/javax/swing/tree/DefaultTreeModel.html) from Swing for trees. [Jung](http://jung.sourceforge.net/) is a graph framework.
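To make the Swing suggestion concrete, here is a minimal, self-contained sketch using `DefaultMutableTreeNode` and `DefaultTreeModel`; the tree contents are invented for illustration, and no GUI is required to use the model:

```java
import javax.swing.tree.DefaultMutableTreeNode;
import javax.swing.tree.DefaultTreeModel;

public class TreeModelSketch {
    public static void main(String[] args) {
        // Build a small tree: root -> {fruits -> {apple, pear}, vegetables}
        DefaultMutableTreeNode root = new DefaultMutableTreeNode("root");
        DefaultMutableTreeNode fruits = new DefaultMutableTreeNode("fruits");
        fruits.add(new DefaultMutableTreeNode("apple"));
        fruits.add(new DefaultMutableTreeNode("pear"));
        root.add(fruits);
        root.add(new DefaultMutableTreeNode("vegetables"));

        // DefaultTreeModel wraps the node structure and can be queried directly.
        DefaultTreeModel model = new DefaultTreeModel(root);
        System.out.println(model.getChildCount(root)); // prints 2
    }
}
```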
As I [answered for a similar question](https://stackoverflow.com/questions/4978487/why-java-collection-framework-doesnt-contain-tree-and-graph/4979522#4979522), the Java API contains no general API for trees/graphs, since there is no unique set of features needed in every use case. There are quite a few tree/graph-like APIs for special cases, though. And in principle it is easy to make your own graph - one could even say that every object is in fact a node in a graph, with the values of its reference type fields being the (outgoing) neighbors.
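As a rough sketch of the "make your own graph" idea (the class and method names below are invented for the example; it is just an adjacency list built on standard collections):

```java
import java.util.*;

// A minimal directed graph backed by an adjacency list.
public class SimpleGraph<T> {
    private final Map<T, Set<T>> adjacency = new HashMap<>();

    public void addEdge(T from, T to) {
        adjacency.computeIfAbsent(from, k -> new LinkedHashSet<>()).add(to);
        adjacency.computeIfAbsent(to, k -> new LinkedHashSet<>()); // make sure the target node is known
    }

    public Set<T> neighbors(T node) {
        return adjacency.getOrDefault(node, Collections.emptySet());
    }

    public static void main(String[] args) {
        SimpleGraph<String> g = new SimpleGraph<>();
        g.addEdge("a", "b");
        g.addEdge("a", "c");
        System.out.println(g.neighbors("a")); // [b, c]
    }
}
```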
61,554,287
I need to move some text from `demoBoxA` to `demoBoxB`. The `demoBoxA` parent element has an id selector, but the child element below it has no identifiable selector. Is it possible to select the text content directly? Then move it into the `demoBoxB` sub-element (the `demoBoxB` sub-element has an id selector) There are 2 difficulties with this issue. 1. The content of demoBoxA is dynamically generated by the program and the sort is not fixed. There are no identifiable selectors for the subelements. 2. only need to select part of the content. For example, in the example below, just move the phone model text of "Google", "Huawei", "BlackBerry". Any help, thanks in advance! ``` <div class="container" id="demoBoxA"> <div class="row"> <div class="col-md-6">Samsung</div> <div class="col-md-6">Galaxy S10</div> </div> <div class="row"> <div class="col-md-6">Google</div> <div class="col-md-6">Pixel 4</div> </div> <div class="row"> <div class="col-md-6">Sony</div> <div class="col-md-6">Xperia 5</div> </div> <div class="row"> <div class="col-md-6">Huawei</div> <div class="col-md-6">Mate 30 5G</div> </div> <div class="row"> <div class="col-md-6">BlackBerry</div> <div class="col-md-6">KEY2</div> </div> <div class="row"> <div class="col-md-6">Apple</div> <div class="col-md-6">iPhone 8</div> </div> </div> <div class="container" id="demoBoxB"> <div class="row"> <div class="col-md-6">Google</div> <div class="col-md-6" id="pixel"></div> </div> <div class="row"> <div class="col-md-6">Huawei</div> <div class="col-md-6" id="mate"></div> </div> <div class="row"> <div class="col-md-6">BlackBerry</div> <div class="col-md-6" id="key2"></div> </div> </div> ```
2020/05/02
[ "https://Stackoverflow.com/questions/61554287", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
You can chain selectors like this: ``` var rows = document.querySelectorAll("#demoBoxA > .row"); ``` That will return a list of all rows inside demoBoxA. If you need more info about chaining selectors, you can read about it [here](https://developer.mozilla.org/en-US/docs/Web/API/Document/querySelectorAll). Then, to move the rows you can do this: ``` var demoBoxB = document.getElementById('demoBoxB'); rows.forEach((row) => { demoBoxB.appendChild(row); }); ``` If you just want the text inside each of the columns, you can do this (note that the columns are children of the rows, not of demoBoxA itself): ``` var columns = document.querySelectorAll("#demoBoxA > .row > .col-md-6"); var texts = []; columns.forEach((column) => { texts.push(column.innerText); }); ``` Now, `texts` is an array of the text contents of each column. If you want to select the cellphone models for each brand, you can do this: ``` var cols = Array.from(document.querySelectorAll("#demoBoxA > .row > .col-md-6")); var samsungCol = cols.find((col) => { return col.textContent == "Samsung"; }); var samsungPhones = []; Array.from(samsungCol.parentNode.children).forEach((col) => { if (col != samsungCol) { samsungPhones.push(col); } }); ``` Now, `samsungPhones` is a list of columns, one for each Samsung phone (for example).
You can use the HTML drag-and-drop API. Just add `draggable="true"` to the elements you want to drag and add event listeners for `dragstart` and `dragend`. html ``` <div class="container" id="demoBoxA"> <div class="row" draggable="true"> <div class="col-md-6">Samsung</div> <div class="col-md-6">Galaxy S10</div> </div> <div class="row" draggable="true"> <div class="col-md-6">Google</div> <div class="col-md-6">Pixel 4</div> </div> <div class="row" draggable="true"> <div class="col-md-6">Sony</div> <div class="col-md-6">Xperia 5</div> </div> </div> <div class="container" id="demoBoxB"> <div class="row" draggable="true"> <div class="col-md-6">Google</div> <div class="col-md-6" id="pixel"></div> </div> <div class="row" draggable="true"> <div class="col-md-6">Huawei</div> <div class="col-md-6" id="mate"></div> </div> <div class="row" draggable="true"> <div class="col-md-6">BlackBerry</div> <div class="col-md-6" id="key2"></div> </div> </div> ``` js ``` var item; document.addEventListener('dragstart', function(e) { item = e.target; }, false); document.addEventListener('dragend', function(e) { document.getElementById("demoBoxB").appendChild(item); }, false); ``` Note: you might have to add conditions to check whether the drop is actually happening in demoBoxB
69,629,358
I am converting a list to set and back to list. I know that `set(list)` takes O(n) time, but I am converting it back to list in same line `list(set(list))`. Since both these operations take O(n) time, would the time complexity be O(n^2) now? Logic 1: ``` final = list(set(list1)-set(list2)) ``` Logic 2: ``` s = set(list1)-set(list2) final = list(s) ``` Do both these implementations have different time complexities, and if they do which of them is more efficient?
2021/10/19
[ "https://Stackoverflow.com/questions/69629358", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13878064/" ]
There are normally a few ways to do this, here are some suggestions: 1. Set menu to null when creating the window: `const mainWin = new BrowserWindow({menu: null});` 2. Remove menu after window object has been created (Set a blank menu with MacOS): `const { Menu } = require('electron');` `process.platform === "win32" && mainWin.removeMenu();` `process.platform === "darwin" && Menu.setApplicationMenu(Menu.buildFromTemplate([]));` 3. Using the Menu module from electron: `const { Menu } = require('electron');` `Menu.setApplicationMenu(null);`
You could try this ```js const { Menu } = require('electron'); Menu.setApplicationMenu(null); ```
57,385,570
To return a big struct "MODULEENTRY32" from WINAPI I want to use a pointer, but need to allocate memory in the heap inside the function without deleting it. Then, out of the function when I don't want to use that struct anymore I know that I should use the keyword delete to free memory. ``` #include <iostream> #include <cstring> #include <Windows.h> #include <TlHelp32.h> MODULEENTRY32* GetModuleEntry(const char *module_name, int PID) { HANDLE moduleSnapshot = CreateToolhelp32Snapshot(TH32CS_SNAPMODULE | TH32CS_SNAPMODULE32, PID); if (moduleSnapshot != INVALID_HANDLE_VALUE) { MODULEENTRY32 *moduleEntry = new MODULEENTRY32; // Remember to delete if don't want leak moduleEntry->dwSize = sizeof(MODULEENTRY32); if (Module32First(moduleSnapshot, moduleEntry)) { do { if (strcmp(moduleEntry->szModule, module_name) == 0) { return moduleEntry; } } while (Module32Next(moduleSnapshot, moduleEntry)); } CloseHandle(moduleSnapshot); } return nullptr; } int main() { int myRandomPID = 123; MODULEENTRY32* p = GetModuleEntry("mymodule.dll", myRandomPID); if (!p) { std::cout << "Obviously you didn't found your random module of your random PID " << std::endl; } delete p; // I just don't want to do this return 0; } ``` How could I avoid having to free memory in main function? **unique\_ptr**? **EDIT: Possible solution** ``` #include <iostream> #include <cstring> #include <Windows.h> #include <TlHelp32.h> bool GetModuleEntry(const char *module_name, int PID, MODULEENTRY32* moduleEntry) { HANDLE moduleSnapshot = CreateToolhelp32Snapshot(TH32CS_SNAPMODULE | TH32CS_SNAPMODULE32, PID); if (moduleSnapshot != INVALID_HANDLE_VALUE) { moduleEntry->dwSize = sizeof(MODULEENTRY32); if (Module32First(moduleSnapshot, moduleEntry)) { do { if (strcmp(moduleEntry->szModule, module_name) == 0) { CloseHandle(moduleSnapshot); return true; } } while (Module32Next(moduleSnapshot, moduleEntry)); } CloseHandle(moduleSnapshot); } return false; } int main() { int myRandomPID = 123; MODULEENTRY32 moduleEntry; if (!GetModuleEntry("mymodule.dll", 123, &moduleEntry)) { std::cout << "Obviously you didn't find your random module of your random PID " << std::endl; } return 0; } ```
2019/08/07
[ "https://Stackoverflow.com/questions/57385570", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
The error happens because [control flow analysis is difficult](https://github.com/microsoft/TypeScript/issues/9998), especially when the information you want to keep track of cannot be represented in the type system. In the general case, the compiler really cannot figure out much about what happens when functions accept callbacks that mutate variables. The control flow inside such functions is a complete mystery; maybe the callback will be called immediately and exactly once. Maybe the callback will never be called. Maybe the callback will be called a million times. Or maybe it will be called asynchronously, far in the future. Since the general case is so hopeless, the compiler doesn't even really try. It uses some heuristics which work for a lot of cases, and which also necessarily fail for a lot of cases. You picked one of the failures. The heuristic used here is that inside of a callback, all narrowings which occurred in the wider scope are reset. That does reasonable things for code like this: ``` // who knows when this actually calls its callback? declare function mysteryCallbackCaller(cb: () => void): void; let a: string | undefined = "hey"; mysteryCallbackCaller(() => a.charAt(0)); // error! a may be undefined a = undefined; ``` The compiler doesn't know when or if `() => a.charAt(0)` gets invoked. If it gets invoked immediately when `mysteryCallbackCaller()` is called, then `a` will be defined. But if it gets called sometime later, `a` may be undefined. Since the compiler cannot guarantee safety here, it reports an error. --- So what can we do to address this issue in your case? There are two main solutions I can think of. One is to just tell the compiler that it's wrong and that you are sure that `obj` will be defined. This can be done using the [`!` non-null assertion operator](https://github.com/Microsoft/TypeScript/wiki/What's-new-in-TypeScript#non-null-assertion-operator): ``` map2.forEach(v => { obj!.field1 += "," + v; // okay now }); ``` This works with no compile time error. The caveat to this solution is that the responsibility for ensuring `obj` is defined is now only yours and not the compiler's. If you change the preceding code and `obj` truly is possibly undefined, then the type assertion will still suppress the error, and you'll have issues at runtime. --- The other solution is to change what you're doing so that the compiler *can* verify that your callback is safe. The easiest way to do that is to use a new variable: ``` // over here the compiler knows obj is defined const constObj = obj; // type is inferred as TestIF map2.forEach(v => { constObj.field1 += "," + v; // okay, constObj is TestIF, so this works }); ``` All I've done here is assign `obj` to `constObj`. But at the time this assignment takes place, `obj` cannot be `undefined`. Thus `constObj` is just a `TestIF`, and not a `TestIF | undefined`. And since `constObj` is never reassigned and cannot be `undefined`, the rest of the code works. --- Okay, hope that helps. Good luck! 
[Link to code](https://www.typescriptlang.org/play/#code/PTAEHcAsHtQawHbXAZwpApg0AXSBLNAQwGMcBXIgGyoE9QTqq18c1GaAjUuAfgCgAJhhJUiAJwygAZuQRl80bAFtaKHBnG0Awk24k4umpoAUJTgC5QJgJSgAvAD5QAN2j5BNq248BufvxUGDigRFbq4vgIAOagAD6gcsLSURiCDqAARJi0mf6q6po6ejxGQeImtg7ORAB0JJASAII4JgAMNja+oCCgmuLQ4gCEoKGgykT0nFJJGCkIafxEGbPzaf78URri0qRSACoY6gCSAGKgAN78oykYVIIAjOE4kTH+AL4BGAAeAA6DIVERBQaAAsjoxCDLtdcEdWl5XO50ldRqMSEp1OMiL8nqBQdiADwRKLRAA0oEOJ1OznsoAW4Dx2Ns-lRDAxIQmvwATFZ8b8iS8SeTiTEaXSMAy+cyAqz0QhMXAMPRaZlOYrcizUUEQtBOAArDKch61aLBEzqrow0b4aTWXUG+y0hDkGh2FGs0D2jIXGT4O6PKyZDTqYM4TKgd6a1lG2ooM3q8n2y2sz5WrHc2rSQYAUVIkBMLmq0I9nv1Q0zfvuD1AAGoVaTwzXXN1etA4JM6cg0+9LWnWy5NKBMJJcJg2cpfvhyvAkKhSwbCKBkqlBGm5Zj1zgAPL6jL2ltgHC0X5SRdRaT9NKhNCUnBnNOcrmZnN5gtF92y9k7vUV-3VutZA2tbNj0YBtpM5Kbt+oCLreZzCrAeCLuAgxwCg3bJhG-CfEAA)
The Map's `get` method is defined as `Map<string, TestIF>.get(key: string): TestIF | undefined`, so when you set `obj`, its type is `TestIF | undefined`. When you (re-)set `obj`'s type inside the `if` block, it's in a different scope. When you read `obj` inside the `forEach`, it's also in another scope. The TypeScript compiler is unable to establish the correct type in the changed scopes. Consider this (working) code: ```js const key = 'mapkey'; let obj: TestIF; // Create variable with a Type if (map1.has(key)) { // We know (with certainty) that obj exists! obj = map1.get(key) as TestIF; // We use 'as' because we know it can't be Undefined } else { obj = { field1: 'testtest' }; map1.set(key, obj); } ``` Even though Map.get() will always return `V | undefined`, when we used `as`, we forced TypeScript to treat it as `V`. I use `as` with caution, but in this case we know the key exists because we have called `Map.has()` to check for it. Also, I want to stress that `(obj === undefined)` is more precise than `(obj == null)`: the loose comparison matches both `null` and `undefined`, not just `undefined`. [[more info]](https://codeburst.io/javascript-null-vs-undefined-20f955215a2)
57,385,570
To return a big struct "MODULEENTRY32" from WINAPI I want to use a pointer, but need to allocate memory in the heap inside the function without deleting it. Then, out of the function when I don't want to use that struct anymore I know that I should use the keyword delete to free memory. ``` #include <iostream> #include <cstring> #include <Windows.h> #include <TlHelp32.h> MODULEENTRY32* GetModuleEntry(const char *module_name, int PID) { HANDLE moduleSnapshot = CreateToolhelp32Snapshot(TH32CS_SNAPMODULE | TH32CS_SNAPMODULE32, PID); if (moduleSnapshot != INVALID_HANDLE_VALUE) { MODULEENTRY32 *moduleEntry = new MODULEENTRY32; // Remember to delete if don't want leak moduleEntry->dwSize = sizeof(MODULEENTRY32); if (Module32First(moduleSnapshot, moduleEntry)) { do { if (strcmp(moduleEntry->szModule, module_name) == 0) { return moduleEntry; } } while (Module32Next(moduleSnapshot, moduleEntry)); } CloseHandle(moduleSnapshot); } return nullptr; } int main() { int myRandomPID = 123; MODULEENTRY32* p = GetModuleEntry("mymodule.dll", myRandomPID); if (!p) { std::cout << "Obviously you didn't found your random module of your random PID " << std::endl; } delete p; // I just don't want to do this return 0; } ``` How could I avoid having to free memory in main function? **unique\_ptr**? **EDIT: Possible solution** ``` #include <iostream> #include <cstring> #include <Windows.h> #include <TlHelp32.h> bool GetModuleEntry(const char *module_name, int PID, MODULEENTRY32* moduleEntry) { HANDLE moduleSnapshot = CreateToolhelp32Snapshot(TH32CS_SNAPMODULE | TH32CS_SNAPMODULE32, PID); if (moduleSnapshot != INVALID_HANDLE_VALUE) { moduleEntry->dwSize = sizeof(MODULEENTRY32); if (Module32First(moduleSnapshot, moduleEntry)) { do { if (strcmp(moduleEntry->szModule, module_name) == 0) { CloseHandle(moduleSnapshot); return true; } } while (Module32Next(moduleSnapshot, moduleEntry)); } CloseHandle(moduleSnapshot); } return false; } int main() { int myRandomPID = 123; MODULEENTRY32 moduleEntry; if (!GetModuleEntry("mymodule.dll", 123, &moduleEntry)) { std::cout << "Obviously you didn't find your random module of your random PID " << std::endl; } return 0; } ```
2019/08/07
[ "https://Stackoverflow.com/questions/57385570", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
The error happens because [control flow analysis is difficult](https://github.com/microsoft/TypeScript/issues/9998), especially when the information you want to keep track of cannot be represented in the type system. In the general case, the compiler really cannot figure out much about what happens when functions accept callbacks that mutate variables. The control flow inside such functions is a complete mystery; maybe the callback will be called immediately and exactly once. Maybe the callback will never be called. Maybe the callback will be called a million times. Or maybe it will be called asynchronously, far in the future. Since the general case is so hopeless, the compiler doesn't even really try. It uses some heuristics which work for a lot of cases, and which also necessarily fail for a lot of cases. You picked one of the failures. The heuristic used here is that inside of a callback, all narrowings which occurred in the wider scope are reset. That does reasonable things for code like this: ``` // who knows when this actually calls its callback? declare function mysteryCallbackCaller(cb: () => void): void; let a: string | undefined = "hey"; mysteryCallbackCaller(() => a.charAt(0)); // error! a may be undefined a = undefined; ``` The compiler doesn't know when or if `() => a.charAt(0)` gets invoked. If it gets invoked immediately when `mysteryCallbackCaller()` is called, then `a` will be defined. But if it gets called sometime later, `a` may be undefined. Since the compiler cannot guarantee safety here, it reports an error. --- So what can we do to address this issue in your case? There are two main solutions I can think of. One is to just tell the compiler that it's wrong and that you are sure that `obj` will be defined. This can be done using the [`!` non-null assertion operator](https://github.com/Microsoft/TypeScript/wiki/What's-new-in-TypeScript#non-null-assertion-operator): ``` map2.forEach(v => { obj!.field1 += "," + v; // okay now }); ``` This works with no compile time error. The caveat to this solution is that the responsibility for ensuring `obj` is defined is now only yours and not the compiler's. If you change the preceding code and `obj` truly is possibly undefined, then the type assertion will still suppress the error, and you'll have issues at runtime. --- The other solution is to change what you're doing so that the compiler *can* verify that your callback is safe. The easiest way to do that is to use a new variable: ``` // over here the compiler knows obj is defined const constObj = obj; // type is inferred as TestIF map2.forEach(v => { constObj.field1 += "," + v; // okay, constObj is TestIF, so this works }); ``` All I've done here is assign `obj` to `constObj`. But at the time this assignment takes place, `obj` cannot be `undefined`. Thus `constObj` is just a `TestIF`, and not a `TestIF | undefined`. And since `constObj` is never reassigned and cannot be `undefined`, the rest of the code works. --- Okay, hope that helps. Good luck! 
[Link to code](https://www.typescriptlang.org/play/#code/PTAEHcAsHtQawHbXAZwpApg0AXSBLNAQwGMcBXIgGyoE9QTqq18c1GaAjUuAfgCgAJhhJUiAJwygAZuQRl80bAFtaKHBnG0Awk24k4umpoAUJTgC5QJgJSgAvAD5QAN2j5BNq248BufvxUGDigRFbq4vgIAOagAD6gcsLSURiCDqAARJi0mf6q6po6ejxGQeImtg7ORAB0JJASAII4JgAMNja+oCCgmuLQ4gCEoKGgykT0nFJJGCkIafxEGbPzaf78URri0qRSACoY6gCSAGKgAN78oykYVIIAjOE4kTH+AL4BGAAeAA6DIVERBQaAAsjoxCDLtdcEdWl5XO50ldRqMSEp1OMiL8nqBQdiADwRKLRAA0oEOJ1OznsoAW4Dx2Ns-lRDAxIQmvwATFZ8b8iS8SeTiTEaXSMAy+cyAqz0QhMXAMPRaZlOYrcizUUEQtBOAArDKch61aLBEzqrow0b4aTWXUG+y0hDkGh2FGs0D2jIXGT4O6PKyZDTqYM4TKgd6a1lG2ooM3q8n2y2sz5WrHc2rSQYAUVIkBMLmq0I9nv1Q0zfvuD1AAGoVaTwzXXN1etA4JM6cg0+9LWnWy5NKBMJJcJg2cpfvhyvAkKhSwbCKBkqlBGm5Zj1zgAPL6jL2ltgHC0X5SRdRaT9NKhNCUnBnNOcrmZnN5gtF92y9k7vUV-3VutZA2tbNj0YBtpM5Kbt+oCLreZzCrAeCLuAgxwCg3bJhG-CfEAA)
I was struggling with the same problem just now, and the solution for me was (adapted to your case): ``` if (!obj) { return ( ... ); } ```
57,385,570
To return a big struct "MODULEENTRY32" from WINAPI I want to use a pointer, but need to allocate memory in the heap inside the function without deleting it. Then, out of the function when I don't want to use that struct anymore I know that I should use the keyword delete to free memory. ``` #include <iostream> #include <cstring> #include <Windows.h> #include <TlHelp32.h> MODULEENTRY32* GetModuleEntry(const char *module_name, int PID) { HANDLE moduleSnapshot = CreateToolhelp32Snapshot(TH32CS_SNAPMODULE | TH32CS_SNAPMODULE32, PID); if (moduleSnapshot != INVALID_HANDLE_VALUE) { MODULEENTRY32 *moduleEntry = new MODULEENTRY32; // Remember to delete if don't want leak moduleEntry->dwSize = sizeof(MODULEENTRY32); if (Module32First(moduleSnapshot, moduleEntry)) { do { if (strcmp(moduleEntry->szModule, module_name) == 0) { return moduleEntry; } } while (Module32Next(moduleSnapshot, moduleEntry)); } CloseHandle(moduleSnapshot); } return nullptr; } int main() { int myRandomPID = 123; MODULEENTRY32* p = GetModuleEntry("mymodule.dll", myRandomPID); if (!p) { std::cout << "Obviously you didn't found your random module of your random PID " << std::endl; } delete p; // I just don't want to do this return 0; } ``` How could I avoid having to free memory in main function? **unique\_ptr**? **EDIT: Possible solution** ``` #include <iostream> #include <cstring> #include <Windows.h> #include <TlHelp32.h> bool GetModuleEntry(const char *module_name, int PID, MODULEENTRY32* moduleEntry) { HANDLE moduleSnapshot = CreateToolhelp32Snapshot(TH32CS_SNAPMODULE | TH32CS_SNAPMODULE32, PID); if (moduleSnapshot != INVALID_HANDLE_VALUE) { moduleEntry->dwSize = sizeof(MODULEENTRY32); if (Module32First(moduleSnapshot, moduleEntry)) { do { if (strcmp(moduleEntry->szModule, module_name) == 0) { CloseHandle(moduleSnapshot); return true; } } while (Module32Next(moduleSnapshot, moduleEntry)); } CloseHandle(moduleSnapshot); } return false; } int main() { int myRandomPID = 123; MODULEENTRY32 moduleEntry; if (!GetModuleEntry("mymodule.dll", 123, &moduleEntry)) { std::cout << "Obviously you didn't find your random module of your random PID " << std::endl; } return 0; } ```
2019/08/07
[ "https://Stackoverflow.com/questions/57385570", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
The error happens because [control flow analysis is difficult](https://github.com/microsoft/TypeScript/issues/9998), especially when the information you want to keep track of cannot be represented in the type system. In the general case, the compiler really cannot figure out much about what happens when functions accept callbacks that mutate variables. The control flow inside such functions is a complete mystery; maybe the callback will be called immediately and exactly once. Maybe the callback will never be called. Maybe the callback will be called a million times. Or maybe it will be called asynchronously, far in the future. Since the general case is so hopeless, the compiler doesn't even really try. It uses some heuristics which work for a lot of cases, and which also necessarily fail for a lot of cases. You picked one of the failures. The heuristic used here is that inside of a callback, all narrowings which occurred in the wider scope are reset. That does reasonable things for code like this: ``` // who knows when this actually calls its callback? declare function mysteryCallbackCaller(cb: () => void): void; let a: string | undefined = "hey"; mysteryCallbackCaller(() => a.charAt(0)); // error! a may be undefined a = undefined; ``` The compiler doesn't know when or if `() => a.charAt(0)` gets invoked. If it gets invoked immediately when `mysteryCallbackCaller()` is called, then `a` will be defined. But if it gets called sometime later, `a` may be undefined. Since the compiler cannot guarantee safety here, it reports an error. --- So what can we do to address this issue in your case? There are two main solutions I can think of. One is to just tell the compiler that it's wrong and that you are sure that `obj` will be defined. This can be done using the [`!` non-null assertion operator](https://github.com/Microsoft/TypeScript/wiki/What's-new-in-TypeScript#non-null-assertion-operator): ``` map2.forEach(v => { obj!.field1 += "," + v; // okay now }); ``` This works with no compile time error. The caveat to this solution is that the responsibility for ensuring `obj` is defined is now only yours and not the compiler's. If you change the preceding code and `obj` truly is possibly undefined, then the type assertion will still suppress the error, and you'll have issues at runtime. --- The other solution is to change what you're doing so that the compiler *can* verify that your callback is safe. The easiest way to do that is to use a new variable: ``` // over here the compiler knows obj is defined const constObj = obj; // type is inferred as TestIF map2.forEach(v => { constObj.field1 += "," + v; // okay, constObj is TestIF, so this works }); ``` All I've done here is assign `obj` to `constObj`. But at the time this assignment takes place, `obj` cannot be `undefined`. Thus `constObj` is just a `TestIF`, and not a `TestIF | undefined`. And since `constObj` is never reassigned and cannot be `undefined`, the rest of the code works. --- Okay, hope that helps. Good luck! 
[Link to code](https://www.typescriptlang.org/play/#code/PTAEHcAsHtQawHbXAZwpApg0AXSBLNAQwGMcBXIgGyoE9QTqq18c1GaAjUuAfgCgAJhhJUiAJwygAZuQRl80bAFtaKHBnG0Awk24k4umpoAUJTgC5QJgJSgAvAD5QAN2j5BNq248BufvxUGDigRFbq4vgIAOagAD6gcsLSURiCDqAARJi0mf6q6po6ejxGQeImtg7ORAB0JJASAII4JgAMNja+oCCgmuLQ4gCEoKGgykT0nFJJGCkIafxEGbPzaf78URri0qRSACoY6gCSAGKgAN78oykYVIIAjOE4kTH+AL4BGAAeAA6DIVERBQaAAsjoxCDLtdcEdWl5XO50ldRqMSEp1OMiL8nqBQdiADwRKLRAA0oEOJ1OznsoAW4Dx2Ns-lRDAxIQmvwATFZ8b8iS8SeTiTEaXSMAy+cyAqz0QhMXAMPRaZlOYrcizUUEQtBOAArDKch61aLBEzqrow0b4aTWXUG+y0hDkGh2FGs0D2jIXGT4O6PKyZDTqYM4TKgd6a1lG2ooM3q8n2y2sz5WrHc2rSQYAUVIkBMLmq0I9nv1Q0zfvuD1AAGoVaTwzXXN1etA4JM6cg0+9LWnWy5NKBMJJcJg2cpfvhyvAkKhSwbCKBkqlBGm5Zj1zgAPL6jL2ltgHC0X5SRdRaT9NKhNCUnBnNOcrmZnN5gtF92y9k7vUV-3VutZA2tbNj0YBtpM5Kbt+oCLreZzCrAeCLuAgxwCg3bJhG-CfEAA)
This happens because the `forEach()` function accepts a callback that can mutate `obj` (i.e. you could do `obj = null;` inside the loop). For this reason Typescript cannot assume that `obj` is not null. An easy solution is to use a `for...of` loop instead, which doesn't require a callback: ``` for (const v of map2) { obj.field1 += "," + v; } ``` I personally try to use modern language alternatives that do not use callbacks when possible, precisely because it really helps with Typescript type inference. Some examples that come to mind: * Use `for...of` instead of `forEach(callback)`. * Use `[... ] (spread syntax)` instead of `map(callback)`.
57,385,570
To return a big struct "MODULEENTRY32" from WINAPI I want to use a pointer, but need to allocate memory in the heap inside the function without deleting it. Then, out of the function when I don't want to use that struct anymore I know that I should use the keyword delete to free memory. ``` #include <iostream> #include <cstring> #include <Windows.h> #include <TlHelp32.h> MODULEENTRY32* GetModuleEntry(const char *module_name, int PID) { HANDLE moduleSnapshot = CreateToolhelp32Snapshot(TH32CS_SNAPMODULE | TH32CS_SNAPMODULE32, PID); if (moduleSnapshot != INVALID_HANDLE_VALUE) { MODULEENTRY32 *moduleEntry = new MODULEENTRY32; // Remember to delete if don't want leak moduleEntry->dwSize = sizeof(MODULEENTRY32); if (Module32First(moduleSnapshot, moduleEntry)) { do { if (strcmp(moduleEntry->szModule, module_name) == 0) { return moduleEntry; } } while (Module32Next(moduleSnapshot, moduleEntry)); } CloseHandle(moduleSnapshot); } return nullptr; } int main() { int myRandomPID = 123; MODULEENTRY32* p = GetModuleEntry("mymodule.dll", myRandomPID); if (!p) { std::cout << "Obviously you didn't found your random module of your random PID " << std::endl; } delete p; // I just don't want to do this return 0; } ``` How could I avoid having to free memory in main function? **unique\_ptr**? **EDIT: Possible solution** ``` #include <iostream> #include <cstring> #include <Windows.h> #include <TlHelp32.h> bool GetModuleEntry(const char *module_name, int PID, MODULEENTRY32* moduleEntry) { HANDLE moduleSnapshot = CreateToolhelp32Snapshot(TH32CS_SNAPMODULE | TH32CS_SNAPMODULE32, PID); if (moduleSnapshot != INVALID_HANDLE_VALUE) { moduleEntry->dwSize = sizeof(MODULEENTRY32); if (Module32First(moduleSnapshot, moduleEntry)) { do { if (strcmp(moduleEntry->szModule, module_name) == 0) { CloseHandle(moduleSnapshot); return true; } } while (Module32Next(moduleSnapshot, moduleEntry)); } CloseHandle(moduleSnapshot); } return false; } int main() { int myRandomPID = 123; MODULEENTRY32 moduleEntry; if (!GetModuleEntry("mymodule.dll", 123, &moduleEntry)) { std::cout << "Obviously you didn't find your random module of your random PID " << std::endl; } return 0; } ```
2019/08/07
[ "https://Stackoverflow.com/questions/57385570", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
The Map's `get` method is defined as `Map<string, TestIF>.get(key: string): TestIF | undefined`, so when you set `obj`, its type is `TestIF | undefined`. When you (re-)set `obj`'s type inside the `if` block, it's in a different scope. When you read `obj` inside the `forEach`, it's also in another scope. The TypeScript compiler is unable to establish the correct type in the changed scopes. Consider this (working) code: ```js const key = 'mapkey'; let obj: TestIF; // Create variable with a Type if (map1.has(key)) { // We know (with certainty) that obj exists! obj = map1.get(key) as TestIF; // We use 'as' because we know it can't be Undefined } else { obj = { field1: 'testtest' }; map1.set(key, obj); } ``` Even though Map.get() will always return `V | undefined`, when we used `as`, we forced TypeScript to treat it as `V`. I use `as` with caution, but in this case we know the key exists because we have called `Map.has()` to check for it. Also, I want to stress that `(obj === undefined)` is more precise than `(obj == null)`: the loose comparison matches both `null` and `undefined`, not just `undefined`. [[more info]](https://codeburst.io/javascript-null-vs-undefined-20f955215a2)
I was struggling with the same problem just now, and the solution for me was (adapted to your case): ``` if (!obj) { return ( ... ); } ```
57,385,570
To return a big struct "MODULEENTRY32" from WINAPI I want to use a pointer, but need to allocate memory in the heap inside the function without deleting it. Then, out of the function when I don't want to use that struct anymore I know that I should use the keyword delete to free memory. ``` #include <iostream> #include <cstring> #include <Windows.h> #include <TlHelp32.h> MODULEENTRY32* GetModuleEntry(const char *module_name, int PID) { HANDLE moduleSnapshot = CreateToolhelp32Snapshot(TH32CS_SNAPMODULE | TH32CS_SNAPMODULE32, PID); if (moduleSnapshot != INVALID_HANDLE_VALUE) { MODULEENTRY32 *moduleEntry = new MODULEENTRY32; // Remember to delete if don't want leak moduleEntry->dwSize = sizeof(MODULEENTRY32); if (Module32First(moduleSnapshot, moduleEntry)) { do { if (strcmp(moduleEntry->szModule, module_name) == 0) { return moduleEntry; } } while (Module32Next(moduleSnapshot, moduleEntry)); } CloseHandle(moduleSnapshot); } return nullptr; } int main() { int myRandomPID = 123; MODULEENTRY32* p = GetModuleEntry("mymodule.dll", myRandomPID); if (!p) { std::cout << "Obviously you didn't found your random module of your random PID " << std::endl; } delete p; // I just don't want to do this return 0; } ``` How could I avoid having to free memory in main function? **unique\_ptr**? **EDIT: Possible solution** ``` #include <iostream> #include <cstring> #include <Windows.h> #include <TlHelp32.h> bool GetModuleEntry(const char *module_name, int PID, MODULEENTRY32* moduleEntry) { HANDLE moduleSnapshot = CreateToolhelp32Snapshot(TH32CS_SNAPMODULE | TH32CS_SNAPMODULE32, PID); if (moduleSnapshot != INVALID_HANDLE_VALUE) { moduleEntry->dwSize = sizeof(MODULEENTRY32); if (Module32First(moduleSnapshot, moduleEntry)) { do { if (strcmp(moduleEntry->szModule, module_name) == 0) { CloseHandle(moduleSnapshot); return true; } } while (Module32Next(moduleSnapshot, moduleEntry)); } CloseHandle(moduleSnapshot); } return false; } int main() { int myRandomPID = 123; MODULEENTRY32 moduleEntry; if (!GetModuleEntry("mymodule.dll", 123, &moduleEntry)) { std::cout << "Obviously you didn't find your random module of your random PID " << std::endl; } return 0; } ```
2019/08/07
[ "https://Stackoverflow.com/questions/57385570", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
This happens because the `forEach()` function accepts a callback that can mutate `obj` (i.e. you could do `obj = null;` inside the loop). For this reason Typescript cannot assume that `obj` is not null. An easy solution is to use a `for...of` loop instead, which doesn't require a callback: ``` for (const v of map2) { obj.field1 += "," + v; } ``` I personally try to use modern language alternatives that do not use callbacks when possible, precisely because it really helps with Typescript type inference. Some examples that come to mind: * Use `for...of` instead of `forEach(callback)`. * Use `[... ] (spread syntax)` instead of `map(callback)`.
I was struggling with the same problem just now, and the solution for me was (adapted to your case): ``` if (!obj) { return ( ... ); } ```
29,899,818
I have a table like this in SQL Server 2008: ``` create table test (id int, array varchar(max)) insert into test values (1,',a,b,c,d') insert into test values (2,',a,b,c,d,e') insert into test values (3,',a,b,c') ``` I want to count the number of elements of the array column with the result being the following: ``` id count --- ----- 1 4 2 5 3 3 ``` Any ideas how to achieve this in a SELECT statement? I understand that making a function that processes the count could help, but just want to know if it can be achieved without a user defined function.
2015/04/27
[ "https://Stackoverflow.com/questions/29899818", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1794925/" ]
By far the best option would be to stop storing an array in your database. This violates 1NF and is a poor design decision. You can however get the result you are looking for with a simple replace, which counts the commas (here that equals the element count only because every value starts with a leading comma): ``` select ID , LEN(array) - LEN(replace(array, ',', '')) from test ```
You can calculate the difference in length: ``` select id, len(array) - len(replace(array, ',', '')) from test ```
31,069,933
I've been researching all around the web the most efficient way to design a connection pool and tried to analyze into details the available libraries (HikariCP, BoneCP, etc.). Our application is a heavy-load consumer webapp and most of the time the users are working on similar business objects (thus the underlying SQL queries executed are the often the same, but still there are numerous). It is designed to work with different DBMS (Oracle and MS SQL Server especially). So a simplified use case would be : * User goes on a particular JSP page (e.g. Enterprise). * A corresponding Bean is created. * Each time it realizes an action (e.g. `getEmployees()`, `computeTurnover()`), the Bean asks the pool for a connection and returns it back when done. If we want to take advantage of the Prepared Statement caching of the underlying JDBC driver (as PStatements are attached to a connection - [jTDS doc.](http://jtds.sourceforge.net/faq.html#preparedStatmentMemoryLeak)), from what I understand an optimal way of doing it would be : * Analyze what kind of SQL query a particular Bean want to execute before providing it an available connection from the pool. * Find a connection where the same prepared statement has already been executed if possible. * Serve the connection accordingly (and use the benefits of the cache/precompiled statement). * Return the connection to the pool and start over. Am I missing an important point here (like JDBC drivers capable of reusing cached statements regardless of the connection) or is my analysis correct ? The different [sources](https://stackoverflow.com/questions/6094529/prepared-statements-along-with-connection-pooling) I found state it is not possible, but why ?
2015/06/26
[ "https://Stackoverflow.com/questions/31069933", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1434561/" ]
For your scheme to work, you'd need to be able to get the connection that already has that statement prepared. This falls foul of two points: 1. In JDBC you obtain the connection first, 2. Cached prepared statements (if a driver or connection pool even supports that) aren't exposed in a standardized way (if at all), nor would you be able to introspect them. The performance overhead of finding the right connection (and the subsequent contention on the few connections that already have it prepared) would probably undo any benefit of reusing the prepared statement. Also note that some database systems have a server-side cache for prepared statements (meaning that the plan etc. is already available), limiting the overhead of a new prepare from the client. If you really think the performance benefit is big enough, you should consider using a data source specific for this functionality (so it is almost guaranteed that the connection will have the statement in its cache).
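One way to read "a data source specific for this functionality" is to reserve a small, separate pool for the hot statement so that its few connections keep that statement cached. A sketch with HikariCP (which the question mentions) might look like the following; the URL and pool sizes are placeholders, and whether this actually helps should be measured rather than assumed:

```java
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class PoolSetup {
    // General-purpose pool for ordinary traffic.
    static HikariDataSource mainPool(String jdbcUrl) {
        HikariConfig cfg = new HikariConfig();
        cfg.setJdbcUrl(jdbcUrl);           // placeholder URL
        cfg.setMaximumPoolSize(20);
        return new HikariDataSource(cfg);
    }

    // Small dedicated pool reserved for the one hot, frequently prepared statement,
    // so its connections are very likely to have that statement in their cache.
    static HikariDataSource hotQueryPool(String jdbcUrl) {
        HikariConfig cfg = new HikariConfig();
        cfg.setJdbcUrl(jdbcUrl);
        cfg.setMaximumPoolSize(4);
        cfg.setPoolName("hot-query-pool"); // purely descriptive
        return new HikariDataSource(cfg);
    }
}
```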
A solution could be for a connection pool implementation to delay retrieving the connection from the pool until Connection.prepareStatement() is called. At that point the connection pool would look up available connections by the SQL statement text and then replay all the calls made before prepareStatement(). This way it would be possible to get a connection with a ready PreparedStatement without the issues others have pointed out. In other words, when you request a connection from the pool, it would return a wrapper that logs everything until the first operation requiring DB access (such as prepareStatement()) is requested. You'd need to ask the vendor of your connection pool to add this feature. I've logged this request with C3P0: <https://github.com/swaldman/c3p0/issues/55>. Hope this helps.
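For illustration only, the bookkeeping such a pool would need might look roughly like this; it is a hypothetical, non-thread-safe sketch of the lookup-by-SQL idea, not part of any real pool's API:

```java
import java.sql.Connection;
import java.util.*;

// Hypothetical sketch: prefer handing out a connection that has already
// prepared the requested SQL, falling back to any idle connection.
class StatementAwarePool {
    private final Deque<Connection> idle = new ArrayDeque<>();
    private final Map<Connection, Set<String>> preparedBy = new HashMap<>();

    Connection borrowFor(String sql) {
        for (Connection c : idle) {
            if (preparedBy.getOrDefault(c, Collections.emptySet()).contains(sql)) {
                idle.remove(c);   // this connection already prepared the statement
                return c;
            }
        }
        Connection c = idle.poll(); // otherwise take any idle connection (may be null)
        if (c != null) {
            preparedBy.computeIfAbsent(c, k -> new HashSet<>()).add(sql);
        }
        return c;
    }

    void giveBack(Connection c) {
        idle.push(c);
    }
}
```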
129,245
Given a Huffman coding tree and the total frequency of the characters, how can I find the worst possible (least optimal) frequencies for each character, i.e. frequencies such that the resulting code requires as many bits as possible?
2020/08/13
[ "https://cs.stackexchange.com/questions/129245", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/125438/" ]
There are several definitions of "unoptimal". As vonbrand noted, Huffman coding falls back to binary encoding if all frequencies are equal. For longest code length, this happens if the frequencies are Fibonacci numbers. For worst compression rate compared to the entropy, this happens with an alphabet of two symbols where $p\_0 = \varepsilon$ and $p\_1 = 1-\varepsilon$. The Shannon entropy approaches $0$ bits per symbol as $\varepsilon \rightarrow 0$, but any Huffman code requires $1$ bit per symbol. For size of the Huffman tree, there is no "worst case", because the size is the same no matter what the frequencies are. For an alphabet of $n$ symbols, the Huffman tree must have $n$ leaves and $n-1$ branches.
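A worked example of the Fibonacci case (the frequencies here are chosen purely for illustration): take six symbols with frequencies $1, 1, 2, 3, 5, 8$. The Huffman merges are $1+1=2$, $2+2=4$, $4+3=7$, $7+5=12$, $12+8=20$, so every internal node lies on one long path and the codeword lengths are $5, 5, 4, 3, 2, 1$ - the deepest leaf has depth $n-1=5$.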
All frequencies the same. Thus there is no space for compression.
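To make this concrete (numbers chosen for illustration): with $2^k$ equally likely symbols, Huffman assigns every symbol a codeword of exactly $k$ bits, which equals the entropy $\log\_2 2^k = k$ bits per symbol, so nothing is saved. If the alphabet size is not a power of two, the tree is still as balanced as possible, with codeword lengths differing by at most one.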
69,151,309
I have this block of code: ``` public class LinearFunction implements Function<Double, Double> { private final double slope; private final double yIntercept; private final double xIntercept; public LinearFunction(double m, double b) { this.slope = m; this.yIntercept = b; this.xIntercept = -yIntercept / slope; } public LinearFunction(LinearFunction f) { this(f.slope, f.yIntercept); } @Override public Double apply(final Double x) { return slope * x + yIntercept; } } ``` For my constructor which passes a `LinearFunction` as a parameter, it says > > Copy constructor does not copy field 'xIntercept' > > > Which I don't understand, because `xIntercept` is getting initialized in the other constructor
2021/09/12
[ "https://Stackoverflow.com/questions/69151309", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15924233/" ]
IntelliJ is complaining that you don't copy the value from one `LinearFunction` instance to the other. Its analysis is apparently not thorough enough to understand that this member is computed directly from the two other members, and can never change afterward (because it's `final`). You could copy it directly to appease IntelliJ, but personally, I'd just suppress this warning - warnings and static analysis are supposed to help you write better, bug-free code, not to control your work. **EDIT:** Adding some more details, as per the comments below: First, if you want to appease IntelliJ, you need to make sure your copy constructor copies all the fields. To do this, I'd refactor out a (private) constructor that takes values for the three fields, and have the other two constructors call it: ``` public LinearFunction(double m, double b) { this(m, b, -b / m); } public LinearFunction(LinearFunction f) { this(f.slope, f.yIntercept, f.xIntercept); } private LinearFunction(double slope, double yIntercept, double xIntercept) { this.slope = slope; this.yIntercept = yIntercept; this.xIntercept = xIntercept; } ``` However, I think the preferred method would be to just suppress this warning by adding the appropriate `@SuppressWarnings` annotation: ``` @SuppressWarnings("CopyConstructorMissesField") public LinearFunction(LinearFunction f) { this(f.slope, f.yIntercept); } ``` If you aren't sure what annotation to use, IntelliJ can help you here (note: I'm using Ultimate Edition, but I'm 99% sure it's the same in the community edition) - place your caret on the constructor's name with the warning and hit `Alt`+`Enter`. This will open a context menu with all the warnings on that constructor. Select the relevant one ("Inspection 'Copy constructor misses fields' options"), and hit the right arrow key to get another context menu where you can choose how to deal with it (e.g., suppress the warning for that specific constructor, for the entire class, edit the IDE settings to disable this warning, etc.): [![enter image description here](https://i.stack.imgur.com/VuYGF.png)](https://i.stack.imgur.com/VuYGF.png)
In this case, you can ignore the warning message because it is a "false alarm". If you want the compiler to stop producing the warning, annotate the method with `@SuppressWarnings("CopyConstructorMissesField")`: ``` @SuppressWarnings("CopyConstructorMissesField") public LinearFunction(LinearFunction f) { this(f.slope, f.yIntercept); } ```
35,730,637
I'm writing unit tests for a project that requires some fairly complex json objects. They have become too verbose to plug into the spec file and I'd like to import local json files into my tests. I've tried a number of solutions that don't really work for this project: [Loading a mock JSON file within Karma+AngularJS test](https://stackoverflow.com/questions/17370427/loading-a-mock-json-file-within-karmaangularjs-test) [Angular-Jasmine, loading json](https://stackoverflow.com/questions/33559483/angular-jasmine-loading-json) [How to load external Json file using karma+Jasmine for angularJS testing?/](https://stackoverflow.com/questions/22003472/how-to-load-external-json-file-using-karmajasmine-for-angularjs-testing) and using javascript with `XMLHttpRequest()`.... What's the proper way to go about loading external files into the spec without creating new dependencies?
2016/03/01
[ "https://Stackoverflow.com/questions/35730637", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5241159/" ]
I think the best way for testing in your case is to use [$httpBackend](https://docs.angularjs.org/api/ngMock/service/$httpBackend). It allows you to use your `$resource` or `$http` and provide them with test data. And if you have to deal with large `json` objects, you can define them in separate js files (in functions), include those files in your test js bundle and just call those functions to return your `json` when you set up the `$httpBackend` service. Hope this helps. **Update** Since tests are just plain old JS, you can create a separate js file and declare a function in it which will return your json. ``` (function (window) { 'use strict'; if (!window.myJsonMocks){ window.myJsonMocks = {}; //creating the object that will contain all the functions with json } window.myJsonMocks.myCustomersQueryResultJson = function(){ return '{"value":[{"CategoryInfo":{"Id":1,"Type":1},"Caption":"Test","Path":"1/#/1"},{"CategoryInfo":{"Id":2,"Type":1},"Caption":"new","Path":"2/#/2"},{"CategoryInfo":{"Id":3,"Type":2},"Caption":"Another one","Path":"3/#/3"}]}' }; })(window); ``` Since you declare it as an IIFE, it will register your function in the global scope so that it will be available in your tests. Just make sure you include it before your test scenarios. And then you just call ``` $httpBackend.when('GET', 'api/customerssdata').respond(window.myJsonMocks.myCustomersQueryResultJson()); ```
Use a fixture. In the plugins section of karma.conf.js, add the 'karma-fixture' package and add: ``` jsonFixturesPreprocessor: { variableName: '__json__' } ``` Then load the file in your test: ``` var path = basepath; var json = fixture.load(path + '/login-response.json'); ```
2,614,677
I want to implement a VBA-style editor in my software solution, but in C# 3.0. The VBA editor (VB 6.5) is obsolete. Does Microsoft have a solution for creating a macro and script editor for .NET? If not, how can I implement something similar?
2010/04/10
[ "https://Stackoverflow.com/questions/2614677", "https://Stackoverflow.com", "https://Stackoverflow.com/users/127891/" ]
You are probably looking for VSTA, [Visual Studio Tools for Applications](http://msdn.microsoft.com/en-us/vstudio/aa700828.aspx), which provides an IDE for .NET languages that can be integrated with your application similar to the VBA IDE.
<http://msdn.microsoft.com/en-us/vstudio/aa700828.aspx> <http://www.winwrap.com/web/basic/default.asp>
4,564,951
I have an ArrayList with a few duplicate items. I need to know the count of each duplicated item. I am using .NET 2.0 so I cannot use LINQ. I had posted a similar question earlier, but my question was not clear. Thanks Prady
2010/12/30
[ "https://Stackoverflow.com/questions/4564951", "https://Stackoverflow.com", "https://Stackoverflow.com/users/193247/" ]
I've done something similar in the past. My solution was to loop through the ArrayList and store the counts in a dictionary. Then loop through the dictionary to display the results: ``` ArrayList list = new ArrayList(); list.Add(1); list.Add("test"); list.Add("test"); list.Add("test"); list.Add(2); list.Add(3); list.Add(2); Dictionary<Object, int> itemCount = new Dictionary<object, int>(); foreach (object o in list) { if (itemCount.ContainsKey(o)) itemCount[o]++; else itemCount.Add(o, 1); } foreach (KeyValuePair<Object, int> item in itemCount) { if (item.Value > 1) Console.WriteLine(item.Key + " count: " + item.Value); } ``` Output: ``` test count: 3 2 count: 2 ``` **Edit** Realized I used the var keyword which is not a 2.0 feature. Replaced it with KeyValuePair.
**Option 1**: Sort the list and then count adjacent equal items (requires you to override the Equals method for your class) **Option 2**: Use your unique identifier (however you're defining two objects to be equal) as the key for a Dictionary and add each of your objects to that entry.
4,564,951
I have an ArrayList with a few duplicate items. I need to know the count of each duplicated item. I am using .NET 2.0 so I cannot use LINQ. I had posted a similar question earlier, but my question was not clear. Thanks Prady
2010/12/30
[ "https://Stackoverflow.com/questions/4564951", "https://Stackoverflow.com", "https://Stackoverflow.com/users/193247/" ]
I've done something similar in the past. My solution was to loop through the ArrayList and store the counts in a dictionary. Then loop through the dictionary to display the results: ``` ArrayList list = new ArrayList(); list.Add(1); list.Add("test"); list.Add("test"); list.Add("test"); list.Add(2); list.Add(3); list.Add(2); Dictionary<Object, int> itemCount = new Dictionary<object, int>(); foreach (object o in list) { if (itemCount.ContainsKey(o)) itemCount[o]++; else itemCount.Add(o, 1); } foreach (KeyValuePair<Object, int> item in itemCount) { if (item.Value > 1) Console.WriteLine(item.Key + " count: " + item.Value); } ``` Output: ``` test count: 3 2 count: 2 ``` **Edit** Realized I used the var keyword which is not a 2.0 feature. Replaced it with KeyValuePair.
I needed something similar for a project a long time ago, and made a function for it: ``` static Dictionary<object, int> GetDuplicates(ArrayList list, out ArrayList uniqueList) { uniqueList = new ArrayList(); Dictionary<object, int> dups = new Dictionary<object, int>(); foreach (object o in list) { if (uniqueList.Contains(o)) if (!dups.ContainsKey(o)) dups.Add(o, 2); else dups[o]++; else uniqueList.Add(o); } return dups; } ```
2,132,369
I have an array: ``` $array = array( 'key1' => 'value1', 'key2' => 'value2', 'key3' => 'value3', 'key4' => 'value4', 'key5' => 'value5', ); ``` and I would like to get a part of it with specified keys - for example `key2, key4, key5`. Expected result: ``` $result = array( 'key2' => 'value2', 'key4' => 'value4', 'key5' => 'value5', ); ``` What is the fastest way to do it ?
2010/01/25
[ "https://Stackoverflow.com/questions/2132369", "https://Stackoverflow.com", "https://Stackoverflow.com/users/223386/" ]
You need the [`array_intersect_key`](http://www.php.net/manual/en/function.array-intersect-key.php) function: ``` $result = array_intersect_key($array, array('key2'=>1, 'key4'=>1, 'key5'=>1)); ``` Also [`array_flip`](http://www.php.net/manual/en/function.array-flip.php) can help if your keys are stored in an array as values: ``` $result = array_intersect_key( $array, array_flip(array('key2', 'key4', 'key5')) ); ```
The only way I see is to iterate the array and construct a new one: either walk the array with `array_walk` and build the new one, or construct a matching array and use `array_intersect_key` et al.
2,132,369
I have an array: ``` $array = array( 'key1' => 'value1', 'key2' => 'value2', 'key3' => 'value3', 'key4' => 'value4', 'key5' => 'value5', ); ``` and I would like to get a part of it with specified keys - for example `key2, key4, key5`. Expected result: ``` $result = array( 'key2' => 'value2', 'key4' => 'value4', 'key5' => 'value5', ); ``` What is the fastest way to do it ?
2010/01/25
[ "https://Stackoverflow.com/questions/2132369", "https://Stackoverflow.com", "https://Stackoverflow.com/users/223386/" ]
You need the [`array_intersect_key`](http://www.php.net/manual/en/function.array-intersect-key.php) function: ``` $result = array_intersect_key($array, array('key2'=>1, 'key4'=>1, 'key5'=>1)); ``` Also [`array_flip`](http://www.php.net/manual/en/function.array-flip.php) can help if your keys are stored in an array as values: ``` $result = array_intersect_key( $array, array_flip(array('key2', 'key4', 'key5')) ); ```
You can use [`array_intersect_key`](http://php.net/array_intersect_key) and [`array_fill_keys`](http://php.net/array_fill_keys) to do so: ``` $keys = array('key2', 'key4', 'key5'); $result = array_intersect_key($array, array_fill_keys($keys, null)); ``` [`array_flip`](http://php.net/array_flip) instead of `array_fill_keys` will also work: ``` $keys = array('key2', 'key4', 'key5'); $result = array_intersect_key($array, array_flip($keys)); ```
2,132,369
I have an array: ``` $array = array( 'key1' => 'value1', 'key2' => 'value2', 'key3' => 'value3', 'key4' => 'value4', 'key5' => 'value5', ); ``` and I would like to get a part of it with specified keys - for example `key2, key4, key5`. Expected result: ``` $result = array( 'key2' => 'value2', 'key4' => 'value4', 'key5' => 'value5', ); ``` What is the fastest way to do it ?
2010/01/25
[ "https://Stackoverflow.com/questions/2132369", "https://Stackoverflow.com", "https://Stackoverflow.com/users/223386/" ]
You can use [`array_intersect_key`](http://php.net/array_intersect_key) and [`array_fill_keys`](http://php.net/array_fill_keys) to do so: ``` $keys = array('key2', 'key4', 'key5'); $result = array_intersect_key($array, array_fill_keys($keys, null)); ``` [`array_flip`](http://php.net/array_flip) instead of `array_fill_keys` will also work: ``` $keys = array('key2', 'key4', 'key5'); $result = array_intersect_key($array, array_flip($keys)); ```
The only way I see is to iterate the array and construct a new one: either walk the array with `array_walk` and build the new one, or construct a matching array and use `array_intersect_key` et al.
44,083,492
I've written an android app that requires users to register online. Is there a way to ensure incoming JSON registration data originated from my app? Thanks.
2017/05/20
[ "https://Stackoverflow.com/questions/44083492", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6114016/" ]
> > Is there a way to ensure incoming JSON registration data originated from my app? > > > Well, `Yes` and `No` :) `Yes` ===== When you expose an API to the internet, you usually do not want just anyone to talk to it. Therefore, the first wall most APIs use is a client token that you present to show you are allowed to make API calls. Some APIs want it sent with each request (e.g. in a header), others want you to present it while authenticating your user, as OAuth-based APIs do. Yet another approach is to give you a `secret` string and ask you to hash your request payload (i.e. the JSON) together with that `secret` and include the resulting checksum in the request. In that case, the API does the same computation once your request arrives at the backend: since it knows your `secret` string, it can repeat the hashing and verify that the checksum you sent matches. If you have such a mechanism, you should be able to tell which client (software) is talking to you and whether it is allowed to do so. If you do not have it implemented, I'd just add it (this additionally lets you ban certain clients, e.g. old, outdated versions of your apps, if needed, simply by blacklisting their "secret"/tokens). `No` ==== Unfortunately, all of these keys, secrets, etc. are part of the client, which in the case of an app means they are usually included in the app binary. Since binaries must be released to the public, their content cannot be considered fully secret: with some work the secret can be extracted and then used to fake requests on behalf of that app. So telling whether a call came from your original code or was sent by an impostor is, unfortunately, impossible.
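As an illustration of the request-signing idea described in the answer above, here is a minimal sketch (my own, not code from the answer; the `ApiClient` class, the shared secret value, and the `X-Signature` header name are all hypothetical). The client hashes the JSON payload with a secret baked into the app and sends the checksum along with the request; the backend, knowing the same secret, repeats the computation and compares:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;

public class ApiClient {
    // Hypothetical shared secret; in a real app it ships inside the binary,
    // which is exactly why the "No" part of the answer above still applies.
    private static final String SHARED_SECRET = "replace-with-your-secret";

    /** Computes the checksum the backend would recompute and compare against. */
    public static String sign(String jsonPayload) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(SHARED_SECRET.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
        byte[] digest = mac.doFinal(jsonPayload.getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b)); // hex-encode each byte
        }
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {
        String json = "{\"email\":\"user@example.com\"}";
        // Send the payload plus the checksum, e.g. in an "X-Signature" header (name is an assumption).
        System.out.println("X-Signature: " + sign(json));
    }
}
```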
You can generate a random hash and/or a validator/verification hash (e.g. like a public key). The easiest form: your app stores the hash, including an expiration time, e.g. a few minutes. Send the verification hash to the user to be added with the registration, so only your app can generate the tokens. For example: ``` $token = bin2hex(openssl_random_pseudo_bytes(16)); // or $token = bin2hex(random_bytes(16)); ``` If you don't recognize the incoming hash, something other than your app is trying to register.
44,083,492
I've written an android app that requires users to register online. Is there a way to ensure incoming JSON registration data originated from my app? Thanks.
2017/05/20
[ "https://Stackoverflow.com/questions/44083492", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6114016/" ]
> > Is there a way to ensure incoming JSON registration data originated from my app? > > > Well, `Yes` and `No` :) `Yes` ===== When you expose an API to the internet, you usually do not want just anyone to talk to it. Therefore, the first wall most APIs use is a client token that you present to show you are allowed to make API calls. Some APIs want it sent with each request (e.g. in a header), others want you to present it while authenticating your user, as OAuth-based APIs do. Yet another approach is to give you a `secret` string and ask you to hash your request payload (i.e. the JSON) together with that `secret` and include the resulting checksum in the request. In that case, the API does the same computation once your request arrives at the backend: since it knows your `secret` string, it can repeat the hashing and verify that the checksum you sent matches. If you have such a mechanism, you should be able to tell which client (software) is talking to you and whether it is allowed to do so. If you do not have it implemented, I'd just add it (this additionally lets you ban certain clients, e.g. old, outdated versions of your apps, if needed, simply by blacklisting their "secret"/tokens). `No` ==== Unfortunately, all of these keys, secrets, etc. are part of the client, which in the case of an app means they are usually included in the app binary. Since binaries must be released to the public, their content cannot be considered fully secret: with some work the secret can be extracted and then used to fake requests on behalf of that app. So telling whether a call came from your original code or was sent by an impostor is, unfortunately, impossible.
Write this **Auth** function in your API controller. ``` protected function Auth($id,$token){ if(!empty($id) and !empty($token)){ $user = DB::table('users')->where('id',$id) ->where('token',$token)->count(); if($user == 1){ return true; }else{ return false; } }else{ return false; }} ``` When the user logs in for the first time, use this method to store a random hash as the user's token. (Store this token in your users table as key => token.) ``` $randomString = $this->random_hash(); ``` Then, for any API request, take the id and access_token for that user as input. ``` $auth = $this->Auth($id,$access_token); ``` If `$auth == 1` then you can proceed with your function.
33,015,935
I'm trying to use **DynamicComponentLoader** and the sample code is below: ``` import {Component, View, bootstrap, OnInit, DynamicComponentLoader} from 'angular2'; ... DynamicComponentLoader.loadIntoLocation(PersonsComponent, itemElement); ``` When I run the app, I get the error: > > DynamicComponentLoader.loadIntoLocation is not a function > > > How can I use DynamicComponentLoader.loadIntoLocation in ES6 JavaScript using class?
2015/10/08
[ "https://Stackoverflow.com/questions/33015935", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1768008/" ]
`DynamicComponentLoader` is a **class**. It doesn't have any static method `loadIntoLocation`. But instances of this class have it. You must instantiate `DynamicComponentLoader` using dependency injection: ``` import { Component, View, DynamicComponentLoader, ElementRef } from 'angular2/angular2' @Component({ selector: 'dc' }) @View({ template: '<b>Some template</b>' }) class DynamicComponent {} @Component({ selector: 'my-app' }) @View({ template: '<div #container></div>' }) export class App { constructor( dynamicComponentLoader: DynamicComponentLoader, elementRef: ElementRef ) { dynamicComponentLoader.loadIntoLocation(DynamicComponent, elementRef, 'container'); } } ``` See [this plunker](http://plnkr.co/edit/OEYeJv7UEQMrfTQibpGB?p=preview) **UPD** As for ES6, I've answered it [here](https://stackoverflow.com/questions/33034930/how-to-use-angular2-dynamiccomponentloader-in-es6/33036634#33036634)
`DynamicComponentLoader` is long gone. <https://angular.io/guide/dynamic-component-loader> explains how it's done in newer Angular version. > > > ``` > loadComponent() { > this.currentAddIndex = (this.currentAddIndex + 1) % this.ads.length; > let adItem = this.ads[this.currentAddIndex]; > > let componentFactory = this.componentFactoryResolver.resolveComponentFactory(adItem.component); > > let viewContainerRef = this.adHost.viewContainerRef; > viewContainerRef.clear(); > > let componentRef = viewContainerRef.createComponent(componentFactory); > (<AdComponent>componentRef.instance).data = adItem.data; > } > > ``` > >
45,857,019
I have an abstract factory class `Factory` with the factory method `getProduct()`, and its child classes. I have an abstract product class `Product` and its child classes. The factory classes create objects of the product classes. ``` abstract class Factory { abstract function getProduct(); } class FirstFactory extends Factory { public function getProduct() { return new FirstProduct(); } } abstract class Product { }; class FirstProduct extends Product { } ``` As a result I can use this client code: ``` $factory = new FirstFactory(); $firstProduct = $factory->getProduct(); $factory = new SecondFactory(); $secondProduct = $factory->getProduct(); ``` Question: why is this pattern necessary? In client code I could just use the classes directly: ``` $firstProduct = new FirstProduct(); $secondProduct = new SecondProduct(); ```
2017/08/24
[ "https://Stackoverflow.com/questions/45857019", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8510521/" ]
If you know at compile time that `firstProduct` is always of type `FirstProduct` and `secondProduct` is always of type `SecondProduct` then there is no need for a factory method. A factory method is only useful if you want to create a product that might be `FirstProduct` or might be `SecondProduct` depending on the runtime type of a factory. Perhaps, for example, the type of the factory is decided by user input.
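A minimal, self-contained sketch of that point (written in Java purely for illustration; the class names mirror the question's PHP classes, and the command-line argument is a hypothetical stand-in for any runtime decision such as user input or configuration):

```java
interface Product {}
class FirstProduct implements Product {}
class SecondProduct implements Product {}

interface Factory {
    Product getProduct();
}
class FirstFactory implements Factory {
    public Product getProduct() { return new FirstProduct(); }
}
class SecondFactory implements Factory {
    public Product getProduct() { return new SecondProduct(); }
}

public class Demo {
    public static void main(String[] args) {
        // The concrete factory is only known at runtime; the client code below
        // never names FirstProduct or SecondProduct directly.
        Factory factory = (args.length > 0 && args[0].equals("second"))
                ? new SecondFactory()
                : new FirstFactory();
        Product product = factory.getProduct();
        System.out.println(product.getClass().getSimpleName());
    }
}
```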
A factory can be injected instead of the actual class. Suppose you have a class that can only be instantiated at runtime based on some specific conditions; in this case you cannot do `new Foo(...args)`. One option is to inject a `FooFactory` instead and have it create the `Foo` instance for you.
45,857,019
I have an abstract factory class `Factory` with the factory method `getProduct()`, and its child classes. I have an abstract product class `Product` and its child classes. The factory classes create objects of the product classes. ``` abstract class Factory { abstract function getProduct(); } class FirstFactory extends Factory { public function getProduct() { return new FirstProduct(); } } abstract class Product { }; class FirstProduct extends Product { } ``` As a result I can use this client code: ``` $factory = new FirstFactory(); $firstProduct = $factory->getProduct(); $factory = new SecondFactory(); $secondProduct = $factory->getProduct(); ``` Question: why is this pattern necessary? In client code I could just use the classes directly: ``` $firstProduct = new FirstProduct(); $secondProduct = new SecondProduct(); ```
2017/08/24
[ "https://Stackoverflow.com/questions/45857019", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8510521/" ]
It's important to note that the Factory Pattern is not always the best solution, and for simple cases it is often better to just use a plain `new`. For instance, your example would probably be better without a Factory. The Factory Pattern shows all its power when you don't need to work on the exact implementation of a class but stay at the 'abstraction layer' (e.g. interfaces and abstract classes). You can have a look at the "Dependency Inversion Principle DIP" (Depend upon abstraction. Do not depend upon concrete classes). Factory use case: switching Database with one line ================================================== For instance, let's say you have software that uses a database system. For whatever reason, you know the concrete database used (MongoDB, SQL...) may change later. (Or you may even just need a fake hard-coded database in files during development.) The factory pattern allows you to switch from one to another in just one line, by calling the right Factory, since all the implementation depends upon abstraction. (This is actually the DAO pattern, which makes great use of the Factory Pattern; see the Oracle documentation for more information: <http://www.oracle.com/technetwork/java/dataaccessobject-138824.html>) Concrete example: Game with 2 Factions ====================================== Here is a concrete and simple example of an implementation. ### You have 2 Units * Peon * Warrior ### You have 2 Factions * Orc + Orc Peon + Orc Warrior * Human + Human Peon + Human Warrior ### 2 Players * Orc player (Use only Orc units) * Human player (Use only Human units) You want to instantiate both players, but the concrete player class should be implemented in a generic way, so that it can be reused later. This is also really important in case you add several new factions; you don't want to spend time going back to your player class. The code sample =============== To build and run it, copy it into Main.java, then run `javac Main.java` and `java Main`. The result should be [![enter image description here](https://i.stack.imgur.com/oZ9NF.png)](https://i.stack.imgur.com/oZ9NF.png) ``` // Factories ---------------------------------------- abstract class AbsUnitFactory { public abstract Warrior creaWarrior(); public abstract Peon creaPeon(); } class OrcFactory extends AbsUnitFactory { public Warrior creaWarrior() { return new OrcWarrior(); } public Peon creaPeon() { return new OrcPeon(); } } class HumanFactory extends AbsUnitFactory { public Warrior creaWarrior() { return new HumanWarrior(); } public Peon creaPeon() { return new HumanPeon(); } } abstract class Unit { public abstract String getRole(); public abstract String getFaction(); @Override public String toString() { String str = new String(); str += "[UNIT]\n"; str += " Role: " + this.getRole() + "\n"; str += " Faction: " + this.getFaction() + "\n"; return str; } } // Warrior Units ---------------------------------------- abstract class Warrior extends Unit { @Override public String getRole() { return "I'm a badass Warrior with the biggest sword!"; } } class OrcWarrior extends Warrior { @Override public String getFaction() { return "Orc"; } } class HumanWarrior extends Warrior { @Override public String getFaction() { return "Human"; } } // Peon Units ---------------------------------------- abstract class Peon extends Unit { @Override public String getRole() { return "I'm a little simple peon... Ready to work."; } } class HumanPeon extends Peon { @Override public String getFaction() { return "Human"; } } class OrcPeon extends Peon { @Override public String getFaction() { return "Orc"; } } // Main components ---------------------------------------- class Player { private AbsUnitFactory factory; private Peon myPeon; private Warrior myWarrior; public Player(AbsUnitFactory pFactory) { this.factory = pFactory; this.myPeon = this.factory.creaPeon(); this.myWarrior = this.factory.creaWarrior(); } @Override public String toString() { return this.myPeon.toString() + this.myWarrior.toString(); } } class Main { public static void main(String[] args) { AbsUnitFactory humanFactory = new HumanFactory(); AbsUnitFactory orcFactory = new OrcFactory(); Player humanPlayer = new Player(humanFactory); Player orcPlayer = new Player(orcFactory); System.out.println("***** Human player *****"); System.out.println(humanPlayer.toString()); System.out.println("***** Orc player *****"); System.out.println(orcPlayer.toString()); } } ``` See how the Player class can be reused for any faction, and the only line that defines which faction to use is the Factory. (You may even add a singleton.) More resources ============== These are books I really appreciate (about design patterns): * Head First Design Patterns (<http://shop.oreilly.com/product/9780596007126.do>) * Design Patterns: Elements of Reusable Object-Oriented Software (<https://www.barnesandnoble.com/w/design-patterns-erich-gamma/1100886879>)
If you know at compile time that `firstProduct` is always of type `FirstProduct` and `secondProduct` is always of type `SecondProduct` then there is no need for a factory method. A factory method is only useful if you want to create a product that might be `FirstProduct` or might be `SecondProduct` depending on the runtime type of a factory. Perhaps, for example, the type of the factory is decided by user input.
45,857,019
I have an abstract factory class `Factory` with the factory method `getProduct()`, and its child classes. I have an abstract product class `Product` and its child classes. The factory classes create objects of the product classes. ``` abstract class Factory { abstract function getProduct(); } class FirstFactory extends Factory { public function getProduct() { return new FirstProduct(); } } abstract class Product { }; class FirstProduct extends Product { } ``` As a result I can use this client code: ``` $factory = new FirstFactory(); $firstProduct = $factory->getProduct(); $factory = new SecondFactory(); $secondProduct = $factory->getProduct(); ``` Question: why is this pattern necessary? In client code I could just use the classes directly: ``` $firstProduct = new FirstProduct(); $secondProduct = new SecondProduct(); ```
2017/08/24
[ "https://Stackoverflow.com/questions/45857019", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8510521/" ]
It's important to note that the Factory Pattern is not always the best solution, and for simple cases it is often better to just use a plain `new`. For instance, your example would probably be better without a Factory. The Factory Pattern shows all its power when you don't need to work on the exact implementation of a class but stay at the 'abstraction layer' (e.g. interfaces and abstract classes). You can have a look at the "Dependency Inversion Principle DIP" (Depend upon abstraction. Do not depend upon concrete classes). Factory use case: switching Database with one line ================================================== For instance, let's say you have software that uses a database system. For whatever reason, you know the concrete database used (MongoDB, SQL...) may change later. (Or you may even just need a fake hard-coded database in files during development.) The factory pattern allows you to switch from one to another in just one line, by calling the right Factory, since all the implementation depends upon abstraction. (This is actually the DAO pattern, which makes great use of the Factory Pattern; see the Oracle documentation for more information: <http://www.oracle.com/technetwork/java/dataaccessobject-138824.html>) Concrete example: Game with 2 Factions ====================================== Here is a concrete and simple example of an implementation. ### You have 2 Units * Peon * Warrior ### You have 2 Factions * Orc + Orc Peon + Orc Warrior * Human + Human Peon + Human Warrior ### 2 Players * Orc player (Use only Orc units) * Human player (Use only Human units) You want to instantiate both players, but the concrete player class should be implemented in a generic way, so that it can be reused later. This is also really important in case you add several new factions; you don't want to spend time going back to your player class. The code sample =============== To build and run it, copy it into Main.java, then run `javac Main.java` and `java Main`. The result should be [![enter image description here](https://i.stack.imgur.com/oZ9NF.png)](https://i.stack.imgur.com/oZ9NF.png) ``` // Factories ---------------------------------------- abstract class AbsUnitFactory { public abstract Warrior creaWarrior(); public abstract Peon creaPeon(); } class OrcFactory extends AbsUnitFactory { public Warrior creaWarrior() { return new OrcWarrior(); } public Peon creaPeon() { return new OrcPeon(); } } class HumanFactory extends AbsUnitFactory { public Warrior creaWarrior() { return new HumanWarrior(); } public Peon creaPeon() { return new HumanPeon(); } } abstract class Unit { public abstract String getRole(); public abstract String getFaction(); @Override public String toString() { String str = new String(); str += "[UNIT]\n"; str += " Role: " + this.getRole() + "\n"; str += " Faction: " + this.getFaction() + "\n"; return str; } } // Warrior Units ---------------------------------------- abstract class Warrior extends Unit { @Override public String getRole() { return "I'm a badass Warrior with the biggest sword!"; } } class OrcWarrior extends Warrior { @Override public String getFaction() { return "Orc"; } } class HumanWarrior extends Warrior { @Override public String getFaction() { return "Human"; } } // Peon Units ---------------------------------------- abstract class Peon extends Unit { @Override public String getRole() { return "I'm a little simple peon... Ready to work."; } } class HumanPeon extends Peon { @Override public String getFaction() { return "Human"; } } class OrcPeon extends Peon { @Override public String getFaction() { return "Orc"; } } // Main components ---------------------------------------- class Player { private AbsUnitFactory factory; private Peon myPeon; private Warrior myWarrior; public Player(AbsUnitFactory pFactory) { this.factory = pFactory; this.myPeon = this.factory.creaPeon(); this.myWarrior = this.factory.creaWarrior(); } @Override public String toString() { return this.myPeon.toString() + this.myWarrior.toString(); } } class Main { public static void main(String[] args) { AbsUnitFactory humanFactory = new HumanFactory(); AbsUnitFactory orcFactory = new OrcFactory(); Player humanPlayer = new Player(humanFactory); Player orcPlayer = new Player(orcFactory); System.out.println("***** Human player *****"); System.out.println(humanPlayer.toString()); System.out.println("***** Orc player *****"); System.out.println(orcPlayer.toString()); } } ``` See how the Player class can be reused for any faction, and the only line that defines which faction to use is the Factory. (You may even add a singleton.) More resources ============== These are books I really appreciate (about design patterns): * Head First Design Patterns (<http://shop.oreilly.com/product/9780596007126.do>) * Design Patterns: Elements of Reusable Object-Oriented Software (<https://www.barnesandnoble.com/w/design-patterns-erich-gamma/1100886879>)
A factory can be injected instead of the actual class. Suppose you have a class that can only be instantiated at runtime based on some specific conditions; in this case you cannot do `new Foo(...args)`. One option is to inject a `FooFactory` instead and have it create the `Foo` instance for you.
325,743
What I did ---------- Aura Component ``` <aura:component> <lightning:flow aura:id="flowData"/> </aura:component> ``` Controller ``` ({ init: function (component, event, helper) { console.log('Hello world from Component'); var flow = cmp.find("flowData"); flow.startFlow("Self_Registration"); } }) ``` Aura App ``` <aura:application access="Global" extends="ltng:outApp" implements="ltng:allowGuestAccess"> <aura:dependency resource="c:externalLeadGenerator"/> </aura:application> ``` Controller ``` ({ init: function (component, event, helper) { console.log('Hello world from App'); } }) ``` My website (running WordPress) is whitelisted in CORS. This is an HTML code embedded on my page. ``` <!-- wp:html --> <script src="https://ajfmo-developer-edition.na174.force.com/pmc/lightning/lightning.out.js"></script> <script> $Lightning.use("c:leadGeneratorApp", // name of the Lightning app function() { // Callback once framework and app loaded $Lightning.createComponent( "c:externalLeadGenerator", // top-level component of your app { }, // attributes to set on the component when created "lightningLocator", // the DOM location to insert the component function(cmp) { console.log('Hi from callback');// callback when component is created and active on the page } ); }, 'https://ajfmo-developer-edition.na174.force.com/pmc' // Community endpoint ); </script> <!-- /wp:html --> <!-- wp:paragraph --> <div id="lightningLocator"> <p>Something</p> </div> <!-- /wp:paragraph --> ``` This is the result. [![enter image description here](https://i.stack.imgur.com/Xlelx.png)](https://i.stack.imgur.com/Xlelx.png) --- What am I missing?
2020/11/07
[ "https://salesforce.stackexchange.com/questions/325743", "https://salesforce.stackexchange.com", "https://salesforce.stackexchange.com/users/60493/" ]
There is no **init handler** in the component code. ``` <aura:handler name="init" value="{!this}" action="{!c.init}"/> ```
Have you ensured that the site guest user for your community endpoint has the right permissions to run the flow? There might be no errors starting the Lightning-out container but the guest user may be running into an issue accessing the flow. More details about site guest users and flow access can be found [here](https://help.salesforce.com/articleView?id=rss_flow_guestuser.htm&type=5). You could probably test this by adding the flow into a publicly accessible community and accessing it as the site guest user. I believe the guest user would still need the "Run Flow" permission even if the flow lives in a Salesforce community.
36,558,810
I have tried every way possible, but I am still not able to logout the current user. Currently I have the following code: ``` _authenticationManager.SignOut(DefaultAuthenticationTypes.ApplicationCookie); string sKey = (string)HttpContext.Current.Session["user"]; string sUser = Convert.ToString(HttpContext.Current.Cache[sKey]); HttpContext.Current.Cache.Remove(sUser); HttpContext.Current.Session.Clear(); HttpContext.Current.Response.Cookies.Clear(); HttpContext.Current.Request.Cookies.Clear(); HttpContext.Current.Session.Abandon(); ``` After this, the session is still not cleared. Any ideas? Authentication startup: ``` app.UseCookieAuthentication(new CookieAuthenticationOptions { AuthenticationType = DefaultAuthenticationTypes.ApplicationCookie, LoginPath = new PathString("/Account/Login") }); ``` SignIn Code: ``` public override ApplicationUser Handle([NotNull]LoginCommand command) { var user = _userManager.Find(command.Login, command.Password); if (user == null) { throw new RentalApplicationValidationException("No valid login"); } _authenticationManager.SignOut(DefaultAuthenticationTypes.ApplicationCookie); var identity = _userManager.CreateIdentity(user, DefaultAuthenticationTypes.ApplicationCookie); _authenticationManager.SignIn(new AuthenticationProperties() { IsPersistent = false }, identity); return user; } ```
2016/04/11
[ "https://Stackoverflow.com/questions/36558810", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3647304/" ]
Your syntax is wrong, because only the first statement in a `for` preamble is the declaration part; you've attempted to use two statements for that job. However, you *can* declare multiple variables in a single statement, which is what you presumably intended to do: ``` for (int i = 0, n = 10; n < MAX; i++, n++) // ^^^^^^^^^^^^^^^^^ ``` Now, these variables only exist for the duration of the loop. That is their "scope". For your next loop, you have to declare them again. So, instead of this: ``` for(i=0; n=11; n<MAX; i++, n++) ``` repeat the above, substituting `11` for `10` in the new declaration (if that wasn't a typo). Also, I believe you wrote `input[i]` when you meant `input[n]`. Otherwise `n` is pointless in the first place.
``` void reformat(double input[], double newarray[], int MAX) { for(int i=0; int n=10; n<MAX; i++, n++) { newarray[n] = input[i]*2; } for(i=0; n=11; n<MAX; i++, n++) { newarray[i] = pow(input[n], 0.3); } return; } ``` When you declare a variable in a **for** block as you have done here with the variable **i** , it only exists for the duration of the following block. The second **for** block above needs to declare a new loop variable, since the first i no longer exists, after its block has completed executing.
36,558,810
I have tried every way possible, but I am still not able to logout the current user. Currently I have the following code: ``` _authenticationManager.SignOut(DefaultAuthenticationTypes.ApplicationCookie); string sKey = (string)HttpContext.Current.Session["user"]; string sUser = Convert.ToString(HttpContext.Current.Cache[sKey]); HttpContext.Current.Cache.Remove(sUser); HttpContext.Current.Session.Clear(); HttpContext.Current.Response.Cookies.Clear(); HttpContext.Current.Request.Cookies.Clear(); HttpContext.Current.Session.Abandon(); ``` After this, the session is still not cleared. Any ideas? Authentication startup: ``` app.UseCookieAuthentication(new CookieAuthenticationOptions { AuthenticationType = DefaultAuthenticationTypes.ApplicationCookie, LoginPath = new PathString("/Account/Login") }); ``` SignIn Code: ``` public override ApplicationUser Handle([NotNull]LoginCommand command) { var user = _userManager.Find(command.Login, command.Password); if (user == null) { throw new RentalApplicationValidationException("No valid login"); } _authenticationManager.SignOut(DefaultAuthenticationTypes.ApplicationCookie); var identity = _userManager.CreateIdentity(user, DefaultAuthenticationTypes.ApplicationCookie); _authenticationManager.SignIn(new AuthenticationProperties() { IsPersistent = false }, identity); return user; } ```
2016/04/11
[ "https://Stackoverflow.com/questions/36558810", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3647304/" ]
Your syntax is wrong, because only the first statement in a `for` preamble is the declaration part; you've attempted to use two statements for that job. However, you *can* declare multiple variables in a single statement, which is what you presumably intended to do: ``` for (int i = 0, n = 10; n < MAX; i++, n++) // ^^^^^^^^^^^^^^^^^ ``` Now, these variables only exist for the duration of the loop. That is their "scope". For your next loop, you have to declare them again. So, instead of this: ``` for(i=0; n=11; n<MAX; i++, n++) ``` repeat the above, substituting `11` for `10` in the new declaration (if that wasn't a typo). Also, I believe you wrote `input[i]` when you meant `input[n]`. Otherwise `n` is pointless in the first place.
You've got a couple of problems with the use of variables in the `for` loops in `reformat`. ``` for(int i=0; int n=10; n<MAX; i++, n++) ``` is not right. A `for` loop cannot have four clauses. You need to use: ``` for(int i=0, n=10; n<MAX; i++, n++) ``` The second problem is that the above declaration makes `i` and `n` valid only in the `for` loop. Their scope ends at the end of that `for` loop. They are not valid in the second `for` loop. They need to be redeclared in the second `for` loop. ``` for(int i=0, n=11; n<MAX; i++, n++) ```
4,475
On the main site, this ``` <!-- language: lang-shell --> # do these steps on each distro mv Documents Documents.old ln -s /shared/your-user/Documents Documents mv -i Documents.old/* Documents/ rmdir Documents.old ``` produced rather silly syntax highlighting (screenshot, since I can't get *any* syntax highlighting to happen on meta): [![Above example, screenshot from the preview](https://i.stack.imgur.com/Bg5U0.png)](https://i.stack.imgur.com/Bg5U0.png) It seems to think `Documents` means something to shell. And possibly that `rmdir` introduces a comment (or at least it uses the same color as the comment up top). I tried `lang-sh` and `lang-bash` as well, both do the same. The documentation says shell should be supported: [What is syntax highlighting and how does it work?](https://meta.stackexchange.com/questions/184108/what-is-syntax-highlighting-and-how-does-it-work)
2017/05/15
[ "https://unix.meta.stackexchange.com/questions/4475", "https://unix.meta.stackexchange.com", "https://unix.meta.stackexchange.com/users/977/" ]
You either need to specify a tag name, or a language prefixed with `lang-`. There is no registered language named `shell`, so `lang-shell` kicks it into a default mode the same as if you'd done `lang-foobarbaz`, and apparently in that mode it looks for C-style comments. If you do `<!-- language: lang-sh -->`, or `<!-- language: shell -->` (which tells it to use the language associated with the [shell](https://unix.stackexchange.com/questions/tagged/shell "show questions tagged 'shell'") tag, which is `lang-sh`), it should work, and the last section doesn't get highlighted as a comment. `Documents` is still wrong though, it gets colored as a typename, which seems to be happening because it's capitalized. I tried to look into [code-prettify](https://github.com/google/code-prettify) to see why they do that, but I don't even see support for `lang-sh`, so it's not clear to me how this all fits together. But whatever the problem is, it's probably in code-prettify, not anything in Stack Exchange.
You didn't notice (or at least didn't mention) that the last word of the fourth line, and the entire fifth line, are colored gray as comments.  That is almost surely because, right in the middle of the fourth line, you have `/*`, so everything up through a matching `*/` is a comment. I find `lang-sh` formatting to be annoying and distracting, so I often suppress it with `lang-none`.  You can also get a code block displayed in the fixed-width font that we're accustomed to, but no silly colors, by putting it in a `<pre>`...`</pre>` block.  (Of course then you must use `&lt;` instead of `<`, and maybe also `&gt;` in place of `>`.) This is documented somewhere, but I don't know how well known it is, so I'll repeat it here. ``` <!-- language: lang-*xxx* --> ``` sets the pretty-printing language for the immediately following code block. ``` <!-- language**-all**: lang-*xxx* --> ``` sets it for all remaining code blocks in the post (or until it is overridden by another ``` <!-- language: lang-*xxx* --> ``` ).
185,932
I have a used iPhone 4 with 16GB of storage, running iOS 7.1.2. The problem with this iPhone is that the volume is really quiet whenever I'm playing music or when I put a call on speaker. The volume is currently at its max (set by tapping the buttons on the side). ![enter image description here](https://i.stack.imgur.com/MFtZ0.jpg) *Note: I didn't use the "broken" iPhone for this screenshot; it is just to show what I did to increase the volume to the max.* I have tried rebooting the device a couple of times already; this didn't work. What do I have to do (or what can I do) to make the speaker play at a decent volume again? Do I have to replace it?
2015/05/06
[ "https://apple.stackexchange.com/questions/185932", "https://apple.stackexchange.com", "https://apple.stackexchange.com/users/126759/" ]
The general answer is > > "just be sure to paste into an unoccupied area of the Keynote slide" > > > My issue was that I was trying to paste into an area already occupied by a textbox. It may not be obvious due to a bunch of blank lines that make the box extend longer than immediately apparent.
Be sure you are on the slide by clicking into it. The universal keyboard shortcut Command + V should paste your image. If that doesn't work, then be sure you actually **have** something copied to the clipboard. Alternatively, you can easily drag and drop an image onto a slide.
24,862
What new features/updates are available in Selenium 3 compared with Selenium 2.0? What are the impacts of using Selenium 3.0? What are the recent errors that have been found in Selenium 3.0? Please share the pros and cons and your experience with Selenium 3.0.
2017/01/12
[ "https://sqa.stackexchange.com/questions/24862", "https://sqa.stackexchange.com", "https://sqa.stackexchange.com/users/17411/" ]
Good question; everyone should be aware of the new updates and features in Selenium 3. In my experience, Selenium 3 has a lot of changes at the configuration/setup level. For example, we now have to use GeckoDriver to launch the Firefox driver. There are also some browser compatibility restrictions in Selenium 3 related to the Firefox version (see the last bullet below). * For WebDriver users, it's mostly bug fixes and a drop-in replacement for 2.x. * Selenium Grid bug fixes are done as well. * The Selenium project will now actively support only the WebDriver API. * The Selenium RC APIs have been moved to a “legacy” package. * The original code powering Selenium RC has been replaced with something backed by WebDriver, which is also contained in the "legacy" package. * By a quirk of timing, Mozilla has made changes to Firefox that mean that from Firefox 48 you must use their geckoDriver to use that browser, regardless of whether you're using Selenium 2 or 3. You can read more [here.](https://saucelabs.com/blog/selenium-3-is-coming)
Moving from `Selenium 2` to `Selenium 3.0.1` is fairly easy and takes little effort: [Code Changes](http://seleniumsimplified.com/2016/10/upgrading-to-selenium-3-with-my-first-selenium-project/) One of the biggest changes is that the old Selenium Core libraries will be dropped in 3.0. The focus will shift completely to the WebDriver API. For the last six years it has been advised to switch to the newer WebDriver APIs and to stop using the original RC APIs. With Selenium 3.0, the original implementation of RC has been removed, replaced by one that sits on top of WebDriver. For many users, this change will go completely unnoticed, as they’re no longer using the RC APIs. Firefox is only fully supported at `version 47.0.1` or earlier. Support for later versions of Firefox is provided by `geckodriver`, which is based on the evolving W3C WebDriver spec, and uses the wire protocol in that spec, which is liable to change without notice. The `WebDriver API` has grown to be relevant outside of Selenium. It is used in multiple tools for automation. For example, it's used heavily in mobile testing through tools such as Appium and iOS Driver. The W3C standard will encourage compatibility across different software implementations of the WebDriver API. [W3C Working Draft](https://www.w3.org/TR/webdriver/) There are many more updates; please have a look at the [Change Log](https://raw.githubusercontent.com/SeleniumHQ/selenium/master/java/CHANGELOG)
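To make the GeckoDriver point concrete, here is a minimal sketch using the Selenium 3 Java bindings (my own illustration, not part of either answer; the driver path and URL are placeholders):

```java
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class GeckoSmokeTest {
    public static void main(String[] args) {
        // Placeholder path -- point this at your local geckodriver binary.
        System.setProperty("webdriver.gecko.driver", "/path/to/geckodriver");
        WebDriver driver = new FirefoxDriver(); // Firefox 48+ is driven via geckodriver
        driver.get("https://www.example.com");
        System.out.println(driver.getTitle());
        driver.quit();
    }
}
```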
4,519,875
So far I have it down to \begin{align} \sqrt{x+iy}&=u+iv \\ \Rightarrow x+iy&=u^2-v^2+i(2uv) \\ \Rightarrow x&=u^2-v^2, y=2uv \end{align} But I keep getting stuck here.
2022/08/27
[ "https://math.stackexchange.com/questions/4519875", "https://math.stackexchange.com", "https://math.stackexchange.com/users/620355/" ]
You have arrived at $x = u^2-v^2$ and $y = 2uv$. Now use the second relation to eliminate either $u$ or $v$ from the first relation, and you obtain expressions for $u$ or $v$ in terms of $x$ and $y$. From $4u^4-4xu^2-y^2=0$ we get: $u^2 = 0.5(x+\sqrt{x^2+y^2})$ From $4v^4+4xv^2-y^2=0$ we get: $v^2 = 0.5(-x+\sqrt{x^2+y^2})$ If you take the difference of these two results or multiply them you get the original equations back. If you take the sum, you see that the $(u,v)$ norm is simply the square root of the $(x,y)$ norm. The final step to obtain $u$ and $v$ is to take the square root and choose the correct sign, so that their product satisfies $2uv = y$. The standard choice is to take $u$ greater or equal to zero, and to give $v$ the same sign as $y$. But minus this result is equally valid.
You are almost done. Following your approach, for $u\neq 0$ we obtain $$2uv=y \implies v = \frac y {2u}$$ and then from the first equation $$x=u^2-\left(\frac y {2u}\right)^2 \implies 4u^4-4u^2x-y^2=0 \implies u^2=\frac{x+\sqrt{x^2+y^2}}{2},$$ where the $+$ sign is forced because $u^2\ge 0$ while $x-\sqrt{x^2+y^2}\le 0$. Then, writing $r=\sqrt{x^2+y^2}$, $$u=\pm \sqrt\frac{x+r}{2},\; v= \pm \frac y 2 \sqrt\frac 2 {x+r}.$$ The case $u=0$ can be checked by inspection.
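As a quick sanity check of the formulas in both answers above (my own worked example, not part of either answer), take $x=3$, $y=4$, so $r=\sqrt{3^2+4^2}=5$. Then $$u=\pm\sqrt{\frac{3+5}{2}}=\pm 2,\qquad v=\pm\frac{4}{2}\sqrt{\frac{2}{3+5}}=\pm 1,$$ with the signs chosen so that $2uv=y=4$. Indeed $(2+i)^2=4+4i+i^2=3+4i$, so $\sqrt{3+4i}=\pm(2+i)$.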
155,732
I have this table: [![enter image description here](https://i.stack.imgur.com/0zqIF.png)](https://i.stack.imgur.com/0zqIF.png) Column **F** is my data and column **H** is there to check if column **F** has an **N/A** error. In **H1** I use this formula: ``` ifna(F1,"error") ``` It seems to work fine, but in the 2nd row, when it checks the date, it returns a number instead of a date. How can I fix this?
2021/07/11
[ "https://webapps.stackexchange.com/questions/155732", "https://webapps.stackexchange.com", "https://webapps.stackexchange.com/users/229140/" ]
Use [filter views](https://support.google.com/docs/answer/3540681) to let view-only users sort the sheet. This feature lets multiple simultaneous users sort and filter the sheet without disturbing each other. For additional ease of use, insert links in the frozen section of the sheet to easily switch between filter views, instead of having to go to **Data > Filter views** to switch. See the [Filter views example](https://docs.google.com/spreadsheets/d/1ME5b4J6fQa8RRainxGtQkwTKFCg96N8_k6kCe8ABNyk/edit#gid=2048818001&fvid=336145647) spreadsheet for an illustration. The links look like this: `=hyperlink("#gid=2048818001&fvid=336145647", "Show all")` ...where `gid=...` identifies the sheet and `fvid=...` identifies the filter view. You can get these identifiers from the browser address bar when a filter view is active.
In case you cannot find a solution using formulas, here is a solution using the following code: ``` function onEdit(e) { const as = e.source.getActiveSheet(); const row = e.range.getRow(); const col = e.range.getColumn(); const lc = as.getLastColumn(); const lr = as.getLastRow(); const rg = as.getRange(2, 1, lr - 1, lc); let asc = true; let val = as.getRange(row, col).getValue(); if (as.getName() == 'Sort' && row == 1 && val.match(/↓/) != null) { asc = false; rg.sort({ column: col, ascending: asc }); } else if (as.getName() == 'Sort' && row == 1 && val.match(/↑/) != null) { asc = true; rg.sort({ column: col, ascending: asc }); } } ``` Configure data validation in the header cells using the lists "PLAYERS ↓, PLAYERS ↑", "AT BATS ↓, AT BATS ↑", "HITS ↓, HITS ↑".
28,614,371
On a table, there's a delete trigger that performs some operations and then at the end, executes a select statement, so when you do something like... ``` delete from mytable where id=1 ``` it returns a recordset. Is there a way to save the results of that recordset into a temp table or something? I tried something like this: ``` declare @temptable table (returnvalue int); insert into @temptable (returnvalue) delete from mytable where id=1; ``` But apparently that syntax doesn't work.
2015/02/19
[ "https://Stackoverflow.com/questions/28614371", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3093863/" ]
You are almost there. *And I am not challenging or judging this approach, just providing the answer.* There is [a working example](http://plnkr.co/edit/DNas7u8e8NuJMP2n0RUe?p=preview). You've properly created a new scope. Very good. But that is just an **A**. The essential and crucial **B** is missing. **B** is the opposite of **A** `===` destroying the previously created scope *(once it is old)*. This will do the job for us: ``` app.controller('MenuController', ["$scope","$compile", "$state","$http", function MenuController($scope,$compile, $state, $http) { $scope.goToUrl = function (view) { // was scope previously created var hasPreviousChildScope = this.childScope !== void 0 && this.childScope !== null; if(hasPreviousChildScope) { // we must clear after ourselves this.childScope.$destroy(); this.childScope = null; } // create new scope this.childScope = $scope.$new(); var html; var state; if(view == 'Alerts'){ view = '<div ui-view="Alerts"></div>'; state = 'appHome' } else{ view = '<div ui-view="Other"></div>'; state = 'appOther' } $('#test').html($compile(view)(this.childScope)); $state.go(state); }; } ]); ``` Check it all [in action here](http://plnkr.co/edit/DNas7u8e8NuJMP2n0RUe?p=preview)
Why do you make it so complicated? Is there a special reason why you are using ``` if(view == 'Alerts'){ view = '<div ui-view="Alerts"></div>'; state = 'appHome' } else{ view = '<div ui-view="Other"></div>'; state = 'appOther' } $('#test').html($compile(view)(scope)); $state.go(state); ``` Isn't it possible to use something like ``` <div ng-controller="MenuController"> <button ui-sref="appHome" value="Click Me">GoTo Alerts</button> <button ui-sref="appOther" value="Click Me">GoTo Other</button> </div> <section ui-view></section> ``` I used ui-sref to easily navigate between states. I have prepared a little demo to illustrate that. Have a look [here](http://plnkr.co/edit/OLNuV55a9K8uqKNWO5Sc?p=preview). Does that help? Is it usable? That way you don't even need the menuController. If you need named views (multiple nested views) have a look at the [documentation](https://github.com/angular-ui/ui-router/wiki/Multiple-Named-Views). It is explained pretty well. You would end up with something like: ``` <!-- index.html --> <body> <div ui-view="alerts"></div> <div ui-view="other"></div> </body> $stateProvider .state('report', { views: { 'alerts': { ... templates and/or controllers ... }, 'other': {}, } }) ```
3,578,286
My code base has a lot of `#if DEBUG/#endif` statements that mostly have assertion type logic that I'm not brave enough to run in production. ``` [Conditional("DEBUG")] public void CheckFontSizeMath() { //This check must not block SellSnakeOil() in production, even if it fails. if(perferredSize+increment!=increment+preferredSize) throw new WeAreInAnUnexpectedParallelUniverseException(); } ``` Will I regret changing all these over to the new way of doing things? **UPDATE**: I'm looking for the difference between characteristics of two similar but different syntax styles for doing Assertions. I understand there is a world of other ways to do demonstrate applications work, and I do those, too. I'm not ready to give up Assertions altogether. Also I updated the method name for a realistic debug-release-only scenario.
2010/08/26
[ "https://Stackoverflow.com/questions/3578286", "https://Stackoverflow.com", "https://Stackoverflow.com/users/33264/" ]
There's no problem doing this. The more current approach is to use [Code Contracts](http://msdn.microsoft.com/en-us/library/dd264808.aspx).
Other options that you have is to either use Code Contracts as suggested by Hans or to use the assertion methods in the `System.Diagnostics.Debug` namespace, e.g. ``` Debug.Assert(1 + 1 == 2, "Math went wrong!"); ``` This check is automatically removed when building the Release version.
3,578,286
My code base has a lot of `#if DEBUG/#endif` statements that mostly have assertion type logic that I'm not brave enough to run in production. ``` [Conditional("DEBUG")] public void CheckFontSizeMath() { //This check must not block SellSnakeOil() in production, even if it fails. if(perferredSize+increment!=increment+preferredSize) throw new WeAreInAnUnexpectedParallelUniverseException(); } ``` Will I regret changing all these over to the new way of doing things? **UPDATE**: I'm looking for the difference between characteristics of two similar but different syntax styles for doing Assertions. I understand there is a world of other ways to do demonstrate applications work, and I do those, too. I'm not ready to give up Assertions altogether. Also I updated the method name for a realistic debug-release-only scenario.
2010/08/26
[ "https://Stackoverflow.com/questions/3578286", "https://Stackoverflow.com", "https://Stackoverflow.com/users/33264/" ]
There's no problem doing this. The more current approach is to use [Code Contracts](http://msdn.microsoft.com/en-us/library/dd264808.aspx).
The downsides: 1. It's not as robust as hard checks and exceptions that function independently of build configuration. It's a good idea to take into account the fact that most code should handle errors in release mode. 2. Behaviour has the potential to differ between configurations. If you're careless, you may end up putting some non debug logic in a conditional method or adding side effects so that your production code does not function correctly (basically the same thing as accidentally calling methods inside ASSERT() macros in C++; side effects suck!) The biggest thing here is exceptions. Conditional methods do not allow you to return values for whatever reason so that eliminates one potential error. However, they do allow you to throw exceptions which can dramatically change the path taken on an error condition occurring. This makes things harder to understand and maintain. 3. At the call site, it is not obvious that conditional methods are being called. You just see "DoSomething();". I prefer to name my conditional methods via a convention where I know it's a conditional; e.g. DEBUG\_SanityCheckBufferContents(). 4. If you don't #if out the method body, your debug code will still exist and it can be inspected/invoked via reflection amongst other things. The method's IL is still emitted, the call sites are not. 5. It makes unit testing very difficult, as behaviour differs between configurations. This is along the same lines as point 2, but I added it as a separate point because it's really sucky from a testing perspective. Most developers will run tests in debug or release prior to checking in, but not both. Our CI runs both sets of tests, so it's galling to check something in after passing all of the tests only to find you've broken the debug configuration but the tests passed because you ran the release build (or vice versa). In short my rules of thumb are: * Prefer hard checks & exceptions with unit tests. Use conditional methods where this is not possible (e.g. performance critical checks) * Name conditional methods clearly * Pragma #if out the code in the method body * Be very wary of altering program control flow in conditional methods (I personally prefer Debug.Assert instead of an exception when using conditional methods, but I've seen loads of other programmers use exceptions, so this is probably up for debate)
3,578,286
My code base has a lot of `#if DEBUG/#endif` statements that mostly have assertion-type logic that I'm not brave enough to run in production. ``` [Conditional("DEBUG")] public void CheckFontSizeMath() { //This check must not block SellSnakeOil() in production, even if it fails. if(preferredSize+increment!=increment+preferredSize) throw new WeAreInAnUnexpectedParallelUniverseException(); } ``` Will I regret changing all these over to the new way of doing things? **UPDATE**: I'm looking for the difference between characteristics of two similar but different syntax styles for doing Assertions. I understand there is a world of other ways to demonstrate applications work, and I do those, too. I'm not ready to give up Assertions altogether. Also I updated the method name for a realistic debug-release-only scenario.
2010/08/26
[ "https://Stackoverflow.com/questions/3578286", "https://Stackoverflow.com", "https://Stackoverflow.com/users/33264/" ]
Write tests! Then you don't have to be afraid that your code is doing something that you don't expect when you start making these changes. Once you get your code under test you can refactor to your heart's content. Code without tests, even written yesterday, is legacy code... Get a copy of [Working Effectively with Legacy Code](http://tinyurl.com/3yyv57m); it's an amazing book for managing legacy code (32 reviews with 5 stars).
Other options that you have are to either use Code Contracts as suggested by Hans or to use the assertion methods of the `System.Diagnostics.Debug` class, e.g. ``` Debug.Assert(1 + 1 == 2, "Math went wrong!"); ``` This check is automatically removed when building the Release version.
3,578,286
My code base has a lot of `#if DEBUG/#endif` statements that mostly have assertion-type logic that I'm not brave enough to run in production. ``` [Conditional("DEBUG")] public void CheckFontSizeMath() { //This check must not block SellSnakeOil() in production, even if it fails. if(preferredSize+increment!=increment+preferredSize) throw new WeAreInAnUnexpectedParallelUniverseException(); } ``` Will I regret changing all these over to the new way of doing things? **UPDATE**: I'm looking for the difference between characteristics of two similar but different syntax styles for doing Assertions. I understand there is a world of other ways to demonstrate applications work, and I do those, too. I'm not ready to give up Assertions altogether. Also I updated the method name for a realistic debug-release-only scenario.
2010/08/26
[ "https://Stackoverflow.com/questions/3578286", "https://Stackoverflow.com", "https://Stackoverflow.com/users/33264/" ]
Write tests! Then you don't have to be afraid that your code is doing something that you don't expect when you start making these changes. Once you get your code under test you can refactor to your heart's content. Code without tests, even written yesterday, is legacy code... Get a copy of [Working Effectively with Legacy Code](http://tinyurl.com/3yyv57m); it's an amazing book for managing legacy code (32 reviews with 5 stars).
The downsides: 1. It's not as robust as hard checks and exceptions that function independently of build configuration. It's a good idea to take into account the fact that most code should handle errors in release mode. 2. Behaviour has the potential to differ between configurations. If you're careless, you may end up putting some non debug logic in a conditional method or adding side effects so that your production code does not function correctly (basically the same thing as accidentally calling methods inside ASSERT() macros in C++; side effects suck!) The biggest thing here is exceptions. Conditional methods do not allow you to return values for whatever reason so that eliminates one potential error. However, they do allow you to throw exceptions which can dramatically change the path taken on an error condition occurring. This makes things harder to understand and maintain. 3. At the call site, it is not obvious that conditional methods are being called. You just see "DoSomething();". I prefer to name my conditional methods via a convention where I know it's a conditional; e.g. DEBUG\_SanityCheckBufferContents(). 4. If you don't #if out the method body, your debug code will still exist and it can be inspected/invoked via reflection amongst other things. The method's IL is still emitted, the call sites are not. 5. It makes unit testing very difficult, as behaviour differs between configurations. This is along the same lines as point 2, but I added it as a separate point because it's really sucky from a testing perspective. Most developers will run tests in debug or release prior to checking in, but not both. Our CI runs both sets of tests, so it's galling to check something in after passing all of the tests only to find you've broken the debug configuration but the tests passed because you ran the release build (or vice versa). In short my rules of thumb are: * Prefer hard checks & exceptions with unit tests. Use conditional methods where this is not possible (e.g. performance critical checks) * Name conditional methods clearly * Pragma #if out the code in the method body * Be very wary of altering program control flow in conditional methods (I personally prefer Debug.Assert instead of an exception when using conditional methods, but I've seen loads of other programmers use exceptions, so this is probably up for debate)
12,748,113
Please help me understand, the below code is showing Type mismatch: "cannot convert from element type Object to List" in the for statement. I know I'm missing something silly. Please help. ``` public void setMapPriceValue(SolrItemVO solrItemVO, ArrayList proce1) throws SolrDAOException { List xcatentAttrList = (List<Xcatentattr>) proce1.get(0); solrItemVO.setMapPrice(-1); // setting default value for(List xcatentattr : xcatentAttrList){ if(xcatentattr.get(0) == 33) solrItemVO.setMapPrice(xcatentattr.get(1)); solrItemVO.setMapPriceVal(xcatentattr.get(2)); } } ```
2012/10/05
[ "https://Stackoverflow.com/questions/12748113", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1228270/" ]
First you are mixing generics and raw types - it would make your life easier if you only used generics: ``` List<Xcatentattr> xcatentAttrList = (List<Xcatentattr>) proce1.get(0); ``` You might also consider using the correct generic type in your method signature (I assume `proce1` is a list of list): ``` public void setMapPriceValue(SolrItemVO solrItemVO, List<List<Xcatentattr>> proce1) ``` In which case you don't need the cast any more: ``` List<Xcatentattr> xcatentAttrList = proce1.get(0); ``` Then the syntax for the enhanced for loop is `for (TypeOfObjectInYourList object : list)`, so in your case: ``` for(Xcatentattr xcatentattr : xcatentAttrList) ```
There are two issues here. * First, if you declare `List xcatentAttrList` without using generics, Java can only know that your list contains `Object`s. So your `for` loop would have to iterate through a list of `Object`s. * Second, the cast `(List<Xcatentattr>) proce1.get(0)` implies the list contains `Xcatentattr` elements, and yet your loop variable is declared as `List`. So provided you had declared the `List` using generics (and therefore said you are working with a list of `Xcatentattr` elements), the `for` loop would be: ``` for (Xcatentattr xcatentattr : xcatentAttrList) { . . . } ```
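To see the element-type rule in isolation, here is a tiny self-contained example (it uses `List<String>` instead of the poster's `Xcatentattr` class, whose definition isn't shown):

```
import java.util.Arrays;
import java.util.List;

public class ForEachDemo {
    public static void main(String[] args) {
        List<String> names = Arrays.asList("one", "two", "three");

        // The loop variable's type must be the list's element type (String here).
        // Declaring it as List instead, as in the question, does not compile.
        for (String name : names) {
            System.out.println(name);
        }
    }
}
```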
12,748,113
Please help me understand, the below code is showing Type mismatch: "cannot convert from element type Object to List" in the for statement. I know I'm missing something silly. Please help. ``` public void setMapPriceValue(SolrItemVO solrItemVO, ArrayList proce1) throws SolrDAOException { List xcatentAttrList = (List<Xcatentattr>) proce1.get(0); solrItemVO.setMapPrice(-1); // setting default value for(List xcatentattr : xcatentAttrList){ if(xcatentattr.get(0) == 33) solrItemVO.setMapPrice(xcatentattr.get(1)); solrItemVO.setMapPriceVal(xcatentattr.get(2)); } } ```
2012/10/05
[ "https://Stackoverflow.com/questions/12748113", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1228270/" ]
If you are going to type things, try and keep them typed: ``` List<Xcatentattr> xcatentAttrList = (List<Xcatentattr>) proce1.get(0); solrItemVO.setMapPrice(-1); // setting default value for(Xcatentattr xcatentattr : xcatentAttrList){ if(xcatentattr.get(0) == 33) solrItemVO.setMapPrice(xcatentattr.get(1)); solrItemVO.setMapPriceVal(xcatentattr.get(2)); } } ``` then the answer might be clearer ;) The loop variable is of type Xcatentattr; you are looping through a list of that type. Take a look at this [link](https://blogs.oracle.com/CoreJavaTechTips/entry/using_enhanced_for_loops_with) for more info on for-each loops.
There are two issues here. * First, if you declare `List xcatentAttrList` without using generics, Java can only know that your list contains `Object`s. So your `for` loop would have to iterate through a list of `Object`s. * Second, the cast `(List<Xcatentattr>) proce1.get(0)` implies the list contains `Xcatentattr` elements, and yet your loop variable is declared as `List`. So provided you had declared the `List` using generics (and therefore said you are working with a list of `Xcatentattr` elements), the `for` loop would be: ``` for (Xcatentattr xcatentattr : xcatentAttrList) { . . . } ```
93,364
I'm studying for my AIRAT (Instructor Written) in Canada using Nizus, but I've always had trouble answering the questions about takeoff distances at 27 ºC when given [a chart](https://i.imgur.com/9HnYWR3.png) with 20 ºC and 30 ºC, ever since the start of PPL. I want to understand it properly so I can effectively teach my students who have trouble with it. The question is a two-parter, first calculating a takeoff distance with the following values: | Parameter | Value | | --- | --- | | Airport Temp | 30 ºC | | Airport Elevation | 3000' AMSL | | Altimeter | 30.92 inHg | | Wind | 10 kt tailwind | | Flaps | 10º | | Runway | Dry grass | With a pressure altitude of 2089', the nearest values are the 2000' and 3000'. I used the 2000' line of the chart. At 2000' at 20 ºC the ground roll is 1080' and the total to clear a 50' obstacle is 1895'. At 30 ºC it's 1155' and 2030'. The chart says it's configured as follows: 2300 lbs, flaps 10, full power prior to brake release on a paved level dry runway with no wind. The notes state: *Headwind subtract 10% per 9kts* *Tailwind add 10% every 2kts up to 10kts* *Dry, Grass Runway or Gravel add 15% to ground roll.* At 30 ºC I calculate my takeoff distance to be **3305'**. --- The second part asks by how much the takeoff distance will decrease if the temperature drops to 27 ºC. I'm doing something wrong here as I calculate it to be 3250', with a distance change of 55', but that isn't an available answer. How do I get the proper numbers to use at 27 ºC?
2022/05/28
[ "https://aviation.stackexchange.com/questions/93364", "https://aviation.stackexchange.com", "https://aviation.stackexchange.com/users/63711/" ]
I am getting a takeoff distance of 3,502' for a takeoff over 50 FT on a **grass strip** with a **10-kt tailwind**. > > 2,030' \* 1.5 \* 1.15 = 3,502' > > > To interpolate you will need to figure out the percentage between the two temperatures. > > (27-20) / (30-20) = 70% > > > We take the difference between the chart values for 20 and 30 degrees > > 3,502' - 3,267' = 235' > > > Multiply this value by the percentage > > 235 \* .7 = 164.5' > > > Since we interpolated up from the lower temperature, we add this number to the lower chart value > > 3,267 + 164.5 = 3,432 > > > This is a difference of 70 feet.
Pressure altitude with Kollsman at 30.92 would be 3000 feet - 1000 feet = 2000 feet. Dry grass **only affects ground roll**. Distance to clear 50' obstacle at 30 C : 2030 feet Ground roll distance: 1155 feet 30 C with 10 knot tailwind: 2030 × 1.50 = 3045 feet Ground roll with tailwind: 1155 × 1.5 = 1732 feet Ground roll with tailwind and grass: 1732 × 1.15 = 1992 feet difference: 1992 - 1732 = 260 feet more > > Total distance to clear 50' at 30 C: 3045 + 260 = 3305 feet > > > Distance to clear 50' obstacle at 20C: 1895 feet Ground roll distance: 1080 feet 20C with 10 knot tailwind: 1895 × 1.5 = 2842 feet Ground roll with tailwind: 1080 × 1.5 = 1620 feet Ground roll with tailwind and grass: 1620 × 1.15 = 1863 feet difference: 1863 - 1620 = 243 feet more > > Total distance to clear 50' at 20 C: 2842 + 243 = 3085 feet > > > 3305 - 3085 = 220 feet 220 feet÷10C = 22 feet per degree > > 22 feet per degree × 3 C = around 66 feet less at 27 C > > > Answers of 3305 feet and 66 feet difference seem ok!
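Both answers are doing ordinary linear interpolation between the 20 ºC and 30 ºC columns. Written out generally, and illustrated with the totals from the answer above: $$d(T) \approx d_{20} + \frac{T-20}{30-20}\,(d_{30}-d_{20}), \qquad d(27) \approx 3085 + 0.7\,(3305-3085) = 3239 \text{ ft},$$ which matches the roughly 66-foot reduction quoted above (3305 - 3239 = 66).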
36,792,073
I've got 3 labels in a custom UITableview cell and I'm trying to pass in json data I've gotten from an api with Alamofire but I'm struggling to understand how to push the returned json into the tableview. Any help would be greatly appreciated. Code below: ``` import UIKit import Parse import Alamofire class LeagueTableController: UIViewController, UITableViewDataSource, UITableViewDelegate { override func viewDidLoad() { super.viewDidLoad() Alamofire.request(.GET, "https://api.import.io/store/connector/88c66c----9b01-6bd2bb--d/_query?input=webpage/url:----") .responseJSON { response in // 1 print(response.request) // original URL request print(response.response) // URL response print(response.data) // server data print(response.result) // result of response serialization if let JSON = response.result.value { print("JSON: \(JSON)") } } } func tableView(tableView: UITableView, numberOfRowsInSection section: Int) -> Int { return 1 } func tableView(tableView: UITableView, cellForRowAtIndexPath indexPath: NSIndexPath) -> UITableViewCell { let cell = tableView.dequeueReusableCellWithIdentifier("Cell", forIndexPath: indexPath) return cell } } ``` Returned json like this: ``` { connectorGuid = "88c66cb4-e64f-4316-9b01-6bd2bb2d762d"; connectorVersionGuid = "8aedfe43-948a-4559-b279-d3c3c28047a4"; cookies = ( ); offset = 0; outputProperties = ( { name = team; type = URL; }, { name = played; type = DOUBLE; }, { name = points; type = DOUBLE; } ); pageUrl = "http://www.extratime.ie/leagues/2024/100/premier-division/"; results = ( { played = 9; "played/_source" = 9; points = 22; "points/_source" = 22; team = "http://www.extratime.ie/squads/17/"; "team/_source" = "/squads/17/"; "team/_text" = Dundalk; }, { played = 9; "played/_source" = 9; points = 20; "points/_source" = 20; team = "http://www.extratime.ie/squads/7/"; "team/_source" = "/squads/7/"; "team/_text" = "Derry City"; }, { played = 9; "played/_source" = 9; points = 17; "points/_source" = 17; team = "http://www.extratime.ie/squads/100504/"; "team/_source" = "/squads/100504/"; "team/_text" = "Galway United FC"; }, { played = 9; "played/_source" = 9; points = 16; "points/_source" = 16; team = "http://www.extratime.ie/squads/29/"; "team/_source" = "/squads/29/"; "team/_text" = "St. 
Patrick's Ath"; }, { played = 8; "played/_source" = 8; points = 15; "points/_source" = 15; team = "http://www.extratime.ie/squads/30/"; "team/_source" = "/squads/30/"; "team/_text" = "Cork City"; }, { played = 8; "played/_source" = 8; points = 15; "points/_source" = 15; team = "http://www.extratime.ie/squads/3/"; "team/_source" = "/squads/3/"; "team/_text" = "Shamrock Rovers"; }, { played = 9; "played/_source" = 9; points = 10; "points/_source" = 10; team = "http://www.extratime.ie/squads/13/"; "team/_source" = "/squads/13/"; "team/_text" = "Finn Harps"; }, { played = 9; "played/_source" = 9; points = 10; "points/_source" = 10; team = "http://www.extratime.ie/squads/2/"; "team/_source" = "/squads/2/"; "team/_text" = Bohemians; }, { played = 9; "played/_source" = 9; points = 7; "points/_source" = 7; team = "http://www.extratime.ie/squads/8/"; "team/_source" = "/squads/8/"; "team/_text" = "Sligo Rovers"; }, { played = 9; "played/_source" = 9; points = 7; "points/_source" = 7; team = "http://www.extratime.ie/squads/6/"; "team/_source" = "/squads/6/"; "team/_text" = "Bray Wanderers"; }, { played = 9; "played/_source" = 9; points = 5; "points/_source" = 5; team = "http://www.extratime.ie/squads/109/"; "team/_source" = "/squads/109/"; "team/_text" = "Wexford Youths"; }, { played = 9; "played/_source" = 9; points = 5; "points/_source" = 5; team = "http://www.extratime.ie/squads/15/"; "team/_source" = "/squads/15/"; "team/_text" = "Longford Town"; } ); ``` } I'm trying to just push the "played", "points" and "team/\_text" results out to each of the labels.
2016/04/22
[ "https://Stackoverflow.com/questions/36792073", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5463857/" ]
Since the question is very broad and doesn't specify what exactly the problem is, the general steps are: 1) map your json to dictionary/nsdictionary. Suppose the JSON snippet you posted is a chunk of JSONArray in the following format [{}], all you need to do is: ``` var arrayOfDictionaries:NSArray = NSJSONSerialization.JSONObjectWithData(yourData, options: nil, error: nil) as! NSArray ``` where the yourData variable is the data downloaded from the network, cast to NSData format 2) create outlets to these three labels in your custom tableViewCell 3) for each cell, set these labels inside > > func tableView(tableView: UITableView, cellForRowAtIndexPath indexPath: NSIndexPath) > > > method as follows: ``` func tableView(tableView: UITableView, cellForRowAtIndexPath indexPath: NSIndexPath) -> UITableViewCell { let cell:YourCustomCell = tableView.dequeueReusableCellWithIdentifier("Cell", forIndexPath: indexPath) as! YourCustomCell cell.firstLabel.text = yourDictionary["played"] cell.secondLabel.text = yourDictionary["points"] cell.thirdLabel.text = yourDictionary["team"] return cell } ``` 4) I suppose you will need more cells than just one, then store many dictionaries in array and access each element like this: ``` cell.firstLabel.text = arrayOfDictionaries[indexPath.row]["played"] ```
You'll need to create a subclass of `UITableViewCell`, say `MyTableViewCell` and add a property named `JSON`. Now, since you're probably using Interface Builder to define your cell and its reuse identifier ("Cell"), set that cell's class to your newly created `MyTableViewCell` and connect the labels to some `IBOutlets` in your newly defined class. Then, when you call 'dequeueReusableCellWithIdentifier', cast the cell to `MyTableViewCell` and set its `JSON` property to the value you want to have in the cell. You'll probably want to react to the change, so add the `didSet` property observer. ``` var JSON:[String: AnyObject] = [String: AnyObject]() { didSet { print("populate your labels with your new data"); } } ```
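A rough sketch of what that subclass could look like, using Swift 2-era syntax to match the question; the outlet names are placeholders, while the dictionary keys are the ones from the question's JSON dump:

```
import UIKit

class MyTableViewCell: UITableViewCell {

    @IBOutlet weak var teamLabel: UILabel!
    @IBOutlet weak var playedLabel: UILabel!
    @IBOutlet weak var pointsLabel: UILabel!

    // Assigning one "results" dictionary to this property refreshes the three labels.
    var JSON: [String: AnyObject] = [String: AnyObject]() {
        didSet {
            teamLabel.text = JSON["team/_text"] as? String
            if let played = JSON["played"] as? Int {
                playedLabel.text = "\(played)"
            }
            if let points = JSON["points"] as? Int {
                pointsLabel.text = "\(points)"
            }
        }
    }
}
```

In `cellForRowAtIndexPath` you would then dequeue with `as! MyTableViewCell` and assign `cell.JSON = results[indexPath.row]`, where `results` is the parsed `results` array from the response.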
36,792,073
I've got 3 labels in a custom UITableview cell and I'm trying to pass in json data I've gotten from an api with Alamofire but I'm struggling to understand how to push the returned json into the tableview. Any help would be greatly appreciated. Code below: ``` import UIKit import Parse import Alamofire class LeagueTableController: UIViewController, UITableViewDataSource, UITableViewDelegate { override func viewDidLoad() { super.viewDidLoad() Alamofire.request(.GET, "https://api.import.io/store/connector/88c66c----9b01-6bd2bb--d/_query?input=webpage/url:----") .responseJSON { response in // 1 print(response.request) // original URL request print(response.response) // URL response print(response.data) // server data print(response.result) // result of response serialization if let JSON = response.result.value { print("JSON: \(JSON)") } } } func tableView(tableView: UITableView, numberOfRowsInSection section: Int) -> Int { return 1 } func tableView(tableView: UITableView, cellForRowAtIndexPath indexPath: NSIndexPath) -> UITableViewCell { let cell = tableView.dequeueReusableCellWithIdentifier("Cell", forIndexPath: indexPath) return cell } } ``` Returned json like this: ``` { connectorGuid = "88c66cb4-e64f-4316-9b01-6bd2bb2d762d"; connectorVersionGuid = "8aedfe43-948a-4559-b279-d3c3c28047a4"; cookies = ( ); offset = 0; outputProperties = ( { name = team; type = URL; }, { name = played; type = DOUBLE; }, { name = points; type = DOUBLE; } ); pageUrl = "http://www.extratime.ie/leagues/2024/100/premier-division/"; results = ( { played = 9; "played/_source" = 9; points = 22; "points/_source" = 22; team = "http://www.extratime.ie/squads/17/"; "team/_source" = "/squads/17/"; "team/_text" = Dundalk; }, { played = 9; "played/_source" = 9; points = 20; "points/_source" = 20; team = "http://www.extratime.ie/squads/7/"; "team/_source" = "/squads/7/"; "team/_text" = "Derry City"; }, { played = 9; "played/_source" = 9; points = 17; "points/_source" = 17; team = "http://www.extratime.ie/squads/100504/"; "team/_source" = "/squads/100504/"; "team/_text" = "Galway United FC"; }, { played = 9; "played/_source" = 9; points = 16; "points/_source" = 16; team = "http://www.extratime.ie/squads/29/"; "team/_source" = "/squads/29/"; "team/_text" = "St. 
Patrick's Ath"; }, { played = 8; "played/_source" = 8; points = 15; "points/_source" = 15; team = "http://www.extratime.ie/squads/30/"; "team/_source" = "/squads/30/"; "team/_text" = "Cork City"; }, { played = 8; "played/_source" = 8; points = 15; "points/_source" = 15; team = "http://www.extratime.ie/squads/3/"; "team/_source" = "/squads/3/"; "team/_text" = "Shamrock Rovers"; }, { played = 9; "played/_source" = 9; points = 10; "points/_source" = 10; team = "http://www.extratime.ie/squads/13/"; "team/_source" = "/squads/13/"; "team/_text" = "Finn Harps"; }, { played = 9; "played/_source" = 9; points = 10; "points/_source" = 10; team = "http://www.extratime.ie/squads/2/"; "team/_source" = "/squads/2/"; "team/_text" = Bohemians; }, { played = 9; "played/_source" = 9; points = 7; "points/_source" = 7; team = "http://www.extratime.ie/squads/8/"; "team/_source" = "/squads/8/"; "team/_text" = "Sligo Rovers"; }, { played = 9; "played/_source" = 9; points = 7; "points/_source" = 7; team = "http://www.extratime.ie/squads/6/"; "team/_source" = "/squads/6/"; "team/_text" = "Bray Wanderers"; }, { played = 9; "played/_source" = 9; points = 5; "points/_source" = 5; team = "http://www.extratime.ie/squads/109/"; "team/_source" = "/squads/109/"; "team/_text" = "Wexford Youths"; }, { played = 9; "played/_source" = 9; points = 5; "points/_source" = 5; team = "http://www.extratime.ie/squads/15/"; "team/_source" = "/squads/15/"; "team/_text" = "Longford Town"; } ); ``` } I'm trying to just push the "played", "points" and "team/\_text" results out to each of the labels.
2016/04/22
[ "https://Stackoverflow.com/questions/36792073", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5463857/" ]
Since the question is very broad and doesn't specify what exactly the problem is, the general steps are: 1) map your json to dictionary/nsdictionary. Suppose the JSON snippet you posted is a chunk of JSONArray in the following format [{}], all you need to do is: ``` var arrayOfDictionaries:NSArray = NSJSONSerialization.JSONObjectWithData(yourData, options: nil, error: nil) as! NSArray ``` where the yourData variable is the data downloaded from the network, cast to NSData format 2) create outlets to these three labels in your custom tableViewCell 3) for each cell, set these labels inside > > func tableView(tableView: UITableView, cellForRowAtIndexPath indexPath: NSIndexPath) > > > method as follows: ``` func tableView(tableView: UITableView, cellForRowAtIndexPath indexPath: NSIndexPath) -> UITableViewCell { let cell:YourCustomCell = tableView.dequeueReusableCellWithIdentifier("Cell", forIndexPath: indexPath) as! YourCustomCell cell.firstLabel.text = yourDictionary["played"] cell.secondLabel.text = yourDictionary["points"] cell.thirdLabel.text = yourDictionary["team"] return cell } ``` 4) I suppose you will need more cells than just one, then store many dictionaries in array and access each element like this: ``` cell.firstLabel.text = arrayOfDictionaries[indexPath.row]["played"] ```
First, you should create a model that contains 3 properties that you want to save, like: ``` class Data { var team = "" var point = 0 var teamText = "" init(fromJSON json: NSDictionary) { team = json["team"] as! String point = json["points"] as! Int teamText = json["team/_text"] as! String } } ``` In your LeagueTableController, create an array to hold the data and show it to tableView: ``` var data = [Data]() ``` Configure the tableView to show the data: ``` func tableView(tableView: UITableView, numberOfRowsInSection section: Int) -> Int { return data.count } func tableView(tableView: UITableView, cellForRowAtIndexPath indexPath: NSIndexPath) -> UITableViewCell { let cell = tableView.dequeueReusableCellWithIdentifier("Cell", forIndexPath: indexPath) as! YourCustomCell let row = indexPath.row cell.lblA.text = data[row].team cell.lblB.text = "\(data[row].point)" cell.lblC.text = data[row].teamText return cell } ``` Finally, parse your response JSON into the data array to display it in the tableView: ``` @IBOutlet weak var tableView: UITableView! var data = [Data]() override func viewDidLoad() { super.viewDidLoad() Alamofire.request(.GET, "https://api.import.io/store/connector/88c66c----9b01-6bd2bb--d/_query?input=webpage/url:----") .responseJSON { response in // 1 print(response.request) // original URL request print(response.response) // URL response print(response.data) // server data print(response.result) // result of response serialization if let JSON = response.result.value { print("JSON: \(JSON)") // parse JSON to get array of data let array = JSON["results"] as! [NSDictionary] // map an item to a data object, and add to our data array for item in array { self.data.append(Data(fromJSON: item)) } // after mapping, reload the tableView self.tableView.reloadData() } } } ``` Hope this helps!
3,098,231
The question is from [here](http://web.evanchen.cc/handouts/FuncEq-Intro/FuncEq-Intro.pdf): > > Find all continuous functions $f:\mathbb R\to \mathbb R$ such that for any real $x$ and $y$, > $$f(f(x+y))=f(x)+f(y).$$ > > > I'm totally new to functional equations so please correct me if I make a mistake. I try plugging in simple functions and I find that $f(x)=0$ and $f(x)=x+c$ for some $c \in \mathbb R$ works. Then I believe the next step is to derive $f(x)=x+c$ as a solution. Let $y=0$. Then we have \begin{align} f(f(x))=f(x)+f(0) \end{align} Then we can make the substitution $u=f(x)$, which produces $$f(u)=u+c$$ since $f(0)$ is just a constant. Hence the solutions are $f(x)=0$ and $f(x)=x+c$ for some $c \in \mathbb R$. $\Box$ --- Is this a valid answer? As some of you may know the AMO is happening in a few days and I would appreciate any help in improving the quality of my answers.
2019/02/03
[ "https://math.stackexchange.com/questions/3098231", "https://math.stackexchange.com", "https://math.stackexchange.com/users/432780/" ]
This is not correct. You cannot let $u=f(x)$ since you haven't proved that $f$ is surjective. Here is a hint: use $f(f(x))=f(x)+f(0)$ to change the LHS of the original functional equation so you have $f(x+y)+f(0)=f(x)+f(y)$. This really looks like Cauchy's functional equation. Can you keep going?
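One way to keep going from here (a sketch): set $g(x)=f(x)-f(0)$, so that $f(x+y)+f(0)=f(x)+f(y)$ becomes Cauchy's equation $$g(x+y)=g(x)+g(y).$$ Continuity forces $g(x)=\lambda x$ for some $\lambda$, i.e. $f(x)=\lambda x+c$ with $c=f(0)$. Substituting back into $f(f(x+y))=f(x)+f(y)$ gives $(\lambda-1)\bigl(\lambda(x+y)+c\bigr)=0$ for all $x,y$, so either $\lambda=1$ (giving $f(x)=x+c$) or $\lambda=c=0$ (giving $f\equiv 0$).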
All such functions are of the form $$f(x) = \lambda x + c ,\ x\in \Bbb R$$ where $c = f(0), \lambda = f(1)-f(0)$ and where either $\lambda = c = 0$ or $\lambda = 1$. So either $$f(x) = 0,\ \text {for all}\ x \in \Bbb R$$ or $$f(x) = x + c,\ \text {for all}\ x \in \Bbb R.$$ **EDIT:** First observe that $$f(x+y) + f(0) = f(x) + f(y), \text {for all}\ x,y \in \Bbb R .$$ Now observe that for any $n \in \Bbb N$ $$f(n) = \lambda + f(n-1).$$ By induction it is not hard to see that for all $n \in \Bbb N,$ $f(n) = \lambda n + c,$ where $c = f(0)$ and $\lambda = f(1) - f(0).$ Now extend $f$ over $\Bbb Z$ by using the fact that $f(x) + f(-x) = 2c$ for all $x \in \Bbb R$. Extending $f$ over the rationals and irrationals is also not very tough. Extending over the irrationals from the rationals follows from two facts: one is the density of the rationals and the other is the sequential criterion for continuous functions. So we have proved that $$f(x) = \lambda x + c,\ \text {for all}\ x \in \Bbb R.$$ Now again using the given functional equation and putting the values of $f(x)$ there we get $$(\lambda - 1) (\lambda(x+y) + c) = 0, \text {for all}\ x,y \in \Bbb R.$$ So either $\lambda = 1$ or $\lambda (x + y) + c = 0,$ for all $x,y \in \Bbb R$. For the latter case, if $\lambda \neq 0$ this yields a contradiction, because it would force $x+y = -\frac {c} {\lambda}$ for all $x,y \in \Bbb R,$ which is obviously false. Hence for the latter case we must have $\lambda = 0$. But that implies $c=0$. So either $\lambda = c =0$ or $\lambda = 1$. This completes the proof. Is it ok now @abc...?
23,176,076
I'm working with a logit regression and created a model that predicts the likelihood of getting a loan based on FICO Score and the amount requested. I created a data frame that has the results of my probability function -- the rows are the potential FICO Scores, and the columns are the potential requested loan values. For each row and column, there is a value of either 'True' or 'False', with 'True' representing a probability >= .50 that the individual will get a loan. How do I make a scatter plot that plots a point at (FICO Score, Loan Value) based on the information in my data frame, and plots say a green circle for those values that are 'True' and a red X for those values that are 'False'? Also, is there a better way to represent this type of data than a scatter plot?
2014/04/19
[ "https://Stackoverflow.com/questions/23176076", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2516989/" ]
If I understand what you want to do, I think a heatmap might work better. There's more than one way to do that in R. `ggplot2`'s `geom_tile` provides a pretty nice way to do it provided you reshape the data first: ``` # using http://www.free-ocr.com/ to OCR your image gives: dat <- structure(list(FICO = c(640L, 650L, 660L, 670L, 680L, 690L, 700L, 710L, 720L, 730L, 740L, 750L, 760L, 770L, 780L, 790L, 800L, 810L, 820L, 830L), `1000` = c(FALSE, FALSE, FALSE, FALSE, FALSE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE), `1500` = c(FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE), `2000` = c(FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE), `2500` = c(FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE), `3000` = c(FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE), `3500` = c(FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE), `4000` = c(FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE), `4500` = c(FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE), `5000` = c(FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE), `5500` = c(FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE)), .Names = c("FICO", "1000", "1500", "2000", "2500", "3000", "3500", "4000", "4500", "5000", "5500"), class = "data.frame", row.names = c(NA, -20L )) # now melt that data dat.m <- melt(dat, "FICO") colnames(dat.m) <- c("FICO.Score", "Loan.Value", "p(loan)>0.50") # and use ggplot with geom_tile to make a heatmap gg <- ggplot(dat.m, aes(x=FICO.Score, y=Loan.Value)) gg <- gg + geom_tile(aes(fill=`p(loan)>0.50`), color="white") gg <- gg + theme_bw() gg <- gg + labs(x="", y="") gg <- gg + theme(panel.border=element_blank()) gg <- gg + theme(panel.grid=element_blank()) gg ``` ![enter image description here](https://i.stack.imgur.com/HNXcB.png)
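Note that the snippet above assumes the packages that provide `melt()` and `ggplot()` are already loaded, i.e. something like:

```
library(reshape2)  # melt()
library(ggplot2)   # ggplot(), geom_tile(), theme_bw(), ...
```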
The `lattice` package is nice for plots like the one you want. Here's an example with some fake data ``` > z <- as.logical(sample(TRUE:FALSE, 10, TRUE)) > d <- data.frame(x = 1:10, y = 101:110, z) > d ## x y z ## 1 1 101 FALSE ## 2 2 102 FALSE ## 3 3 103 FALSE ## 4 4 104 TRUE ## 5 5 105 FALSE ## 6 6 106 FALSE ## 7 7 107 FALSE ## 8 8 108 FALSE ## 9 9 109 TRUE ## 10 10 110 TRUE > library(lattice) > xyplot(y ~ x, data = d, groups = z, col = c("red", "green"), pch = 19) ``` ![enter image description here](https://i.stack.imgur.com/Bl3tV.png)
23,176,076
I'm working with a logit regression and created a model that predicts the likelihood of getting a loan based on FICO Score and the amount requested. I created a data frame that has the results of my probability function -- the rows are the potential FICO Scores, and the columns are the potential requested loan values. For each row and column, there is a value of either 'True' or 'False', with 'True' representing a probability >= .50 that the individual will get a loan. How do I make a scatter plot that plots a point at (FICO Score, Loan Value) based on the information in my data frame, and plots say a green circle for those values that are 'True' and a red X for those values that are 'False'? Also, is there a better way to represent this type of data than a scatter plot?
2014/04/19
[ "https://Stackoverflow.com/questions/23176076", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2516989/" ]
If I understand what you want to do, I think a heatmap might work better. There's more than one way to do that in R. `ggplot2`'s `geom_tile` provides a pretty nice way to do it provided you reshape the data first: ``` # using http://www.free-ocr.com/ to OCR your image gives: dat <- structure(list(FICO = c(640L, 650L, 660L, 670L, 680L, 690L, 700L, 710L, 720L, 730L, 740L, 750L, 760L, 770L, 780L, 790L, 800L, 810L, 820L, 830L), `1000` = c(FALSE, FALSE, FALSE, FALSE, FALSE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE), `1500` = c(FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE), `2000` = c(FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE), `2500` = c(FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE), `3000` = c(FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE), `3500` = c(FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE), `4000` = c(FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE), `4500` = c(FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE), `5000` = c(FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE), `5500` = c(FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE)), .Names = c("FICO", "1000", "1500", "2000", "2500", "3000", "3500", "4000", "4500", "5000", "5500"), class = "data.frame", row.names = c(NA, -20L )) # now melt that data dat.m <- melt(dat, "FICO") colnames(dat.m) <- c("FICO.Score", "Loan.Value", "p(loan)>0.50") # and use ggplot with geom_tile to make a heatmap gg <- ggplot(dat.m, aes(x=FICO.Score, y=Loan.Value)) gg <- gg + geom_tile(aes(fill=`p(loan)>0.50`), color="white") gg <- gg + theme_bw() gg <- gg + labs(x="", y="") gg <- gg + theme(panel.border=element_blank()) gg <- gg + theme(panel.grid=element_blank()) gg ``` ![enter image description here](https://i.stack.imgur.com/HNXcB.png)
You can do this with the `ggplot2` as well. A sample of some FICO data: ``` fico <- structure(list(LoanValue = c(20000, 19200, 35000, 9975, 12000, 6000, 10000, 33450, 14675, 7000, 2000, 10625, 27975, 34950, 9600, 24975, 10000, 13900.25, 10000, 5175, 21975, 30000, 6500, 17400, 4000, 7200, 8000, 8000, 3000, 14500, 23850, 14000, 34975, 16000, 7019.25, 7975, 7200, 20125, 11875, 1850, 3200, 12725, 5500, 15650, 9000, 5000, 3000, 19975, 5450, 14000, 8799.04, 3000, 32000, 22250, 7300, 16450, 2500, 6000, 27575, 1000, 12000, 30000, 13500, 9000, 15000, 5300, 7000, 19975, 14993.57, 8000, 23947.48, 7500, 16875, 12000, 6000, 825, 4500, 1600, 10000, 18525, 7450, 3225, 23675, 12000, 25000, 15850, 4175, 10000, 6000, 6000, 7925, 15925, 9500, 6000, 9975, 7000, 4500, 12000, 10375, 4800), FICOscore = c(735L, 715L, 690L, 695L, 695L, 670L, 720L, 705L, 685L, 715L, 670L, 665L, 670L, 735L, 725L, 730L, 695L, 740L, 730L, 760L, 665L, 695L, 665L, 695L, 670L, 705L, 675L, 675L, 765L, 760L, 685L, 685L, 720L, 685L, 675L, 780L, 720L, 830L, 715L, 660L, 670L, 720L, 660L, 660L, 675L, 715L, 710L, 670L, 785L, 705L, 750L, 660L, 700L, 665L, 680L, 725L, 670L, 715L, 690L, 755L, 705L, 715L, 680L, 665L, 730L, 725L, 685L, 685L, 705L, 695L, 695L, 715L, 735L, 665L, 670L, 670L, 790L, 700L, 665L, 725L, 710L, 760L, 680L, 690L, 695L, 725L, 810L, 675L, 750L, 685L, 665L, 765L, 670L, 675L, 675L, 750L, 765L, 735L, 665L, 670L)), .Names = c("LoanValue", "FICOscore"), class = "data.frame", row.names = c(NA, -100L)) ``` Create an artificial propability: ``` fico$getloan <- ifelse(fico$FICOscore<700, "0", "1") ``` Loading the ggplot2 package: ``` require(ggplot2) ``` Creating the scatterplot: ``` ggplot(fico, aes(x=FICOscore, y=LoanValue)) + geom_point(aes(color=getloan)) ``` which gives: ![enter image description here](https://i.stack.imgur.com/O5bAS.png)
16,179,700
I have an xml file, saml.config that contains some saml info. That information needs to get transformed in release builds to include production urls instead of development and staging urls. In my development and staging environments the transformation happens perfectly, in the release environment, the transformation does not take. I have been trying out the <http://webconfigtransformationtester.apphb.com/> to test out the transforms, and they are not being applied, but again, VS applies them perfectly. The base saml.config file: ``` <?xml version="1.0"?> <SAMLConfiguration xmlns="urn:componentspace:SAML:2.0:configuration"> <ServiceProvider Name="Portal.Web" AssertionConsumerServiceUrl="http://localhost:49462/SingleSignOn/ConsumeAssertion"/> <PartnerIdentityProvider Name="MTMIdentity" SignAuthnRequest="false" WantResponseSigned="false" WantAssertionSigned="false" WantAssertionEncrypted="false" SingleSignOnServiceUrl="https://identity.*********.com/SingleSignOn/"/> </SAMLConfiguration> ``` The transform file: ``` <?xml version="1.0" encoding="utf-8" ?> <SAMLConfiguration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform" xmlns="urn:componentspace:SAML:2.0:configuration"> <ServiceProvider Name="Portal.Web" AssertionConsumerServiceUrl="https://***********.com/SingleSignOn/ConsumeAssertion" xdt:Transform="SetAttributes" xdt:Locator="Match(Name)" /> </SAMLConfiguration> ``` The result should replace the localhost url in the service provider AssertionConsumerServiceUrl with the \*\*\*\*\*\*\*\*\*\*\*.com version, but it does not ``` <?xml version="1.0"?> <SAMLConfiguration xmlns="urn:componentspace:SAML:2.0:configuration"> <ServiceProvider Name="Portal.Web" AssertionConsumerServiceUrl="http://localhost:49462/SingleSignOn/ConsumeAssertion" /> <PartnerIdentityProvider Name="MTMIdentity" SignAuthnRequest="false" WantResponseSigned="false" WantAssertionSigned="false" WantAssertionEncrypted="false" SingleSignOnServiceUrl="https://identity.********.com/SingleSignOn/" /> </SAMLConfiguration> ``` Why is the transform tester not applying the transformation? **EDIT:** I should add that I use the SlowCheetah addin / nuget package to handle the transformation locally and in our staging environment. Considering the documentation (<http://support.appharbor.com/kb/getting-started/managing-environments>) states > > Configuration file transformation is supported on all .config files that have a corresponding .release.config file. > > > I assume that AppHarbor can do this without SlowCheetah. But again, the WebConfigTransformTester tool does not apply this transformation. So the question is still, how can I apply this transformation? Can I use SlowCheetah in AppHarbor? **EDIT:** Upon further investigation, it appears that AppHarbor is NOT applying the transformations to the Web.Config as well. 
My Configs: ![enter image description here](https://i.stack.imgur.com/c2vsx.png) AppSettings in Web.Config ``` <appSettings> <add key="webpages:Version" value="2.0.0.0" /> <add key="webpages:Enabled" value="false" /> <add key="aspnet:UseHostHeaderForRequestUrl" value="true" /> <add key="PreserveLoginUrl" value="true" /> <add key="ClientValidationEnabled" value="true" /> <add key="UnobtrusiveJavaScriptEnabled" value="true" /> <add key="Environment" value="Development" /> </appSettings> ``` Notice the Environment is set to "Development" The release transformation: ``` <?xml version="1.0"?> <configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform"> <appSettings> <add key="Environment" value="Release" xdt:Transform="SetAttributes" xdt:Locator="Match(key)" /> </appSettings> <system.web> <compilation xdt:Transform="RemoveAttributes(debug)" /> <customErrors defaultRedirect="GenericError.htm" mode="Off" xdt:Transform="Replace"> <error statusCode="500" redirect="InternalError.htm"/> </customErrors> </system.web> </configuration> ``` Notice the Environment is being replaced with "Release" The AppHarbor Environment: ![enter image description here](https://i.stack.imgur.com/ywlpg.png) After the deploy to AppHarbor, I downloaded the build and checked the Web.Config and it still has the Environment setting at "Development". **EDIT:** I added an Action to one of my Controllers that reads the Environment AppSetting and outputs to the view, and much to my surprise it was "Release"!!! So what gives? The "Download Build" content does not have the transformation in place, but when a request happens it does? When does AppHarbor apply the transformation? Is it at runtime instead of during the build? **EDIT:** Heard back from the AppHarbor guys and the transformation happens on the actual publish, so even though the build has a "Published Websites" folder, it is still not the final output of the publish actions. Thanks, Joe
2013/04/23
[ "https://Stackoverflow.com/questions/16179700", "https://Stackoverflow.com", "https://Stackoverflow.com/users/257975/" ]
We have just rolled out the solution (using the newest transformation assemblies as you point out) for this issue and updated <http://webconfigtransformationtester.apphb.com/> accordingly. Your transformations should be applied as expected now. Thank you for your thorough investigation of this issue.
Come to find out, the answer to my original question as to why the transformation is not taking place is the xmlns attribute in the root node of the config file. Unfortunately, <http://webconfigtransformationtester.apphb.com/> does not give you any kind of information about why the transformation did not take place, but if you look through the code, you can see that there is a logger available, but it is being set to null. I ended up pulling the code, adding a logger, and found the warning "No element in the source document matches '/SAMLConfiguration'". Digging a bit further I found this post [Why does this web.config transform say it can't find the applicationSettings element?](https://stackoverflow.com/questions/6844868/why-does-this-web-config-transform-say-it-cant-find-the-applicationsettings-ele) in which Sayed's answer states that older MSBuild transformation tasks do not honor the xmlns attribute. Removing the xmlns attribute from both the config file and the transform file should solve the problem. However, in my case the xmlns attribute is required and cannot be removed. So until AppHarbor updates their transformation assemblies to the MSBuild v11.0 assemblies, I am pretty much stuck.
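For anyone who can drop the namespace, the workaround described above would make the transform file look like this (the same transform as in the question, with the urn:componentspace namespace removed from the root element; the base saml.config would need the same attribute removed, which is exactly what was not possible in this case):

```
<?xml version="1.0" encoding="utf-8" ?>
<SAMLConfiguration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <ServiceProvider Name="Portal.Web"
                   AssertionConsumerServiceUrl="https://***********.com/SingleSignOn/ConsumeAssertion"
                   xdt:Transform="SetAttributes"
                   xdt:Locator="Match(Name)" />
</SAMLConfiguration>
```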
923,561
How can I use YUI to locate a text input by name and insert a div after it? For example, this doesn't work: ``` <input name="some_name" type="text"> <script> var el = new YAHOO.util.Element(document.createElement('div')); var some_element = document.getElementsByName('some_name'); // doesn't work .... replacing 'some_element' with 'document.body' works el.appendTo(some_element); </script> ```
2009/05/28
[ "https://Stackoverflow.com/questions/923561", "https://Stackoverflow.com", "https://Stackoverflow.com/users/32242/" ]
As mentioned, document.getElementsByName returns a collection of elements. You may want to give your input an ID with the same name as the name attribute. (Aside: This is common and good practice for forms. Other js libraries provide helpful extensions when you do this.) ``` <input name="some_name" id="some_name" type="text"> <script> (function () { var el = new YAHOO.util.Element(document.createElement('div')); // var some_element = document.getElementsByName('some_name')[0]; var some_element = document.getElementById('some_name'); // preferred, faster // el.appendTo(some_element); YAHOO.util.Dom.insertAfter(el,some_element); })(); </script> ``` Also notice the use of **insertAfter rather than appendTo**. You do not want el to be a child of your input element. You want it to be the next sibling. Inputs do not have children. Lastly, you're adding these variables to the global namespace. This may or may not be a problem, but it's generally a good idea to wrap your code in an anonymous function unless you intend for the variables to have global scope and reuse them later, but then you might want to provide a proper namespace for them. Hope that helps (and not too much info.) ;)
document.getElementsByName('some\_name') always returns a collection (an array-like NodeList); if you are sure that the name is unique, you can safely write this. ``` var some_element = document.getElementsByName('some_name')[0]; ```
923,561
How can I use YUI to locate a text input by name and insert a div after it? For example, this doesn't work: ``` <input name="some_name" type="text"> <script> var el = new YAHOO.util.Element(document.createElement('div')); var some_element = document.getElementsByName('some_name'); // doesn't work .... replacing 'some_element' with 'document.body' works el.appendTo(some_element); </script> ```
2009/05/28
[ "https://Stackoverflow.com/questions/923561", "https://Stackoverflow.com", "https://Stackoverflow.com/users/32242/" ]
document.getElementsByName('some\_name') always returns a collection (an array-like NodeList); if you are sure that the name is unique, you can safely write this. ``` var some_element = document.getElementsByName('some_name')[0]; ```
With YUI 3 you can now do [this](https://stackoverflow.com/a/21319966/274502): ```html <script src="http://yui.yahooapis.com/3.18.1/build/yui/yui-min.js"></script> <input id="some_id" type="text"> <script> YUI().use('node', function(Y) { var contentNode = Y.Node.create('<p>'); contentNode.setHTML('This is a para created by YUI...'); Y.one('#some_id').insert(contentNode, 'after'); }); </script> ``` But keep in mind [YUI is being discontinued](http://yahooeng.tumblr.com/post/96098168666/important-announcement-regarding-yui)!
923,561
How can I use YUI to locate a text input by name and insert a div after it? For example, this doesn't work: ``` <input name="some_name" type="text"> <script> var el = new YAHOO.util.Element(document.createElement('div')); var some_element = document.getElementsByName('some_name'); // doesn't work .... replacing 'some_element' with 'document.body' works el.appendTo(some_element); </script> ```
2009/05/28
[ "https://Stackoverflow.com/questions/923561", "https://Stackoverflow.com", "https://Stackoverflow.com/users/32242/" ]
document.getElementsByName('some\_name') always returns a collection (an array-like NodeList); if you are sure that the name is unique, you can safely write this. ``` var some_element = document.getElementsByName('some_name')[0]; ```
You basically put a condition on the name attribute, e.g. name="some\_name". See the code below and observe that it still does insert the paragraph. ```html <script src="http://yui.yahooapis.com/3.18.1/build/yui/yui-min.js"></script> <input id="some_id" type="text" name="inpuname" > <script> YUI().use('node', function(Y) { var contentNode = Y.Node.create('<p>'); contentNode.setHTML('Paragraph created by YUI by searching for *INPUNAME*...'); Y.one('input[name="inpuname"]').insert(contentNode, 'after'); }); </script> ```
923,561
How can I use YUI to locate a text input by name and insert a div after it? For example, this doesn't work: ``` <input name="some_name" type="text"> <script> var el = new YAHOO.util.Element(document.createElement('div')); var some_element = document.getElementsByName('some_name'); // doesn't work .... replacing 'some_element' with 'document.body' works el.appendTo(some_element); </script> ```
2009/05/28
[ "https://Stackoverflow.com/questions/923561", "https://Stackoverflow.com", "https://Stackoverflow.com/users/32242/" ]
As mentioned, document.getElementsByName returns a collection of elements. You may want to give your input an ID with the same name as the name attribute. (Aside: This is common and good practice for forms. Other js libraries provide helpful extensions when you do this.) ``` <input name="some_name" id="some_name" type="text"> <script> (function () { var el = new YAHOO.util.Element(document.createElement('div')); // var some_element = document.getElementsByName('some_name')[0]; var some_element = document.getElementById('some_name'); // preferred, faster // el.appendTo(some_element); YAHOO.util.Dom.insertAfter(el,some_element); })(); </script> ``` Also notice the use of **insertAfter rather than appendTo**. You do not want el to be a child of your input element. You want it to be the next sibling. Inputs do not have children. Lastly, you're adding these variables to the global namespace. This may or may not be a problem, but it's generally a good idea to wrap your code in an anonymous function unless you intend for the variables to have global scope and reuse them later, but then you might want to provide a proper namespace for them. Hope that helps (and not too much info.) ;)
With YUI 3 you can now do [this](https://stackoverflow.com/a/21319966/274502): ```html <script src="http://yui.yahooapis.com/3.18.1/build/yui/yui-min.js"></script> <input id="some_id" type="text"> <script> YUI().use('node', function(Y) { var contentNode = Y.Node.create('<p>'); contentNode.setHTML('This is a para created by YUI...'); Y.one('#some_id').insert(contentNode, 'after'); }); </script> ``` But keep in mind [YUI is being discontinued](http://yahooeng.tumblr.com/post/96098168666/important-announcement-regarding-yui)!
923,561
How can I use YUI to locate a text input by name and insert a div after it? For example, this doesn't work: ``` <input name="some_name" type="text"> <script> var el = new YAHOO.util.Element(document.createElement('div')); var some_element = document.getElementsByName('some_name'); // doesn't work .... replacing 'some_element' with 'document.body' works el.appendTo(some_element); </script> ```
2009/05/28
[ "https://Stackoverflow.com/questions/923561", "https://Stackoverflow.com", "https://Stackoverflow.com/users/32242/" ]
As mentioned, document.getElementsByName returns a collection of elements. You may want to give your input an ID with the same name as the name attribute. (Aside: This is common and good practice for forms. Other js libraries provide helpful extensions when you do this.) ``` <input name="some_name" id="some_name" type="text"> <script> (function () { var el = new YAHOO.util.Element(document.createElement('div')); // var some_element = document.getElementsByName('some_name')[0]; var some_element = document.getElementById('some_name'); // preferred, faster // el.appendTo(some_element); YAHOO.util.Dom.insertAfter(el,some_element); })(); </script> ``` Also notice the use of **insertAfter rather than appendTo**. You do not want el to be a child of your input element. You want it to be the next sibling. Inputs do not have children. Lastly, you're adding these variables to the global namespace. This may or may not be a problem, but it's generally a good idea to wrap your code in an anonymous function unless you intend for the variables to have global scope and reuse them later, but then you might want to provide a proper namespace for them. Hope that helps (and not too much info.) ;)
You basically put a condition on the name attribute, e.g. name="some\_name". See the code below and observe that it still does insert the paragraph. ```html <script src="http://yui.yahooapis.com/3.18.1/build/yui/yui-min.js"></script> <input id="some_id" type="text" name="inpuname" > <script> YUI().use('node', function(Y) { var contentNode = Y.Node.create('<p>'); contentNode.setHTML('Paragraph created by YUI by searching for *INPUNAME*...'); Y.one('input[name="inpuname"]').insert(contentNode, 'after'); }); </script> ```
323,640
I want to supply a microcontroller with 4x AA alkaline batteries or via USB, with the USB source being dominant. Meaning when USB is on, the batteries should be disconnected. For the USB path, I intend to use a Schottky diode. For the battery path, I initially thought about using a "ORing" IC or one of those "powerpath" controllers. The problem with most of the ones I've seen so far is, that they only have one integrated FET, so there could be a reverse current flowing into the battery when the batteries drop below the USB voltage and the forward voltage of the FET body diode. I want to use two back to back FETs with common source, but I'm not sure what driver would be suitable. One way I thought of would be to use a MOSFET driver like MAX1614 and let it drive two FETs. But the MAX1614 only goes down to 5V Vin which would be too high. Something like 4V would be better to get the most out of the batteries. Vin is between 4V and at least >7V, if possible >12V. But most of the drivers I find are for lowside configurations and some barely go above 5.5V. Then there would be ones like the MAX5048 or the MAX15070 that go down to 4V. Or I could use an ORing controller with external FETs, something like a LTC4412 or LTC4359 (looks like the 4359 is hard to get in Germany) and then use the enable pin to shut them down when the USB voltage is present. I tried to design a discrete circuit with MOSFETs and transistors but that didn't work very well, I lack the experience. Does someone have tips or ideas on a good approach, or alternative MOSFET driver ICs? **Edit:** This was my general idea for a discrete circuit, but I didn't choose proper parts yet. When I simulate this in LTSpice, I get a huge spike for Vout when it switches from USB to Batt. (from ~4.7V down to 0V and back up to 4V) I think an output capacitor is needed to prevent that spike. ![schematic](https://i.stack.imgur.com/CiuCk.png) [simulate this circuit](/plugins/schematics?image=http%3a%2f%2fi.stack.imgur.com%2fCiuCk.png) – Schematic created using [CircuitLab](https://www.circuitlab.com/)
2017/08/12
[ "https://electronics.stackexchange.com/questions/323640", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/152813/" ]
Use this circuit; when USB is active, it takes power from USB. ![schematic](https://i.stack.imgur.com/J91m9.png) [simulate this circuit](/plugins/schematics?image=http%3a%2f%2fi.stack.imgur.com%2fJ91m9.png) – Schematic created using [CircuitLab](https://www.circuitlab.com/)
I think this will work. The only drawback is that maybe M1 and M2 will turn on just a little bit when VBATT is high, even though VBUS is present. This could lead to the batteries discharging just a bit. They will stop discharging when the voltage drops a little. This can be partially controlled by selecting M1 and M2 with a medium-range Vgs(th). An alternative is a more complicated arrangement for switching the gates of M1 and M2. Maybe a buffer powered directly from VBAT, with VBUS as the input. This could ensure that the gates get fully turned off when VBUS is present. ![schematic](https://i.stack.imgur.com/dmkSh.png) [simulate this circuit](/plugins/schematics?image=http%3a%2f%2fi.stack.imgur.com%2fdmkSh.png) – Schematic created using [CircuitLab](https://www.circuitlab.com/) R1 is just there to limit battery charge current in the event M1 or M2 should fail. You don't want to charge alkaline batteries. You could bump it up to 100k if you like. Or even 1M.
323,640
I want to supply a microcontroller with 4x AA alkaline batteries or via USB, with the USB source being dominant. Meaning when USB is on, the batteries should be disconnected. For the USB path, I intend to use a Schottky diode. For the battery path, I initially thought about using a "ORing" IC or one of those "powerpath" controllers. The problem with most of the ones I've seen so far is, that they only have one integrated FET, so there could be a reverse current flowing into the battery when the batteries drop below the USB voltage and the forward voltage of the FET body diode. I want to use two back to back FETs with common source, but I'm not sure what driver would be suitable. One way I thought of would be to use a MOSFET driver like MAX1614 and let it drive two FETs. But the MAX1614 only goes down to 5V Vin which would be too high. Something like 4V would be better to get the most out of the batteries. Vin is between 4V and at least >7V, if possible >12V. But most of the drivers I find are for lowside configurations and some barely go above 5.5V. Then there would be ones like the MAX5048 or the MAX15070 that go down to 4V. Or I could use an ORing controller with external FETs, something like a LTC4412 or LTC4359 (looks like the 4359 is hard to get in Germany) and then use the enable pin to shut them down when the USB voltage is present. I tried to design a discrete circuit with MOSFETs and transistors but that didn't work very well, I lack the experience. Does someone have tips or ideas on a good approach, or alternative MOSFET driver ICs? **Edit:** This was my general idea for a discrete circuit, but I didn't choose proper parts yet. When I simulate this in LTSpice, I get a huge spike for Vout when it switches from USB to Batt. (from ~4.7V down to 0V and back up to 4V) I think an output capacitor is needed to prevent that spike. ![schematic](https://i.stack.imgur.com/CiuCk.png) [simulate this circuit](/plugins/schematics?image=http%3a%2f%2fi.stack.imgur.com%2fCiuCk.png) – Schematic created using [CircuitLab](https://www.circuitlab.com/)
2017/08/12
[ "https://electronics.stackexchange.com/questions/323640", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/152813/" ]
Use this circuit; when USB is active, it takes power from the USB supply. ![schematic](https://i.stack.imgur.com/J91m9.png) [simulate this circuit](/plugins/schematics?image=http%3a%2f%2fi.stack.imgur.com%2fJ91m9.png) – Schematic created using [CircuitLab](https://www.circuitlab.com/)
A quick simulation suggests this kind of circuit should work. The FETs take a little time to turn on after the USB voltage goes away, so you need a bulk capacitor to supply current during that gap. Make sure that the P-FET conducts with a 5 V V\_GS. [![](https://i.stack.imgur.com/xeCkf.png)](https://i.stack.imgur.com/xeCkf.png)
323,640
I want to supply a microcontroller with 4x AA alkaline batteries or via USB, with the USB source being dominant. Meaning when USB is on, the batteries should be disconnected. For the USB path, I intend to use a Schottky diode. For the battery path, I initially thought about using a "ORing" IC or one of those "powerpath" controllers. The problem with most of the ones I've seen so far is, that they only have one integrated FET, so there could be a reverse current flowing into the battery when the batteries drop below the USB voltage and the forward voltage of the FET body diode. I want to use two back to back FETs with common source, but I'm not sure what driver would be suitable. One way I thought of would be to use a MOSFET driver like MAX1614 and let it drive two FETs. But the MAX1614 only goes down to 5V Vin which would be too high. Something like 4V would be better to get the most out of the batteries. Vin is between 4V and at least >7V, if possible >12V. But most of the drivers I find are for lowside configurations and some barely go above 5.5V. Then there would be ones like the MAX5048 or the MAX15070 that go down to 4V. Or I could use an ORing controller with external FETs, something like a LTC4412 or LTC4359 (looks like the 4359 is hard to get in Germany) and then use the enable pin to shut them down when the USB voltage is present. I tried to design a discrete circuit with MOSFETs and transistors but that didn't work very well, I lack the experience. Does someone have tips or ideas on a good approach, or alternative MOSFET driver ICs? **Edit:** This was my general idea for a discrete circuit, but I didn't choose proper parts yet. When I simulate this in LTSpice, I get a huge spike for Vout when it switches from USB to Batt. (from ~4.7V down to 0V and back up to 4V) I think an output capacitor is needed to prevent that spike. ![schematic](https://i.stack.imgur.com/CiuCk.png) [simulate this circuit](/plugins/schematics?image=http%3a%2f%2fi.stack.imgur.com%2fCiuCk.png) – Schematic created using [CircuitLab](https://www.circuitlab.com/)
2017/08/12
[ "https://electronics.stackexchange.com/questions/323640", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/152813/" ]
Use this circuit; when USB is active, it takes power from the USB supply. ![schematic](https://i.stack.imgur.com/J91m9.png) [simulate this circuit](/plugins/schematics?image=http%3a%2f%2fi.stack.imgur.com%2fJ91m9.png) – Schematic created using [CircuitLab](https://www.circuitlab.com/)
You might check out the TI TPS22860. It's an integrated, low-leakage, high-side switch. Tie its input to the USB +5 VDC through a resistor. It's also pretty cheap.
323,640
I want to supply a microcontroller with 4x AA alkaline batteries or via USB, with the USB source being dominant. Meaning when USB is on, the batteries should be disconnected. For the USB path, I intend to use a Schottky diode. For the battery path, I initially thought about using a "ORing" IC or one of those "powerpath" controllers. The problem with most of the ones I've seen so far is, that they only have one integrated FET, so there could be a reverse current flowing into the battery when the batteries drop below the USB voltage and the forward voltage of the FET body diode. I want to use two back to back FETs with common source, but I'm not sure what driver would be suitable. One way I thought of would be to use a MOSFET driver like MAX1614 and let it drive two FETs. But the MAX1614 only goes down to 5V Vin which would be too high. Something like 4V would be better to get the most out of the batteries. Vin is between 4V and at least >7V, if possible >12V. But most of the drivers I find are for lowside configurations and some barely go above 5.5V. Then there would be ones like the MAX5048 or the MAX15070 that go down to 4V. Or I could use an ORing controller with external FETs, something like a LTC4412 or LTC4359 (looks like the 4359 is hard to get in Germany) and then use the enable pin to shut them down when the USB voltage is present. I tried to design a discrete circuit with MOSFETs and transistors but that didn't work very well, I lack the experience. Does someone have tips or ideas on a good approach, or alternative MOSFET driver ICs? **Edit:** This was my general idea for a discrete circuit, but I didn't choose proper parts yet. When I simulate this in LTSpice, I get a huge spike for Vout when it switches from USB to Batt. (from ~4.7V down to 0V and back up to 4V) I think an output capacitor is needed to prevent that spike. ![schematic](https://i.stack.imgur.com/CiuCk.png) [simulate this circuit](/plugins/schematics?image=http%3a%2f%2fi.stack.imgur.com%2fCiuCk.png) – Schematic created using [CircuitLab](https://www.circuitlab.com/)
2017/08/12
[ "https://electronics.stackexchange.com/questions/323640", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/152813/" ]
Use this circuit; when USB is active, it takes power from the USB supply. ![schematic](https://i.stack.imgur.com/J91m9.png) [simulate this circuit](/plugins/schematics?image=http%3a%2f%2fi.stack.imgur.com%2fJ91m9.png) – Schematic created using [CircuitLab](https://www.circuitlab.com/)
I routinely do this by using an isolated power source/regulator to power a low-side gate drive (any low-side gate drive method will work, nothing fancy here), with an optocoupler to float the control signal over to the now-floating gate driver.
15,919,815
I want to write a macro which converts all the shapes and pictures in my word document to inline with text. The code that I am using converts all shapes(which are edited by drawing tools) to inline with text, but the table which are previously converted to pictures(edited by picture tools) or any other pictures are not text wrapping it to inline with text. Pasting the code I am using ``` For Each oShp In ActiveDocument.Shapes oShp.Select Selection.ShapeRange.WrapFormat.Type = wdWrapInline Next oShp ```
2013/04/10
[ "https://Stackoverflow.com/questions/15919815", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1800271/" ]
The first answer was a good start, but there is a problem: `For Each` iterates over the collection, yet the collection itself is changed by `ConvertToInlineShape`. In other languages this throws a "collection has been modified" exception; here it just stops silently. To avoid this you need to either add each shape to another collection and iterate over that (see the sketch after this answer), or, as in the sample below, keep track of the index manually. ``` Sub InlineAllImages() Dim Index As Integer: Index = 1 ' Store the count as it will change each time. Dim NumberOfShapes As Integer: NumberOfShapes = ActiveDocument.Shapes.Count ' Break out if either all shapes have been checked or there are none left. Do While ActiveDocument.Shapes.Count > 0 And Index < NumberOfShapes + 1 With ActiveDocument.Shapes(Index) If .Type = msoPicture Then ' If the shape is a picture convert it to inline. ' It will be removed from the collection so don't increment the Index. .ConvertToInlineShape Else ' The shape is not a picture so increment the index (move to next shape). Index = Index + 1 End If End With Loop End Sub ```
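A sketch of the first option mentioned above, collecting the shapes into a separate `Collection` before converting, in case the manual index bookkeeping feels error-prone. The procedure name is mine, not from the original answer:

```
Sub InlinePicturesViaSnapshot()
    Dim snapshot As New Collection
    Dim shp As Shape

    ' Take a snapshot of the current shapes so that converting
    ' (which removes items from ActiveDocument.Shapes) cannot
    ' disturb the iteration.
    For Each shp In ActiveDocument.Shapes
        snapshot.Add shp
    Next shp

    ' Convert from the snapshot instead of the live collection.
    For Each shp In snapshot
        If shp.Type = msoPicture Then shp.ConvertToInlineShape
    Next shp
End Sub
```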
give this a go ``` For Each oShp In ActiveDocument.Shapes oShp.Select Selection.ShapeRange.ConvertToInlineShape Next oShp ```
15,919,815
I want to write a macro which converts all the shapes and pictures in my word document to inline with text. The code that I am using converts all shapes(which are edited by drawing tools) to inline with text, but the table which are previously converted to pictures(edited by picture tools) or any other pictures are not text wrapping it to inline with text. Pasting the code I am using ``` For Each oShp In ActiveDocument.Shapes oShp.Select Selection.ShapeRange.WrapFormat.Type = wdWrapInline Next oShp ```
2013/04/10
[ "https://Stackoverflow.com/questions/15919815", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1800271/" ]
The first answer was a good start but there is a problem. For Each iterates over the collection but the collection is changed by ConvertToInlineShape. In other languages this throws an exception that the collection has been modified, here it just stops silently. To avoid this you need to either add each shape to another collection and iterate over them there. Or in the case of the below sample just keep track of the index manually. ``` Sub InlineAllImages() Dim Shape As Shape Dim Index As Integer: Index = 1 ' Store the count as it will change each time. Dim NumberOfShapes As Integer: NumberOfShapes = Shapes.Count ' Break out if either all shapes have been checked or there are none left. Do While Shapes.Count > 0 And Index < NumberOfShapes + 1 With Shapes(Index) If .Type = msoPicture Then ' If the shape is a picture convert it to inline. ' It will be removed from the collection so don't increment the Index. .ConvertToInlineShape Else ' The shape is not a picture so increment the index (move to next shape). Index = Index + 1 End If End With Loop End Sub ```
I think selecting each iterated shape is unnecessary and may be getting in the way. For anyone who still needs an answer, this works for me: ``` For Count = 1 To 2 For Each oShp In ActiveDocument.Shapes oShp.ConvertToInlineShape Next oShp Next Count ``` The first pass of the outer loop tackles the pictures and the second pass tackles the drawing objects. Don't ask why!
15,919,815
I want to write a macro which converts all the shapes and pictures in my word document to inline with text. The code that I am using converts all shapes(which are edited by drawing tools) to inline with text, but the table which are previously converted to pictures(edited by picture tools) or any other pictures are not text wrapping it to inline with text. Pasting the code I am using ``` For Each oShp In ActiveDocument.Shapes oShp.Select Selection.ShapeRange.WrapFormat.Type = wdWrapInline Next oShp ```
2013/04/10
[ "https://Stackoverflow.com/questions/15919815", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1800271/" ]
The first answer was a good start but there is a problem. For Each iterates over the collection but the collection is changed by ConvertToInlineShape. In other languages this throws an exception that the collection has been modified, here it just stops silently. To avoid this you need to either add each shape to another collection and iterate over them there. Or in the case of the below sample just keep track of the index manually. ``` Sub InlineAllImages() Dim Shape As Shape Dim Index As Integer: Index = 1 ' Store the count as it will change each time. Dim NumberOfShapes As Integer: NumberOfShapes = Shapes.Count ' Break out if either all shapes have been checked or there are none left. Do While Shapes.Count > 0 And Index < NumberOfShapes + 1 With Shapes(Index) If .Type = msoPicture Then ' If the shape is a picture convert it to inline. ' It will be removed from the collection so don't increment the Index. .ConvertToInlineShape Else ' The shape is not a picture so increment the index (move to next shape). Index = Index + 1 End If End With Loop End Sub ```
```
For i = 1 To ActiveDocument.Shapes.Count
    ActiveDocument.Shapes(1).ConvertToInlineShape 'the index stays at 1 because the count decreases on every conversion
Next
```
56,961,594
I'm curious what `x` means when you look at this: ``` import random for x in range(10): print random.randint(1,101) ```
2019/07/09
[ "https://Stackoverflow.com/questions/56961594", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11512753/" ]
`x` itself has no special meaning, it simply (as a part of the `for` loop) provides a way to repeat ``` print random.randint(1,101) ``` 10 times, regardless of the variable name (i.e., `x` could be, say, `n`). In each iteration the value of `x` keeps increasing, but we don't use it. On the other hand, e.g., ``` for x in range(3): print(x) ``` would give ``` 0 1 2 ```
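Since the loop variable is not used inside the body, a common Python convention (just a style note, not required) is to name it `_` to signal that it is intentionally ignored:

```python
import random

# The underscore marks the loop variable as intentionally unused.
for _ in range(10):
    print(random.randint(1, 101))
```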
`x` is a variable name, so it could be any name Python allows for variables. Its value changes on every iteration of the loop: with `range(10)` it starts at 0, then becomes 1, then 2, and so on up to 9 (it never reaches 10). If you just want to print a random int on each pass: ``` for x in range(10): print(random.randint(1, 101)) ``` Also, if this is Python 3.x it is `print(x)`, not `print x`; the second form is Python 2.x.
56,961,594
I'm curious what `x` means when you look at this: ``` import random for x in range(10): print random.randint(1,101) ```
2019/07/09
[ "https://Stackoverflow.com/questions/56961594", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11512753/" ]
`x` itself has no special meaning, it simply (as a part of the `for` loop) provides a way to repeat ``` print random.randint(1,101) ``` 10 times, regardless of the variable name (i.e., `x` could be, say, `n`). In each iteration the value of `x` keeps increasing, but we don't use it. On the other hand, e.g., ``` for x in range(3): print(x) ``` would give ``` 0 1 2 ```
`for x in range(3)` simply means "for each value of x in range(3)", where `range(3)` yields 0, 1, 2. Because it is `range(3)`, the loop body runs three times, and on each pass the value of `x` becomes 0, then 1, then 2.
56,961,594
I'm curious what `x` means when you look at this: ``` import random for x in range(10): print random.randint(1,101) ```
2019/07/09
[ "https://Stackoverflow.com/questions/56961594", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11512753/" ]
`x` itself has no special meaning, it simply (as a part of the `for` loop) provides a way to repeat ``` print random.randint(1,101) ``` 10 times, regardless of the variable name (i.e., `x` could be, say, `n`). In each iteration the value of `x` keeps increasing, but we don't use it. On the other hand, e.g., ``` for x in range(3): print(x) ``` would give ``` 0 1 2 ```
Here, `x` is just a variable name used to hold the current position in the range, and it steps through the whole range as the loop runs. In `for x in range(10)` the body runs 10 times: during the first iteration x = 0, then x = 1 on the next iteration, then x = 2, and so on up to x = 9. It is not necessary to use `x` in particular; you can use any variable name, such as `i`, `a`, etc.
56,961,594
I'm curious what `x` means when you look at this: ``` import random for x in range(10): print random.randint(1,101) ```
2019/07/09
[ "https://Stackoverflow.com/questions/56961594", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11512753/" ]
`x` itself has no special meaning, it simply (as a part of the `for` loop) provides a way to repeat ``` print random.randint(1,101) ``` 10 times, regardless of the variable name (i.e., `x` could be, say, `n`). In each iteration the value of `x` keeps increasing, but we don't use it. On the other hand, e.g., ``` for x in range(3): print(x) ``` would give ``` 0 1 2 ```
I am coming from a short intro to C, and I was confused by the `x` as well. For those coming from C, C++, C#, etc.: `for x in range(10)` is the same thing as doing this in C: `for (x = 0; x < 10; x++)`
40,729,525
How do I configure vim to open a default file (in my case, `~/Desktop/now.md`) if the `vim` command is invoked without arguments on the command-line?
2016/11/21
[ "https://Stackoverflow.com/questions/40729525", "https://Stackoverflow.com", "https://Stackoverflow.com/users/742/" ]
You're asking the user for a date in the format `YYYY-MM-DD` but then trying to parse it according to this format `%d/%m/%y`. You instead should parse the string in the same way that you requested it, [`%Y-%m-%d`](https://docs.python.org/2/library/datetime.html#strftime-and-strptime-behavior)
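For instance, a minimal sketch of parsing the string in the same format it was requested in (the variable name and sample value here are illustrative, not taken from the original code):

```python
import datetime

date_visit = "2016-11-23"  # what the user typed, in YYYY-MM-DD form
parsed = datetime.datetime.strptime(date_visit, "%Y-%m-%d")
print(parsed)  # 2016-11-23 00:00:00
```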
You're looking for format `%d/%m/%y` and asking for `%Y-%m-%d` ``` > date_visit = '2016-11-23' > datetime.datetime.strptime(date_visit, "%Y-%m-%d") datetime.datetime(2016, 11, 23, 0, 0) ``` Some notes: * `%Y` : Year in four digits `%y` : year in two digits. * `%d/%m/%y` translates to *"day of month in one or two digits, `/`, month of year in one or two digits, `/` year in two digits".* * `%Y-%m-%d` translates to *"four-digit-year, `-`, month-of-year, `-` day-of-month"*
40,729,525
How do I configure vim to open a default file (in my case, `~/Desktop/now.md`) if the `vim` command is invoked without arguments on the command-line?
2016/11/21
[ "https://Stackoverflow.com/questions/40729525", "https://Stackoverflow.com", "https://Stackoverflow.com/users/742/" ]
You're asking the user for a date in the format `YYYY-MM-DD` but then trying to parse it according to this format `%d/%m/%y`. You instead should parse the string in the same way that you requested it, [`%Y-%m-%d`](https://docs.python.org/2/library/datetime.html#strftime-and-strptime-behavior)
Your prompt asks the user to enter the date as `YYYY-MM-DD`, but your `strptime` is attempting to use the format `%d/%m/%y`. The formats need to match for `strptime` to work ``` import datetime d = '2016-11-21' d = datetime.datetime.strptime(d, "%Y-%m-%d") d = datetime.datetime(2016, 11, 21, 0, 0) >>> 2016-11-21 00:00:00 d = '11/21/2016' d = datetime.datetime.strptime(d, "%m/%d/%Y") d = datetime.datetime(2016, 11, 21, 0, 0) >>> 2016-11-21 00:00:00 ``` I personally like to use the `python-dateutil` module for parsing date strings, as it allows for many different formats > > pip install python-dateutil > > > ``` from dateutil.parser import parse d1 = 'Tuesday, October 21 2003, 12:14 CDT' d2 = 'Dec. 23rd of 2012 at 12:34pm' d3 = 'March 4th, 2016' d4 = '2015-12-09' print(parse(d1)) >>> 2003-10-21 12:14:00 print(parse(d2)) >>> 2012-12-23 12:34:00 print(parse(d3)) >>> 2016-03-04 00:00:00 print(parse(d4)) >>> 2015-12-09 00:00:00 ```
57,076,530
I need to convert this: ``` prev = {in_progress: 1, todo: 3, done: 1} ``` Into this ``` output = {[ ['Status', 'Count'], ['Todo', 3], ['In Progress', 1], ['Done', 1] ]} ``` When I'm trying to assign `prev` to a variable, it gets assigned. ``` let a = {in_progress: 1, todo: 3, done: 1} ``` Success. But when im trying to assign `output` to a variable, it throws an error ``` let a = {[ ['Status', 'Count'], ['Todo', 1], ['In Progress', 2], ['Done', 3] ]} Error: Uncaught SyntaxError: Unexpected token , ``` Can you explain the reason behind this?
2019/07/17
[ "https://Stackoverflow.com/questions/57076530", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9015957/" ]
The problem is that `{[ ['Status', 'Count'], ['Todo', 3], ['In Progress', 1], ['Done', 1] ]}` is not a valid object literal. It's just curly braces around a list of lists. Object literals have to have key: value pairs. If you wanted to turn it into an object, it would look like `{ 'Status': 'Count', 'Todo': 3, 'In Progress': 1, 'Done': 1 }`. If you already have the list of lists and want to turn it into an object programmatically, you can do that with `reduce`: ``` // Note: no braces let kvlist = [ ['Status', 'Count'], ['Todo', 3], ['In Progress', 1], ['Done', 1] ]; let obj = kvlist.reduce((h,[k,v]) => { h[k] = v; return h },{}); ```
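As a usage note (assuming an ES2019+ environment, since `Object.fromEntries` is not mentioned in the answer above): the same list-of-pairs to object conversion can be written without `reduce`:

```js
const kvlist = [ ['Status', 'Count'], ['Todo', 3], ['In Progress', 1], ['Done', 1] ];

// Same result as the reduce() version:
// { Status: 'Count', Todo: 3, 'In Progress': 1, Done: 1 }
const obj = Object.fromEntries(kvlist);
console.log(obj);
```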
You could get the entries and map the formatted key with the value. ```js const format = s => s.split('_').map(([c, ...s]) => [c.toUpperCase(), ...s].join('')).join(' '); var prev = { in_progress: 1, todo: 3, done: 1 }, result = [ ['Status', 'Count'], ...Object.entries(prev).map(([k, v]) => [format(k), v]) ]; console.log(result); ```
1,792,397
Is there a way to back up a mercurial repository while preserving the files' timestamps? Right now, I'm using `hg clone` to copy the repository to a staging directory, and the backup program picks up the files from there. I'm not pointing the backup program directly at the repository because I don't want it to be changing (from commits) while the backup is happening. The problem is that `hg clone` changes all the files' timestamps to the current time, so the backup program (which I cannot change) thinks everything has been modified.
2009/11/24
[ "https://Stackoverflow.com/questions/1792397", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6160/" ]
I suggest using `hg pull` instead of `hg clone`. That way you'll keep a mirror of the repository on your server and update it periodically with `hg pull`. You then let your backup program take a backup of *that*. When you use `hg pull` you will transfer the newest history and only the changed files under `.hg/store/data` which were actually affected by the pull. Here I tested this by making a small repo with two files: `a.txt` and `b.txt`. I then cloned the repository "to the server" using `hg clone --noupdate`. That ensures that we have no working copy on the server -- it only needs the history found in `.hg`. The timestamps looked like this after the clone: ``` % ll --time-style=full .hg/store/data total 8.0K -rw-r--r-- 1 mg mg 76 2009-11-25 20:07:52.000000000 +0100 a.txt.i -rw-r--r-- 1 mg mg 69 2009-11-25 20:07:52.000000000 +0100 b.txt.i ``` As you noted, they are all identical since the files were all just created by the clone operation. I then changed the original repository (the one on the client) and made a commit. After pulling the changeset I got these timestamps: ``` % ll --time-style=full .hg/store/data total 8.0K -rw-r--r-- 1 mg mg 159 2009-11-25 20:08:47.000000000 +0100 a.txt.i -rw-r--r-- 1 mg mg 69 2009-11-25 20:07:52.000000000 +0100 b.txt.i ``` Notice how the timestamp for `a.txt.i` has been updated (I only touched `a.txt` in my commit) while the timestamp for `b.txt.i` has been left alone. If your backup software is smart, it will even notice that Mercurial has only appended data to `a.txt.i`. This means that the new `a.txt.i` file is identical to the old `a.txt.i` file up to a certain point -- the backup program should therefore only copy the final part of the file. Rsync is an example of a backup program that will notice this.
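A minimal sketch of the workflow described above (the paths and scheduling are assumptions for illustration, not from the original answer):

```sh
# One-time: create a mirror without a working copy on the backup staging host.
hg clone --noupdate /path/to/repo /backup/staging/repo

# Periodically (e.g. from cron): refresh the mirror, then let the backup tool pick it up.
hg pull -R /backup/staging/repo /path/to/repo
```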
**Plan A:** When the source and destination directories reside on the *same file system*, `hg clone -U` would simply hardlink all the files in the repository, without changing timestamps. This approach is quite fast and always safe (files are unlinked lazily when written to). If you need to, you can clone on the same file system first, and then rsync this new clone over to another file system. **Plan B:** It's *usually* safe to use rsync or some other file-based synchronization tool. Mercurial doesn't store anything magical on disk, just plain files. There is a race condition if you happen to commit to this repository at the same time rsync is running, but I think it's negligible because a "`hg rollback`" should be able to clean up such inconsistencies if you restore from a broken backup. Do note that rollback *cannot* recover if you had multiple separate transactions (such as multiple "push" or "commit" commands) in the rsync window, or ran destructive operations that tamper with history (such as rebase, `hg strip`, and some MQ commands).
1,792,397
Is there a way to back up a mercurial repository while preserving the files' timestamps? Right now, I'm using `hg clone` to copy the repository to a staging directory, and the backup program picks up the files from there. I'm not pointing the backup program directly at the repository because I don't want it to be changing (from commits) while the backup is happening. The problem is that `hg clone` changes all the files' timestamps to the current time, so the backup program (which I cannot change) thinks everything has been modified.
2009/11/24
[ "https://Stackoverflow.com/questions/1792397", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6160/" ]
**Plan A:** When the source and destination directories reside on the *same file system*, `hg clone -U` would simply hardlink all the files in the repository, without changing timestamps. This approach is quite fast and always safe (files are unlinked lazily when written to). If you need to, you can clone on the same file system first, and then rsync this new clone over to another file system. **Plan B:** It's *usually* safe to use rsync or some other file-based synchronization tool. Mercurial doesn't store anything magical on disk, just plain files. There is a race condition if you happen to commit to this repository at the same time rsync is running, but I think it's negligible because a "`hg rollback`" should be able to clean up such inconsistencies if you restore from a broken backup. Do note that rollback *cannot* recover if you had multiple separate transactions (such as multiple "push" or "commit" commands) in the rsync window, or ran destructive operations that tamper with history (such as rebase, `hg strip`, and some MQ commands).
Here's a hg extension that might help: [TimestampExtension](https://www.mercurial-scm.org/wiki/TimestampExtension).
1,792,397
Is there a way to back up a mercurial repository while preserving the files' timestamps? Right now, I'm using `hg clone` to copy the repository to a staging directory, and the backup program picks up the files from there. I'm not pointing the backup program directly at the repository because I don't want it to be changing (from commits) while the backup is happening. The problem is that `hg clone` changes all the files' timestamps to the current time, so the backup program (which I cannot change) thinks everything has been modified.
2009/11/24
[ "https://Stackoverflow.com/questions/1792397", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6160/" ]
I suggest using `hg pull` instead of `hg clone`. That way you'll keep a mirror of the repository on your server and update it periodically with `hg pull`. You then let your backup program take a backup of *that*. When you use `hg pull` you will transfer the newest history and only the changed files under `.hg/store/data` which were actually affected by the pull. Here I tested this by making a small repo with two files: `a.txt` and `b.txt`. I then cloned the repository "to the server" using `hg clone --noupdate`. That ensures that we have no working copy on the server -- it only needs the history found in `.hg`. The timestamps looked like this after the clone: ``` % ll --time-style=full .hg/store/data total 8.0K -rw-r--r-- 1 mg mg 76 2009-11-25 20:07:52.000000000 +0100 a.txt.i -rw-r--r-- 1 mg mg 69 2009-11-25 20:07:52.000000000 +0100 b.txt.i ``` As you noted, they are all identical since the files were all just created by the clone operation. I then changed the original repository (the one on the client) and made a commit. After pulling the changeset I got these timestamps: ``` % ll --time-style=full .hg/store/data total 8.0K -rw-r--r-- 1 mg mg 159 2009-11-25 20:08:47.000000000 +0100 a.txt.i -rw-r--r-- 1 mg mg 69 2009-11-25 20:07:52.000000000 +0100 b.txt.i ``` Notice how the timestamp for `a.txt.i` has been updated (I only touched `a.txt` in my commit) while the timestamp for `b.txt.i` has been left alone. If your backup software is smart, it will even notice that Mercurial has only appended data to `a.txt.i`. This means that the new `a.txt.i` file is identical to the old `a.txt.i` file up to a certain point -- the backup program should therefore only copy the final part of the file. Rsync is an example of a backup program that will notice this.
Here's a hg extension that might help: [TimestampExtension](https://www.mercurial-scm.org/wiki/TimestampExtension).
1,780,496
My understanding is limited and I'm trying to learn more about how homotopy forms the notion of equivalence. I can grasp its definition as "continuous", but my understanding of homotopy falls away in spaces that are discrete. How does the mapping actually work?
2016/05/11
[ "https://math.stackexchange.com/questions/1780496", "https://math.stackexchange.com", "https://math.stackexchange.com/users/257252/" ]
Two discrete spaces are homotopy equivalent if and only if they have the same number of points. Any function whose domain is discrete is automatically continuous, and two functions $X \to Y$ for discrete $Y$ are homotopic if and only if they are equal. (I'm not sure I understood your question, let me know if that clears things up or not.)
Old question, but let me cast some light (or maybe not) for future readers. Instead of looking at homotopy in discrete spaces, it works better to define a homotopy between graphs; that way you can identify the shape, or at least the connections, between discrete objects. Graphs inherit a topology through their connection to groups, so it shouldn't be that hard (not that I would ever call anything involving graphs easy, which I cannot). As I said before, this is just meant to cast some light on the subject.
938,647
I am developing a CMS based on Zend Framework which has many modules, for example News and Gallery. Each module has some parts with the same function, such as managing categories and comments (the categories and comments for News, Photos, and Albums - which come from the News and Gallery modules - are kept separate). Can somebody give me advice on how to avoid writing duplicate code? Thanks.
2009/06/02
[ "https://Stackoverflow.com/questions/938647", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
Write classes to abstract the logic to a central source file. Basically, use encapsulation.
Separate the part of your application that should be reused. It's a good strategy to begin by copy-pasting, and only refactor it out into a shared component once you have at least two concrete uses for an abstraction.
49,159,420
Is it possible to get the permissions of a folder and its sub-folders then display the path, group, and users associated to that group? So, to look something like this. Or will it have to be one folder at a time. **`-Folder1`** ``` -Line separator -Group -Line separator -List of users ``` **`-Folder2`** ``` -Line separator -Group -Line separator -List of users ``` The script I've come up with so far be warned I have very little experience with powershell. (Don't worry my boss knows.) ``` Param([string]$filePath) $Version=$PSVersionTable.PSVersion if ($Version.Major -lt 3) {Throw "Powershell version out of date. Please update powershell." } Get-ChildItem $filePath -Recurse | Get-Acl | where { $_.Access | where { $_.IsInherited -eq $false } } | select -exp Access | select IdentityReference -Unique | Out-File .\Groups.txt $Rspaces=(Get-Content .\Groups.txt) -replace 'JAC.*?\\|','' | Where-Object {$_ -notmatch 'BUILTIN|NT AUTHORITY|CREATOR|-----|Identity'} | ForEach-Object {$_.TrimEnd()} $Rspaces | Out-File .\Groups.txt $ErrorActionPreference= 'SilentlyContinue' $Groups=Get-Content .\Groups.txt ForEach ($Group in $Groups) {Write-Host"";$Group;Write-Host -------------- ;Get-ADGroupMember -Identity $Group -Recursive | Get-ADUser -Property DisplayName | Select Name} ``` This only shows the groups and users, but not the path.
2018/03/07
[ "https://Stackoverflow.com/questions/49159420", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9458089/" ]
Ok, let's take it from the top! Excellent, you actually declare a parameter. What you might want to consider is setting a default value for the parameter. What I would do is use the current directory, which conveniently has an automatic variable `$PWD` (I believe that's short for PowerShell Working Directory). ``` Param([string]$filePath = $PWD) ``` Now if a path is provided it will use that, but if no path is provided it just uses the current folder as a default value. Version check is fine. I'm pretty sure there's more elegant ways to do it, but I honestly don't have never done any version checking. Now you are querying AD for each group and user that is found (after some filtering, granted). I would propose that we keep track of groups and members so that we only have to query AD once for each one. It may not save a lot of time, but it'll save some if any group is used more than once. So for that purpose we're going to make an empty hashtable to track groups and their members. ``` $ADGroups = @{} ``` Now starts a bad trend... writing to files and then reading those files back in. Outputting to a file is fine, or saving configurations, or something that you'll need again outside of the current PowerShell session, but writing to a file just to read it back into the current session is just a waste. Instead you should either save the results to a variable, or work with them directly. So, rather than getting the folder listing, piping it directly into `Get-Acl`, and losing the paths we're going to do a `ForEach` loop on the folders. Mind you, I added the `-Directory` switch so it will only look at folders and ignore files. This happens at the provider level, so you will get much faster results from `Get-ChildItem` this way. ``` ForEach($Folder in (Get-ChildItem $filePath -Recurse -Directory)){ ``` Now, you wanted to output the path of the folder, and a line. That's easy enough now that we aren't ditching the folder object: ``` $Folder.FullName '-'*$Folder.FullName.Length ``` Next we get the ACLs for the current folder: ``` $ACLs = Get-Acl -Path $Folder.FullName ``` And here's where things get complicated. I'm getting the group names from the ACLs, but I've combined a couple of your `Where` statements, and also added a check to see if it is an Allow rule (because including Deny rules in this would just be confusing). I've used `?` which is an alias for `Where`, as well as `%` which is an alias for `ForEach-Object`. You can have a natural line brake after a pipe, so I've done that for ease of reading. I included comments on each line for what I'm doing, but if any of it is confusing just let me know what you need clarification on. ``` $Groups = $ACLs.Access | #Expand the Access property ?{ $_.IsInherited -eq $false -and $_.AccessControlType -eq 'Allow' -and $_.IdentityReference -notmatch 'BUILTIN|NT AUTHORITY|CREATOR|-----|Identity'} | #Only instances that allow access, are not inherited, and aren't a local group or special case %{$_.IdentityReference -replace 'JAC.*?\\'} | #Expand the IdentityReference property, and replace anything that starts with JAC all the way to the first backslash (likely domain name trimming) Select -Unique #Select only unique values ``` Now we'll loop through the groups, starting off by outputting the group name and a line. ``` ForEach ($Group in $Groups){ $Group '-'*$Group.Length ``` For each group I'll see if we already know who's in it by checking the list of keys on the hashtable. 
If we don't find the group there we'll query AD and add the group as a key, and the members as the associated value. ``` If($ADGroups.Keys -notcontains $Group){ $Members = Get-ADGroupMember $Group -Recursive -ErrorAction Ignore | % Name $ADGroups.Add($Group,$Members) } ``` Now that we're sure that we have the group members we will display them. ``` $ADGroups[$Group] ``` We can close the `ForEach` loop pertaining to groups, and since this is the end of the loop for the current folder we'll add a blank line to the output, and close that loop as well ``` } "`n" } ``` So I wrote this up and then ran it against my C:\temp folder. It did tell me that I need to clean up that folder, but more importantly it showed me that most of the folders don't have any non-inherited permissions, so it would just give me the path with an underline, a blank line, and move to the next folder so I had a ton of things like: ``` C:\Temp\FolderA --------------- C:\Temp\FolderB --------------- C:\Temp\FolderC --------------- ``` That doesn't seem useful to me. If it is to you then use the lines above as I have them. Personally I chose to get the ACLs, check for groups, and then if there are no groups move to the next folder. The below is the product of that. ``` Param([string]$filePath = $PWD) $Version=$PSVersionTable.PSVersion if ($Version.Major -lt 3) {Throw "Powershell version out of date. Please update powershell." } #Create an empty hashtable to track groups $ADGroups = @{} #Get a recursive list of folders and loop through them ForEach($Folder in (Get-ChildItem $filePath -Recurse -Directory)){ # Get ACLs for the folder $ACLs = Get-Acl -Path $Folder.FullName #Do a bunch of filtering to just get AD groups $Groups = $ACLs | % Access | #Expand the Access property where { $_.IsInherited -eq $false -and $_.AccessControlType -eq 'Allow' -and $_.IdentityReference -notmatch 'BUILTIN|NT AUTHORITY|CREATOR|-----|Identity'} | #Only instances that allow access, are not inherited, and aren't a local group or special case %{$_.IdentityReference -replace 'JAC.*?\\'} | #Expand the IdentityReference property, and replace anything that starts with JAC all the way to the first backslash (likely domain name trimming) Select -Unique #Select only unique values #If there are no groups to display for this folder move to the next folder If($Groups.Count -eq 0){Continue} #Display Folder Path $Folder.FullName #Put a dashed line under the folder path (using the length of the folder path for the length of the line, just to look nice) '-'*$Folder.FullName.Length #Loop through each group and display its name and users ForEach ($Group in $Groups){ #Display the group name $Group #Add a line under the group name '-'*$Group.Length #Check if we already have this group, and if not get the group from AD If($ADGroups.Keys -notcontains $Group){ $Members = Get-ADGroupMember $Group -Recursive -ErrorAction Ignore | % Name $ADGroups.Add($Group,$Members) } #Display the group members $ADGroups[$Group] } #output a blank line, for some seperation between folders "`n" } ```
I have managed to get this working for me. I edited the below section to show the Name and Username of the user. ``` $Members = Get-ADGroupMember $Group -Recursive -ErrorAction Ignore | % Name | Get-ADUser -Property DisplayName | Select-Object DisplayName,Name | Sort-Object DisplayName ``` This works really well, but would there be a way to get it to stop listing the same group access if it's repeated down the folder structure? For example, "\example1\example2" was assigned a group called "group1" and we had the following folder structure: ``` \\example1\example2\folder1 \\example1\example2\folder2 \\example1\example2\folder1\randomfolder \\example1\example2\folder2\anotherrandomfolder ``` All the folders are assigned the group "group1", and the current code will list each directory's group and users, even though it's the same. Would it be possible to get it to only list the group and users once if it's repeated down the directory structure? The `-notcontains` doesn't seem to work for me If that makes sense?
55,132,453
[![sample of the target](https://i.stack.imgur.com/ULXR2.png)](https://i.stack.imgur.com/ULXR2.png) I have been trying to achieve something like the above in ionic 4 but it seems like there is no hope for me cos it seems I can only use inputs and not a custom HTML & icons been passed on the alert. any idea on how to achieve this pls ```js async presentColor() { const alert = await this.alertController.create({ header: "Choose Color", inputs: [ { name: "Red", type: "checkbox", label: "Red", value: "Red", checked: true }, { name: "Black", type: "checkbox", label: "Black", value: "Black" }, { name: "purple", type: "checkbox", label: "Purple", value: "Purple" } ], buttons: [ { text: "Cancel", role: "cancel", cssClass: "secondary", handler: (data) => { console.log("Confirm Cancel", data); } }, { text: "Ok", handler: () => { console.log("Confirm Ok"); } } ] }); ```
2019/03/13
[ "https://Stackoverflow.com/questions/55132453", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7468924/" ]
The Ionic team has not made the alert component easily customizable, so icons cannot simply be added to it. See this issue: <https://github.com/ionic-team/ionic/issues/7874> But you could easily create a modal component instead and reduce its size to be closer to that of the alert dialog box.
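A rough sketch of that modal approach for Ionic 4 with Angular (the component name, CSS class, and page class are assumptions for illustration; `ModalController` with its `create`/`onDidDismiss` calls is the standard `@ionic/angular` API):

```ts
import { ModalController } from '@ionic/angular';
// Assumed custom component that renders the color checkboxes with icons.
import { ColorPickerComponent } from './color-picker.component';

export class ProductPage {
  constructor(private modalController: ModalController) {}

  async presentColor() {
    const modal = await this.modalController.create({
      component: ColorPickerComponent,
      cssClass: 'alert-sized-modal' // shrink the modal via CSS so it looks like an alert
    });
    await modal.present();

    const { data } = await modal.onDidDismiss();
    console.log('Selected colors:', data);
  }
}
```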
Why do you want to use ion-alert? You can use other components like [popovers](https://ionicframework.com/docs/v3/components/#popovers) to do that.
55,132,453
[![sample of the target](https://i.stack.imgur.com/ULXR2.png)](https://i.stack.imgur.com/ULXR2.png) I have been trying to achieve something like the above in ionic 4 but it seems like there is no hope for me cos it seems I can only use inputs and not a custom HTML & icons been passed on the alert. any idea on how to achieve this pls ```js async presentColor() { const alert = await this.alertController.create({ header: "Choose Color", inputs: [ { name: "Red", type: "checkbox", label: "Red", value: "Red", checked: true }, { name: "Black", type: "checkbox", label: "Black", value: "Black" }, { name: "purple", type: "checkbox", label: "Purple", value: "Purple" } ], buttons: [ { text: "Cancel", role: "cancel", cssClass: "secondary", handler: (data) => { console.log("Confirm Cancel", data); } }, { text: "Ok", handler: () => { console.log("Confirm Ok"); } } ] }); ```
2019/03/13
[ "https://Stackoverflow.com/questions/55132453", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7468924/" ]
The Ionic team has not made the alert component easily customizable, so icons cannot simply be added to it. See this issue: <https://github.com/ionic-team/ionic/issues/7874> But you could easily create a modal component instead and reduce its size to be closer to that of the alert dialog box.
I know this is a bit late, but I just created an [npm package](https://www.npmjs.com/package/ionic-custom-alert) to handle adding an angular component into a popup.
42,262,282
I refactored my storyboard and now I'm unable to set the badge value on the refactored storyboard. This is the main storyboard, and MessageCenter is the refactored storyboard. [![enter image description here](https://i.stack.imgur.com/Qflvp.png)](https://i.stack.imgur.com/Qflvp.png) Message Center storyboard: [![enter image description here](https://i.stack.imgur.com/c2fM4.png)](https://i.stack.imgur.com/c2fM4.png) I'm setting the badge value in the app delegate, which isn't working: ``` if let tabBarController = self.window!.rootViewController as? UITabBarController { tabBarController.tabBar.items![2].badgeValue = "3" } ``` Any ideas?
2017/02/15
[ "https://Stackoverflow.com/questions/42262282", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5992282/" ]
Try with this code in your TabBarCustomViewController: ``` DispatchQueue.main.async(execute: { self.tabBar.items?[3].badgeValue = "3" }) ```
```
if let tabBarController = self.window?.rootViewController as? UITabBarController {
    tabBarController.viewControllers?[2].tabBarItem.badgeValue = "3"
}
```
46,897,375
I did overloading of '+' operator using friend functions for base and derived classes as below. ``` #include <iostream> using namespace std; class base { private: int x; public: base(int a) : x(a) { } void printx() { cout << "x : " << x << endl; } friend void operator+(int data, base &obj); }; void operator+(int data, base &obj) { cout << "in friend base" << endl; obj.x = data + obj.x; } class derived : public base { private: int y; public: derived(int a, int b) : base(a), y(b) { } void printy() { cout << "y : " << y << endl; } friend void operator+(int data, derived &obj); }; void operator+(int data, derived &obj) { cout << "in friend derived" << endl; operator+(data, obj.base); obj.y = data + obj.y; cout << "y in friend : " << obj.y << endl; } int main() { derived c(2, 3); 4 + c; c.printx(); c.printy(); } ``` But this is giving compilation error as below ``` inoverload+.cpp: In function ‘void operator+(int, derived&)’: inoverload+.cpp:51:25: error: invalid use of ‘base::base’ operator+(data, obj.base); ``` While the below program is getting successfully compiled. ``` #include <iostream> using namespace std; class base { public: int x; base(int a) : x(a) { } }; class d1 : public base { public: d1(int a) : base(a) { } }; class d2 : public base { public: d2(int a) : base(a) { } }; class derived : public d1, public d2 { public: derived(int a) : d1(a), d2(a) { } }; int main() { derived obj(2); cout << obj.d1::x << endl; } ``` Can any one please explain why I was not able to access the base part using derived class object in first program where as I am able to do the same in the same program. Note : I already tried with cast and it worked. But my question is if it is not correct to access base using obj.base how this is correct in second program obj.d1::x?
2017/10/23
[ "https://Stackoverflow.com/questions/46897375", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3137388/" ]
The dot operator is for *member access*. But there is not a `base` member in `derived`, so `obj.base` is ill-formed. To obtain a base class reference explicitly from a derived object, you need to cast: ``` operator+(data, static_cast<base&>(obj)); ``` In the second program `obj.d1::x` does not access the *member* `d1`. It accesses the member `x`. However, to resolve ambiguity, C++ allows to use the scope resolution operator to disambiguate in which unique base class sub-object this member is found.
The line `operator+(data, obj.base);` implies that `obj` has a member named `base`, which is not true: the base portion of a derived object is not represented as a member. To make overload resolution treat your object as its base type, you can cast it to a reference to its base class. The compiler must then treat it as if its type were strictly the base type for that function call. Try the following: ``` void operator+(int data, derived &obj) { cout << "in friend derived" << endl; operator+(data, static_cast<base&>(obj)); // Cast happens here obj.y = data + obj.y; cout << "y in friend : " << obj.y << endl; } ```
59,695,746
I have following string 1,2,3,a,b,c,a,b,c,1,2,3,c,b,a,2,3,1, I would like to get only the first occurrence of any number without changing the order. This would be 1,2,3,a,b,c, With this regex (found @ <https://stackoverflow.com/a/29480898/9307482>) I can find them, but only the last occurrences. And this reverses the order. `(\w)(?!.*?\1)` (<https://regex101.com/r/3fqpu9/1>) It doesn't matter if the regex ignores the comma. The order is important.
2020/01/11
[ "https://Stackoverflow.com/questions/59695746", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9307482/" ]
Regular expressions are not meant for this purpose. You will need an index filter or a `Set` over the array of characters. Since you haven't specified a language, I assume you are using JavaScript. Example modified from: <https://stackoverflow.com/a/14438954/1456201> ```js String.prototype.uniqueChars = function() { return [...new Set(this)]; } var unique = "1,2,3,a,b,c,a,b,c,1,2,3,c,b,a,2,3,1,".split(",").join('').uniqueChars(); console.log(unique); // Array(6) [ "1", "2", "3", "a", "b", "c" ] ```
I would use something like this: ``` // each index represents one digit: 0-9 const digits = new Array(10); // make your string an array const arr = '123abcabc123cba231'.split(''); // test for digit var reg = new RegExp('^[0-9]$'); arr.forEach((val, index) => { if (reg.test(val) && !reg.test(digits[val])) { digits[val] = index; } }); console.log(`occurrences: ${digits}`); // [,0,1,2,,,,....] ``` To interpret, for the digits array, since you have nothing in the 0 index you know you have zero occurrences of zero. Since you have a zero in the 1 index, you know that your first one appears in the first character of your string (index zero for array). Two appears in index 1 and so on..
59,695,746
I have following string 1,2,3,a,b,c,a,b,c,1,2,3,c,b,a,2,3,1, I would like to get only the first occurrence of any number without changing the order. This would be 1,2,3,a,b,c, With this regex (found @ <https://stackoverflow.com/a/29480898/9307482>) I can find them, but only the last occurrences. And this reverses the order. `(\w)(?!.*?\1)` (<https://regex101.com/r/3fqpu9/1>) It doesn't matter if the regex ignores the comma. The order is important.
2020/01/11
[ "https://Stackoverflow.com/questions/59695746", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9307482/" ]
Regular expressions are not meant for this purpose. You will need an index filter or a `Set` over the array of characters. Since you haven't specified a language, I assume you are using JavaScript. Example modified from: <https://stackoverflow.com/a/14438954/1456201> ```js String.prototype.uniqueChars = function() { return [...new Set(this)]; } var unique = "1,2,3,a,b,c,a,b,c,1,2,3,c,b,a,2,3,1,".split(",").join('').uniqueChars(); console.log(unique); // Array(6) [ "1", "2", "3", "a", "b", "c" ] ```
A perl way to do the job: ``` use Modern::Perl; my $in = '4,d,e,1,2,3,4,a,b,c,d,e,f,a,b,c,1,2,3,c,b,a,2,3,1,'; my (%h, @r); for (split',',$in) { push @r, $_ unless exists $h{$_}; $h{$_} = 1; } say join',',@r; ``` **Output:** ``` 4,d,e,1,2,3,a,b,c,f ```
4,603,576
In Vakil's FOAG, the projective space $\Bbb{P}^n\_A$ is defined to be $\operatorname{Proj} A[x\_0, x\_1, \ldots, x\_n]$. (there is another definition in the book too, just glueing $n+1$ affine space). Then in serveral exercises, the author uses the symbol $[x\_0, x\_1, \ldots, x\_n]$ without giving the meaning of it. For example(added a screenshot too for voiding misquote): > > **7.3.F** Make sense of the following sentence: $\Bbb{A}\_k^{n+1} \backslash \{\vec{0}\} \to\Bbb{P}^n\_k$ given by $(x\_0, x\_1, \ldots, x\_n) \mapsto [x\_0, x\_1, \ldots, x\_n]$ is a morphism of schemes. > > > [![enter image description here](https://i.stack.imgur.com/KI8sZ.png)](https://i.stack.imgur.com/KI8sZ.png) I did get a morphism $\Bbb{A}\_k^{n+1} \backslash \{\vec{0}\} \to\Bbb{P}^n\_k$ by glueing the morphism $$D(x\_i) = \operatorname{Spec} A[x\_0, x\_1, \cdots, x\_n]\_{x\_i} \to \operatorname{Spec} (A[x\_0, x\_1, \ldots, x\_n]\_{x\_i})\_0 = D\_+(x\_i)$$. But I cannot see how is it given by $(x\_0, x\_1, \ldots, x\_n) \mapsto [x\_0, x\_1, \ldots, x\_n]$. Related paragraphs are in [FOAG subsection 4.5.7 (page 150) and exercise 7.3.F (page 202)](http://math.stanford.edu/%7Evakil/216blog/FOAGaug2922publici.pdf) (and exercise 7.3.O uses the notation $[f\_0, f\_1, \ldots, f\_n]$) There are more examples using notations of this style, e.g., 7.4.1 Example, the morphism $\Bbb{C}\Bbb{P}^1 \to \Bbb{C}\Bbb{P}^2$ given by $[s, t] \to [s^{20}, s^{9}t^{11}, t^{20}]$. I interpret it as the morphism given by $\Bbb{C}[x, y, z] \to \Bbb{C}[s, t]$ given by $x=s^{20}, y=s^9t^{11}, z=t^{20}$. Nevertheless, the usage of notation $[x\_0, x\_1, \ldots, x\_n]$ in exercise 7.3.F seems unable to be interpret in this way. Thank you very much. Update: After Robert and hm2020's excellent answers, I want to update the question and make my question more specific to the sentence "given by $(x\_0, x\_1, \cdots, x\_n) \mapsto [x\_0, x\_1, \cdots, x\_n]$". I think I know how to construct the morphism, here is a proof(skip it if too long, my concrete question is after the proof): > > Denote $D(x\_i)$ be the distinguished open subset of $\mathbb{A}\_{k}^{n+1}$ that $x\_i$ does not vanished. Then one has that $\cup D(x\_i) = \mathbb{A}\_{k}^{n+1}\backslash\{\overrightarrow{0}\}$: if some point $p \notin \cup D(x\_i)$, then for all $x\_i$ one has $p \notin D(x\_i)$, i.e., $x\_i \in p$. Then $(x\_0, x\_1, \ldots, x\_n) \subset p$. Since $(x\_0, x\_1, \ldots, x\_n)$ is a maximal ideal, one have $(x\_0, x\_1, \ldots, x\_n) = p$. Then $p = \overrightarrow{0}$ by the notation and $p \notin \mathbb{A}\_{k}^{n+1}\backslash\{\overrightarrow{0}\}.$ If $p \in \cup D(x\_i)$, then there is some $x\_i$ such that $p \in D(x\_i)$. Hence $x\_i \notin p$. Hence $p \neq \overrightarrow{0}$ and $p \in \mathbb{A}\_{k}^{n+1}\backslash\{\overrightarrow{0}\}$. > > > On the other hand, denote $D\_+(x\_i)$ be the homogeneous distinguished open subset of $\Bbb{P}^n\_k := \operatorname{Proj} k[x\_0, x\_1, \ldots, x\_n]$ that $x\_i$ does not vanished. One has $\Bbb{P}^n\_k = \cup D\_+(x\_i)$. 
> > > By $D(x\_i) \cong \operatorname{Spec} k[x\_0, x\_1, \ldots, x\_n]\_{x\_i}$ > and $D\_+(x\_i) \cong \operatorname{Spec}(( k[x\_0, x\_1, \ldots, x\_n])\_{x\_i})\_0$ and since any map on rings induce a map on the schemes of their spectrums on the opposite direction, from the inclusion map $(( k[x\_0, x\_1, \ldots, x\_n])\_{x\_i})\_0 \hookrightarrow k[x\_0, x\_1, \ldots, x\_n]\_{x\_i}$ one has > > > $$D(x\_i) \cong \operatorname{Spec} k[x\_0, x\_1, \ldots, x\_n]\_{x\_i} > \to D\_+(x\_i) \cong \operatorname{Spec}(( k[x\_0, x\_1, \ldots, x\_n])\_{x\_i})\_0$$ > > > . Composing it with $D\_+(x\_i) \hookrightarrow \Bbb{P}^n\_k$. One has morphisms $\phi\_i: D(x\_i) \to \Bbb{P}^n\_k$. > > > These morphisms in fact can be glued into a morphism $\mathbb{A}\_{k}^{n+1}\backslash\{\overrightarrow{0}\} \rightarrow \mathbb{P}\_{k}^{n}$, i.e, they agree on the overlaps: Since $D(x\_i) \cap D(x\_j) = D(x\_i x\_j) \cong \operatorname{Spec} k[x\_0, x\_1, \ldots, x\_n]\_{x\_i x\_j}$ and $D\_+(x\_i) \cap D\_+(x\_j) = D\_+(x\_i x\_j) \cong \operatorname{Spec} (( k[x\_0, x\_1, \ldots, x\_n])\_{x\_i x\_j})\_0$, one has $\phi\_i |\_{D(x\_i) \cap D(x\_j)} = \phi\_j |\_{D(x\_i) \cap D(x\_j)}: \operatorname{Spec} k[x\_0, x\_1, \ldots, x\_n]\_{x\_i x\_j} \to \operatorname{Spec} (( k[x\_0, x\_1, \ldots, x\_n])\_{x\_i x\_j})\_0 \hookrightarrow \Bbb{P}^n\_k$. > > > $\square$ > > > But from the proof, one see that the morphism $\mathbb{A}\_{k}^{n+1}\backslash\{\overrightarrow{0}\} \rightarrow \mathbb{P}\_{k}^{n}$ is just "given"(one construct it from glueing of pieces, and the morphism on pieces are just given like "from one's mind", It's like "aha. as we already know this and this, we construct it from that and that"). It's not as Vakil said in the text, the morphism is "given by $(x\_0, x\_1, \cdots, x\_n) \mapsto [x\_0, x\_1, \cdots, x\_n]$". I can make sense construction a morphism $\mathbb{A}\_{k}^{n+1}\backslash\{\overrightarrow{0}\} \rightarrow \mathbb{P}\_{k}^{n}$, but I cannot make sense how it "given by $(x\_0, x\_1, \cdots, x\_n) \mapsto [x\_0, x\_1, \cdots, x\_n]$". So my question is about how "given by $(x\_0, x\_1, \cdots, x\_n) \mapsto [x\_0, x\_1, \cdots, x\_n]$" really make sense scheme-theoreticly? I know that: 1. In classical algebraic geometry, it just like Robert's answer. 2. For affine scheme, since the opposite category of rings is equivalent to the category of affine schemes, I understand how does morphism given by maps on affine coordinates work too. 3. For morphism on projective schemes, I understand it too I think, it's via Vakil's section 7.4(here I have to attach two screenshots to make the context complete): [![enter image description here](https://i.stack.imgur.com/DzIHq.png)](https://i.stack.imgur.com/DzIHq.png) [![enter image description here](https://i.stack.imgur.com/7Vd07.png)](https://i.stack.imgur.com/7Vd07.png) But back to the case "given by $(x\_0, x\_1, \cdots, x\_n) \mapsto [x\_0, x\_1, \cdots, x\_n]$", the map $\mathbb{A}\_{k}^{n+1}\backslash\{\overrightarrow{0}\} \rightarrow \mathbb{P}\_{k}^{n}$ is neither affine scheme to affine scheme, nor project scheme to project scheme. How can it make sense saying "given by $(x\_0, x\_1, \cdots, x\_n) \mapsto [x\_0, x\_1, \cdots, x\_n]$"? I hope this update state my question more clear. I mainly study algebraic geometry by myself using Vakil's FOAG. It may be a little weird question and maybe I hit some wrong path. Thank you very much again!
2022/12/22
[ "https://math.stackexchange.com/questions/4603576", "https://math.stackexchange.com", "https://math.stackexchange.com/users/195865/" ]
In classical, i.e. non-scheme-theoretic algebraic geometry, the projective space of dimension $n$ over some field $k$ is usually defined as $\mathbb{P}^n(k) = (k^{n+1} \setminus \lbrace 0 \rbrace) / k^\times = (\mathbb{A}^{n+1}(k) \setminus \lbrace 0 \rbrace) / k^\times$. This means that an element of $\mathbb{P}^n(k)$ is an equivalence class of points of $\mathbb{A}^{n+1}(k) \setminus \lbrace 0 \rbrace $, where we identify two points $(x\_0,\dotsc,x\_n)$ and $(y\_0,\dotsc,y\_n)$ if there exists $\lambda \in k^\times$ with $y\_i = \lambda x\_i$ for all $0 \leq i \leq n$. These equivalence classes are most commonly denoted by $[x\_0,\dotsc,x\_n]$ or by $[x\_0 : \dotsc : x\_n]$ and are called homogeneous coordinates. The quotient map $\mathbb{A}^{n+1}(k) \setminus \lbrace 0 \rbrace \rightarrow \mathbb{P}^n(k)$ is thus given by mapping $(x\_0,\dotsc,x\_n)$ to $[x\_0,\dotsc,x\_n]$. Also here one works with affine charts to study its properties. In scheme-theoretic language one can also make sense of maps defined in terms of coordinates, see e.g. [this post here](https://math.stackexchange.com/questions/101038/what-means-this-notion-for-scheme-morphism). Vakil wants the readers to construct the above quotient map scheme-theoretically, which for example boils down to gluing it together from the standard affine charts as you did. In particular, he assumes familiarity with the classical situation. I would now suggest that you compare the local pieces of the two maps and how they are glued to see that they encode the same kind of map. One can also, of course, interpret the task as checking that the right thing happens on maximal ideals, i.e. closed points.
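To make the closed-point check concrete (a sketch, assuming $k$ algebraically closed so that closed points of $\mathbb{A}^{n+1}\_k$ correspond to tuples $(a\_0,\dotsc,a\_n)$): if $a\_i \neq 0$, the point lies in $D(x\_i)$, and under the chart map $\operatorname{Spec} k[x\_0,\dotsc,x\_n]\_{x\_i} \to \operatorname{Spec}\big((k[x\_0,\dotsc,x\_n])\_{x\_i}\big)\_0 \cong \operatorname{Spec} k[x\_0/x\_i,\dotsc,x\_n/x\_i]$ its maximal ideal is sent to its contraction, $$\mathfrak{m}\_{(a\_0,\dotsc,a\_n)} \cap \big(k[x\_0,\dotsc,x\_n]\_{x\_i}\big)\_0 = \left(\frac{x\_0}{x\_i}-\frac{a\_0}{a\_i},\dotsc,\frac{x\_n}{x\_i}-\frac{a\_n}{a\_i}\right),$$ which is exactly the point of $D\_+(x\_i)$ with affine coordinates $a\_j/a\_i$, i.e. the classical point $[a\_0 : \dotsc : a\_n]$. So "given by $(x\_0,\dotsc,x\_n) \mapsto [x\_0,\dotsc,x\_n]$" is shorthand for: the glued scheme morphism restricts, on closed points, to the classical quotient map.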
**Question:** "Thank you very much." **Answer:** If $V:=\{e\_0,..,e\_n\}$ is a vector space over a field $k$ and if $V^\*:=\{x\_0,..,x\_n\}$ is the dual vector space it follows the symbols $x\_i:=e\_i^\*$ are the coordinate functions on $V$ in the sense that for a vector $v:=\sum\_i a\_ie\_i$ it follows $x\_i(v):=e\_i$. The ring of polynomial functions on $V$, denoted $Sym\_k^\*(V^\*)$ is isomorphic to the polynomial ring $A:=k[x\_0,..,x\_n]$ and if $\mathbb{A}^{n+1}\_k:=Spec(A)$ it follows for any $n+1$ tuple of numbers $p:=(a\_0,..,a\_n):=\sum\_i a\_ie\_i \in V$ we get a maximal ideal $\mathfrak{m}\_p:=(x\_0-a\_0,..,x\_n-a\_n) \subseteq A$. A map of affine schemes $\phi: Spec(R) \rightarrow Spec(T)$ is unquely determined by a map of commutative rings $f: T \rightarrow R$. You should not confuse the point $p \in V$ and the maximal ideal $\mathfrak{m}\_p \subseteq A$. **Example:** When writing down 2-uple embedding $f: \mathbb{P}^1\_k \rightarrow \mathbb{P}^2\_k$ some authors write $$(1) f([x\_0:x\_1]):=[x\_0^2:x\_0x\_1:x\_1^2]$$ where $x\_i$ are "homogeneous coordinates" on the projective line. Some authors write $$(2) f([a\_0: a\_1]):=[a\_0^2: a\_0a\_1:a\_1^2]$$ where $a\_0,a\_1 \in k$ are elements in the field and where not both $a\_i=0$. You may also write down $\pi: \mathbb{A}^{n+1}\_k-\{(0)\} \rightarrow \mathbb{P}^n\_k$ by $$\pi(a\_0,..,a\_n):=[a\_0: \cdots : a\_n].$$ Since not all $a\_i$ are zero it follows you get a well defined map. Moreover $\pi(\lambda a\_0,..,\lambda a\_n)=\pi(a\_0,..,a\_n)$. Some authors do not distingush between the point $p \in V$ and the coordinate functions $x\_0,..,x\_n$ which are elements in the dual $V^\*$. The exercise asks: You should convince yourself that you understand the distinction between the "set theoretic map" defined in $(2)$ and the map of schemes $f: \mathbb{P}^1\_k \rightarrow \mathbb{P}^2\_k$. When you have written down the map as in $(2)$ this does not mean you have written down a "map of schemes". Hence: You must figure out yourself what a morphism of schemes is and why some authors write a morphism as in $(1))$ and $(2)$. If you want to construct this map as a "map of schemes" you must do this in terms of coordinate rings and maps of rings. Since the projective line and plane are not affine schemes, what you must do is to define the map on an open affine cover and then prove your construction glue to give a global map. This is explained in Ch II in Hartshorne. In this case you construct projective space as a quotient of $X:=\mathbb{A}^{n+1}-{(0)}$ by a group action, and $X$ is not an affine scheme: You want to construct $\mathbb{P}^n\_k$ as a quotient $X/G$ where $G$ is the "multiplicative group" $k^\*$. In the "scheme language" you define $G:=Spec(k[t,1/t])$. There is an action $\sigma: G \times\_k X \rightarrow X$ and to define this action you must write it down "dually" in terms "coactions". The open subscheme $X \subseteq \mathbb{A}^{n+1}\_k$ is not affine hence a morphism $\sigma$ must be defined locally, on an open affine cover of $X$. You find this discussed on this site. 
Here is an example of how to do this for the projective line: [Why is the projection map $\mathbb{A}^{n+1}\_k\setminus \{0\} \to \mathbb{P}^n\_k$ a morphism of schemes?](https://math.stackexchange.com/questions/2369274/why-is-the-projection-map-mathbban1-k-setminus-0-to-mathbbpn-k/4208185#4208185)

Here you construct the "line bundle" $\mathbb{V}(\mathcal{O}(m)^\*)$ using "quotients": [A description of line bundles on projective spaces, $\mathcal{O}\_{\mathbb{P}^n}(m)$ defined using a character of $\mathbb{C}^\*$.](https://math.stackexchange.com/questions/4599310/a-description-of-line-bundles-on-projective-spaces-mathcalo-mathbbpn/4601133#4601133)

What I do here is to construct an action of group schemes

$$\sigma: G \times \mathbb{A}^2\_k \rightarrow \mathbb{A}^2\_k$$

for any commutative ring $k$, and then we restrict this to the open subscheme $U:=\mathbb{A}^2\_k-\{(0)\}$ to get an action

$$\sigma: G \times U \rightarrow U.$$

If you want to define such group actions for more general group schemes, such as $G:=Spec(\mathbb{Z}[t,1/t])$ and $\mathbb{P}^n\_{\mathbb{Z}}$ etc., you must understand this construction.

**Comment:** "Now I only understand Pn via Proj (and via glueing n+1 affine schemes, and the classical set-theoretic definition). Sad I'm still lacking some backgrounds for (actions and) quotients of schemes. Is it related to how to scheme-theoreticly interpret (x0,x1,⋯,xn)↦[x0,x1,⋯,xn] rigorously? If so, I return the question later and maybe vaguely use (x0,x1,⋯,xn)↦[x0,x1,⋯,xn] just as a hint rather than a definition temporarily (since I'am mainly following Vakil's FOAG)."

**Response:** Over more general base schemes there is no "quick and easy" way to define morphisms of schemes. You must define maps with respect to an open affine cover and prove that the maps glue to a global map. If $X:=\mathbb{A}^2\_{\mathbb{Z}}$ with $U:=X-\{(0)\}$, there is a well defined action of $G:=Spec(\mathbb{Z}[t,1/t])$ on $U$ and a well defined $G$-invariant map

$$ \pi: U \rightarrow \mathbb{P}^1\_{\mathbb{Z}}.$$

In Mumford's "Geometric invariant theory" he defines the notion of a "geometric quotient" for arbitrary actions of group schemes, and I suspect the map $\pi$ is such a geometric quotient. You must verify that Mumford's axioms are fulfilled for $\pi$.

**Note:** If you take a look in Hartshorne's book, Ch. I, you will find the following result (Prop. I.3.5): If $Y$ is any affine variety and $X$ any quasi projective variety (over an algebraically closed field $k$), there is a 1-1 correspondence of sets

$$Hom\_{var/k}(X,Y) \cong Hom\_{k-alg}(A(Y), \Gamma(X, \mathcal{O}\_X))$$

But in the above case the varieties (or schemes) $U, \mathbb{P}^1\_k$ are not affine varieties (or schemes). Hence a morphism of such varieties must be defined locally: You choose open affine subvarieties of $U$ and $\mathbb{P}^1\_k$, define maps locally and then prove that your maps agree on intersections and glue to a globally defined morphism. Once you have understood the definition of a morphism, you can continue and study the relation between a morphism to projective space and global sections of invertible sheaves. In fact: If you take a look at Prop. II.7.1 in Hartshorne, Ch. II, you will find they use this principle to prove the relation between global sections of invertible sheaves and maps to projective space.
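For concreteness, here is a minimal sketch of the dual description in the simplest case (standard material, not a quotation from the linked posts): the scaling action of $G:=Spec(k[t,1/t])$ on $\mathbb{A}^2\_k=Spec(k[x,y])$ corresponds to the coaction, i.e. the ring map

$$\sigma^{\#}: k[x,y] \rightarrow k[t,1/t]\otimes\_k k[x,y], \qquad x \mapsto t\otimes x,\; y \mapsto t\otimes y.$$

A function $f$ is invariant iff $\sigma^{\#}(f)=1\otimes f$, i.e. iff $f$ is homogeneous of degree $0$. On the basic open subscheme $D(x) \subseteq U$ the invariant functions inside $k[x,y]\_x$ form the subring $k[y/x]$, which is exactly a standard affine chart of $\mathbb{P}^1\_k$; this is the local picture of the map $\pi$.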
23,207,471
say I have a piece of code in a local method

```
int a = 5, b;
```

and then some code to indirectly initialize `b`, such as

```
if (true) b = 5;   // legal

if (someBool) b = 10;
else b = 7;        // legal

if (false) b = 5;  // illegal
```

illegal even though `b` will always be initialized:

```
if (a > 10) b = 4;
if (a <= 10) b = 4;
```

My question is: what are all of the exact circumstances in which a local variable can be legitimately considered "initialized"?
2014/04/21
[ "https://Stackoverflow.com/questions/23207471", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2255185/" ]
A local variable can be considered to be "initialized" if the compiler can easily deduce that every possible code path will lead through a path where the value has been set.

* `if(true)` can be determined to always run.
* `if(false)` can be determined to never run.
* `if/else` can be determined to run at least one of the branches, so you must assign the variable in each branch if you want it to be guaranteed initialized. Same principle applies for `if/else if/.../else`.
* `switch` statements will either run one of the possible `case`s, or will hit the `default` case, so if you assign the variable in all of these places then it can be guaranteed initialized.

The Java compiler doesn't bother checking all the possible values of each variable at various points in the method when making this determination, because variables are *variable*--they can change. However, if values can be considered *constant* then it can safely assume they won't change.

For example, the compiler doesn't care if you assign a variable and never change it in your code:

```
boolean val = true;
if(val) {
    b = 5;
}
```

Debuggers and things make it possible for you to change the value of `val` on-the-fly, so the compiler doesn't make any assumptions here. However, if you make `val` *constant*, by declaring it `final` and initializing it with a constant or literal value, then the compiler will treat it exactly the same as if you'd used the constant value in code.

```
final boolean val = true;
if(val) {
    // Same as if you'd said `if(true)`
    b = 5;
}
```

Such constants can be chained, as well, and the compiler will simplify them to their constant values rather than maintaining the longer expressions and operators:

```
final int five = 5;
final int four = five - 1; // Same as `four = 5 - 1`, or `four = 4`
final boolean val = five > four;
if(val) {
    // Same as `if(5 > 4)`, or `if(true)`
    b = 5;
}
```

For further reading, check out the [Java Specs](http://docs.oracle.com/javase/specs/jls/se8/html/jls-16.html). (Hat tip to Radiodef for finding the right section.)
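As a small illustration of the `switch` point above, here is a toy example of my own (the class, method, and parameter names are made up for illustration). The compiler accepts the read of `b` only because every branch, including `default`, assigns it before the switch completes:

```
public class DefiniteAssignmentDemo {

    static int pick(int someInt) {
        int b;
        switch (someInt) {
            case 0:
                b = 1;
                break;
            case 1:
                b = 2;
                break;
            default:
                b = 0; // without this default branch, the compiler reports that b might not have been initialized
        }
        return b; // b is definitely assigned on every path through the switch
    }

    public static void main(String[] args) {
        System.out.println(pick(0)); // prints 1
    }
}
```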
In this case:

```
if (false) b = 5; // illegal
```

the compiler reports a compile-time error, since `if (false)` can be erased at compile time. It is futile to even analyze a block of code that won't be executed by any means.

In this case:

```
int a = 5, b;
if (a > 10) b = 4;
if (a <= 10) b = 4;
```

the compiler cannot assure that the piece of code will be executed, since `a` can change its value. In this case, you can *fix* it by marking `a` as final and assigning it a literal int value (which the compiler can understand):

```
final int a = 5;
int b;
if (a > 10) b = 4;
if (a <= 10) b = 4;
```

But note that you can still break this code by giving `final int a` a value that the compiler cannot determine:

```
final int a = foo();
int b;
if (a > 10) b = 4;
if (a <= 10) b = 4;

//...

int foo() {
    return 5;
}
```

More info:

* [Java Language Specification. Chapter 16. Definite Assignment](http://docs.oracle.com/javase/specs/jls/se8/html/jls-16.html)
29,191,682
I have user ids and the corresponding mail ids in a file. According to the user input (i.e., if we give the input "apple"), it should grep only the mail id (only [email protected]) and store it in email=$(...). In the file, the user ids and mail ids are separated by one space.

*mail id's in file*

```
user_id's mail_id's
apple [email protected]
mango [email protected]
cat [email protected]
etc...
```

*sample*

```
email=$(....)
msg= dont run jobs in login node
echo -e "$msg" |mail -s "RAM limit exceeded in iitmlogin4" "$email"
```
2015/03/22
[ "https://Stackoverflow.com/questions/29191682", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4627729/" ]
You can use the Jackson library. An example can be found here: <http://www.mkyong.com/spring-mvc/spring-3-mvc-and-json-example/>
You will have to provide a CSRF token for the POST request. Instead, you can try this: [sending HashMap by angularjs $http.get in spring mvc](https://stackoverflow.com/questions/29060493/sending-hashmap-by-angularjs-http-get-in-spring-mvc)

It works fine, with just a bit of extra @RequestParams, but on the plus side you can send additional information too, and not only the respective object.
29,191,682
I have user ids and the corresponding mail ids in a file. According to the user input (i.e., if we give the input "apple"), it should grep only the mail id (only [email protected]) and store it in email=$(...). In the file, the user ids and mail ids are separated by one space.

*mail id's in file*

```
user_id's mail_id's
apple [email protected]
mango [email protected]
cat [email protected]
etc...
```

*sample*

```
email=$(....)
msg= dont run jobs in login node
echo -e "$msg" |mail -s "RAM limit exceeded in iitmlogin4" "$email"
```
2015/03/22
[ "https://Stackoverflow.com/questions/29191682", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4627729/" ]
Serialization (POJO -> JSON) and deserialization (JSON -> POJO) in Spring is simply obtained via the `@RequestBody` and `@ResponseBody` annotations. You just need to define a Java class that represents/maps your JSON object on the server side. Example:

### Input JSON

```
{id: 123, name: "your name", description: ""}
```

### Java class

```
public class MyClass {
    private int id;
    private String name;
    private String description;
}
```

### Methods in your controller

```
public void postJson(@RequestBody MyClass o){
    // do something...
}

public @ResponseBody MyClass getJson(){
    // do something...
    return new MyClass(); // return your populated object here
}
```

**NOTE** I omitted `@RequestMapping` settings.
You will have to provide a CSRF token for the POST request. Instead, you can try this: [sending HashMap by angularjs $http.get in spring mvc](https://stackoverflow.com/questions/29060493/sending-hashmap-by-angularjs-http-get-in-spring-mvc)

It works fine, with just a bit of extra @RequestParams, but on the plus side you can send additional information too, and not only the respective object.
53,912,256
I'm trying to implement a CSSTransition on a modal in my project. The problem is that I am using CSS modules.

**My modal's render method**

```
render() {
    return (
        <Aux>
            <Backdrop show={this.props.show} clicked={this.props.modalClosed}/>
            <CSSTransition
                in={this.props.show}
                timeout={1000}
                mountOnEnter
                unmountOnExit
                classNames={?} >
                <div className={classes.Modal} >
                    {this.props.children}
                </div>
            </CSSTransition>
        </Aux>
    )
}
```

**My Modal.css**

```
.fade-enter {
}
.fade-enter-active {
    animation: openModal 0.4s ease-out forwards;
}
.fade-exit {
}
.fade-exit-active {
    animation: closeModal 0.4s ease-out forwards;
}
```

What do I pass to the classNames attribute in the CSSTransition component in order to make it work?
2018/12/24
[ "https://Stackoverflow.com/questions/53912256", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6193913/" ]
JSX:

```
<CSSTransition in={focused} timeout={500} classNames={{
    enterActive: styles.MyClassEnterActive,
    enterDone: styles.MyClassEnterDone,
    exitActive: styles.MyClassExit,
    exitDone: styles.MyClassExitActive
}}>
    <span className={styles.MyClass}>animated</span>
</CSSTransition>
```

CSS Module:

```
.MyClass {
    position: absolute;
    left: 5px;
}

.MyClassEnterActive {
    left: 15px;
    transition: left 500ms;
}

.MyClassEnterDone {
    left: 15px;
}

.MyClassExit {
    left: 15px;
}

.MyClassExitActive {
    left: 5px;
    transition: left 500ms;
}
```

Thanks, [Lionel](https://github.com/css-modules/css-modules/issues/84#issuecomment-226731145)!
Solved by entering the classes like this:

```
render() {
    return (
        <Aux>
            <Backdrop show={this.props.show} clicked={this.props.modalClosed}/>
            <CSSTransition
                in={this.props.show}
                timeout={1000}
                mountOnEnter
                unmountOnExit
                classNames={{
                    enterActive: classes["fade-enter-active"],
                    exitActive: classes["fade-exit-active"]
                }} >
                <div className={classes.Modal} >
                    {this.props.children}
                </div>
            </CSSTransition>
        </Aux>
    )
}
```
33,608,523
Having the following test case: (find fiddle [**here**](https://jsfiddle.net/5qct25hu/)) ``` var a = new Date(); var b = null; var c = {test: "test"}; if(a) console.log(a); //--- prints the current date if(b) console.log('null'); //--- never reached if(c) console.log('test'); //--- prints 'test' console.log(a && b); //--- prints null ``` Knowing that ``` console.log(typeof null); //--- prints "object" console.log(typeof c); //--- prints "object" ``` I expect the result of ``` console.log(a && b); ``` to be **false** and not **null** as it shown in the example. Any hint?
2015/11/09
[ "https://Stackoverflow.com/questions/33608523", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1054151/" ]
From [the MDN](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Expressions_and_Operators#Logical_operators):

> `expr1 && expr2` : Returns expr1 if it can be converted to false; **otherwise, returns expr2**

`new Date` can't be converted to `false` (it's not [falsy](https://developer.mozilla.org/en-US/docs/Glossary/Falsy)), so `b` is returned.
From MDN:

> Logical AND (&&) expr1 && expr2 Returns expr1 if it can be converted to false; otherwise, returns expr2. Thus, when used with Boolean values, && returns true if both operands are true; otherwise, returns false.

[MDN Documentation](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Logical_Operators)
33,608,523
Having the following test case: (find fiddle [**here**](https://jsfiddle.net/5qct25hu/)) ``` var a = new Date(); var b = null; var c = {test: "test"}; if(a) console.log(a); //--- prints the current date if(b) console.log('null'); //--- never reached if(c) console.log('test'); //--- prints 'test' console.log(a && b); //--- prints null ``` Knowing that ``` console.log(typeof null); //--- prints "object" console.log(typeof c); //--- prints "object" ``` I expect the result of ``` console.log(a && b); ``` to be **false** and not **null** as it shown in the example. Any hint?
2015/11/09
[ "https://Stackoverflow.com/questions/33608523", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1054151/" ]
> > I expect the result of > > > > ``` > console.log(a && b); > > ``` > > to be `false` and not `null` as it shown in the example. > > > In many languages with `&&` and `||` operators (or `AND` and `OR`), the result is always a boolean, yes.\* But JavaScript's `&&` and `||` are more useful than that: Their result is their left-hand operand's value or their right-hand operand's value, **not** coerced to boolean. Here's how `&&` works: 1. Evaluate the left-hand operand. 2. If the value is *falsey* (coerces to `false` when we make it a boolean), return the value (not the coerced value) 3. If the value from #2 is *truthy*, evaluate and return the right-hand value (uncoerced) The *falsey* values are `null`, `undefined`, `0`, `""`, `NaN`, and of course, `false`. Everything else is *truthy*. So for instance ``` console.log(0 && 'hi'); // 0 ``` ...shows `0` because `0` is falsey. Note it resulted in `0`, not `false`. Similarly, the result here: ``` console.log(1 && 'hello'); // hello ``` ...is `'hello'` because the right-hand side isn't coerced at all by `&&`. `||` works similarly (I call it the [*curiously-powerful OR operator*](http://blog.niftysnippets.org/2008/02/javascripts-curiously-powerful-or.html)): It evaluates the left-hand side and, if that's *truthy*, takes that as its result; otherwise it evaluates the right-hand side and takes *that* as its result. The only time you get `true` or `false` from `&&` or `||` is if the selected operand is *already* a boolean. --- \* Of course, many (but not all) of those languages (Java, C#) also require that the operands to `&&` and `||` already be booleans. C's an exception, it does some coercion, but still has the result of the operands as a boolean (or if you go back far enough, an int that will be 1 or 0).
From [the MDN](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Expressions_and_Operators#Logical_operators):

> `expr1 && expr2` : Returns expr1 if it can be converted to false; **otherwise, returns expr2**

`new Date` can't be converted to `false` (it's not [falsy](https://developer.mozilla.org/en-US/docs/Glossary/Falsy)), so `b` is returned.
33,608,523
Having the following test case: (find fiddle [**here**](https://jsfiddle.net/5qct25hu/)) ``` var a = new Date(); var b = null; var c = {test: "test"}; if(a) console.log(a); //--- prints the current date if(b) console.log('null'); //--- never reached if(c) console.log('test'); //--- prints 'test' console.log(a && b); //--- prints null ``` Knowing that ``` console.log(typeof null); //--- prints "object" console.log(typeof c); //--- prints "object" ``` I expect the result of ``` console.log(a && b); ``` to be **false** and not **null** as it shown in the example. Any hint?
2015/11/09
[ "https://Stackoverflow.com/questions/33608523", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1054151/" ]
> > I expect the result of > > > > ``` > console.log(a && b); > > ``` > > to be `false` and not `null` as it shown in the example. > > > In many languages with `&&` and `||` operators (or `AND` and `OR`), the result is always a boolean, yes.\* But JavaScript's `&&` and `||` are more useful than that: Their result is their left-hand operand's value or their right-hand operand's value, **not** coerced to boolean. Here's how `&&` works: 1. Evaluate the left-hand operand. 2. If the value is *falsey* (coerces to `false` when we make it a boolean), return the value (not the coerced value) 3. If the value from #2 is *truthy*, evaluate and return the right-hand value (uncoerced) The *falsey* values are `null`, `undefined`, `0`, `""`, `NaN`, and of course, `false`. Everything else is *truthy*. So for instance ``` console.log(0 && 'hi'); // 0 ``` ...shows `0` because `0` is falsey. Note it resulted in `0`, not `false`. Similarly, the result here: ``` console.log(1 && 'hello'); // hello ``` ...is `'hello'` because the right-hand side isn't coerced at all by `&&`. `||` works similarly (I call it the [*curiously-powerful OR operator*](http://blog.niftysnippets.org/2008/02/javascripts-curiously-powerful-or.html)): It evaluates the left-hand side and, if that's *truthy*, takes that as its result; otherwise it evaluates the right-hand side and takes *that* as its result. The only time you get `true` or `false` from `&&` or `||` is if the selected operand is *already* a boolean. --- \* Of course, many (but not all) of those languages (Java, C#) also require that the operands to `&&` and `||` already be booleans. C's an exception, it does some coercion, but still has the result of the operands as a boolean (or if you go back far enough, an int that will be 1 or 0).
From MDN:

> Logical AND (&&) expr1 && expr2 Returns expr1 if it can be converted to false; otherwise, returns expr2. Thus, when used with Boolean values, && returns true if both operands are true; otherwise, returns false.

[MDN Documentation](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Logical_Operators)
27,482,207
I'm trying to get all data-name attributes from this HTML section:

```
<div class='get-all'>
    <div class='left-wrapper'>
        <div class='field-wrapper'>
            <div class='field'>
                <span class='product-id' data-name='itemId'>5</span>
                <img src='carousel/images/120231671_1GG.jpg' width='52' height='52'>
                <span class='final-descript' data-name='itemDescription'>Product 1</span>
                <span class='product-id' data-name='itemAmount'>225,99</span>
                <span class='product-id' data-name='itemQuantity'>1</span>
            </div>
            <div class='fix'></div>
            <div class='field'>
                <span class='review-price'>$ &nbsp;225,99</span>
            </div>
        </div>
        <div class='field-wrapper'>
            <div class='field'>
                <span class='product-id' data-name='itemId'>4</span>
                <img src='carousel/images/120231671_1GG.jpg' width='52' height='52'>
                <span class='final-descript' data-name='itemDescription'>Product 2</span>
                <span class='product-id' data-name='itemAmount'>699,80</span>
                <span class='product-id' data-name='itemQuantity'>1</span>
            </div>
            <div class='fix'></div>
            <div class='field'>
                <span class='review-price'>$ &nbsp;699,80</span>
            </div>
        </div>
    </div>
    <div class='left-wrapper'>
        <span class='f-itens'>Total</span><span id='fff' class='f-value'>925,79</span>
        <hr class='f-hr' />
        <span class='f-itens'>Shipping</span><span class='f-value' id='f-value'> - </span>
        <hr class='f-hr' />
        <div class='f-itens tots'>Total Price</div><span class='f-value' id='pagar'>-</span>
    </div>
</div>
```

I'm using this JavaScript:

```
$(".get-all").each(function(index, element){
    $('[data-name]').each(function(index, element) {
        if($(this).attr("data-name")) {
            if (startsWith($(this).attr("data-name"),"itemAmount")) {
                var a = $(this).html()
                var b = a.replace(".", "");
                var c = b.replace(",", ".");
                params[$(this).attr("data-name") + (index+1)] = c;
            } else {
                params[$(this).attr("data-name") + (index+1)] = $(this).html();
            }
        }
    });
});
```

But I only get the first data-name span attributes, like this:

```
ItemId = 5
ItemDescription = Product 1
ItemAmount 225,99
ItemQuantity = 1
```

How do I get all the data-name attributes inside these spans? Thanks for all replies.
2014/12/15
[ "https://Stackoverflow.com/questions/27482207", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1803434/" ]
Do you mean that you want a list of the names of the objects in the database? That would be `select * from sys.objects`
You should check out [Tokenizer](http://search.cpan.org/~izut/SQL-Tokenizer-0.19/lib/SQL/Tokenizer.pm) - it splits your query into separate parts ("WHERE CLAUSE", "FUNCTION", etc.). I guess you can use any other tool, but the general approach is to tokenize your query.
27,482,207
I'm trying to get all data-name attributes from this HTML section:

```
<div class='get-all'>
    <div class='left-wrapper'>
        <div class='field-wrapper'>
            <div class='field'>
                <span class='product-id' data-name='itemId'>5</span>
                <img src='carousel/images/120231671_1GG.jpg' width='52' height='52'>
                <span class='final-descript' data-name='itemDescription'>Product 1</span>
                <span class='product-id' data-name='itemAmount'>225,99</span>
                <span class='product-id' data-name='itemQuantity'>1</span>
            </div>
            <div class='fix'></div>
            <div class='field'>
                <span class='review-price'>$ &nbsp;225,99</span>
            </div>
        </div>
        <div class='field-wrapper'>
            <div class='field'>
                <span class='product-id' data-name='itemId'>4</span>
                <img src='carousel/images/120231671_1GG.jpg' width='52' height='52'>
                <span class='final-descript' data-name='itemDescription'>Product 2</span>
                <span class='product-id' data-name='itemAmount'>699,80</span>
                <span class='product-id' data-name='itemQuantity'>1</span>
            </div>
            <div class='fix'></div>
            <div class='field'>
                <span class='review-price'>$ &nbsp;699,80</span>
            </div>
        </div>
    </div>
    <div class='left-wrapper'>
        <span class='f-itens'>Total</span><span id='fff' class='f-value'>925,79</span>
        <hr class='f-hr' />
        <span class='f-itens'>Shipping</span><span class='f-value' id='f-value'> - </span>
        <hr class='f-hr' />
        <div class='f-itens tots'>Total Price</div><span class='f-value' id='pagar'>-</span>
    </div>
</div>
```

I'm using this JavaScript:

```
$(".get-all").each(function(index, element){
    $('[data-name]').each(function(index, element) {
        if($(this).attr("data-name")) {
            if (startsWith($(this).attr("data-name"),"itemAmount")) {
                var a = $(this).html()
                var b = a.replace(".", "");
                var c = b.replace(",", ".");
                params[$(this).attr("data-name") + (index+1)] = c;
            } else {
                params[$(this).attr("data-name") + (index+1)] = $(this).html();
            }
        }
    });
});
```

But I only get the first data-name span attributes, like this:

```
ItemId = 5
ItemDescription = Product 1
ItemAmount 225,99
ItemQuantity = 1
```

How do I get all the data-name attributes inside these spans? Thanks for all replies.
2014/12/15
[ "https://Stackoverflow.com/questions/27482207", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1803434/" ]
Do you mean that you want a list of the names of the objects in the database? That would be `select * from sys.objects`
There are lots of different ways to solve this one, but each can be different depending on the version of SQL Server you're on. One of the lowest levels to extract this info from is sysobjects...

```
SELECT o.name,
       CASE o.type
            WHEN 'U' THEN 'Table'
            WHEN 'V' THEN 'View'
            WHEN 'P' THEN 'Procedure'
            WHEN 'FN' THEN 'Function'
            ELSE o.Type
       END as Type
FROM sysobjects o
WHERE type in ('U', 'V', 'P', 'FN')
ORDER BY Type
```

This will work on just about any version of SQL Server. Hope that helps
27,482,207
I'm trying to get all data-name attributes from this HTML section:

```
<div class='get-all'>
    <div class='left-wrapper'>
        <div class='field-wrapper'>
            <div class='field'>
                <span class='product-id' data-name='itemId'>5</span>
                <img src='carousel/images/120231671_1GG.jpg' width='52' height='52'>
                <span class='final-descript' data-name='itemDescription'>Product 1</span>
                <span class='product-id' data-name='itemAmount'>225,99</span>
                <span class='product-id' data-name='itemQuantity'>1</span>
            </div>
            <div class='fix'></div>
            <div class='field'>
                <span class='review-price'>$ &nbsp;225,99</span>
            </div>
        </div>
        <div class='field-wrapper'>
            <div class='field'>
                <span class='product-id' data-name='itemId'>4</span>
                <img src='carousel/images/120231671_1GG.jpg' width='52' height='52'>
                <span class='final-descript' data-name='itemDescription'>Product 2</span>
                <span class='product-id' data-name='itemAmount'>699,80</span>
                <span class='product-id' data-name='itemQuantity'>1</span>
            </div>
            <div class='fix'></div>
            <div class='field'>
                <span class='review-price'>$ &nbsp;699,80</span>
            </div>
        </div>
    </div>
    <div class='left-wrapper'>
        <span class='f-itens'>Total</span><span id='fff' class='f-value'>925,79</span>
        <hr class='f-hr' />
        <span class='f-itens'>Shipping</span><span class='f-value' id='f-value'> - </span>
        <hr class='f-hr' />
        <div class='f-itens tots'>Total Price</div><span class='f-value' id='pagar'>-</span>
    </div>
</div>
```

I'm using this JavaScript:

```
$(".get-all").each(function(index, element){
    $('[data-name]').each(function(index, element) {
        if($(this).attr("data-name")) {
            if (startsWith($(this).attr("data-name"),"itemAmount")) {
                var a = $(this).html()
                var b = a.replace(".", "");
                var c = b.replace(",", ".");
                params[$(this).attr("data-name") + (index+1)] = c;
            } else {
                params[$(this).attr("data-name") + (index+1)] = $(this).html();
            }
        }
    });
});
```

But I only get the first data-name span attributes, like this:

```
ItemId = 5
ItemDescription = Product 1
ItemAmount 225,99
ItemQuantity = 1
```

How do I get all the data-name attributes inside these spans? Thanks for all replies.
2014/12/15
[ "https://Stackoverflow.com/questions/27482207", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1803434/" ]
Do you mean that you want a list of the names of the objects in the database? That would be `select * from sys.objects`
This will give you the list of objects, similar to Paul's answer:

```
SELECT name, type_desc, type, create_Date, modify_date
FROM sys.objects
ORDER BY type_Desc, type, name
```
23,199,085
I am writing a subroutine that prints an array of non-redundant elements from another array. This code is inside my subroutine:

```
foreach (@old_table) {
    push(@new_table, $_) unless ($seen{$_}++);
}
print "@new_table" . "\n";
```

Then I call my subroutine in a loop inside my main program. For the first iteration it is OK and my new table contains one occurrence of each element of my old table. But after that, `@new_table` keeps elements from past iterations and the printed result is wrong. I tried emptying `@new_table` inside my subroutine like this:

```
@new_table = ();
foreach (@old_table) {
    push(@new_table, $_) unless ($seen{$_}++);
}
print "@new_table" . "\n";
```

But then my `@new_table` becomes empty in all iterations except for the first one. What is the problem with this and how can I fix it?
2014/04/21
[ "https://Stackoverflow.com/questions/23199085", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2250162/" ]
Due to incorrect scoping, you're reusing the `@new_table` and `%seen` of previous passes. Create these just before the loop.

```
my @new_table;
my %seen;
foreach (@old_table) {
    push(@new_table, $_) unless ($seen{$_}++);
}
print "@new_table" . "\n";
```

This can be simplified to

```
my %seen;
my @new_table = grep { !$seen{$_}++ } @old_table;
print "@new_table\n";
```

You can also use

```
use List::MoreUtils qw( uniq );
my @new_table = uniq(@old_table);
print "@new_table\n";
```

---

You are using `use strict; use warnings;`, right? If not, you should be. Always.
You can try `uniq` from `List::MoreUtils` to remove redundant elements.

```
my @new_table = uniq(@old_table);
```

To quote from perldoc:

> uniq LIST
>
> distinct LIST
>
> Returns a new list by stripping duplicate values in LIST. The order of elements in the returned list is the same as in LIST. In scalar context, returns the number of unique elements in LIST.
>
> ```
> my @x = uniq 1, 1, 2, 2, 3, 5, 3, 4; # returns 1 2 3 5 4
> my $x = uniq 1, 1, 2, 2, 3, 5, 3, 4; # returns 5
> ```
25,501,251
I installed a fresh copy of GGTS on a fresh copy of Windows 8 with JDK 1.7 installed. I tried to get it to compile my existing project, which was based on 2.3.6, and it failed miserably as GGTS comes with Grails 2.4.2. I know several people who had problems with 2.4.x, so I decided to stick with 2.3.

So I downloaded 2.3.11 (latest 2.3) and created a GRAILS\_HOME pointing to the 2.3.11 dir, and JAVA\_HOME pointing to the root of the JDK. On the command line, I can now type `grails -version` and get back 2.3.11.

I restarted GGTS, deleted the project and created it again, adding 2.3.11 as a new Grails version, and set the project to use that. It fails to compile, saying the version is wrong. Fair enough. But when I try to launch the command line from GGTS using the green circular button at the top which looks like 3 cups, it says:

```
Retrieving available scripts
Retrieving available scripts
An internal error occurred during: "Retrieving available scripts".
java.lang.NullPointerException
```

Any ideas?
2014/08/26
[ "https://Stackoverflow.com/questions/25501251", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1072187/" ]
**Check** every Grails project's '`app.grails.version`' in `application.properties`, then open Window - Preferences - Groovy - Grails and make sure that **all** versions are defined (I use STS, so it may differ a little).

Alternatively, **update** all `application.properties` files and set `app.grails.version=2.3.11` (the version your GGTS has defined).
Check if your environment variable of `GRAILS_HOME` is set to the right version of Grails.
25,501,251
I installed a fresh copy of GGTS on a fresh copy of Windows 8 with JDK 1.7 installed. I tried to get it to compile my existing project, which was based on 2.3.6, and it failed miserably as GGTS comes with Grails 2.4.2. I know several people who had problems with 2.4.x, so I decided to stick with 2.3.

So I downloaded 2.3.11 (latest 2.3) and created a GRAILS\_HOME pointing to the 2.3.11 dir, and JAVA\_HOME pointing to the root of the JDK. On the command line, I can now type `grails -version` and get back 2.3.11.

I restarted GGTS, deleted the project and created it again, adding 2.3.11 as a new Grails version, and set the project to use that. It fails to compile, saying the version is wrong. Fair enough. But when I try to launch the command line from GGTS using the green circular button at the top which looks like 3 cups, it says:

```
Retrieving available scripts
Retrieving available scripts
An internal error occurred during: "Retrieving available scripts".
java.lang.NullPointerException
```

Any ideas?
2014/08/26
[ "https://Stackoverflow.com/questions/25501251", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1072187/" ]
In my case the problem was that I had moved Grails to a different folder. I just had to change the location of Grails in the settings. Hope that helps somebody.
**Check** every Grails project's '`app.grails.version`' in `application.properties`, then open Window - Preferences - Groovy - Grails and make sure that **all** versions are defined (I use STS, so it may differ a little).

Alternatively, **update** all `application.properties` files and set `app.grails.version=2.3.11` (the version your GGTS has defined).
25,501,251
I installed a fresh copy of GGTS on a fresh copy of Windows 8 with JDK 1.7 installed. I tried to get it to compile my existing project, which was based on 2.3.6, and it failed miserably as GGTS comes with Grails 2.4.2. I know several people who had problems with 2.4.x, so I decided to stick with 2.3.

So I downloaded 2.3.11 (latest 2.3) and created a GRAILS\_HOME pointing to the 2.3.11 dir, and JAVA\_HOME pointing to the root of the JDK. On the command line, I can now type `grails -version` and get back 2.3.11.

I restarted GGTS, deleted the project and created it again, adding 2.3.11 as a new Grails version, and set the project to use that. It fails to compile, saying the version is wrong. Fair enough. But when I try to launch the command line from GGTS using the green circular button at the top which looks like 3 cups, it says:

```
Retrieving available scripts
Retrieving available scripts
An internal error occurred during: "Retrieving available scripts".
java.lang.NullPointerException
```

Any ideas?
2014/08/26
[ "https://Stackoverflow.com/questions/25501251", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1072187/" ]
**Check** every Grails project's '`app.grails.version`' in `application.properties`, then open Window - Preferences - Groovy - Grails and make sure that **all** versions are defined (I use STS, so it may differ a little).

Alternatively, **update** all `application.properties` files and set `app.grails.version=2.3.11` (the version your GGTS has defined).
I ran into this same issue, and the problem was caused by the Grails installation it was trying to use. It wanted to use the default version that comes with GGTS rather than the one I installed myself. To fix, go to Window > Preferences > Groovy > Grails, then "Edit" the Grails installation it shows. I had to switch mine from `C:\ggts-bundle\grails-2.4.4\` to `C:\grails-2.4.2\`.
25,501,251
I installed a fresh copy of GGTS on a fresh copy of Windows 8 with JDK 1.7 installed. I tried to get it to compile my existing project, which was based on 2.3.6, and it failed miserably as GGTS comes with Grails 2.4.2. I know several people who had problems with 2.4.x, so I decided to stick with 2.3.

So I downloaded 2.3.11 (latest 2.3) and created a GRAILS\_HOME pointing to the 2.3.11 dir, and JAVA\_HOME pointing to the root of the JDK. On the command line, I can now type `grails -version` and get back 2.3.11.

I restarted GGTS, deleted the project and created it again, adding 2.3.11 as a new Grails version, and set the project to use that. It fails to compile, saying the version is wrong. Fair enough. But when I try to launch the command line from GGTS using the green circular button at the top which looks like 3 cups, it says:

```
Retrieving available scripts
Retrieving available scripts
An internal error occurred during: "Retrieving available scripts".
java.lang.NullPointerException
```

Any ideas?
2014/08/26
[ "https://Stackoverflow.com/questions/25501251", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1072187/" ]
In my case the problem was that I had moved Grails to a different folder. I just had to change the location of Grails in the settings. Hope that helps somebody.
Check if your environment variable of `GRAILS_HOME` is set to the right version of Grails.
25,501,251
I installed a fresh copy of GGTS on a fresh copy of Windows 8 with JDK 1.7 installed. I tried to get it to compile my existing project, which was based on 2.3.6, and it failed miserably as GGTS comes with Grails 2.4.2. I know several people who had problems with 2.4.x, so I decided to stick with 2.3.

So I downloaded 2.3.11 (latest 2.3) and created a GRAILS\_HOME pointing to the 2.3.11 dir, and JAVA\_HOME pointing to the root of the JDK. On the command line, I can now type `grails -version` and get back 2.3.11.

I restarted GGTS, deleted the project and created it again, adding 2.3.11 as a new Grails version, and set the project to use that. It fails to compile, saying the version is wrong. Fair enough. But when I try to launch the command line from GGTS using the green circular button at the top which looks like 3 cups, it says:

```
Retrieving available scripts
Retrieving available scripts
An internal error occurred during: "Retrieving available scripts".
java.lang.NullPointerException
```

Any ideas?
2014/08/26
[ "https://Stackoverflow.com/questions/25501251", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1072187/" ]
In my case the problem was that I had moved Grails to a different folder. I just had to change the location of Grails in the settings. Hope that helps somebody.
I ran into this same issue, and the problem was caused by the Grails installation it was trying to use. It wanted to use the default version that comes with GGTS rather than the one I installed myself. To fix, go to Window > Preferences > Groovy > Grails, then "Edit" the Grails installation it shows. I had to switch mine from `C:\ggts-bundle\grails-2.4.4\` to `C:\grails-2.4.2\`.