Dataset columns: qid (int64, 1 to 74.7M), question (string, length 0 to 58.3k), date (string, length 10), metadata (list), response_j (string, length 2 to 48.3k), response_k (string, length 2 to 40.5k)
46,689,384
There is a css file and I want to do two things: 1) Remove all webkit keyframes and the surrounding whitespace, like: ``` @keyframes outToLeft { to { opacity: 0; -webkit-transform: translate3d(-100%, 0, 0); transform: translate3d(-100%, 0, 0); } } ``` 2) Remove all webkit-prefixed properties and the surrounding whitespace, like: ``` -webkit-transform: translate3d(100%, 0, 0); ``` I tried to use `%s` but it didn't work (maybe my construction wasn't right). What is the best way to do this?
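Outside of vim's `%s`, the same cleanup can be sketched with a small script. This is a minimal Python sketch under stated assumptions: the keyframe blocks to delete are written as `@-webkit-keyframes …` (the question's sample shows an unprefixed `@keyframes`, so adjust the pattern to whichever form your file uses), braces nest only one level inside a keyframes block, and every prefixed declaration ends with `;`:

```python
import re

css = """
.slide {
    -webkit-transform: translate3d(100%, 0, 0);
    transform: translate3d(100%, 0, 0);
}

@-webkit-keyframes outToLeft {
  to {
    opacity: 0;
    -webkit-transform: translate3d(-100%, 0, 0);
    transform: translate3d(-100%, 0, 0);
  }
}
"""

# Remove whole @-webkit-keyframes blocks, including the newline before them.
# Assumes braces nest exactly one level deep inside the block, as in
# typical keyframe rules ("to { ... }", "0% { ... }", etc.).
css = re.sub(r'\n?\s*@-webkit-keyframes[^{]*\{(?:[^{}]*\{[^{}]*\})*[^{}]*\}', '', css)

# Remove individual -webkit- prefixed declaration lines.
css = re.sub(r'\n\s*-webkit-[\w-]+\s*:[^;]*;', '', css)

print(css)
```

The unprefixed `transform` declarations are left untouched; only lines whose property name literally starts with `-webkit-` are dropped.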
2017/10/11
[ "https://Stackoverflow.com/questions/46689384", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7013674/" ]
First, add the permission to your manifest: ``` <uses-permission android:name="android.permission.CALL_PHONE" /> ``` With the permission in the manifest, the following code will work; replace "Number\_to\_call" with the number you want to dial (note that `ACTION_DIAL` only opens the dialer and does not itself require any permission; `CALL_PHONE` is needed if you switch to `ACTION_CALL`): ``` val call = Intent(Intent.ACTION_DIAL) call.setData(Uri.parse("tel:" + "Number_to_call")) startActivity(call) ```
You need to request the permission at run time. [Download the source code from here](http://deepshikhapuri.blogspot.in/2017/11/runtime-permission-in-android-kotlin.html) Click handler for the layout: ``` rl_call.setOnClickListener { if (boolean_call) { phonecall() } else { fn_permission(Manifest.permission.CALL_PHONE, CALLMODE) } } ``` Permission request callback (uses a custom `requestPermissions` helper with a `PermissionCallBack` interface, from the linked source): ``` fun fn_permission(permission: String, mode: Int) { requestPermissions(permission, object : PermissionCallBack { override fun permissionGranted() { super.permissionGranted() Log.v("Call permissions", "Granted") boolean_call = true phonecall() } override fun permissionDenied() { super.permissionDenied() Log.v("Call permissions", "Denied") boolean_call = false } }) } ``` Function that fires the call intent: ``` fun phonecall() { val intent = Intent(Intent.ACTION_CALL) intent.data = Uri.parse("tel:1234567890") startActivity(intent) } ``` Thanks!
46,689,384
There is a css file and I want to do two things: 1) Remove all webkit keyframes and the surrounding whitespace, like: ``` @keyframes outToLeft { to { opacity: 0; -webkit-transform: translate3d(-100%, 0, 0); transform: translate3d(-100%, 0, 0); } } ``` 2) Remove all webkit-prefixed properties and the surrounding whitespace, like: ``` -webkit-transform: translate3d(100%, 0, 0); ``` I tried to use `%s` but it didn't work (maybe my construction wasn't right). What is the best way to do this?
2017/10/11
[ "https://Stackoverflow.com/questions/46689384", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7013674/" ]
First, add the permission to your manifest: ``` <uses-permission android:name="android.permission.CALL_PHONE" /> ``` With the permission in the manifest, the following code will work; replace "Number\_to\_call" with the number you want to dial (note that `ACTION_DIAL` only opens the dialer and does not itself require any permission; `CALL_PHONE` is needed if you switch to `ACTION_CALL`): ``` val call = Intent(Intent.ACTION_DIAL) call.setData(Uri.parse("tel:" + "Number_to_call")) startActivity(call) ```
First, add the permission to your `manifest`: ``` <uses-permission android:name="android.permission.CALL_PHONE" /> ``` Then use this in place of your method (`MY_PERMISSIONS_REQUEST_CALL_PHONE` is an arbitrary request-code constant you define): ``` fun buChargeEvent(view: View) { val number: Int = txtCharge.text.toString().toInt() val callIntent = Intent(Intent.ACTION_CALL) callIntent.data = Uri.parse("tel:$number") if (ActivityCompat.checkSelfPermission(this, Manifest.permission.CALL_PHONE) != PackageManager.PERMISSION_GRANTED) { if (ActivityCompat.shouldShowRequestPermissionRationale(this as Activity, Manifest.permission.CALL_PHONE)) { // TODO: explain to the user why the permission is needed } else { ActivityCompat.requestPermissions(this, arrayOf(Manifest.permission.CALL_PHONE), MY_PERMISSIONS_REQUEST_CALL_PHONE) } return // wait for onRequestPermissionsResult before placing the call } startActivity(callIntent) } ```
46,689,384
There is a css file and I want to do two things: 1) Remove all webkit keyframes and the surrounding whitespace, like: ``` @keyframes outToLeft { to { opacity: 0; -webkit-transform: translate3d(-100%, 0, 0); transform: translate3d(-100%, 0, 0); } } ``` 2) Remove all webkit-prefixed properties and the surrounding whitespace, like: ``` -webkit-transform: translate3d(100%, 0, 0); ``` I tried to use `%s` but it didn't work (maybe my construction wasn't right). What is the best way to do this?
2017/10/11
[ "https://Stackoverflow.com/questions/46689384", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7013674/" ]
First, add the permission to your manifest: ``` <uses-permission android:name="android.permission.CALL_PHONE" /> ``` With the permission in the manifest, the following code will work; replace "Number\_to\_call" with the number you want to dial (note that `ACTION_DIAL` only opens the dialer and does not itself require any permission; `CALL_PHONE` is needed if you switch to `ACTION_CALL`): ``` val call = Intent(Intent.ACTION_DIAL) call.setData(Uri.parse("tel:" + "Number_to_call")) startActivity(call) ```
**This is the complete code for runtime Call Phone permissions.** **Step 1**: add the permission to the manifest: ``` <uses-permission android:name="android.permission.CALL_PHONE" /> ``` **Step 2**: call `checkAndroidVersion()` in `onCreate()`: ``` fun checkAndroidVersion() { if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.M) { if (checkAndRequestPermissions()) { // permission already granted } else { // permission is being requested } } else { // pre-Marshmallow devices: permissions are granted at install time } } val REQUEST_ID_MULTIPLE_PERMISSIONS = 1 fun checkAndRequestPermissions(): Boolean { val call = ContextCompat.checkSelfPermission(this@MainActivity, Manifest.permission.CALL_PHONE) val listPermissionsNeeded = ArrayList<String>() if (call != PackageManager.PERMISSION_GRANTED) { listPermissionsNeeded.add(Manifest.permission.CALL_PHONE) } if (listPermissionsNeeded.isNotEmpty()) { ActivityCompat.requestPermissions(this@MainActivity, listPermissionsNeeded.toTypedArray(), REQUEST_ID_MULTIPLE_PERMISSIONS) return false } return true } override fun onRequestPermissionsResult(requestCode: Int, permissions: Array<String>, grantResults: IntArray) { Log.d("onRequestPermissions", "Permission callback called") when (requestCode) { REQUEST_ID_MULTIPLE_PERMISSIONS -> { val perms = HashMap<String, Int>() perms[Manifest.permission.CALL_PHONE] = PackageManager.PERMISSION_GRANTED // fill with the actual results from the user if (grantResults.isNotEmpty()) { for (i in permissions.indices) perms[permissions[i]] = grantResults[i] if (perms[Manifest.permission.CALL_PHONE] == PackageManager.PERMISSION_GRANTED) { // permission granted: proceed with the normal flow } else { Log.d("onRequestPermissions", "Permission not granted, ask again") // denied without "never ask again": explain why the permission is needed and retry if (ActivityCompat.shouldShowRequestPermissionRationale(this@MainActivity, Manifest.permission.CALL_PHONE)) { showDialogOK("Call permission is required for this app", DialogInterface.OnClickListener { dialog, which -> when (which) { DialogInterface.BUTTON_POSITIVE -> checkAndRequestPermissions() DialogInterface.BUTTON_NEGATIVE -> { /* disable the related feature or quit */ } } }) } else { // denied with "never ask again" checked Toast.makeText(this@MainActivity, "Go to settings and enable permissions", Toast.LENGTH_LONG).show() } } } } } } fun showDialogOK(message: String, okListener: DialogInterface.OnClickListener) { AlertDialog.Builder(this@MainActivity) .setMessage(message) .setPositiveButton("OK", okListener) .setNegativeButton("Cancel", okListener) .create() .show() } ``` **Step 3**: on button click: ``` fun buChargeEvent(view: View) { if (checkAndRequestPermissions()) { val number: Int = txtCharge.text.toString().toInt() val intentCharge = Intent(Intent.ACTION_CALL) intentCharge.data = Uri.parse("tel:$number") startActivity(intentCharge) } } ```
46,689,384
There is a css file and I want to do two things: 1) Remove all webkit keyframes and the surrounding whitespace, like: ``` @keyframes outToLeft { to { opacity: 0; -webkit-transform: translate3d(-100%, 0, 0); transform: translate3d(-100%, 0, 0); } } ``` 2) Remove all webkit-prefixed properties and the surrounding whitespace, like: ``` -webkit-transform: translate3d(100%, 0, 0); ``` I tried to use `%s` but it didn't work (maybe my construction wasn't right). What is the best way to do this?
2017/10/11
[ "https://Stackoverflow.com/questions/46689384", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7013674/" ]
You need to request the permission at run time. [Download the source code from here](http://deepshikhapuri.blogspot.in/2017/11/runtime-permission-in-android-kotlin.html) Click handler for the layout: ``` rl_call.setOnClickListener { if (boolean_call) { phonecall() } else { fn_permission(Manifest.permission.CALL_PHONE, CALLMODE) } } ``` Permission request callback (uses a custom `requestPermissions` helper with a `PermissionCallBack` interface, from the linked source): ``` fun fn_permission(permission: String, mode: Int) { requestPermissions(permission, object : PermissionCallBack { override fun permissionGranted() { super.permissionGranted() Log.v("Call permissions", "Granted") boolean_call = true phonecall() } override fun permissionDenied() { super.permissionDenied() Log.v("Call permissions", "Denied") boolean_call = false } }) } ``` Function that fires the call intent: ``` fun phonecall() { val intent = Intent(Intent.ACTION_CALL) intent.data = Uri.parse("tel:1234567890") startActivity(intent) } ``` Thanks!
**This is the complete code for runtime Call Phone permissions.** **Step 1**: add the permission to the manifest: ``` <uses-permission android:name="android.permission.CALL_PHONE" /> ``` **Step 2**: call `checkAndroidVersion()` in `onCreate()`: ``` fun checkAndroidVersion() { if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.M) { if (checkAndRequestPermissions()) { // permission already granted } else { // permission is being requested } } else { // pre-Marshmallow devices: permissions are granted at install time } } val REQUEST_ID_MULTIPLE_PERMISSIONS = 1 fun checkAndRequestPermissions(): Boolean { val call = ContextCompat.checkSelfPermission(this@MainActivity, Manifest.permission.CALL_PHONE) val listPermissionsNeeded = ArrayList<String>() if (call != PackageManager.PERMISSION_GRANTED) { listPermissionsNeeded.add(Manifest.permission.CALL_PHONE) } if (listPermissionsNeeded.isNotEmpty()) { ActivityCompat.requestPermissions(this@MainActivity, listPermissionsNeeded.toTypedArray(), REQUEST_ID_MULTIPLE_PERMISSIONS) return false } return true } override fun onRequestPermissionsResult(requestCode: Int, permissions: Array<String>, grantResults: IntArray) { Log.d("onRequestPermissions", "Permission callback called") when (requestCode) { REQUEST_ID_MULTIPLE_PERMISSIONS -> { val perms = HashMap<String, Int>() perms[Manifest.permission.CALL_PHONE] = PackageManager.PERMISSION_GRANTED // fill with the actual results from the user if (grantResults.isNotEmpty()) { for (i in permissions.indices) perms[permissions[i]] = grantResults[i] if (perms[Manifest.permission.CALL_PHONE] == PackageManager.PERMISSION_GRANTED) { // permission granted: proceed with the normal flow } else { Log.d("onRequestPermissions", "Permission not granted, ask again") // denied without "never ask again": explain why the permission is needed and retry if (ActivityCompat.shouldShowRequestPermissionRationale(this@MainActivity, Manifest.permission.CALL_PHONE)) { showDialogOK("Call permission is required for this app", DialogInterface.OnClickListener { dialog, which -> when (which) { DialogInterface.BUTTON_POSITIVE -> checkAndRequestPermissions() DialogInterface.BUTTON_NEGATIVE -> { /* disable the related feature or quit */ } } }) } else { // denied with "never ask again" checked Toast.makeText(this@MainActivity, "Go to settings and enable permissions", Toast.LENGTH_LONG).show() } } } } } } fun showDialogOK(message: String, okListener: DialogInterface.OnClickListener) { AlertDialog.Builder(this@MainActivity) .setMessage(message) .setPositiveButton("OK", okListener) .setNegativeButton("Cancel", okListener) .create() .show() } ``` **Step 3**: on button click: ``` fun buChargeEvent(view: View) { if (checkAndRequestPermissions()) { val number: Int = txtCharge.text.toString().toInt() val intentCharge = Intent(Intent.ACTION_CALL) intentCharge.data = Uri.parse("tel:$number") startActivity(intentCharge) } } ```
46,689,384
There is a css file and I want to do two things: 1) Remove all webkit keyframes and the surrounding whitespace, like: ``` @keyframes outToLeft { to { opacity: 0; -webkit-transform: translate3d(-100%, 0, 0); transform: translate3d(-100%, 0, 0); } } ``` 2) Remove all webkit-prefixed properties and the surrounding whitespace, like: ``` -webkit-transform: translate3d(100%, 0, 0); ``` I tried to use `%s` but it didn't work (maybe my construction wasn't right). What is the best way to do this?
2017/10/11
[ "https://Stackoverflow.com/questions/46689384", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7013674/" ]
First, add the permission to your `manifest`: ``` <uses-permission android:name="android.permission.CALL_PHONE" /> ``` Then use this in place of your method (`MY_PERMISSIONS_REQUEST_CALL_PHONE` is an arbitrary request-code constant you define): ``` fun buChargeEvent(view: View) { val number: Int = txtCharge.text.toString().toInt() val callIntent = Intent(Intent.ACTION_CALL) callIntent.data = Uri.parse("tel:$number") if (ActivityCompat.checkSelfPermission(this, Manifest.permission.CALL_PHONE) != PackageManager.PERMISSION_GRANTED) { if (ActivityCompat.shouldShowRequestPermissionRationale(this as Activity, Manifest.permission.CALL_PHONE)) { // TODO: explain to the user why the permission is needed } else { ActivityCompat.requestPermissions(this, arrayOf(Manifest.permission.CALL_PHONE), MY_PERMISSIONS_REQUEST_CALL_PHONE) } return // wait for onRequestPermissionsResult before placing the call } startActivity(callIntent) } ```
**This is the complete code for runtime Call Phone permissions.** **Step 1**: add the permission to the manifest: ``` <uses-permission android:name="android.permission.CALL_PHONE" /> ``` **Step 2**: call `checkAndroidVersion()` in `onCreate()`: ``` fun checkAndroidVersion() { if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.M) { if (checkAndRequestPermissions()) { // permission already granted } else { // permission is being requested } } else { // pre-Marshmallow devices: permissions are granted at install time } } val REQUEST_ID_MULTIPLE_PERMISSIONS = 1 fun checkAndRequestPermissions(): Boolean { val call = ContextCompat.checkSelfPermission(this@MainActivity, Manifest.permission.CALL_PHONE) val listPermissionsNeeded = ArrayList<String>() if (call != PackageManager.PERMISSION_GRANTED) { listPermissionsNeeded.add(Manifest.permission.CALL_PHONE) } if (listPermissionsNeeded.isNotEmpty()) { ActivityCompat.requestPermissions(this@MainActivity, listPermissionsNeeded.toTypedArray(), REQUEST_ID_MULTIPLE_PERMISSIONS) return false } return true } override fun onRequestPermissionsResult(requestCode: Int, permissions: Array<String>, grantResults: IntArray) { Log.d("onRequestPermissions", "Permission callback called") when (requestCode) { REQUEST_ID_MULTIPLE_PERMISSIONS -> { val perms = HashMap<String, Int>() perms[Manifest.permission.CALL_PHONE] = PackageManager.PERMISSION_GRANTED // fill with the actual results from the user if (grantResults.isNotEmpty()) { for (i in permissions.indices) perms[permissions[i]] = grantResults[i] if (perms[Manifest.permission.CALL_PHONE] == PackageManager.PERMISSION_GRANTED) { // permission granted: proceed with the normal flow } else { Log.d("onRequestPermissions", "Permission not granted, ask again") // denied without "never ask again": explain why the permission is needed and retry if (ActivityCompat.shouldShowRequestPermissionRationale(this@MainActivity, Manifest.permission.CALL_PHONE)) { showDialogOK("Call permission is required for this app", DialogInterface.OnClickListener { dialog, which -> when (which) { DialogInterface.BUTTON_POSITIVE -> checkAndRequestPermissions() DialogInterface.BUTTON_NEGATIVE -> { /* disable the related feature or quit */ } } }) } else { // denied with "never ask again" checked Toast.makeText(this@MainActivity, "Go to settings and enable permissions", Toast.LENGTH_LONG).show() } } } } } } fun showDialogOK(message: String, okListener: DialogInterface.OnClickListener) { AlertDialog.Builder(this@MainActivity) .setMessage(message) .setPositiveButton("OK", okListener) .setNegativeButton("Cancel", okListener) .create() .show() } ``` **Step 3**: on button click: ``` fun buChargeEvent(view: View) { if (checkAndRequestPermissions()) { val number: Int = txtCharge.text.toString().toInt() val intentCharge = Intent(Intent.ACTION_CALL) intentCharge.data = Uri.parse("tel:$number") startActivity(intentCharge) } } ```
23,222,762
I'm writing a library that uses [Function Hooking](http://en.wikipedia.org/wiki/Hooking) to inspect the arguments sent from 3rd-party code to a 3rd-party library. The caller and callee are both written in C, and the hooking is accomplished by providing the loader with an array of pairs of function pointers. Initially, I wrote this in C and duplicated a *lot* of code for each function hook. After writing a handful of hooks, I looked to C++ templates to lighten the load. The following example works with a fixed number of template arguments and demonstrates the idea: ``` #include <cstdio> #include <iostream> typedef int (*PrintCallback)(const char *); int PrintSingle(const char *input) { return printf("%s", input); } template<typename Result, typename Arg, Result(*callback)(Arg)> Result WrapFunc(Arg arg) { std::cout << "Logging logic goes here..." << std::endl; return callback(std::forward<Arg>(arg)); } int main(int argc, char *argv[]) { PrintCallback pc = WrapFunc<int, const char *, PrintSingle>; pc("Hello, World!\n"); return 0; } ``` Outputs: ``` Logging logic goes here... Hello, World! ``` My trouble comes when I try to use C++11's Variadic Templates to generalize this solution to a variable number of template arguments: ``` template<typename Result, typename... Args, Result(*callback)(Args...)> Result WrapFunc(Args... args) { std::cout << "Logging logic goes here..." << std::endl; return callback(std::forward<Args>(args)...); } ``` When I try to instantiate this template, it's not clear whether the last template argument is the "callback" value or a part of "Args". 
My compiler returns this error message: ``` variadic.cpp:22:21: error: address of overloaded function 'WrapFunc' does not match required type 'int (const char *)' PrintCallback pc = WrapFunc<int,const char *, PrintSingle>; ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ variadic.cpp:14:8: note: candidate template ignored: invalid explicitly-specified argument for template parameter 'Args' Result WrapFunc(Args... args) ``` However, I can't swap the order of "Args" and "callback" in the template because the callback's type is dependent on the definition of Args. Is there a way to do this properly, so that I can provide a table of C function pointers to the loader? Solutions that rely on returning std::functions at run-time are off the table, since the calling code won't be able to use them.
2014/04/22
[ "https://Stackoverflow.com/questions/23222762", "https://Stackoverflow.com", "https://Stackoverflow.com/users/773/" ]
You need a correlated subselect, as @IanBjorhovde mentioned: ``` update ssa.psa_xtn xtn set xtn.c_id = ( select psa.c_id from ccc.p_s_a psa where psa.a_id = xtn.a_id ) ```
You may want to consider looking at the `MERGE` statement: ``` MERGE INTO ssa.psa_xtn xtn using ccc.p_s_a psa on (xtn.a_id = psa.a_id) when matched then update set xtn.c_id = psa.c_id; ``` It offers a lot more flexibility than a correlated subselect, and also provides the ability to perform upserts (update, else insert).
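For readers without a DB2 instance handy, the correlated-subselect form from the other answer can be sketched in a few lines of Python using SQLite (which has no `MERGE`); the table and column names follow the question, and the data is made up for illustration:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE psa_xtn (a_id INTEGER, c_id INTEGER)")
cur.execute("CREATE TABLE p_s_a (a_id INTEGER, c_id INTEGER)")
cur.executemany("INSERT INTO psa_xtn VALUES (?, ?)", [(1, None), (2, None)])
cur.executemany("INSERT INTO p_s_a VALUES (?, ?)", [(1, 10), (2, 20)])

# Correlated subselect: for each row of psa_xtn, look up the matching
# c_id in p_s_a by a_id.
cur.execute("""
    UPDATE psa_xtn
    SET c_id = (SELECT psa.c_id FROM p_s_a psa WHERE psa.a_id = psa_xtn.a_id)
""")
rows = cur.execute("SELECT a_id, c_id FROM psa_xtn ORDER BY a_id").fetchall()
print(rows)  # [(1, 10), (2, 20)]
```

Note that with this form, rows in `psa_xtn` that have no match in `p_s_a` get `c_id` set to NULL, whereas `MERGE ... WHEN MATCHED` leaves them untouched.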
46,393,867
I have several scripts (jobs) that run on a Linux system (bash scripts or Python scripts). Each script continuously performs the same task over and over, with some sleep in between (perform the task, sleep for a while, perform the task again). I want a web interface where I can at least a) start/stop each script and b) see the status of each script. Ideally I could also view the log files in the web interface. Is there a free (open source) tool that does this?
2017/09/24
[ "https://Stackoverflow.com/questions/46393867", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8527441/" ]
To quote [Java 9's javadoc](https://docs.oracle.com/javase/9/docs/api/java/lang/Class.html#newInstance--): > > The call > > > `clazz.newInstance()` > > > can be replaced by > > > `clazz.getDeclaredConstructor().newInstance()` > > >
``` Class.getDeclaredConstructor(...).newInstance(...) ``` Refer to [Google Error Prone's documentation](http://errorprone.info/bugpattern/ClassNewInstance), for example, for an explanation of why.
55,595
In my beamer block the title always comes out left-aligned. Is it possible to center the block title? ``` \documentclass[final, 12pt]{beamer} \usepackage[size=custom,width=120,height=120,scale=1.7,orientation=portrait]{beamerposter} \usepackage{ragged2e} \usepackage{graphicx} \usepackage{lipsum} \usepackage{etoolbox} \usepackage[absolute,overlay]{textpos} % for the textblock environment \apptocmd{\frame}{\justifying}{}{} \usebackgroundtemplate{\centering \includegraphics[width=\paperwidth, height=\paperheight]{Figure/LG_PKL.jpg}} \addtobeamertemplate{block begin}{\pgfsetfillopacity{0.65}}{\pgfsetfillopacity{1}} \addtobeamertemplate{block beamercolorbox begin}{\pgfsetfillopacity{0.65}}{\pgfsetfillopacity{1}} \newenvironment<>{varblock}[2][\textwidth]{% \setlength{\textwidth}{#1} \begin{actionenv}#3% \def\insertblocktitle{#2}% \par% \usebeamertemplate{block begin}} {\par% \usebeamertemplate{block end}% \end{actionenv}} \usecolortheme{rose} \begin{document} \begin{frame} \begin{textblock}{}(0.2,1.15)% \begin{varblock}[35cm]{\textbf{1. Introduction}} \justifying Space for discussion. \end{varblock} \end{textblock} \end{frame} \end{document} ```
2012/05/13
[ "https://tex.stackexchange.com/questions/55595", "https://tex.stackexchange.com", "https://tex.stackexchange.com/users/10060/" ]
Actually, I found a much easier way of centering the title within a block. It may be a bit hacky, but it works: ``` \begin{block}{\centering 1. Introduction} ... ... \end{block} ``` I imagine it would work similarly for your varblock environment: ``` \begin{varblock}[35cm]{\centering \textbf{1. Introduction}} \justifying Space for discussion. \end{varblock} ```
Change your definition of `varblock` to ``` \newenvironment<>{varblock}[2][\textwidth]{% \setlength{\textwidth}{#1} \begin{actionenv}#3% \def\insertblocktitle{\centering#2\par}% \par% \usebeamertemplate{block begin}} {\par% \usebeamertemplate{block end}% \end{actionenv}} ```
55,595
In my beamer block the title always comes out left-aligned. Is it possible to center the block title? ``` \documentclass[final, 12pt]{beamer} \usepackage[size=custom,width=120,height=120,scale=1.7,orientation=portrait]{beamerposter} \usepackage{ragged2e} \usepackage{graphicx} \usepackage{lipsum} \usepackage{etoolbox} \usepackage[absolute,overlay]{textpos} % for the textblock environment \apptocmd{\frame}{\justifying}{}{} \usebackgroundtemplate{\centering \includegraphics[width=\paperwidth, height=\paperheight]{Figure/LG_PKL.jpg}} \addtobeamertemplate{block begin}{\pgfsetfillopacity{0.65}}{\pgfsetfillopacity{1}} \addtobeamertemplate{block beamercolorbox begin}{\pgfsetfillopacity{0.65}}{\pgfsetfillopacity{1}} \newenvironment<>{varblock}[2][\textwidth]{% \setlength{\textwidth}{#1} \begin{actionenv}#3% \def\insertblocktitle{#2}% \par% \usebeamertemplate{block begin}} {\par% \usebeamertemplate{block end}% \end{actionenv}} \usecolortheme{rose} \begin{document} \begin{frame} \begin{textblock}{}(0.2,1.15)% \begin{varblock}[35cm]{\textbf{1. Introduction}} \justifying Space for discussion. \end{varblock} \end{textblock} \end{frame} \end{document} ```
2012/05/13
[ "https://tex.stackexchange.com/questions/55595", "https://tex.stackexchange.com", "https://tex.stackexchange.com/users/10060/" ]
Change your definition of `varblock` to ``` \newenvironment<>{varblock}[2][\textwidth]{% \setlength{\textwidth}{#1} \begin{actionenv}#3% \def\insertblocktitle{\centering#2\par}% \par% \usebeamertemplate{block begin}} {\par% \usebeamertemplate{block end}% \end{actionenv}} ```
As the `block begin` beamertemplate sets the font associated with `block title` before outputting the block title, one can hijack this function to sneak in a `\centering` command instead of redefining the entire template: ``` \setbeamerfont{block title}{size={\centering}} ``` This will directly affect all three types of blocks.
55,595
In my beamer block the title always comes out left-aligned. Is it possible to center the block title? ``` \documentclass[final, 12pt]{beamer} \usepackage[size=custom,width=120,height=120,scale=1.7,orientation=portrait]{beamerposter} \usepackage{ragged2e} \usepackage{graphicx} \usepackage{lipsum} \usepackage{etoolbox} \usepackage[absolute,overlay]{textpos} % for the textblock environment \apptocmd{\frame}{\justifying}{}{} \usebackgroundtemplate{\centering \includegraphics[width=\paperwidth, height=\paperheight]{Figure/LG_PKL.jpg}} \addtobeamertemplate{block begin}{\pgfsetfillopacity{0.65}}{\pgfsetfillopacity{1}} \addtobeamertemplate{block beamercolorbox begin}{\pgfsetfillopacity{0.65}}{\pgfsetfillopacity{1}} \newenvironment<>{varblock}[2][\textwidth]{% \setlength{\textwidth}{#1} \begin{actionenv}#3% \def\insertblocktitle{#2}% \par% \usebeamertemplate{block begin}} {\par% \usebeamertemplate{block end}% \end{actionenv}} \usecolortheme{rose} \begin{document} \begin{frame} \begin{textblock}{}(0.2,1.15)% \begin{varblock}[35cm]{\textbf{1. Introduction}} \justifying Space for discussion. \end{varblock} \end{textblock} \end{frame} \end{document} ```
2012/05/13
[ "https://tex.stackexchange.com/questions/55595", "https://tex.stackexchange.com", "https://tex.stackexchange.com/users/10060/" ]
Actually, I found a much easier way of centering the title within a block. It may be a bit hacky, but it works: ``` \begin{block}{\centering 1. Introduction} ... ... \end{block} ``` I imagine it would work similarly for your varblock environment: ``` \begin{varblock}[35cm]{\centering \textbf{1. Introduction}} \justifying Space for discussion. \end{varblock} ```
As the `block begin` beamertemplate sets the font associated with `block title` before outputting the block title, one can hijack this function to sneak in a `\centering` command instead of redefining the entire template: ``` \setbeamerfont{block title}{size={\centering}} ``` This will directly affect all three types of blocks.
29,553
Is it possible for a Reform Jew to violate a mitzva in the Torah or break halacha, like breaking shabbos by doing melacha ('labor')? After all, from what I understand, Reform Jews don't believe in a requirement to keep mitzvos; thus, I don't understand how a Reform Jew can violate something there's no requirement to keep. Can he? How does that work?
2013/06/24
[ "https://judaism.stackexchange.com/questions/29553", "https://judaism.stackexchange.com", "https://judaism.stackexchange.com/users/1059/" ]
Yes, a Reform Jew can absolutely transgress a mitzvah. That Reform doesn't say up front "here are all the mitzvot you must accept" does not mean that no Reform Jew accepts any. Reform (as taught today; I can't speak to early history) isn't about rejecting mitzvot. A Reform Jew reaches an understanding of halacha through a different path than others, and some may never reach it at all, but one who does is *exactly as liable for it* as any other Jew. If I drive on Shabbat I am guilty of violating Shabbat; I don't get to say "I'm Reform so I don't have to do that". The difference is that for any random Reform Jew, we don't know what his understanding of Shabbat is. So "is it possible to violate Shabbat?" Yes. "Is that guy over there driving his car violating Shabbat?" Yes, per halacha. "Does that guy over there driving his car believe he is violating Shabbat?" That depends on what he has accepted. If "that guy" is Orthodox, on the other hand (and not uneducated etc), then we can presume that he agrees he is transgressing. (Assuming in all cases, of course, that there isn't a *pikuach nefesh* issue that trumps Shabbat.) You may or may not agree with the approach, but your question didn't ask for a judgement, only an analysis.
It is possible for a Reform Jew to break Shabbos. From the perspective of traditional Judaism, it doesn't matter whether they believe in the laws. If they don't follow them, they are transgressing. From a Reform perspective, your description of Reform Judaism is inaccurate. The fact that hilchot Shabbos exist is indisputable. They're written in the Torah, therefore they exist. Reform Jews might deny their authenticity or their obligatory nature, but they cannot deny that they exist. As such, whenever a Reform Jew does something in violation of Shabbos, he is breaking Shabbos, even according to himself. He might not think that it matters though. The position of the Reform movement is not that halachot like those of Shabbos do not exist. Rather, it is that people should follow the halachot that have meaning to them.
64,159,919
I have two Pandas series which I merged using the following code: ``` HS4_Tariffs_16=pd.concat([df_tariff_HS4_16_PT,df_tariff_HS4_16_MFN],axis=1) ``` If you are wondering why I used concat in place of merge, the error 'Series' object has no attribute 'merge' showed up when I used merge. So anyway, I merged the two series using concat which resulted in a dataframe. Thereafter, I reset the index using the code: ``` HS4_Tariffs_16.reset_index() ``` Now the real problem cropped up when I tried to rename a column using the code: ``` HS4_Tariffs_16=HS4_Tariffs_16.rename(columns={'ProductCode':'HSCode'}) ``` Instead of renaming the column, it converted the column to an index. The output was something like this: ``` Preferential tariff for APTA countries MFN duties (Applied) ProductCode 101 0.3 0.3 102 0.3 0.3 103 0.3 0.3 104 0.3 0.3 105 0.3 0.3 ... ... ... 9702 0.1 0.1 9703 0.1 0.1 9704 0.0 0.0 9705 0.1 0.1 9706 0.1 0.1 1224 rows × 2 columns ``` Could you tell me which part of the code I need to correct if the final result that I want is a dataframe with the column named 'ProductCode' renamed as 'HSCode'.
2020/10/01
[ "https://Stackoverflow.com/questions/64159919", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14375190/" ]
The OnWillPop scope expects you to give it a proper function so try doing any one of these 2 changes **CHANGE1** ``` return WillPopScope( onWillPop:(){ widget.returnIndex; }, child: Scaffold( )); ``` OR **CHANGE2** ``` void returnIndex(){//Future<void> if it is a future object } ```
I can't comment yet, but to add on to the above answer: WillPopScope does indeed expect a Future in response to onWillPop, so you'll need to take that into account by defining an async function or otherwise returning a Future.
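For the pandas question at the top of this record, the likely culprit is that `reset_index()` returns a new DataFrame rather than modifying in place, so unless the result is assigned back, `ProductCode` never becomes a column and `rename()` has nothing to rename. A minimal sketch (the data here is made up; the `df_tariff_...` names are from the question):

```python
import pandas as pd

# Two series indexed by product code, mimicking the question's data.
pt = pd.Series([0.3, 0.1], index=[101, 9702],
               name="Preferential tariff for APTA countries")
pt.index.name = "ProductCode"
mfn = pd.Series([0.3, 0.1], index=[101, 9702], name="MFN duties (Applied)")
mfn.index.name = "ProductCode"

tariffs = pd.concat([pt, mfn], axis=1)

# reset_index() returns a NEW frame; assign it back, otherwise
# 'ProductCode' stays in the index and rename() finds no such column.
tariffs = tariffs.reset_index().rename(columns={"ProductCode": "HSCode"})
print(tariffs.columns.tolist())
```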
1,383,737
I've got a WiiU/Switch GameCube USB adapter that I'd like to plug into my computer (device 1) as well as my Nintendo Switch (device 2) and have both devices recognize that USB adapter and the controller plugged into it. I have seen some products that allow you to switch between the two devices but they won’t work for what I'm trying to do unless I can trick it into broadcasting data to both connections. It is not advertised on the box as a feature so it is probably not possible. Those switches look something like this [Kensington ShareCentral 2 (K33900US)](https://www.kensington.com/news/news-press-center/2008-news--press-center/kensington-sharecentral-brings-greater-convenience-and-ease-to-home-computing-shared-tasks/) and can be picked up for $9 to $20 bucks at my local Fry’s electronics shop: [![Kensington ShareCentral 2 for USB Device Sharing (K33900US)](https://i.stack.imgur.com/kVtKH.jpg)](https://i.stack.imgur.com/kVtKH.jpg) I was reading up on folks with similar situations and ran into this other Super User post, “[Connect USB storage device to 2 computers at a time.](https://superuser.com/questions/604911/connect-usb-storage-device-to-2-computers-at-a-time)” not exactly what I'm trying to do but it had lots of good information including [this possible solution from @UğurKırçıl](https://superuser.com/a/738168/167207) using 2 spliced USB wires. > > Thats so simple -if you know a little bit soldering- first get one female usb socket then get two usb cable solder two cables GND,DATA+,DATA- pins together solder 2 piece diodes for 2 cables + pin then connect flash drive to female socket and connect 2 cables to 2 pc NOTE 1 : why we use diode , because we didnt want burn anything NOTE 2 : NEVER OVERLOAD FLASH DRIVE IN THIS MODE > > > Well I do know some soldering and the guy at Fry’s seemed to think it could work. He also suggested splitting the USB on the GameCube adapter and going that route.
So my question is: [the above answer](https://superuser.com/a/738168/167207) describes a solution that would splice 2 wires together to produce a single female end and a 2 pronged male side. Would this cable configuration properly transfer USB data to both devices that it is plugged into? Why not do something like this if I have all the kit? Anything glaringly obvious I'm overlooking with USB 2.0 vs 3.0? I also noticed that the answer shown above seems to only reference a few of the cables in the USB 3.0 diagram. So I am assuming this description is for USB 2.0?
2018/12/15
[ "https://superuser.com/questions/1383737", "https://superuser.com", "https://superuser.com/users/319595/" ]
No, you cannot connect one (slave) USB device to two or more (master or host) USB systems. That's impossible given how the USB bus works. And that's why you have switches like the one you describe; if it was simpler, people wouldn't need those switches. The "splicing two USB wires" hardware hack will only work as long as you don't power up the two master USB systems at the same time. If you do, at best neither will work reliably with the slave USB device, and at worst you'll damage the USB chips on the master devices. And that's just at the USB bus level, and ignoring any higher protocol levels (which also would need to allow two master devices).
I cannot say with 100% certainty, but *very highly* doubt this will work. Electrically, you could connect the device that way, splitting the send and receive wires to the computer and the Switch. With two physical connections, it is possible the voltage could drop below necessary levels and cause connection issues. However, the real issue is going to be at the software level. Specifically, with the device drivers and the USB protocol. There is two-way communication going back and forth from the computer and the Switch to the controller. I *highly* doubt the communication is going to be the same from both devices, nor will it be synchronized.
19,490,552
For an object of class `CheckToString`, when I give the object or the `objectName.toString()` as an argument to `System.out.println()` ie try to print it, i get the output as follows: ``` CheckToString@19821f ``` Here, the text after '@' is the hashCode, what all does it comprise of and what is the best practice of overriding hashCode()?
2013/10/21
[ "https://Stackoverflow.com/questions/19490552", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2575434/" ]
> > what all does it comprise of > > > It comprises "the unsigned hexadecimal representation of the hash code of the object", as it says in the [Javadoc](http://docs.oracle.com/javase/7/docs/api/java/lang/Object.html#toString%28%29). > > does it contains memory address of the object > > > No. Also documented. > > and what is the best practice of overriding hashCode()? > > > This is also documented clearly in the [Javadoc](http://docs.oracle.com/javase/7/docs/api/java/lang/Object.html#hashCode%28%29). I suggest you read it. It would be otiose to repeat it all here.
In `java.lang.Object`, `hashCode` typically returns a value derived from the object's internal address (rendered as 32-bit hexadecimal), although the language specification does not require that implementation. ``` what is the best practice of overriding hashCode()? ``` Once you start using collections, there will be times when you want to match one object to another. Two distinct objects can never be the same instance, but by overriding `hashCode()` and `equals()` together you can make value-based comparison possible.
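The contract being discussed — objects that compare equal must produce equal hash codes, both derived from the same fields — is the same idea in any language. Here is an analogous sketch in Python (in Java you would override `equals` and `hashCode` in exactly the same paired fashion; the `Account` class is illustrative):

```python
class Account:
    """Value object: equality and hash derived from the same fields."""

    def __init__(self, code: int, cost_center: str):
        self.code = code
        self.cost_center = cost_center

    def __eq__(self, other):
        if not isinstance(other, Account):
            return NotImplemented
        return (self.code, self.cost_center) == (other.code, other.cost_center)

    def __hash__(self):
        # Hash exactly the tuple used by __eq__, so equal objects hash equally.
        return hash((self.code, self.cost_center))

a = Account(882001, "CC-10")
b = Account(882001, "CC-10")
print(a == b, hash(a) == hash(b), len({a, b}))  # → True True 1
```

Breaking the pairing (overriding one method but not the other) is what makes hash-based collections misbehave, which is why the best practice is to always override them together.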
19,490,552
For an object of class `CheckToString`, when I give the object or the `objectName.toString()` as an argument to `System.out.println()` ie try to print it, i get the output as follows: ``` CheckToString@19821f ``` Here, the text after '@' is the hashCode, what all does it comprise of and what is the best practice of overriding hashCode()?
2013/10/21
[ "https://Stackoverflow.com/questions/19490552", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2575434/" ]
This is there in the Java docs > > As much as is reasonably practical, the hashCode method defined by class Object does return distinct integers for distinct objects. (This is typically implemented by converting the internal address of the object into an integer, but this implementation technique is not required by the JavaTM programming language.) > > > You will find this link useful <http://www.jusfortechies.com/java/core-java/hashcode.php>
In `java.lang.Object`, `hashCode` typically returns a value derived from the object's internal address (rendered as 32-bit hexadecimal), although the language specification does not require that implementation. ``` what is the best practice of overriding hashCode()? ``` Once you start using collections, there will be times when you want to match one object to another. Two distinct objects can never be the same instance, but by overriding `hashCode()` and `equals()` together you can make value-based comparison possible.
19,490,552
For an object of class `CheckToString`, when I give the object or the `objectName.toString()` as an argument to `System.out.println()` ie try to print it, i get the output as follows: ``` CheckToString@19821f ``` Here, the text after '@' is the hashCode, what all does it comprise of and what is the best practice of overriding hashCode()?
2013/10/21
[ "https://Stackoverflow.com/questions/19490552", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2575434/" ]
Going by the spec, there is no relationship between hashcode and memory location. However a memory address of an object could be a good (if not better) value for a hashcode. The general contract of hashCode is: > > Whenever it is invoked on the same object more than once during an > execution of a Java application, the hashCode method must consistently > return the same integer, provided no information used in equals > comparisons on the object is modified. This integer need not remain > consistent from one execution of an application to another execution > of the same application. If two objects are equal according to the > equals(Object) method, then calling the hashCode method on each of the > two objects must produce the same integer result. It is not required > that if two objects are unequal according to the > equals(java.lang.Object) method, then calling the hashCode method on > each of the two objects must produce distinct integer results. > However, the programmer should be aware that producing distinct > integer results for unequal objects may improve the performance of > hashtables. > > > [Source](http://docs.oracle.com/javase/1.4.2/docs/api/java/lang/Object.html#hashCode%28%29)
In `java.lang.Object`, `hashCode` typically returns a value derived from the object's internal address (rendered as 32-bit hexadecimal), although the language specification does not require that implementation. ``` what is the best practice of overriding hashCode()? ``` Once you start using collections, there will be times when you want to match one object to another. Two distinct objects can never be the same instance, but by overriding `hashCode()` and `equals()` together you can make value-based comparison possible.
19,490,552
For an object of class `CheckToString`, when I give the object or the `objectName.toString()` as an argument to `System.out.println()` ie try to print it, i get the output as follows: ``` CheckToString@19821f ``` Here, the text after '@' is the hashCode, what all does it comprise of and what is the best practice of overriding hashCode()?
2013/10/21
[ "https://Stackoverflow.com/questions/19490552", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2575434/" ]
As the Javadoc specifies, the hash code is typically the object's internal address converted to an integer, though this implementation technique is not required. > > As much as is reasonably practical, the hashCode method defined by class Object does return distinct integers for distinct objects. (This is typically implemented by converting the internal address of the object into an integer, but this implementation technique is not required by the JavaTM programming language.) > > > <http://docs.oracle.com/javase/7/docs/api/java/lang/Object.html#hashCode()>
In `java.lang.Object`, `hashCode` typically returns a value derived from the object's internal address (rendered as 32-bit hexadecimal), although the language specification does not require that implementation. ``` what is the best practice of overriding hashCode()? ``` Once you start using collections, there will be times when you want to match one object to another. Two distinct objects can never be the same instance, but by overriding `hashCode()` and `equals()` together you can make value-based comparison possible.
20,844,659
I'd like to know how other people are automating their JavaScript build process in CMS website builds. I currently use Grunt to automate SASS->CSS minification, which works really well because you're not overwriting your sass files with compressed code. You can't really do the same thing with JavaScript though, because you would be overwriting your JS if you compressed it. Obviously you could build .min.js files instead, but then you have to manually update file names in your html templates. I've also seen and done builds where you have a www folder and then a build folder; the build folder getting all the minified assets. This is nice, but tends to only work for smaller static sites in my experience. One idea we've played around with is having an unminified-js folder for production JS, then grunt builds into a script folder which is referenced from your templates. Any insight into a neat process would be gratefully received. Thanks!
2013/12/30
[ "https://Stackoverflow.com/questions/20844659", "https://Stackoverflow.com", "https://Stackoverflow.com/users/180405/" ]
Rewrite your ``` $bdd->exec('UPDATE Trajet SET name = \'echo $_SESSION[\'name\'];\''); ``` to ``` $bdd->exec("UPDATE Trajet SET name = '" . $_SESSION['name'] . "'"); ```
you should have: ``` $bdd->exec("UPDATE Trajet SET name = '" . $_SESSION['name'] . "'"); ``` not: ``` $bdd->exec('UPDATE Trajet SET name = \'echo $_SESSION[\'name\'];\''); ```
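A side note on the snippets above: building SQL by splicing user-controlled values into the query string is vulnerable to SQL injection. A parameterized statement avoids it — in PDO that means `$bdd->prepare(...)` with bound parameters; the same idea is sketched here in Python with the stdlib `sqlite3` driver as a stand-in:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Trajet (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO Trajet (name) VALUES ('old')")

# Hostile input: with string concatenation this would alter the SQL itself.
session_name = "Robert'); DROP TABLE Trajet;--"

# The driver binds the value; it is never spliced into the SQL text.
conn.execute("UPDATE Trajet SET name = ?", (session_name,))

row = conn.execute("SELECT name FROM Trajet").fetchone()
print(row[0])
```

The placeholder syntax differs per driver (`?` in sqlite3 and PDO, `%s` in some others), but the principle — keep SQL text and data separate — is the same everywhere.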
3,013,664
> > For what values of $a$ and $b$ is the following limit true? > $$\lim\_{x\to0}\left(\frac{\tan2x}{x^3}+\frac{a}{x^2}+\frac{\sin bx}{x}\right)=0$$ > > > This question is really confusing me. I know that $\tan(2x)/x^3$ approaches infinity as $x$ goes to $0$ (L'Hôpital's). I also understand that $\sin(bx)/x$ goes to $b$ as $x$ approaches $0$. However, I am not sure how to get rid of this infinity with the middle term. Any ideas? Thanks!
2018/11/26
[ "https://math.stackexchange.com/questions/3013664", "https://math.stackexchange.com", "https://math.stackexchange.com/users/533510/" ]
Since $\lim\_{x\to 0}\dfrac{\sin bx} {x} =b$ the given condition is equivalent to $$\lim\_{x\to 0}\frac{\tan 2x+ax}{x^3}=-b$$ or $$\lim\_{x\to 0}\frac{\tan 2x-2x}{x^3}+\frac{a+2}{x^2}=-b$$ Substituting $t=2x$ we get $$\lim\_{t\to 0}8\cdot \frac{\tan t-t} {t^3}+\dfrac{4a+8}{t^2}=-b$$ The first fraction tends to $1/3$ (as can be easily seen via L'Hospital's Rule or Taylor series) and therefore the above equation is equivalent to $$\lim\_{t\to 0}\frac{4a+8}{t^2}=-b-\frac{8}{3}$$ Multiplication by $t^2$ now gives us $4a+8=0$ or $a=-2$ and then from the above equation we get $b=-8/3$.
First, notice that your limit exists if and only if it exists without the last term (since, as you said, the limit of the last term exists). You have $$\tag1 \frac{\tan2x}{x^3}+\frac a{x^2}=\frac{\tan2x+ax}{x^3}. $$ Note that L'Hôpital can only be iterated here if the derivative of the numerator also vanishes at $0$, which forces $a=-2$. Indeed, the expression in $(1)$ is, close to $0$, $$\tag2 \frac{\tan2x+ax}{x^3}=\frac{2x+O(x^3)+ax}{x^3}=\frac{2+a}{x^2}+O(1). $$ This requires $a=-2$ for the limit to exist. In more detail, $$\tag3 \frac{\tan2x+ax}{x^3}=\frac{2x+\tfrac{(2x)^3}3+O(x^5)+ax}{x^3}=\frac{2+a}{x^2}+\tfrac83+O(x^2). $$ So it converges to $\tfrac83$ when $a=-2$. For the whole limit to be zero, we need that $$\tag4\lim\_{x\to0}\frac{\sin bx}{x}=-\frac83.$$ In the end, we need $a=-2$, $b=-\tfrac83$.
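The values $a=-2$, $b=-8/3$ derived above can be sanity-checked numerically — near $0$ the whole expression should shrink like a constant times $x^2$ (a quick sketch, not a proof):

```python
import math

def f(x, a=-2.0, b=-8.0 / 3.0):
    # The expression from the question with the candidate a and b plugged in.
    return math.tan(2 * x) / x**3 + a / x**2 + math.sin(b * x) / x

# Should decay roughly quadratically as x -> 0.
for x in (0.1, 0.01, 0.001):
    print(x, f(x))
```

Each tenfold shrink of $x$ should shrink $|f(x)|$ by roughly a factor of $100$, consistent with the $O(x^2)$ remainder.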
3,013,664
> > For what values of $a$ and $b$ is the following limit true? > $$\lim\_{x\to0}\left(\frac{\tan2x}{x^3}+\frac{a}{x^2}+\frac{\sin bx}{x}\right)=0$$ > > > This question is really confusing me. I know that $\tan(2x)/x^3$ approaches infinity as $x$ goes to $0$ (L'Hôpital's). I also understand that $\sin(bx)/x$ goes to $b$ as $x$ approaches $0$. However, I am not sure how to get rid of this infinity with the middle term. Any ideas? Thanks!
2018/11/26
[ "https://math.stackexchange.com/questions/3013664", "https://math.stackexchange.com", "https://math.stackexchange.com/users/533510/" ]
First, notice that your limit exists if and only if it exists without the last term (since, as you said, the limit of the last term exists). You have $$\tag1 \frac{\tan2x}{x^3}+\frac a{x^2}=\frac{\tan2x+ax}{x^3}. $$ Note that L'Hôpital can only be iterated here if the derivative of the numerator also vanishes at $0$, which forces $a=-2$. Indeed, the expression in $(1)$ is, close to $0$, $$\tag2 \frac{\tan2x+ax}{x^3}=\frac{2x+O(x^3)+ax}{x^3}=\frac{2+a}{x^2}+O(1). $$ This requires $a=-2$ for the limit to exist. In more detail, $$\tag3 \frac{\tan2x+ax}{x^3}=\frac{2x+\tfrac{(2x)^3}3+O(x^5)+ax}{x^3}=\frac{2+a}{x^2}+\tfrac83+O(x^2). $$ So it converges to $\tfrac83$ when $a=-2$. For the whole limit to be zero, we need that $$\tag4\lim\_{x\to0}\frac{\sin bx}{x}=-\frac83.$$ In the end, we need $a=-2$, $b=-\tfrac83$.
We have that $$\frac{\tan (2x)}{x^3}+\frac{a}{x^2}+\frac{\sin bx}{x}=\frac{\tan (2x)+ax+x^2\sin bx}{x^3}$$ and from here since $x^2\sin bx\sim bx^3$ and $\tan (2x)\sim 2x$ we need $a=-2$ as a necessary condition for the limit to exist then we can consider by standard limits $$\frac{\tan (2x)-2x+x^2\sin bx}{x^3}=8\frac{\tan (2x)-2x}{(2x)^3}+b\frac{\sin bx}{bx}\to \frac83+b=0$$
3,013,664
> > For what values of $a$ and $b$ is the following limit true? > $$\lim\_{x\to0}\left(\frac{\tan2x}{x^3}+\frac{a}{x^2}+\frac{\sin bx}{x}\right)=0$$ > > > This question is really confusing me. I know that $\tan(2x)/x^3$ approaches infinity as $x$ goes to $0$ (L'Hôpital's). I also understand that $\sin(bx)/x$ goes to $b$ as $x$ approaches $0$. However, I am not sure how to get rid of this infinity with the middle term. Any ideas? Thanks!
2018/11/26
[ "https://math.stackexchange.com/questions/3013664", "https://math.stackexchange.com", "https://math.stackexchange.com/users/533510/" ]
Since $\lim\_{x\to 0}\dfrac{\sin bx} {x} =b$ the given condition is equivalent to $$\lim\_{x\to 0}\frac{\tan 2x+ax}{x^3}=-b$$ or $$\lim\_{x\to 0}\frac{\tan 2x-2x}{x^3}+\frac{a+2}{x^2}=-b$$ Substituting $t=2x$ we get $$\lim\_{t\to 0}8\cdot \frac{\tan t-t} {t^3}+\dfrac{4a+8}{t^2}=-b$$ The first fraction tends to $1/3$ (as can be easily seen via L'Hospital's Rule or Taylor series) and therefore the above equation is equivalent to $$\lim\_{t\to 0}\frac{4a+8}{t^2}=-b-\frac{8}{3}$$ Multiplication by $t^2$ now gives us $4a+8=0$ or $a=-2$ and then from the above equation we get $b=-8/3$.
Using the usual Taylor series for $$y=\frac{\tan(2x)}{x^3}+\frac{a}{x^2}+\frac{\sin (bx)}{x}=\frac{\tan(2x)+ax+x^2\sin (bx)}{x^3}$$ The numerator write $$\left(2 x+\frac{8 x^3}{3}+\frac{64 x^5}{15}+O\left(x^7\right) \right)+a x+x^2\left(b x-\frac{b^3 x^3}{6}+\frac{b^5 x^5}{120}+O\left(x^7\right) \right)$$ that is to say $$(a+2) x+\left(b+\frac{8}{3}\right) x^3+\left(\frac{64}{15}-\frac{b^3}{6}\right) x^5+O\left(x^7\right)$$ making $$y=\frac{a+2}{x^2}+\left(b+\frac{8}{3}\right)+\left(\frac{64}{15}-\frac{b^3}{6}\right) x^2+O\left(x^4\right)$$ So, in order to have a limit equal to $0$, you need $a=-2$, $b=-\frac{8}{3}$ and then $$y=\frac{3008 }{405}x^2+O\left(x^4\right)$$
3,013,664
> > For what values of $a$ and $b$ is the following limit true? > $$\lim\_{x\to0}\left(\frac{\tan2x}{x^3}+\frac{a}{x^2}+\frac{\sin bx}{x}\right)=0$$ > > > This question is really confusing me. I know that $\tan(2x)/x^3$ approaches infinity as $x$ goes to $0$ (L'Hôpital's). I also understand that $\sin(bx)/x$ goes to $b$ as $x$ approaches $0$. However, I am not sure how to get rid of this infinity with the middle term. Any ideas? Thanks!
2018/11/26
[ "https://math.stackexchange.com/questions/3013664", "https://math.stackexchange.com", "https://math.stackexchange.com/users/533510/" ]
Since $\lim\_{x\to 0}\dfrac{\sin bx} {x} =b$ the given condition is equivalent to $$\lim\_{x\to 0}\frac{\tan 2x+ax}{x^3}=-b$$ or $$\lim\_{x\to 0}\frac{\tan 2x-2x}{x^3}+\frac{a+2}{x^2}=-b$$ Substituting $t=2x$ we get $$\lim\_{t\to 0}8\cdot \frac{\tan t-t} {t^3}+\dfrac{4a+8}{t^2}=-b$$ The first fraction tends to $1/3$ (as can be easily seen via L'Hospital's Rule or Taylor series) and therefore the above equation is equivalent to $$\lim\_{t\to 0}\frac{4a+8}{t^2}=-b-\frac{8}{3}$$ Multiplication by $t^2$ now gives us $4a+8=0$ or $a=-2$ and then from the above equation we get $b=-8/3$.
We have that $$\frac{\tan (2x)}{x^3}+\frac{a}{x^2}+\frac{\sin bx}{x}=\frac{\tan (2x)+ax+x^2\sin bx}{x^3}$$ and from here since $x^2\sin bx\sim bx^3$ and $\tan (2x)\sim 2x$ we need $a=-2$ as a necessary condition for the limit to exist then we can consider by standard limits $$\frac{\tan (2x)-2x+x^2\sin bx}{x^3}=8\frac{\tan (2x)-2x}{(2x)^3}+b\frac{\sin bx}{bx}\to \frac83+b=0$$
3,013,664
> > For what values of $a$ and $b$ is the following limit true? > $$\lim\_{x\to0}\left(\frac{\tan2x}{x^3}+\frac{a}{x^2}+\frac{\sin bx}{x}\right)=0$$ > > > This question is really confusing me. I know that $\tan(2x)/x^3$ approaches infinity as $x$ goes to $0$ (L'Hôpital's). I also understand that $\sin(bx)/x$ goes to $b$ as $x$ approaches $0$. However, I am not sure how to get rid of this infinity with the middle term. Any ideas? Thanks!
2018/11/26
[ "https://math.stackexchange.com/questions/3013664", "https://math.stackexchange.com", "https://math.stackexchange.com/users/533510/" ]
Using the usual Taylor series for $$y=\frac{\tan(2x)}{x^3}+\frac{a}{x^2}+\frac{\sin (bx)}{x}=\frac{\tan(2x)+ax+x^2\sin (bx)}{x^3}$$ The numerator write $$\left(2 x+\frac{8 x^3}{3}+\frac{64 x^5}{15}+O\left(x^7\right) \right)+a x+x^2\left(b x-\frac{b^3 x^3}{6}+\frac{b^5 x^5}{120}+O\left(x^7\right) \right)$$ that is to say $$(a+2) x+\left(b+\frac{8}{3}\right) x^3+\left(\frac{64}{15}-\frac{b^3}{6}\right) x^5+O\left(x^7\right)$$ making $$y=\frac{a+2}{x^2}+\left(b+\frac{8}{3}\right)+\left(\frac{64}{15}-\frac{b^3}{6}\right) x^2+O\left(x^4\right)$$ So, in order to have a limit equal to $0$, you need $a=-2$, $b=-\frac{8}{3}$ and then $$y=\frac{3008 }{405}x^2+O\left(x^4\right)$$
We have that $$\frac{\tan (2x)}{x^3}+\frac{a}{x^2}+\frac{\sin bx}{x}=\frac{\tan (2x)+ax+x^2\sin bx}{x^3}$$ and from here since $x^2\sin bx\sim bx^3$ and $\tan (2x)\sim 2x$ we need $a=-2$ as a necessary condition for the limit to exist then we can consider by standard limits $$\frac{\tan (2x)-2x+x^2\sin bx}{x^3}=8\frac{\tan (2x)-2x}{(2x)^3}+b\frac{\sin bx}{bx}\to \frac83+b=0$$
57,752,517
How would I read JSON data nested deep inside a file? I've tried different methods and can't seem to get this to work. ```html <template> <div> <div v-for="edu in info" :key="edu"> <div>{{ edu.section.title }}</div> // this is what I'm trying to get to work </div> <div class="card container"> {{ info }} </div> </div> </template> <script> import axios from 'axios'; export default { data() { return { info: null } }, mounted() { axios .get('./calculus.json') // Data Below .then(response => (this.info = response.data.edu)) .catch(error => console.log(error)) } } </script> ``` My JSON looks like this: ```json { "edu": { "1": { "title": "Title One", "subtitle": "Subtitle One", "description": "Description One", "section": { "1": { "title": "Section One Title", "content": "Section One Content" } } }, "2": { "title": "Title Two", "subtitle": "Subtitle Two", "description": "Description Two", "section": { "1": { "title": "Section One Title", "content": "Section One Content" } } } } } ``` How can I use v-for and get the data inside the section to get it to display under the title? For example: title, section>title, section>subtitle, etc.
2019/09/02
[ "https://Stackoverflow.com/questions/57752517", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8322364/" ]
how about that? ``` select * from( select col1,col2,col3,row_number() over (partition by col1,col2 order by col3 desc) rn from a ) where rn>1 ```
The way I understood it: ``` SQL> with test (id, col1, col2) as 2 (select 123123, to_date('23.07.2019 22:00', 'dd.mm.yyyy hh24:mi'), to_date('31.12.9999 00:00', 'dd.mm.yyyy hh24:mi') from dual union all 3 select 123123, to_date('23.07.2019 22:00', 'dd.mm.yyyy hh24:mi'), to_date('04.07.2019 00:00', 'dd.mm.yyyy hh24:mi') from dual union all 4 select 123123, to_date('23.07.2019 22:00', 'dd.mm.yyyy hh24:mi'), to_date('05.07.2019 00:00', 'dd.mm.yyyy hh24:mi') from dual union all 5 -- 6 select 123123, to_date('25.07.2019 04:05', 'dd.mm.yyyy hh24:mi'), to_date('31.12.9999 00:00', 'dd.mm.yyyy hh24:mi') from dual union all 7 select 123123, to_date('25.07.2019 04:05', 'dd.mm.yyyy hh24:mi'), to_date('04.07.2019 00:00', 'dd.mm.yyyy hh24:mi') from dual union all 8 select 123123, to_date('25.07.2019 04:05', 'dd.mm.yyyy hh24:mi'), to_date('05.07.2019 00:00', 'dd.mm.yyyy hh24:mi') from dual 9 ) 10 select id, col1, col2 11 from (select id, col1, col2, 12 row_number() over (partition by id, col1 order by id, col1 desc, col2 desc) rn 13 from test 14 ) 15 where rn > 1 16 order by id, col1, col2; ID COL1 COL2 ---------- ---------------- ---------------- 123123 2019-07-23 22:00 2019-07-04 00:00 123123 2019-07-23 22:00 2019-07-05 00:00 123123 2019-07-25 04:05 2019-07-04 00:00 123123 2019-07-25 04:05 2019-07-05 00:00 ``` or (worse, as it fetches from the same table twice) ``` <snip> 10 select * from test t 11 where (t.id, t.col1, t.col2) not in 12 (select t1.id, max(t1.col1), max(t1.col2) 13 from test t1 14 where t1.id = t.id 15 group by t1.id, t1.col1 16 ) 17 order by t.id, t.col1, t.col2; ID COL1 COL2 ---------- ---------------- ---------------- 123123 2019-07-23 22:00 2019-07-04 00:00 123123 2019-07-23 22:00 2019-07-05 00:00 123123 2019-07-25 04:05 2019-07-04 00:00 123123 2019-07-25 04:05 2019-07-05 00:00 SQL> ```
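For the Vue question at the top of this record, note that `edu` and each `section` are objects keyed by `"1"`, `"2"`, …, so each level needs its own iteration (a nested `v-for` in the template, or flattening in a computed property). The shape of that traversal can be checked outside Vue first — a sketch in plain Python using the question's JSON:

```python
import json

data = json.loads("""
{
  "edu": {
    "1": {"title": "Title One",
          "section": {"1": {"title": "Section One Title",
                            "content": "Section One Content"}}},
    "2": {"title": "Title Two",
          "section": {"1": {"title": "Section One Title",
                            "content": "Section One Content"}}}
  }
}
""")

titles = []
for edu in data["edu"].values():         # outer loop: v-for over edu
    for sec in edu["section"].values():  # inner loop: v-for over edu.section
        titles.append((edu["title"], sec["title"]))
print(titles)
```

The template's `edu.section.title` fails because `section` has no `title` of its own — the titles live one level down, under its numbered keys.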
57,752,517
How would I read JSON data nested deep inside a file? I've tried different methods and can't seem to get this to work. ```html <template> <div> <div v-for="edu in info" :key="edu"> <div>{{ edu.section.title }}</div> // this is what I'm trying to get to work </div> <div class="card container"> {{ info }} </div> </div> </template> <script> import axios from 'axios'; export default { data() { return { info: null } }, mounted() { axios .get('./calculus.json') // Data Below .then(response => (this.info = response.data.edu)) .catch(error => console.log(error)) } } </script> ``` My JSON looks like this: ```json { "edu": { "1": { "title": "Title One", "subtitle": "Subtitle One", "description": "Description One", "section": { "1": { "title": "Section One Title", "content": "Section One Content" } } }, "2": { "title": "Title Two", "subtitle": "Subtitle Two", "description": "Description Two", "section": { "1": { "title": "Section One Title", "content": "Section One Content" } } } } } ``` How can I use v-for and get the data inside the section to get it to display under the title? For example: title, section>title, section>subtitle, etc.
2019/09/02
[ "https://Stackoverflow.com/questions/57752517", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8322364/" ]
how about that? ``` select * from( select col1,col2,col3,row_number() over (partition by col1,col2 order by col3 desc) rn from a ) where rn>1 ```
I don't think the other answers are correct, because they do not explicitly check for the maximum date. You can get your desired results most simply with: ``` select t.* from t where t.col3 <> '9999-12-31 00:00:00'; ``` I need to assume that you really want to require that the maximum date is present (because you mention this particular value in the query) *and* then you want the other rows. For this, I think `exists` might be appropriate: ``` select t.* from t where t.col3 <> '9999-12-31 00:00:00' and exists (select 1 from t t2 where t2.col1 = t.col1 and t2.col2 = t.col2 and t2.col3 = '9999-12-31 00:00:00' ); ``` You can also phrase this with window functions: ``` select t.* from (select t.*, max(col3) over (partition by col1, col2) as max_col3 from t ) t where max_col3 = '9999-12-31 00:00:00' and col3 <> '9999-12-31 00:00:00'; ```
57,752,517
How would I read JSON data nested deep inside a file? I've tried different methods and can't seem to get this to work. ```html <template> <div> <div v-for="edu in info" :key="edu"> <div>{{ edu.section.title }}</div> // this is what I'm trying to get to work </div> <div class="card container"> {{ info }} </div> </div> </template> <script> import axios from 'axios'; export default { data() { return { info: null } }, mounted() { axios .get('./calculus.json') // Data Below .then(response => (this.info = response.data.edu)) .catch(error => console.log(error)) } } </script> ``` My JSON looks like this: ```json { "edu": { "1": { "title": "Title One", "subtitle": "Subtitle One", "description": "Description One", "section": { "1": { "title": "Section One Title", "content": "Section One Content" } } }, "2": { "title": "Title Two", "subtitle": "Subtitle Two", "description": "Description Two", "section": { "1": { "title": "Section One Title", "content": "Section One Content" } } } } } ``` How can I use v-for and get the data inside the section to get it to display under the title? For example: title, section>title, section>subtitle, etc.
2019/09/02
[ "https://Stackoverflow.com/questions/57752517", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8322364/" ]
The way I understood it: ``` SQL> with test (id, col1, col2) as 2 (select 123123, to_date('23.07.2019 22:00', 'dd.mm.yyyy hh24:mi'), to_date('31.12.9999 00:00', 'dd.mm.yyyy hh24:mi') from dual union all 3 select 123123, to_date('23.07.2019 22:00', 'dd.mm.yyyy hh24:mi'), to_date('04.07.2019 00:00', 'dd.mm.yyyy hh24:mi') from dual union all 4 select 123123, to_date('23.07.2019 22:00', 'dd.mm.yyyy hh24:mi'), to_date('05.07.2019 00:00', 'dd.mm.yyyy hh24:mi') from dual union all 5 -- 6 select 123123, to_date('25.07.2019 04:05', 'dd.mm.yyyy hh24:mi'), to_date('31.12.9999 00:00', 'dd.mm.yyyy hh24:mi') from dual union all 7 select 123123, to_date('25.07.2019 04:05', 'dd.mm.yyyy hh24:mi'), to_date('04.07.2019 00:00', 'dd.mm.yyyy hh24:mi') from dual union all 8 select 123123, to_date('25.07.2019 04:05', 'dd.mm.yyyy hh24:mi'), to_date('05.07.2019 00:00', 'dd.mm.yyyy hh24:mi') from dual 9 ) 10 select id, col1, col2 11 from (select id, col1, col2, 12 row_number() over (partition by id, col1 order by id, col1 desc, col2 desc) rn 13 from test 14 ) 15 where rn > 1 16 order by id, col1, col2; ID COL1 COL2 ---------- ---------------- ---------------- 123123 2019-07-23 22:00 2019-07-04 00:00 123123 2019-07-23 22:00 2019-07-05 00:00 123123 2019-07-25 04:05 2019-07-04 00:00 123123 2019-07-25 04:05 2019-07-05 00:00 ``` or (worse, as it fetches from the same table twice) ``` <snip> 10 select * from test t 11 where (t.id, t.col1, t.col2) not in 12 (select t1.id, max(t1.col1), max(t1.col2) 13 from test t1 14 where t1.id = t.id 15 group by t1.id, t1.col1 16 ) 17 order by t.id, t.col1, t.col2; ID COL1 COL2 ---------- ---------------- ---------------- 123123 2019-07-23 22:00 2019-07-04 00:00 123123 2019-07-23 22:00 2019-07-05 00:00 123123 2019-07-25 04:05 2019-07-04 00:00 123123 2019-07-25 04:05 2019-07-05 00:00 SQL> ```
I don't think the other answers are correct, because they do not explicitly check for the maximum date. You can get your desired results most simply with: ``` select t.* from t where t.col3 <> '9999-12-31 00:00:00'; ``` I need to assume that you really want to require that the maximum date is present (because you mention this particular value in the query) *and* then you want the other rows. For this, I think `exists` might be appropriate: ``` select t.* from t where t.col3 <> '9999-12-31 00:00:00' and exists (select 1 from t t2 where t2.col1 = t.col1 and t2.col2 = t.col2 and t2.col3 = '9999-12-31 00:00:00' ); ``` You can also phrase this with window functions: ``` select t.* from (select t.*, max(col3) over (partition by col1, col2) as max_col3 from t ) t where max_col3 = '9999-12-31 00:00:00' and col3 <> '9999-12-31 00:00:00'; ```
547,629
I've got some SQL which performs complex logic on combinations of GL account numbers and cost centers like this: ``` WHEN (@IntGLAcct In ( 882001, 882025, 83000154, 83000155, 83000120, 83000130, 83000140, 83000157, 83000010, 83000159, 83000160, 83000161, 83000162, 83000011, 83000166, 83000168, 83000169, 82504000, 82504003, 82504005, 82504008, 82504029, 82530003, 82530004, 83000000, 83000100, 83000101, 83000102, 83000103, 83000104, 83000105, 83000106, 83000107, 83000108, 83000109, 83000110, 83000111, 83000112, 83000113, 83100005, 83100010, 83100015, 82518001, 82552004, 884424, 82550072, 82552000, 82552001, 82552002, 82552003, 82552005, 82552012, 82552015, 884433, 884450, 884501, 82504025, 82508010, 82508011, 82508012, 83016003, 82552014, 81000021, 80002222, 82506001, 82506005, 82532001, 82550000, 82500009, 82532000)) ``` Overall, the whole thing is poorly performing in a UDF, especially when it's all nested and the order of the steps is important etc. I can't make it table-driven just yet, because the business logic is so terribly convoluted. So I'm doing a little exploratory work in moving it into SSIS to see about doing it in a little bit of a different way. Inside my script task, however, I've got to use VB.NET, so I'm looking for an alternative to this: ``` Select Case IntGLAcct = 882001 OR IntGLAcct = 882025 OR ... ``` Which is obviously a lot more verbose, and would make it terribly hard to port the process. Even something like `({90605, 90607, 90610} AS List(Of Integer)).Contains(IntGLAcct)` would be easier to port, but I can't get the initializer to give me an anonymous array like that. And there are so many of these little collections, I'm not sure I can create them all in advance. It really all NEEDS to be in one place. The business changes this logic regularly. My strategy was to use the udf to mirror their old "include" file, but performance has been poor. Now each of the functions takes just 2 or three parameters. 
It turns out that in a dark corner of the existing system they actually build a multi-million row table of all these results - even though the pre-calced table is not used much. So my new experiment is to (since I'm still building the massive cross join table to reconcile that part of the process) go ahead and use the table instead of the code, but go ahead and populate this table during an SSIS phase instead of calling the udf 12 million times - because my udf version just basically stopped working within a reasonable time frame and the DBAs are not of much help right now. Yet, I know that SSIS can process these rows pretty efficiently - because each month I bring in the known good results dozens of multi-million row tables from the legacy system in minutes AND run queries to reconcile that there are no differences with the new versions. The SSIS code would theoretically become the keeper of the business logic, and the efficient table would be built from that (based on all known parameter combinations). Of course, if I can simplify the logic down to a real logic table, that would be the ultimate design - but that's not really foreseeable at this point.
2009/02/13
[ "https://Stackoverflow.com/questions/547629", "https://Stackoverflow.com", "https://Stackoverflow.com/users/18255/" ]
Try this: ``` Array.IndexOf(New Integer() {90605, 90607, 90610}, IntGLAcct) > -1 ```
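For comparison, the same membership-test pattern can be written with a hash set, which avoids the linear scan that `IndexOf` performs. This sketch is in Python rather than VB.NET, purely as an illustration of the pattern; the account numbers come from the question:

```python
# The question's desired {90605, 90607, 90610}.Contains(acct) pattern,
# expressed as set membership: O(1) average lookup per check.
GL_ACCTS = {90605, 90607, 90610}

def is_special_acct(acct: int) -> bool:
    """Equivalent of Array.IndexOf(accts, acct) > -1."""
    return acct in GL_ACCTS

print(is_special_acct(90607))  # True
print(is_special_acct(12345))  # False
```

With many such lists checked millions of times, moving from a scan to a set lookup is usually the first cheap win.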
What if you used a conditional split transform on your incoming data set and then used expressions or something similar (I'm not sure if your GL Accounts are fixed or if you're going to dynamically pass them in) to apply to the results? You can then take the resulting data from that and process as necessary.
63,976,246
My function asks the user to input a 7-character number, but when I define it, it won't run. Why? Any help is appreciated. ``` def eticketNum(): while True: eTickNum = input('please enter eticket number:') if len(eTickNum) == 7: break print ('Sorry, entry is invalid. Your e-ticket number has to be 7 characters long.') eTickNum = eTickNum.lower() # next part of function (I guess) ```
2020/09/20
[ "https://Stackoverflow.com/questions/63976246", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14306496/" ]
You have to call the function: add an `eticketNum()` call as the last line of your code. It will be something like this: ``` def eticketNum(): # your code eticketNum() ``` **edit:** if you want to use a return value, make the function `return` something (e.g. `return eTickNum`) and capture it: ``` def eticketNum(): # your code, ending with: return eTickNum x = eticketNum() # do whatever you want with the 'x' variable. ```
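Putting the two points of this answer together (the function must both `return` a value and be called), a runnable sketch might look like this; `input()` is replaced by a parameter so the example is self-contained, and the function name is illustrative:

```python
def validate_eticket(raw: str):
    """Return the lowercased e-ticket number if it is exactly
    7 characters long, otherwise None."""
    if len(raw) == 7:
        return raw.lower()
    return None

# Calling the function and using its return value:
ticket = validate_eticket("ABC1234")
print(ticket)                     # abc1234
print(validate_eticket("short"))  # None
```

In the original code, the same idea means adding `return eTickNum` at the end of `eticketNum()` and then calling `eticketNum()` at module level.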
Indentation is not correct. Select what's after the `def` line and hit Tab on your keyboard; that should work fine. Indentation is Python's alternative to the `{}` used in Java.
9,129,109
Can I use a Java project in an Android project even if some classes from the Java project are not recognized in a normal Android project? For example, the javax.xml package? There are 2 possibilities as I see it: 1. Either create a jar with that Java project and import it in Android 2. Or reference the project in Android (as I understood, it's possible). But either way, will those classes that are found in Java but not in Android be OK in my Android application? Will this work? Thank you. ![Printscreen from build path of my android app](https://i.stack.imgur.com/OPR59.png) ![Printscreen from build path of my android app](https://i.stack.imgur.com/PFkA2.png) I made the Java project, and then imported it in my Android project as a reference. I get an error I did not expect: it does not recognize my .classes from my Java project.
2012/02/03
[ "https://Stackoverflow.com/questions/9129109", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1140656/" ]
~~If you are using Eclipse (with the ADT plugin) you can reference the Java project in the build path of your Android project. This will package those classes in your .apk output and allow you to use them in your app.~~ As you pointed out in your comments; referencing a Java project as an Android library will not work in your case, since the presence of the javax.\* packages will result in compile time errors in Eclipse. In this case, you will need to opt for option 1 and build a Jar file from your Java project and include it [as explained here](https://stackoverflow.com/questions/2694392/importing-external-jar-file-to-android-project). This way, the class files requiring javax.\* should not produce any compile/runtime errors **unless you try to use them**. Either way, using the *Java build path* will not work as you expect - the class files are not bundled this way. The Android platform provides [some of javax.xml](http://developer.android.com/reference/javax/xml/package-summary.html), but not the whole package (read [this document](http://code.google.com/p/dalvik/wiki/JavaxPackages) for more detail on this). Often the easiest solution is to write an Android equivalent of your affected Java code that does not require the missing dependencies, and [bridge](http://en.wikipedia.org/wiki/Bridge_pattern) the 2 implementations so the correct one is used for each project.
It finally worked; my biggest issue was the URL I was passing to HttpPost, and ksoap2 with SAP did not work for me at all. ``` private void testCallingService() { DefaultHttpClient httpclient = new DefaultHttpClient(); httpclient.getCredentialsProvider().setCredentials( new AuthScope("ip here", port here), new UsernamePasswordCredentials(username, password)); try { String buffer = "<soapenv:Envelope xmlns:soapenv='http://schemas.xmlsoap.org/soap/envelope/' xmlns:urn='namespace'><soapenv:Header/><soapenv:Body><urn:methodname><USERNAME>test</USERNAME></urn:methodname></soapenv:Body></soapenv:Envelope>"; HttpPost httppost = new HttpPost(url); StringEntity se = new StringEntity(buffer, HTTP.UTF_8); se.setContentType("text/xml"); httppost.setHeader("Content-Type", "application/soap+xml;charset=UTF-8"); httppost.setEntity(se); CredentialsProvider credsProvider = new BasicCredentialsProvider(); credsProvider.setCredentials(new AuthScope("ip", port), new UsernamePasswordCredentials(username, password)); httpclient.setCredentialsProvider(credsProvider); BasicHttpResponse httpResponse = (BasicHttpResponse) httpclient .execute(httppost); if (httpResponse.getStatusLine() != null) { System.out.println("Http status: " + httpResponse.getStatusLine()); } RequestLine requestLine = httppost.getRequestLine(); String response = getResponseBody(httpResponse); // read server response. response.getEntity().getContent();... System.out.println("response: " + response); } catch (IOException e) { // TODO Auto-generated catch block } httpclient.getConnectionManager().shutdown(); } ``` So, I constructed the SOAP envelope myself, and will try to stop doing that in the near future. I made an instance of CredentialsProvider and set my user/pass details. The status from the server is 200 and I receive the information that I need. One problem remains: the response is apparently too large (and it's not going to stop there), so my response is truncated. Working to solve that now.
I really hope this helps someone.
107,003
I see a lot of questions about scams with the same premise: 1. Someone puts money in your account. 2. You send money somewhere. 3. They take their money back, while you lose yours. I don't know a whole lot about banking and money transfers, but it seems odd to me that they can retract money while you can't do that. Can someone explain this?
2019/03/27
[ "https://money.stackexchange.com/questions/107003", "https://money.stackexchange.com", "https://money.stackexchange.com/users/83829/" ]
With some payment methods, the payer can cancel the payment, or the money doesn't fully exist until it has 'cleared'; with others, that isn't possible. This scam works by sending you money via a method that *can* be cancelled, such as a check, followed by you sending them money back via one that *can't*: typically Western Union, cash in person, crypto, etc. They then cancel the money you got, leaving you out of pocket and with no way to cancel the money you gave away.
Sometimes the scammer doesn't even need to intentionally cancel the original payment. They might send you a check, which you deposit in your account. Your bank accepts the check and it may even post to your account because the bank assumes that it is legitimate (and it does appear that way at first). A couple days/weeks later the bank finds out that the account tied to the check does not exist or doesn't have any money in it, and they reverse the deposit. Other times, the scammer requires you (the scammee?) to send them payment via a non-reversible method, like [Philip stated](https://money.stackexchange.com/a/107004/36669).
107,003
I see a lot of questions about scams with the same premise: 1. Someone puts money in your account. 2. You send money somewhere. 3. They take their money back, while you lose yours. I don't know a whole lot about banking and money transfers, but it seems odd to me that they can retract money while you can't do that. Can someone explain this?
2019/03/27
[ "https://money.stackexchange.com/questions/107003", "https://money.stackexchange.com", "https://money.stackexchange.com/users/83829/" ]
Here’s what happens: When you thought the scammer deposited money in your account, it didn’t really happen. It was essentially fake money. This could happen in a few different ways. It could be that the scammer gave you a bad/fake check, which isn’t discovered until several days after you deposited it. Banks don’t know for sure if checks are good until they send it on to the issuing bank. As a courtesy, your bank will assume it is good as soon as you deposit it and credit your account immediately, but if the check turns out to be bad, they will withdraw that amount back from your account, and if you have already spent that money, the bank will want you to pay it back. There is also an electronic version of this scenario: If you are electronically sent money from an account by mistake or an account that was hacked, this can be reversed. On the other side of the scam, when you send money out of your account to the scammer, you are doing so willingly, even if you were tricked into doing so. You went to the bank and withdrew the cash, or you wrote the check, or you initiated the electronic transfer out. Your account was not hijacked; you told the bank what you wanted to do with your account. You can’t normally just change your mind after the fact. If the scammer took control of your account electronically and stole the money, or the scammer forged a check without your knowledge, you would generally be able to get your money back from the bank. But if you willingly hand money over to the scammer, you will need to go after the scammer to get your money back.
With some payment methods, the payer can cancel the payment, or the money doesn't fully exist until it has 'cleared'; with others, that isn't possible. This scam works by sending you money via a method that *can* be cancelled, such as a check, followed by you sending them money back via one that *can't*: typically Western Union, cash in person, crypto, etc. They then cancel the money you got, leaving you out of pocket and with no way to cancel the money you gave away.
107,003
I see a lot of questions about scams with the same premise: 1. Someone puts money in your account. 2. You send money somewhere. 3. They take their money back, while you lose yours. I don't know a whole lot about banking and money transfers, but it seems odd to me that they can retract money while you can't do that. Can someone explain this?
2019/03/27
[ "https://money.stackexchange.com/questions/107003", "https://money.stackexchange.com", "https://money.stackexchange.com/users/83829/" ]
With some payment methods, the payer can cancel the payment, or the money doesn't fully exist until it has 'cleared'; with others, that isn't possible. This scam works by sending you money via a method that *can* be cancelled, such as a check, followed by you sending them money back via one that *can't*: typically Western Union, cash in person, crypto, etc. They then cancel the money you got, leaving you out of pocket and with no way to cancel the money you gave away.
The only entity that can revert a completed money transfer is the bank itself, and it will do so only if there was something very wrong with the transfer itself, for example if the transfer resulted from a system failure or the money came from criminal activity (from an account that was closed by law enforcement). However, if you send the money from a bank account that is not blacklisted and contains legitimate money, there's no reason to cancel that transfer. Well, if your account was used to launder money, it might be blacklisted then, but you'd lose your money anyway, and the money sent to the other account was most likely already taken out. Notice that it's a money-laundering scheme. Criminals can't revert a completed money transfer on their own, and they have no interest in having their accounts confiscated by the police, so reverting the transfer is only a side effect (and most likely, only the beginning of your troubles).
107,003
I see a lot of questions about scams with the same premise: 1. Someone puts money in your account. 2. You send money somewhere. 3. They take their money back, while you lose yours. I don't know a whole lot about banking and money transfers, but it seems odd to me that they can retract money while you can't do that. Can someone explain this?
2019/03/27
[ "https://money.stackexchange.com/questions/107003", "https://money.stackexchange.com", "https://money.stackexchange.com/users/83829/" ]
With some payment methods, the payer can cancel the payment, or the money doesn't fully exist until it has 'cleared'; with others, that isn't possible. This scam works by sending you money via a method that *can* be cancelled, such as a check, followed by you sending them money back via one that *can't*: typically Western Union, cash in person, crypto, etc. They then cancel the money you got, leaving you out of pocket and with no way to cancel the money you gave away.
> > How do scammers [do something...], while you can’t? > > > A major factor here is the willingness to commit a crime, particularly fraud. For example, you can reverse a credit card charge by calling the customer service number and complaining that a purchase was unauthorized or fraudulent. Normal people do not lie about this because there are real consequences. In fact almost all bank based consumer electronic transfers can be reversed long after the fact in case of fraud. Some electronic transfers like Western Union or Bitcoin can **NEVER** be reversed. Withdrawing the money as cash is another non-reversible action. Paying money from a reversible transaction using a never reversible one is a common theme in scams. Many fraud schemes are based on transactions where a bank will show money in a consumer's account before the bank actually receives the funds from the transfer. This is sort of a loan. Checks from another bank, for example, take several days to *clear*. If the check is fake, the other bank will not honor it and the consumer's bank will take back the money they loaned.
107,003
I see a lot of questions about scams with the same premise: 1. Someone puts money in your account. 2. You send money somewhere. 3. They take their money back, while you lose yours. I don't know a whole lot about banking and money transfers, but it seems odd to me that they can retract money while you can't do that. Can someone explain this?
2019/03/27
[ "https://money.stackexchange.com/questions/107003", "https://money.stackexchange.com", "https://money.stackexchange.com/users/83829/" ]
Here’s what happens: When you thought the scammer deposited money in your account, it didn’t really happen. It was essentially fake money. This could happen in a few different ways. It could be that the scammer gave you a bad/fake check, which isn’t discovered until several days after you deposited it. Banks don’t know for sure if checks are good until they send it on to the issuing bank. As a courtesy, your bank will assume it is good as soon as you deposit it and credit your account immediately, but if the check turns out to be bad, they will withdraw that amount back from your account, and if you have already spent that money, the bank will want you to pay it back. There is also an electronic version of this scenario: If you are electronically sent money from an account by mistake or an account that was hacked, this can be reversed. On the other side of the scam, when you send money out of your account to the scammer, you are doing so willingly, even if you were tricked into doing so. You went to the bank and withdrew the cash, or you wrote the check, or you initiated the electronic transfer out. Your account was not hijacked; you told the bank what you wanted to do with your account. You can’t normally just change your mind after the fact. If the scammer took control of your account electronically and stole the money, or the scammer forged a check without your knowledge, you would generally be able to get your money back from the bank. But if you willingly hand money over to the scammer, you will need to go after the scammer to get your money back.
Sometimes the scammer doesn't even need to intentionally cancel the original payment. They might send you a check, which you deposit in your account. Your bank accepts the check and it may even post to your account because the bank assumes that it is legitimate (and it does appear that way at first). A couple days/weeks later the bank finds out that the account tied to the check does not exist or doesn't have any money in it, and they reverse the deposit. Other times, the scammer requires you (the scammee?) to send them payment via a non-reversible method, like [Philip stated](https://money.stackexchange.com/a/107004/36669).
107,003
I see a lot of questions about scams with the same premise: 1. Someone puts money in your account. 2. You send money somewhere. 3. They take their money back, while you lose yours. I don't know a whole lot about banking and money transfers, but it seems odd to me that they can retract money while you can't do that. Can someone explain this?
2019/03/27
[ "https://money.stackexchange.com/questions/107003", "https://money.stackexchange.com", "https://money.stackexchange.com/users/83829/" ]
Sometimes the scammer doesn't even need to intentionally cancel the original payment. They might send you a check, which you deposit in your account. Your bank accepts the check and it may even post to your account because the bank assumes that it is legitimate (and it does appear that way at first). A couple days/weeks later the bank finds out that the account tied to the check does not exist or doesn't have any money in it, and they reverse the deposit. Other times, the scammer requires you (the scammee?) to send them payment via a non-reversible method, like [Philip stated](https://money.stackexchange.com/a/107004/36669).
The only entity that can revert a completed money transfer is the bank itself, and it will do so only if there was something very wrong with the transfer itself, for example if the transfer resulted from a system failure or the money came from criminal activity (from an account that was closed by law enforcement). However, if you send the money from a bank account that is not blacklisted and contains legitimate money, there's no reason to cancel that transfer. Well, if your account was used to launder money, it might be blacklisted then, but you'd lose your money anyway, and the money sent to the other account was most likely already taken out. Notice that it's a money-laundering scheme. Criminals can't revert a completed money transfer on their own, and they have no interest in having their accounts confiscated by the police, so reverting the transfer is only a side effect (and most likely, only the beginning of your troubles).
107,003
I see a lot of questions about scams with the same premise: 1. Someone puts money in your account. 2. You send money somewhere. 3. They take their money back, while you lose yours. I don't know a whole lot about banking and money transfers, but it seems odd to me that they can retract money while you can't do that. Can someone explain this?
2019/03/27
[ "https://money.stackexchange.com/questions/107003", "https://money.stackexchange.com", "https://money.stackexchange.com/users/83829/" ]
> > How do scammers [do something...], while you can’t? > > > A major factor here is the willingness to commit a crime, particularly fraud. For example, you can reverse a credit card charge by calling the customer service number and complaining that a purchase was unauthorized or fraudulent. Normal people do not lie about this because there are real consequences. In fact almost all bank based consumer electronic transfers can be reversed long after the fact in case of fraud. Some electronic transfers like Western Union or Bitcoin can **NEVER** be reversed. Withdrawing the money as cash is another non-reversible action. Paying money from a reversible transaction using a never reversible one is a common theme in scams. Many fraud schemes are based on transactions where a bank will show money in a consumer's account before the bank actually receives the funds from the transfer. This is sort of a loan. Checks from another bank, for example, take several days to *clear*. If the check is fake, the other bank will not honor it and the consumer's bank will take back the money they loaned.
Sometimes the scammer doesn't even need to intentionally cancel the original payment. They might send you a check, which you deposit in your account. Your bank accepts the check and it may even post to your account because the bank assumes that it is legitimate (and it does appear that way at first). A couple days/weeks later the bank finds out that the account tied to the check does not exist or doesn't have any money in it, and they reverse the deposit. Other times, the scammer requires you (the scammee?) to send them payment via a non-reversible method, like [Philip stated](https://money.stackexchange.com/a/107004/36669).
107,003
I see a lot of questions about scams with the same premise: 1. Someone puts money in your account. 2. You send money somewhere. 3. They take their money back, while you lose yours. I don't know a whole lot about banking and money transfers, but it seems odd to me that they can retract money while you can't do that. Can someone explain this?
2019/03/27
[ "https://money.stackexchange.com/questions/107003", "https://money.stackexchange.com", "https://money.stackexchange.com/users/83829/" ]
Here’s what happens: When you thought the scammer deposited money in your account, it didn’t really happen. It was essentially fake money. This could happen in a few different ways. It could be that the scammer gave you a bad/fake check, which isn’t discovered until several days after you deposited it. Banks don’t know for sure if checks are good until they send it on to the issuing bank. As a courtesy, your bank will assume it is good as soon as you deposit it and credit your account immediately, but if the check turns out to be bad, they will withdraw that amount back from your account, and if you have already spent that money, the bank will want you to pay it back. There is also an electronic version of this scenario: If you are electronically sent money from an account by mistake or an account that was hacked, this can be reversed. On the other side of the scam, when you send money out of your account to the scammer, you are doing so willingly, even if you were tricked into doing so. You went to the bank and withdrew the cash, or you wrote the check, or you initiated the electronic transfer out. Your account was not hijacked; you told the bank what you wanted to do with your account. You can’t normally just change your mind after the fact. If the scammer took control of your account electronically and stole the money, or the scammer forged a check without your knowledge, you would generally be able to get your money back from the bank. But if you willingly hand money over to the scammer, you will need to go after the scammer to get your money back.
The only entity that can revert a completed money transfer is the bank itself, and it will do so only if there was something very wrong with the transfer itself, for example if the transfer resulted from a system failure or the money came from criminal activity (from an account that was closed by law enforcement). However, if you send the money from a bank account that is not blacklisted and contains legitimate money, there's no reason to cancel that transfer. Well, if your account was used to launder money, it might be blacklisted then, but you'd lose your money anyway, and the money sent to the other account was most likely already taken out. Notice that it's a money-laundering scheme. Criminals can't revert a completed money transfer on their own, and they have no interest in having their accounts confiscated by the police, so reverting the transfer is only a side effect (and most likely, only the beginning of your troubles).
107,003
I see a lot of questions about scams with the same premise: 1. Someone puts money in your account. 2. You send money somewhere. 3. They take their money back, while you lose yours. I don't know a whole lot about banking and money transfers, but it seems odd to me that they can retract money while you can't do that. Can someone explain this?
2019/03/27
[ "https://money.stackexchange.com/questions/107003", "https://money.stackexchange.com", "https://money.stackexchange.com/users/83829/" ]
Here’s what happens: When you thought the scammer deposited money in your account, it didn’t really happen. It was essentially fake money. This could happen in a few different ways. It could be that the scammer gave you a bad/fake check, which isn’t discovered until several days after you deposited it. Banks don’t know for sure if checks are good until they send it on to the issuing bank. As a courtesy, your bank will assume it is good as soon as you deposit it and credit your account immediately, but if the check turns out to be bad, they will withdraw that amount back from your account, and if you have already spent that money, the bank will want you to pay it back. There is also an electronic version of this scenario: If you are electronically sent money from an account by mistake or an account that was hacked, this can be reversed. On the other side of the scam, when you send money out of your account to the scammer, you are doing so willingly, even if you were tricked into doing so. You went to the bank and withdrew the cash, or you wrote the check, or you initiated the electronic transfer out. Your account was not hijacked; you told the bank what you wanted to do with your account. You can’t normally just change your mind after the fact. If the scammer took control of your account electronically and stole the money, or the scammer forged a check without your knowledge, you would generally be able to get your money back from the bank. But if you willingly hand money over to the scammer, you will need to go after the scammer to get your money back.
> > How do scammers [do something...], while you can’t? > > > A major factor here is the willingness to commit a crime, particularly fraud. For example, you can reverse a credit card charge by calling the customer service number and complaining that a purchase was unauthorized or fraudulent. Normal people do not lie about this because there are real consequences. In fact almost all bank based consumer electronic transfers can be reversed long after the fact in case of fraud. Some electronic transfers like Western Union or Bitcoin can **NEVER** be reversed. Withdrawing the money as cash is another non-reversible action. Paying money from a reversible transaction using a never reversible one is a common theme in scams. Many fraud schemes are based on transactions where a bank will show money in a consumer's account before the bank actually receives the funds from the transfer. This is sort of a loan. Checks from another bank, for example, take several days to *clear*. If the check is fake, the other bank will not honor it and the consumer's bank will take back the money they loaned.
107,003
I see a lot of questions about scams with the same premise: 1. Someone puts money in your account. 2. You send money somewhere. 3. They take their money back, while you lose yours. I don't know a whole lot about banking and money transfers, but it seems odd to me that they can retract money while you can't do that. Can someone explain this?
2019/03/27
[ "https://money.stackexchange.com/questions/107003", "https://money.stackexchange.com", "https://money.stackexchange.com/users/83829/" ]
> > How do scammers [do something...], while you can’t? > > > A major factor here is the willingness to commit a crime, particularly fraud. For example, you can reverse a credit card charge by calling the customer service number and complaining that a purchase was unauthorized or fraudulent. Normal people do not lie about this because there are real consequences. In fact almost all bank based consumer electronic transfers can be reversed long after the fact in case of fraud. Some electronic transfers like Western Union or Bitcoin can **NEVER** be reversed. Withdrawing the money as cash is another non-reversible action. Paying money from a reversible transaction using a never reversible one is a common theme in scams. Many fraud schemes are based on transactions where a bank will show money in a consumer's account before the bank actually receives the funds from the transfer. This is sort of a loan. Checks from another bank, for example, take several days to *clear*. If the check is fake, the other bank will not honor it and the consumer's bank will take back the money they loaned.
The only entity that can revert a completed money transfer is the bank itself, and it will do so only if there was something very wrong with the transfer itself, for example if the transfer resulted from a system failure or the money came from criminal activity (from an account that was closed by law enforcement). However, if you send the money from a bank account that is not blacklisted and contains legitimate money, there's no reason to cancel that transfer. Well, if your account was used to launder money, it might be blacklisted then, but you'd lose your money anyway, and the money sent to the other account was most likely already taken out. Notice that it's a money-laundering scheme. Criminals can't revert a completed money transfer on their own, and they have no interest in having their accounts confiscated by the police, so reverting the transfer is only a side effect (and most likely, only the beginning of your troubles).
15,677
I'd like to know if someone ported the bibliography styles used in revtex 4.1 (like apsrev4-1.bst) so they can be used in other classes, like book. Thanks in advance!
2011/04/12
[ "https://tex.stackexchange.com/questions/15677", "https://tex.stackexchange.com", "https://tex.stackexchange.com/users/4844/" ]
It looks as if the revtex bst's are highly customized for the class file. I have found that the following gives more or less the same output with a standard class ``` \usepackage[sort&compress,numbers]{natbib} \bibliographystyle{apsrev4-1} \usepackage{doi}%<---------- \usepackage{hyperref} ``` It also needs an additional bibtex file with a custom setup, especially if you want the full article names included (the `longbibliography` option in `revtex4-1`). Create the following file, say `revtex-custom.bib`, with contents ``` @CONTROL{REVTEX41Control} @CONTROL{apsrev41Control,author="48",editor="1",pages="1",title="0",year="0"} ``` Define your bib then as follows: ``` \nocite{apsrev41Control} \bibliography{my-bib,revtex-custom} ```
I haven't been able to make the `@CONTROL` sequences work as above, but I found a way to tweak `apsrev4-1.bst` to output what you want. First, copy `apsrev4-1.bst` to a new file named, for example, `apsrev4-1_custom.bst`, and put it in the same directory as your `tex` file. In it, find the function `control.init`, and modify any options you'd like -- `#0` is false, `#1` is true. To get the titles to display, change the `#0` above `control.title` to `#1`. Use this newly modified style by running `\bibliographystyle{apsrev4-1_custom}`.
34,048,513
Below is my code: ``` gem_package 'cucumber' do clear_sources true source https://chefrubyaehq.kdc.example.com/ gem_binary '/opt/chef/embedded/bin/gem' action :install end ``` And I am getting the following error: ``` FATAL: zng937-test/recipes/default.rb:43: unknown regexp options - chfrbyahq FATAL: zng937-test/recipes/default.rb:44: syntax error, unexpected tSTRING_BEG, expecting keyword_do or '{' or '(' FATAL: gem_binary '/opt/chef/embedded/bin/gem' ``` Does anyone have any idea why I would be getting this?
2015/12/02
[ "https://Stackoverflow.com/questions/34048513", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1666529/" ]
You need to put the source (`https://chefrubyaehq.kdc.capitalone.com/`) inside quotes to make it a string. Either single or double quotes is fine for this case.
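To see why the quotes matter: Ruby's lexer treats the bare URL's `//` as an empty regexp literal and reads the hostname letters that follow as regexp options, which is exactly where the `unknown regexp options` message above comes from. A quick sketch in plain Ruby (no Chef needed; the hostname is the one from the question, and `RubyVM::InstructionSequence.compile` parses code without running it):

```ruby
# Parse a snippet of Ruby source without executing it. A bare URL
# fails at the lexer stage; a quoted URL is an ordinary string argument.
def parses?(code)
  RubyVM::InstructionSequence.compile(code)
  true
rescue SyntaxError
  false
end

puts parses?('source https://chefrubyaehq.kdc.example.com/')   # the "//chefrub..." part reads as a regexp
puts parses?('source "https://chefrubyaehq.kdc.example.com/"') # just a string, parses fine
```

So the fix is simply to quote the URL in the resource block: `source 'https://chefrubyaehq.kdc.example.com/'`.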
Just as the message says. * Your regex `//chefrubyaehq` is invalid. Ruby regex only has `i`, `o`, `x`, `m` options. * You forgot to put a period after your regex `/\n gem_binary '/` before the method `opt`. And when you fix those, you will still encounter more errors.
315,313
I'm making a machine learning model (right now I'm using average weighted neural networks) to predict a binary variable. I have historical data on which I can train this model, but when new models are trained and used for predictions about the "real future", there will be some bias in the selection of the data that will be used. I can introduce the same bias in the historical data. So my plan is to compare the outcome of two models: one that uses a biased sample and another one that uses an unbiased sample (both using the same sample size n). I would like to compare both the outcomes of the model, but what I'm also very interested in is to say something about the level of agreement of both models. For this last part I'm having difficulty finding more information. Both models will output a probability on the same test set, so I'm looking for a method to quantify their agreement or some other techniques that might be of use here. Thanks!
2017/11/23
[ "https://stats.stackexchange.com/questions/315313", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/155702/" ]
You have two continuous variables: sets of predicted probabilities. You can use standard methods to assess the agreement between two continuous variables. I would start by using [Bland-Altman methods](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4470095/). Note that these are primarily qualitative, though. If you want to quantify the agreement, you can use [Lin's concordance coefficient](https://en.wikipedia.org/wiki/Concordance_correlation_coefficient). Fewer people are familiar with it, so if you want metrics that are easier for an audience to grasp, you can compute the Pearson correlation, $r$, plus the two means and the two standard deviations. If you want to read further, I might suggest [1](http://www.john-uebersax.com/stat/agree.htm) and/or [2](https://stats.stackexchange.com/q/199678/7290).
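Lin's concordance coefficient is simple enough to compute directly from the two vectors of predicted probabilities. A minimal sketch (the helper name `concordance_ccc` and the toy prediction vectors are illustrative, not from the answer):

```python
import numpy as np

def concordance_ccc(x, y):
    """Lin's concordance correlation coefficient between two
    continuous variables, e.g. two vectors of predicted probabilities."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()              # population variances
    cov = ((x - mx) * (y - my)).mean()     # population covariance
    return 2.0 * cov / (vx + vy + (mx - my) ** 2)

p_biased   = [0.10, 0.40, 0.35, 0.80]      # model trained on the biased sample
p_unbiased = [0.15, 0.45, 0.30, 0.85]      # model trained on the unbiased sample
print(concordance_ccc(p_biased, p_unbiased))
```

Identical predictions give a coefficient of exactly 1, while any systematic shift or scale difference pulls it below the plain Pearson correlation — which is what makes it an agreement measure rather than just a correlation.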
It depends on the context of your audience. I think a really nice simple way to explain to non statistical audiences is to compute your predictions with both models (0 or 1) and see what % of the predictions are the same. Otherwise you can compare the RMS value between them (again using the predictions or the probabilities). A slightly better measure is [Cohen's Kappa](https://en.wikipedia.org/wiki/Cohen%27s_kappa), because this takes into account the probability of the models agreeing purely by chance. With this method, you will need to use the predicted values, not the probabilities. I hope this helps!
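Cohen's kappa on the two models' 0/1 predictions can be sketched in a few lines (the function name and example label vectors are mine, purely illustrative):

```python
import numpy as np

def cohens_kappa(a, b):
    """Chance-corrected agreement between two binary prediction vectors.
    Assumes the chance agreement p_e is strictly below 1."""
    a, b = np.asarray(a), np.asarray(b)
    p_o = np.mean(a == b)                               # observed agreement
    # chance agreement: both predict 1, plus both predict 0
    p_e = a.mean() * b.mean() + (1 - a.mean()) * (1 - b.mean())
    return (p_o - p_e) / (1 - p_e)

model_a = [0, 1, 0, 1, 1, 0]
model_b = [0, 1, 1, 1, 0, 0]
print(cohens_kappa(model_a, model_b))
```

A kappa of 1 means perfect agreement and 0 means no better than chance; note the denominator hits 0 if both models always predict the same single class, so that degenerate case needs separate handling.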
30,837,055
I 'm intending to fix bugs on Elastic Search open-source project. I forked it and cloned the forked copy . Then I imported it as Maven project on Eclipse and then did Maven build . So far so good. I opened ElasticSearchF.java file and tried to run it as a Java application.(This is as per directions written in <http://www.lindstromhenrik.com/debugging-elasticsearch-in-eclipse/>). But I get an error saying path.home is not set for ElasticSearch and throws an error saying IllegalStateException. My question is 1. Why is this error coming in the first place. 2. As I said , I want to fix bugs in ElasticSearch project.Is this the right way to set-up environment for my goal? Or should I have a client send the requests to the ElasticSearch server and then set-up debug points in Elastic Search source code. How to achieve this? Thanks for your patience. Update: I did add VM argument as mentioned by one of the answerers. Then it throws different errors and clue-less about why its throwing that. ``` java.io.IOException: Resource not found: "org/joda/time/tz/data/ZoneInfoMap" ClassLoader: sun.misc.Launcher$AppClassLoader@29578426 at org.joda.time.tz.ZoneInfoProvider.openResource(ZoneInfoProvider.java:210) at org.joda.time.tz.ZoneInfoProvider.<init>(ZoneInfoProvider.java:127) at org.joda.time.tz.ZoneInfoProvider.<init>(ZoneInfoProvider.java:86) at org.joda.time.DateTimeZone.getDefaultProvider(DateTimeZone.java:514) at org.joda.time.DateTimeZone.getProvider(DateTimeZone.java:413) at org.joda.time.DateTimeZone.forID(DateTimeZone.java:216) at org.joda.time.DateTimeZone.getDefault(DateTimeZone.java:151) at org.joda.time.chrono.ISOChronology.getInstance(ISOChronology.java:79) at org.joda.time.DateTimeUtils.getChronology(DateTimeUtils.java:266) at org.joda.time.format.DateTimeFormatter.selectChronology(DateTimeFormatter.java:968) at org.joda.time.format.DateTimeFormatter.printTo(DateTimeFormatter.java:672) at org.joda.time.format.DateTimeFormatter.printTo(DateTimeFormatter.java:560) at 
org.joda.time.format.DateTimeFormatter.print(DateTimeFormatter.java:644) at org.elasticsearch.Build.<clinit>(Build.java:53) at org.elasticsearch.node.Node.<init>(Node.java:138) at org.elasticsearch.node.NodeBuilder.build(NodeBuilder.java:157) at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:177) at org.elasticsearch.bootstrap.Bootstrap.main(Bootstrap.java:278) at org.elasticsearch.bootstrap.ElasticsearchF.main(ElasticsearchF.java:30) [2015-06-16 18:51:36,892][INFO ][node ] [Kismet Deadly] version[2.0.0-SNAPSHOT], pid[2516], build[9b833fd/2015-06-15T03:38:40Z] [2015-06-16 18:51:36,892][INFO ][node ] [Kismet Deadly] initializing ... [2015-06-16 18:51:36,899][INFO ][plugins ] [Kismet Deadly] loaded [], sites [] {2.0.0-SNAPSHOT}: Initialization Failed ... - ExceptionInInitializerError IllegalArgumentException[An SPI class of type org.apache.lucene.codecs.PostingsFormat with name 'Lucene50' does not exist. You need to add the corresponding JAR file supporting this SPI to your classpath. The current classpath supports the following names: [es090, completion090, XBloomFilter]] ```
2015/06/15
[ "https://Stackoverflow.com/questions/30837055", "https://Stackoverflow.com", "https://Stackoverflow.com/users/783893/" ]
I got help from the developer community in <https://github.com/elastic/elasticsearch/issues/12737> and was able to debug it. The procedure in short: 1) Search for the file Elasticsearch.java/ElasticsearchF.java inside the package org.elasticsearch.bootstrap. 2) Right click -> Run Configurations... 3) In the window that pops up, click the "Arguments" tab; under the "Program arguments:" section give the value start, and under the "VM arguments:" section give the value -Des.path.home={path to your elasticsearch code base root folder}/core -Des.security.manager.enabled=false 4) Click "Apply" and then click "Run". It runs now. To check, go to localhost:9200 and you will get a message something like ``` { "name" : "Raza", "cluster_name" : "elasticsearch", "version" : { "number" : "2.0.0-beta1", "build_hash" : "${buildNumber}", "build_timestamp" : "NA", "build_snapshot" : true, "lucene_version" : "5.2.1" }, "tagline" : "You Know, for Search" } ``` for more info on arguments see: <https://github.com/elastic/elasticsearch/commit/2b9ef26006c0e4608110164480b8127dffb9d6ad>
Edit your debug/run configuration and put this in the VM arguments: `-Des.path.home=C:\github\elasticsearch\`. Change `C:\github\elasticsearch\` to your elasticsearch root path. The reason is that some arguments set in elasticsearch.bat are missing when you debug/run it in Eclipse.
47,115,816
Description =========== I have a docker container with this **nginx.conf** ``` server { listen 80; index index.php index.html; root /var/www/public; location / { try_files $uri /index.php?$args; } location ~ \.php$ { fastcgi_split_path_info ^(.+\.php)(/.+)$; fastcgi_pass app:9000; fastcgi_index index.php; include fastcgi_params; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param PATH_INFO $fastcgi_path_info; } } ``` --- **docker-compose.yaml** ``` version: '2' services: # The Application app: build: context: ./ dockerfile: app.dockerfile working_dir: /var/www volumes: - ./:/var/www environment: - "DB_PORT=3306" - "DB_HOST=database" # The Web Server web: build: context: ./ dockerfile: web.dockerfile working_dir: /var/www volumes_from: - app ports: - 85:80 # The Database database: image: mysql:5.6 volumes: - dbdata:/var/lib/mysql environment: - "MYSQL_DATABASE=homestead" - "MYSQL_USER=homestead" - "MYSQL_PASSWORD=secret" - "MYSQL_ROOT_PASSWORD=secret" ports: - "33062:3306" volumes: dbdata: ``` --- The docker seems to build and start successfully ``` docker-compose up -d docker_app_1 is up-to-date docker_database_1 is up-to-date Recreating docker_web_1 ... Recreating docker_web_1 ... done ``` but I kept getting > > File not found. > > > How would one go about debugging this?
2017/11/04
[ "https://Stackoverflow.com/questions/47115816", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4480164/" ]
All **volumes** and other directory settings should ideally point to the same location. In your nginx.conf you have `root /var/www/public;`, but in your yaml you use `/var/www`. That might be your problem. As for steps to proceed, you can check which directories your service actually uses by running the following command after you spin up your docker-compose.yml file: `docker-compose exec yourServiceName sh`. Replace `yourServiceName` with any service you have defined in your yaml, so *app*, *web* or *database* in your yaml above. That command will take you into the shell (sh) of the specified container. You can also replace *sh* with other commands to perform other actions in your container.
Start your container with `docker exec -it xxxxxx bash`. Once you do that, you will be inside the container. Now check whether your files are at the location where your docker-compose file puts them.
6,324
Is it accurate to call the [mapReduce](http://en.wikipedia.org/wiki/MapReduce) framework a type of [bulk synchronous parallel](http://en.wikipedia.org/wiki/Bulk_Synchronous_Parallel) programming framework with no local memory retention within processors between synchronizations? If not, what parallel programming model most accurately encapsulates the mapReduce framework?
2011/04/29
[ "https://cstheory.stackexchange.com/questions/6324", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/4590/" ]
In section 2 of <http://arxiv.org/abs/1101.1902>, the authors define a model of MapReduce that is intentionally structured like BSP. They prove simulation theorems as well. May be a good place to start.
Since in MapReduce there is a simple and structured graph underlying the computation, this can IMHO be classified as a data-flow model.
6,324
Is it accurate to call the [mapReduce](http://en.wikipedia.org/wiki/MapReduce) framework a type of [bulk synchronous parallel](http://en.wikipedia.org/wiki/Bulk_Synchronous_Parallel) programming framework with no local memory retention within processors between synchronizations? If not, what parallel programming model most accurately encapsulates the mapReduce framework?
2011/04/29
[ "https://cstheory.stackexchange.com/questions/6324", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/4590/" ]
In section 2 of <http://arxiv.org/abs/1101.1902>, the authors define a model of MapReduce that is intentionally structured like BSP. They prove simulation theorems as well. May be a good place to start.
Yes, my opinion is that classical MapReduce is a BSP model (and therefore has its inherent limitations on the maximum possible parallel performance that can be achieved). However, newer work on MapReduce seems to be focused on looser notions of synchronization, which would take this "generalized MapReduce" out of the strict BSP framework. In particular, if one replicates some of the data then the synchronization structure can be relaxed, yielding performance gains. See for instance work by Foto Afrati and Jeff Ullman: [*Optimizing joins in a map-reduce environment*](http://portal.acm.org/citation.cfm?id=1739041.1739056), EDBT 2010. ([preprint](http://infolab.stanford.edu/~ullman/pub/join-mr.pdf))
6,324
Is it accurate to call the [mapReduce](http://en.wikipedia.org/wiki/MapReduce) framework a type of [bulk synchronous parallel](http://en.wikipedia.org/wiki/Bulk_Synchronous_Parallel) programming framework with no local memory retention within processors between synchronizations? If not, what parallel programming model most accurately encapsulates the mapReduce framework?
2011/04/29
[ "https://cstheory.stackexchange.com/questions/6324", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/4590/" ]
Yes, my opinion is that classical MapReduce is a BSP model (and therefore has its inherent limitations on the maximum possible parallel performance that can be achieved). However, newer work on MapReduce seems to be focused on looser notions of synchronization, which would take this "generalized MapReduce" out of the strict BSP framework. In particular, if one replicates some of the data then the synchronization structure can be relaxed, yielding performance gains. See for instance work by Foto Afrati and Jeff Ullman: [*Optimizing joins in a map-reduce environment*](http://portal.acm.org/citation.cfm?id=1739041.1739056), EDBT 2010. ([preprint](http://infolab.stanford.edu/~ullman/pub/join-mr.pdf))
Since in MapReduce there is a simple and structured graph underlying the computation, this can IMHO be classified as a data-flow model.
9,815,901
I recently saw an image of an app that was capable of displaying a view above the status bar and was also able to cover it with a view. I know you can get a view right below the status bar from a view with align parent top. But how would you get a view on top of the status bar?? [Example](http://tombarrasso.com/wordpress/wp-content/uploads/2012/02/ChargeBarBanner3-225x225.png)
2012/03/22
[ "https://Stackoverflow.com/questions/9815901", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1190019/" ]
The answer by @Sadeshkumar is incorrect for ICS and above (perhaps GB as well). A view created with `TYPE_SYSTEM_ALERT` and `FLAG_LAYOUT_IN_SCREEN` is covered by the `StatusBar`. To get an overlay on top of the `StatusBar`, you need to use `TYPE_SYSTEM_OVERLAY` instead of `TYPE_SYSTEM_ALERT`. The problem being then, how to get clicks/touches?
``` int statusBarHeight = (int) Math.ceil(25 * getResources().getDisplayMetrics().density); View statusBarView = new View(MyActivity.this); statusBarView.setBackgroundColor(Color.GREEN); WindowManager.LayoutParams params = null; params = new WindowManager.LayoutParams(WindowManager.LayoutParams.FILL_PARENT,statusBarHeight,WindowManager.LayoutParams.TYPE_SYSTEM_ALERT,WindowManager.LayoutParams.FLAG_NOT_TOUCH_MODAL |WindowManager.LayoutParams.FLAG_LAYOUT_IN_SCREEN, PixelFormat.TRANSLUCENT); params.gravity = Gravity.RIGHT | Gravity.TOP; WindowManager wm = (WindowManager) getSystemService(WINDOW_SERVICE); wm.addView(statusBarView, params); ```
9,815,901
I recently saw an image of an app that was capable of displaying a view above the status bar and was also able to cover it with a view. I know you can get a view right below the status bar from a view with align parent top. But how would you get a view on top of the status bar?? [Example](http://tombarrasso.com/wordpress/wp-content/uploads/2012/02/ChargeBarBanner3-225x225.png)
2012/03/22
[ "https://Stackoverflow.com/questions/9815901", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1190019/" ]
A view created with `TYPE_SYSTEM_ERROR` and `FLAG_LAYOUT_IN_SCREEN` is covered by the `StatusBar`.
``` int statusBarHeight = (int) Math.ceil(25 * getResources().getDisplayMetrics().density); View statusBarView = new View(MyActivity.this); statusBarView.setBackgroundColor(Color.GREEN); WindowManager.LayoutParams params = null; params = new WindowManager.LayoutParams(WindowManager.LayoutParams.FILL_PARENT,statusBarHeight,WindowManager.LayoutParams.TYPE_SYSTEM_ALERT,WindowManager.LayoutParams.FLAG_NOT_TOUCH_MODAL |WindowManager.LayoutParams.FLAG_LAYOUT_IN_SCREEN, PixelFormat.TRANSLUCENT); params.gravity = Gravity.RIGHT | Gravity.TOP; WindowManager wm = (WindowManager) getSystemService(WINDOW_SERVICE); wm.addView(statusBarView, params); ```
9,815,901
I recently saw an image of an app that was capable of displaying a view above the status bar and was also able to cover it with a view. I know you can get a view right below the status bar from a view with align parent top. But how would you get a view on top of the status bar?? [Example](http://tombarrasso.com/wordpress/wp-content/uploads/2012/02/ChargeBarBanner3-225x225.png)
2012/03/22
[ "https://Stackoverflow.com/questions/9815901", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1190019/" ]
**Disable the System Status Bar - Without Root** ------------------------------------------------ --- After two full days of searching through SO posts and reading the Android docs over and over, here is the solution that I came up with (tested): ``` mView= new TextView(this); mView.setText("........................................................................."); mLP = new WindowManager.LayoutParams( WindowManager.LayoutParams.MATCH_PARENT, 100, // Allows the view to be on top of the StatusBar WindowManager.LayoutParams.TYPE_SYSTEM_ERROR, // Keeps the button presses from going to the background window WindowManager.LayoutParams.FLAG_NOT_FOCUSABLE | // Enables the notification to receive touch events WindowManager.LayoutParams.FLAG_NOT_TOUCH_MODAL | // Draws over status bar WindowManager.LayoutParams.FLAG_LAYOUT_IN_SCREEN, PixelFormat.TRANSLUCENT); mLP.gravity = Gravity.TOP|Gravity.CENTER; mWindowManager = (WindowManager) getSystemService(WINDOW_SERVICE); mWindowManager.addView(mView, mLP); ``` Don't forget the permissions: ``` <uses-permission android:name="android.permission.SYSTEM_OVERLAY_WINDOW" /> ``` Note: Tested up to KitKat.
``` int statusBarHeight = (int) Math.ceil(25 * getResources().getDisplayMetrics().density); View statusBarView = new View(MyActivity.this); statusBarView.setBackgroundColor(Color.GREEN); WindowManager.LayoutParams params = null; params = new WindowManager.LayoutParams(WindowManager.LayoutParams.FILL_PARENT,statusBarHeight,WindowManager.LayoutParams.TYPE_SYSTEM_ALERT,WindowManager.LayoutParams.FLAG_NOT_TOUCH_MODAL |WindowManager.LayoutParams.FLAG_LAYOUT_IN_SCREEN, PixelFormat.TRANSLUCENT); params.gravity = Gravity.RIGHT | Gravity.TOP; WindowManager wm = (WindowManager) getSystemService(WINDOW_SERVICE); wm.addView(statusBarView, params); ```
9,815,901
I recently saw an image of an app that was capable of displaying a view above the status bar and was also able to cover it with a view. I know you can get a view right below the status bar from a view with align parent top. But how would you get a view on top of the status bar?? [Example](http://tombarrasso.com/wordpress/wp-content/uploads/2012/02/ChargeBarBanner3-225x225.png)
2012/03/22
[ "https://Stackoverflow.com/questions/9815901", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1190019/" ]
The answer by @Sadeshkumar is incorrect for ICS and above (perhaps GB as well). A view created with `TYPE_SYSTEM_ALERT` and `FLAG_LAYOUT_IN_SCREEN` is covered by the `StatusBar`. To get an overlay on top of the `StatusBar`, you need to use `TYPE_SYSTEM_OVERLAY` instead of `TYPE_SYSTEM_ALERT`. The problem being then, how to get clicks/touches?
A view created with `TYPE_SYSTEM_ERROR` and `FLAG_LAYOUT_IN_SCREEN` is covered by the `StatusBar`.
9,815,901
I recently saw an image of an app that was capable of displaying a view above the status bar and was also able to cover it with a view. I know you can get a view right below the status bar from a view with align parent top. But how would you get a view on top of the status bar?? [Example](http://tombarrasso.com/wordpress/wp-content/uploads/2012/02/ChargeBarBanner3-225x225.png)
2012/03/22
[ "https://Stackoverflow.com/questions/9815901", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1190019/" ]
**Disable the System Status Bar - Without Root** ------------------------------------------------ --- After two full days of searching through SO posts and reading the Android docs over and over, here is the solution that I came up with (tested): ``` mView= new TextView(this); mView.setText("........................................................................."); mLP = new WindowManager.LayoutParams( WindowManager.LayoutParams.MATCH_PARENT, 100, // Allows the view to be on top of the StatusBar WindowManager.LayoutParams.TYPE_SYSTEM_ERROR, // Keeps the button presses from going to the background window WindowManager.LayoutParams.FLAG_NOT_FOCUSABLE | // Enables the notification to recieve touch events WindowManager.LayoutParams.FLAG_NOT_TOUCH_MODAL | // Draws over status bar WindowManager.LayoutParams.FLAG_LAYOUT_IN_SCREEN, PixelFormat.TRANSLUCENT); mLP.gravity = Gravity.TOP|Gravity.CENTER; mWindowManager = (WindowManager) getSystemService(WINDOW_SERVICE); mWindowManager.addView(mView, mLP); ``` Don't forget the permissions: ``` <uses-permission android:name="android.permission.SYSTEM_OVERLAY_WINDOW" /> ``` Note: Tested up to KitKat.
A view created with `TYPE_SYSTEM_ERROR` and `FLAG_LAYOUT_IN_SCREEN` is covered by the `StatusBar`.
1,079,320
I am having trouble playing HEVC 1080p file on my system. I don't have a really strong PC, but I will give the facts: * Ram: 2GB * OS: Win 7 ultimate * processor: Pentium(R) Dual-Core T4500 @2.3Ghz * Graphic Chipset: Intel GMA4500M I tried the latest version of the said three players to no avail. My players can smoothly play 1080p videos but sadly not the 1080p HEVC files. * First stop: VLC: + Very choppy playback. Skipping frames. Grey screens. Audio and Video out of sync. Takes too long to skip. Unwatchable. * Second stop: MPC-HC (CCCP) + Better than VLC. Doesn't take too long to skip. Audio and Video still out of sync. Choppy playback. * Third and final stop: Potplayer + A lot better than MPC. Skips fast. But a bit choppy. Very much watchable. Only problem: Audio and Video out of sync. What I tried further(in MPC): I tried the madVR codec, instead of CCCP; but, to my surprise, I found the playback worse and my CPU usage skyrocketing to about 90%. I now use the EVR instead of madVR. I really want to watch the video. But how do I? --- Possible Answer: > > Maybe the file is broken. > > > Well, I tried the same file on a 1080p TV, and it played smoothly with audio and video in sync. Second, I can easily play 1080p .mkv files smoothly. It is just those HEVC mkv files that won't play smoothly :(
2016/05/21
[ "https://superuser.com/questions/1079320", "https://superuser.com", "https://superuser.com/users/526065/" ]
You are going to find some slowdown on a machine running Windows 7 with only 2GB of RAM, a Pentium(R) Dual-Core T4500 @2.3Ghz and an Intel GMA4500M when running high-quality, heavily compressed video. The reason you're able to play normal `.mkv` files more smoothly is likely to do with the level of compression. This from [Wikipedia](https://en.wikipedia.org/wiki/High_Efficiency_Video_Coding): > > In comparison to AVC, HEVC offers about double the data compression ratio at the same level of video quality, or substantially improved video quality at the same bit rate. > > > So because your HEVC files are more compressed, your computer is having to do a lot more work per frame to uncompress everything. What I might suggest is uploading these videos to something like Google Drive and watch them from there, letting the cloud servers do the hard work of decompressing. It's maybe not the best possible solution, but it's at least one solid workaround, and it's worked for me when I had similar problems in the past.
I have no problem playing HEVC 720p videos but I have a similar problem when playing H.265 1080p videos. * Win 7 Home * 2 GB RAM * Celeron® Dual-Core CPU T3500 @ 2.1 GHz. The solution: Install [HandBrake](https://handbrake.fr/downloads.php). Select your video and convert your HEVC video to H.264 The solution works but it's gonna take a long time and the size of the video will increase.
1,079,320
I am having trouble playing HEVC 1080p file on my system. I don't have a really strong PC, but I will give the facts: * Ram: 2GB * OS: Win 7 ultimate * processor: Pentium(R) Dual-Core T4500 @2.3Ghz * Graphic Chipset: Intel GMA4500M I tried the latest version of the said three players to no avail. My players can smoothly play 1080p videos but sadly not the 1080p HEVC files. * First stop: VLC: + Very choppy playback. Skipping frames. Grey screens. Audio and Video out of sync. Takes too long to skip. Unwatchable. * Second stop: MPC-HC (CCCP) + Better than VLC. Doesn't take too long to skip. Audio and Video still out of sync. Choppy playback. * Third and final stop: Potplayer + A lot better than MPC. Skips fast. But a bit choppy. Very much watchable. Only problem: Audio and Video out of sync. What I tried further(in MPC): I tried the madVR codec, instead of CCCP; but, to my surprise, I found the playback worse and my CPU usage skyrocketing to about 90%. I now use the EVR instead of madVR. I really want to watch the video. But how do I? --- Possible Answer: > > Maybe the file is broken. > > > Well, I tried the same file on a 1080p TV, and it played smoothly with audio and video in sync. Second, I can easily play 1080p .mkv files smoothly. It is just those HEVC mkv files that won't play smoothly :(
2016/05/21
[ "https://superuser.com/questions/1079320", "https://superuser.com", "https://superuser.com/users/526065/" ]
You are going to find some slowdown on a machine running Windows 7 with only 2GB of RAM, a Pentium(R) Dual-Core T4500 @2.3Ghz and an Intel GMA4500M when running high-quality, heavily compressed video. The reason you're able to play normal `.mkv` files more smoothly is likely to do with the level of compression. This from [Wikipedia](https://en.wikipedia.org/wiki/High_Efficiency_Video_Coding): > > In comparison to AVC, HEVC offers about double the data compression ratio at the same level of video quality, or substantially improved video quality at the same bit rate. > > > So because your HEVC files are more compressed, your computer is having to do a lot more work per frame to uncompress everything. What I might suggest is uploading these videos to something like Google Drive and watch them from there, letting the cloud servers do the hard work of decompressing. It's maybe not the best possible solution, but it's at least one solid workaround, and it's worked for me when I had similar problems in the past.
I also had the same issue, so as a workaround I installed [KM Player](http://filehippo.com/download_kmplayer/) and it works fine.
1,079,320
I am having trouble playing HEVC 1080p file on my system. I don't have a really strong PC, but I will give the facts: * Ram: 2GB * OS: Win 7 ultimate * processor: Pentium(R) Dual-Core T4500 @2.3Ghz * Graphic Chipset: Intel GMA4500M I tried the latest version of the said three players to no avail. My players can smoothly play 1080p videos but sadly not the 1080p HEVC files. * First stop: VLC: + Very choppy playback. Skipping frames. Grey screens. Audio and Video out of sync. Takes too long to skip. Unwatchable. * Second stop: MPC-HC (CCCP) + Better than VLC. Doesn't take too long to skip. Audio and Video still out of sync. Choppy playback. * Third and final stop: Potplayer + A lot better than MPC. Skips fast. But a bit choppy. Very much watchable. Only problem: Audio and Video out of sync. What I tried further(in MPC): I tried the madVR codec, instead of CCCP; but, to my surprise, I found the playback worse and my CPU usage skyrocketing to about 90%. I now use the EVR instead of madVR. I really want to watch the video. But how do I? --- Possible Answer: > > Maybe the file is broken. > > > Well, I tried the same file on a 1080p TV, and it played smoothly with audio and video in sync. Second, I can easily play 1080p .mkv files smoothly. It is just those HEVC mkv files that won't play smoothly :(
2016/05/21
[ "https://superuser.com/questions/1079320", "https://superuser.com", "https://superuser.com/users/526065/" ]
You are going to find some slowdown on a machine running Windows 7 with only 2GB of RAM, a Pentium(R) Dual-Core T4500 @2.3Ghz and an Intel GMA4500M when running high-quality, heavily compressed video. The reason you're able to play normal `.mkv` files more smoothly is likely to do with the level of compression. This from [Wikipedia](https://en.wikipedia.org/wiki/High_Efficiency_Video_Coding): > > In comparison to AVC, HEVC offers about double the data compression ratio at the same level of video quality, or substantially improved video quality at the same bit rate. > > > So because your HEVC files are more compressed, your computer is having to do a lot more work per frame to uncompress everything. What I might suggest is uploading these videos to something like Google Drive and watch them from there, letting the cloud servers do the hard work of decompressing. It's maybe not the best possible solution, but it's at least one solid workaround, and it's worked for me when I had similar problems in the past.
Try CCCP with their media player: <http://www.cccp-project.net/>. When configuring the settings, go to video codec -> hardware decoding -> directx (native) -> check HEVC and UHD -> apply. Works fine for me.
1,079,320
I am having trouble playing HEVC 1080p file on my system. I don't have a really strong PC, but I will give the facts: * Ram: 2GB * OS: Win 7 ultimate * processor: Pentium(R) Dual-Core T4500 @2.3Ghz * Graphic Chipset: Intel GMA4500M I tried the latest version of the said three players to no avail. My players can smoothly play 1080p videos but sadly not the 1080p HEVC files. * First stop: VLC: + Very choppy playback. Skipping frames. Grey screens. Audio and Video out of sync. Takes too long to skip. Unwatchable. * Second stop: MPC-HC (CCCP) + Better than VLC. Doesn't take too long to skip. Audio and Video still out of sync. Choppy playback. * Third and final stop: Potplayer + A lot better than MPC. Skips fast. But a bit choppy. Very much watchable. Only problem: Audio and Video out of sync. What I tried further(in MPC): I tried the madVR codec, instead of CCCP; but, to my surprise, I found the playback worse and my CPU usage skyrocketing to about 90%. I now use the EVR instead of madVR. I really want to watch the video. But how do I? --- Possible Answer: > > Maybe the file is broken. > > > Well, I tried the same file on a 1080p TV, and it played smoothly with audio and video in sync. Second, I can easily play 1080p .mkv files smoothly. It is just those HEVC mkv files that won't play smoothly :(
2016/05/21
[ "https://superuser.com/questions/1079320", "https://superuser.com", "https://superuser.com/users/526065/" ]
I have no problem playing 720p HEVC videos, but I have a similar problem when playing H.265 1080p videos. * Win 7 Home * 2 GB RAM * Celeron® Dual-Core CPU T3500 @ 2.1 GHz. The solution: install [HandBrake](https://handbrake.fr/downloads.php), select your video, and convert it from HEVC to H.264. This works, but it's going to take a long time and the size of the video will increase.
I also had the same issue; as a workaround I installed [KM Player](http://filehippo.com/download_kmplayer/) and it works fine.
1,079,320
I am having trouble playing HEVC 1080p file on my system. I don't have a really strong PC, but I will give the facts: * Ram: 2GB * OS: Win 7 ultimate * processor: Pentium(R) Dual-Core T4500 @2.3Ghz * Graphic Chipset: Intel GMA4500M I tried the latest version of the said three players to no avail. My players can smoothly play 1080p videos but sadly not the 1080p HEVC files. * First stop: VLC: + Very choppy playback. Skipping frames. Grey screens. Audio and Video out of sync. Takes too long to skip. Unwatchable. * Second stop: MPC-HC (CCCP) + Better than VLC. Doesn't take too long to skip. Audio and Video still out of sync. Choppy playback. * Third and final stop: Potplayer + A lot better than MPC. Skips fast. But a bit choppy. Very much watchable. Only problem: Audio and Video out of sync. What I tried further(in MPC): I tried the madVR codec, instead of CCCP; but, to my surprise, I found the playback worse and my CPU usage skyrocketing to about 90%. I now use the EVR instead of madVR. I really want to watch the video. But how do I? --- Possible Answer: > > Maybe the file is broken. > > > Well, I tried the same file on a 1080p TV, and it played smoothly with audio and video in sync. Second, I can easily play 1080p .mkv files smoothly. It is just those HEVC mkv files that won't play smoothly :(
2016/05/21
[ "https://superuser.com/questions/1079320", "https://superuser.com", "https://superuser.com/users/526065/" ]
First, install [Windows 10](https://www.microsoft.com/en-in/software-download/windows10) instead of the outdated Windows 7. If you don't want to buy it, use [this](https://www.wikihow.com/Turn-Off-Windows-Activation). Install *CNX player* and the [HEVC video extension](https://www.microsoft.com/en-us/store/p/hevc-video-extension/9n4wgh0z6vhq) from the Store. HEVC videos play much better in UWP apps than in classic desktop apps. Since your PC specs are weak, I would suggest installing two separate copies of Windows. In one, install as little clutter as possible and use only UWP apps (see [this](https://www.howtogeek.com/302352/how-to-allow-only-apps-from-the-store-on-windows-10-and-whitelist-desktop-apps/)); that install will always run smoothly. The other can be used as your messy garage. This obviously requires periodic reinstallations, not to mention sluggish performance over time.
I have no problem playing 720p HEVC videos, but I have a similar problem when playing H.265 1080p videos. * Win 7 Home * 2 GB RAM * Celeron® Dual-Core CPU T3500 @ 2.1 GHz. The solution: install [HandBrake](https://handbrake.fr/downloads.php), select your video, and convert it from HEVC to H.264. This works, but it's going to take a long time and the size of the video will increase.
1,079,320
I am having trouble playing HEVC 1080p file on my system. I don't have a really strong PC, but I will give the facts: * Ram: 2GB * OS: Win 7 ultimate * processor: Pentium(R) Dual-Core T4500 @2.3Ghz * Graphic Chipset: Intel GMA4500M I tried the latest version of the said three players to no avail. My players can smoothly play 1080p videos but sadly not the 1080p HEVC files. * First stop: VLC: + Very choppy playback. Skipping frames. Grey screens. Audio and Video out of sync. Takes too long to skip. Unwatchable. * Second stop: MPC-HC (CCCP) + Better than VLC. Doesn't take too long to skip. Audio and Video still out of sync. Choppy playback. * Third and final stop: Potplayer + A lot better than MPC. Skips fast. But a bit choppy. Very much watchable. Only problem: Audio and Video out of sync. What I tried further(in MPC): I tried the madVR codec, instead of CCCP; but, to my surprise, I found the playback worse and my CPU usage skyrocketing to about 90%. I now use the EVR instead of madVR. I really want to watch the video. But how do I? --- Possible Answer: > > Maybe the file is broken. > > > Well, I tried the same file on a 1080p TV, and it played smoothly with audio and video in sync. Second, I can easily play 1080p .mkv files smoothly. It is just those HEVC mkv files that won't play smoothly :(
2016/05/21
[ "https://superuser.com/questions/1079320", "https://superuser.com", "https://superuser.com/users/526065/" ]
I have no problem playing 720p HEVC videos, but I have a similar problem when playing H.265 1080p videos. * Win 7 Home * 2 GB RAM * Celeron® Dual-Core CPU T3500 @ 2.1 GHz. The solution: install [HandBrake](https://handbrake.fr/downloads.php), select your video, and convert it from HEVC to H.264. This works, but it's going to take a long time and the size of the video will increase.
Try CCCP with their media player: <http://www.cccp-project.net/> In the settings, go to Video codec -> Hardware decoding -> DirectX (native) -> check HEVC and UHD -> Apply. Works fine for me.
1,079,320
I am having trouble playing HEVC 1080p file on my system. I don't have a really strong PC, but I will give the facts: * Ram: 2GB * OS: Win 7 ultimate * processor: Pentium(R) Dual-Core T4500 @2.3Ghz * Graphic Chipset: Intel GMA4500M I tried the latest version of the said three players to no avail. My players can smoothly play 1080p videos but sadly not the 1080p HEVC files. * First stop: VLC: + Very choppy playback. Skipping frames. Grey screens. Audio and Video out of sync. Takes too long to skip. Unwatchable. * Second stop: MPC-HC (CCCP) + Better than VLC. Doesn't take too long to skip. Audio and Video still out of sync. Choppy playback. * Third and final stop: Potplayer + A lot better than MPC. Skips fast. But a bit choppy. Very much watchable. Only problem: Audio and Video out of sync. What I tried further(in MPC): I tried the madVR codec, instead of CCCP; but, to my surprise, I found the playback worse and my CPU usage skyrocketing to about 90%. I now use the EVR instead of madVR. I really want to watch the video. But how do I? --- Possible Answer: > > Maybe the file is broken. > > > Well, I tried the same file on a 1080p TV, and it played smoothly with audio and video in sync. Second, I can easily play 1080p .mkv files smoothly. It is just those HEVC mkv files that won't play smoothly :(
2016/05/21
[ "https://superuser.com/questions/1079320", "https://superuser.com", "https://superuser.com/users/526065/" ]
First, install [Windows 10](https://www.microsoft.com/en-in/software-download/windows10) instead of the outdated Windows 7. If you don't want to buy it, use [this](https://www.wikihow.com/Turn-Off-Windows-Activation). Install *CNX player* and the [HEVC video extension](https://www.microsoft.com/en-us/store/p/hevc-video-extension/9n4wgh0z6vhq) from the Store. HEVC videos play much better in UWP apps than in classic desktop apps. Since your PC specs are weak, I would suggest installing two separate copies of Windows. In one, install as little clutter as possible and use only UWP apps (see [this](https://www.howtogeek.com/302352/how-to-allow-only-apps-from-the-store-on-windows-10-and-whitelist-desktop-apps/)); that install will always run smoothly. The other can be used as your messy garage. This obviously requires periodic reinstallations, not to mention sluggish performance over time.
I also had the same issue; as a workaround I installed [KM Player](http://filehippo.com/download_kmplayer/) and it works fine.
1,079,320
I am having trouble playing HEVC 1080p file on my system. I don't have a really strong PC, but I will give the facts: * Ram: 2GB * OS: Win 7 ultimate * processor: Pentium(R) Dual-Core T4500 @2.3Ghz * Graphic Chipset: Intel GMA4500M I tried the latest version of the said three players to no avail. My players can smoothly play 1080p videos but sadly not the 1080p HEVC files. * First stop: VLC: + Very choppy playback. Skipping frames. Grey screens. Audio and Video out of sync. Takes too long to skip. Unwatchable. * Second stop: MPC-HC (CCCP) + Better than VLC. Doesn't take too long to skip. Audio and Video still out of sync. Choppy playback. * Third and final stop: Potplayer + A lot better than MPC. Skips fast. But a bit choppy. Very much watchable. Only problem: Audio and Video out of sync. What I tried further(in MPC): I tried the madVR codec, instead of CCCP; but, to my surprise, I found the playback worse and my CPU usage skyrocketing to about 90%. I now use the EVR instead of madVR. I really want to watch the video. But how do I? --- Possible Answer: > > Maybe the file is broken. > > > Well, I tried the same file on a 1080p TV, and it played smoothly with audio and video in sync. Second, I can easily play 1080p .mkv files smoothly. It is just those HEVC mkv files that won't play smoothly :(
2016/05/21
[ "https://superuser.com/questions/1079320", "https://superuser.com", "https://superuser.com/users/526065/" ]
First, install [Windows 10](https://www.microsoft.com/en-in/software-download/windows10) instead of the outdated Windows 7. If you don't want to buy it, use [this](https://www.wikihow.com/Turn-Off-Windows-Activation). Install *CNX player* and the [HEVC video extension](https://www.microsoft.com/en-us/store/p/hevc-video-extension/9n4wgh0z6vhq) from the Store. HEVC videos play much better in UWP apps than in classic desktop apps. Since your PC specs are weak, I would suggest installing two separate copies of Windows. In one, install as little clutter as possible and use only UWP apps (see [this](https://www.howtogeek.com/302352/how-to-allow-only-apps-from-the-store-on-windows-10-and-whitelist-desktop-apps/)); that install will always run smoothly. The other can be used as your messy garage. This obviously requires periodic reinstallations, not to mention sluggish performance over time.
Try CCCP with their media player: <http://www.cccp-project.net/> In the settings, go to Video codec -> Hardware decoding -> DirectX (native) -> check HEVC and UHD -> Apply. Works fine for me.
117,590
Ubuntu 9.10 USB Pendrive everything works good even after `sudo apt-get update` and I try to `sudo apt-get install dsniff` and I get the following message: ``` ubuntu@ubuntu:~$ sudo apt-get install dsniff Reading package lists... Done Building dependency tree Reading state information... Done E: Couldn't find package dsniff ``` It's not finding the package for some weird reason. Please help I like 9.10 Ubuntu and want Dsniff to run on this failing to fetch packages.
2012/03/31
[ "https://askubuntu.com/questions/117590", "https://askubuntu.com", "https://askubuntu.com/users/52828/" ]
The reason why you can't find the package you want is that 9.10 has reached its end of life and its repository has been removed from the main servers (actually, it has been moved). You can follow the instructions provided in this question and answer to install `dsniff` on 9.10. * [Installing software on an old Ubuntu version that's no longer supported](https://superuser.com/questions/301432/installing-software-on-an-old-ubuntu-version-thats-no-longer-supported-2009-ja)
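Since the 9.10 repositories were moved to the `old-releases` server, the usual fix is to rewrite the apt sources to point there. Here is a minimal sketch of that edit, demonstrated on a throwaway copy of the file so nothing on the system is touched (for the real fix, run the same `sed` against `/etc/apt/sources.list` with `sudo`, after making a backup):

```shell
# Demo on a scratch copy; for the real fix, operate on /etc/apt/sources.list.
cat > /tmp/sources.list.demo <<'EOF'
deb http://archive.ubuntu.com/ubuntu karmic main restricted universe
deb http://security.ubuntu.com/ubuntu karmic-security main restricted
EOF

# Point both the archive and security mirrors at old-releases.ubuntu.com.
sed -i -e 's|http://archive.ubuntu.com/ubuntu|http://old-releases.ubuntu.com/ubuntu|g' \
       -e 's|http://security.ubuntu.com/ubuntu|http://old-releases.ubuntu.com/ubuntu|g' \
       /tmp/sources.list.demo

cat /tmp/sources.list.demo
```

After editing the real file the same way, `sudo apt-get update` should fetch the old package lists again and `sudo apt-get install dsniff` should find the package.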
Try to install dsniff from a deb package. <http://packages.debian.org/search?keywords=dsniff>
3,942,496
$F$ is a CDF (cumulative distribution function). $\int \_{\mathbb{R}}[F(x+c) - F(x)] dx $ $= \int \_{\mathbb{R}}F(x+c) dx -\int \_{\mathbb{R}} F(x) dx$ **(by linearity of the integral)** $=\int \_{\mathbb{R}}F(x) dx -\int \_{\mathbb{R}}F(x) dx $ **(by the change of variables theorem)** $=0$ For the change of variables we used the known theorem (see below) with $\Omega = \Omega' = \mathbb{R}$, $T(x) = x + c$ and $\mu = \lambda$ (Lebesgue measure). **Question: Why is the above proof incorrect? How can we show it is not correct analytically?** *My thought:* I am not sure how to show it; maybe because the density does not exist? (Even the assumption of continuity would not imply the existence of a density; e.g. the Cantor function is a continuous CDF which does not have a density with respect to Lebesgue measure.) Below is the known theorem that we used: [![enter image description here](https://i.stack.imgur.com/BZCe6.png)](https://i.stack.imgur.com/BZCe6.png) > > Why is the above proof wrong? In other words, why is the step $=\int \_{\mathbb{R}}F(x) dx -\int \_{\mathbb{R}}F(x) dx $ **(by the change of variables theorem)** incorrect? We use the change of variables theorem (see above) with $\Omega = \Omega' = \mathbb{R}$, $T(x) = x + c$ and $\mu = \lambda$ (Lebesgue measure). > > >
2020/12/10
[ "https://math.stackexchange.com/questions/3942496", "https://math.stackexchange.com", "https://math.stackexchange.com/users/709039/" ]
1. When the function is rational, *the standard procedure* is to start with finding its *expansion into partial fractions*. In the present case, you get $$f(z)=\dfrac{1}{z(z-1)}=\dfrac{1}{z-1}-\dfrac{1}{z}.$$ Hence, making use of *your substitution* $w=z-1$ (or $z=1+w$), we obtain \begin{align} \dfrac{1}{z-1}-\dfrac{1}{z}&=\frac{1}{w}-\frac{1}{w+1}=\frac{1}{w}-\frac{1}{w}\frac{1}{1-(-\dfrac{1}{w} ) }\\&=\frac{1}{z-1}- \frac{1}{z-1}\frac{1}{1-(-\dfrac{1}{z-1})}.\tag 1 \end{align} 2. And so, you just need to expand $\dfrac{1}{1-( -\dfrac{1}{w} ) }=\dfrac{1}{1-( -\dfrac{1}{z-1}) }$ for $|w|=|z-1|>1$: $$\frac{1}{1-(-\dfrac{1}{z-1})}=\sum\_{n\geq 0}\frac{(-1)^{n}}{(z-1)^{n}}=1+\sum\_{n\geq 1}\frac{(-1)^{n}}{(z-1)^{n}}. \tag 2 $$ *Please notice that this is an alternating series*. 3. Therefore, combining $(1)$ and $(2)$, for $|z-1|>1$ the original function may be expanded into \begin{align} f(z)&=\frac{1}{z-1}-\frac{1}{z-1}-\frac{1}{z-1}\sum\_{n\geq 1}\frac{(-1)^{n}}{\left( z-1\right) ^{n}} \tag 3\\&=\sum\_{n\geq 1}\frac{(-1)^{n+1}}{\left( z-1\right) ^{n+1}}=\sum\_{n\geq 2}\frac{(-1)^{n}}{\left( z-1\right) ^{n}}\\&=\frac{1}{(z-1)^2}-\frac{1}{(z-1)^3}\pm\cdots\tag{4} \end{align}
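As a quick numerical sanity check of the final expansion $(4)$ above (this snippet is an illustration added here, not part of the original answer), one can compare $f(z)=\frac{1}{z(z-1)}$ against a partial sum of $\sum_{n\ge 2}\frac{(-1)^n}{(z-1)^n}$ at a point with $|z-1|>1$:

```python
# Numerical check (standard library only) of the Laurent expansion
# f(z) = sum_{n>=2} (-1)^n / (z-1)^n, valid on the annulus |z-1| > 1.
def f(z):
    return 1 / (z * (z - 1))

def laurent_partial_sum(z, terms=60):
    w = z - 1
    return sum((-1) ** n / w ** n for n in range(2, 2 + terms))

z = 3.5 + 2.0j  # here |z - 1| = |2.5 + 2j| > 1, inside the region of convergence
print(abs(f(z) - laurent_partial_sum(z)))  # essentially 0 (floating-point rounding)
```

With $|z-1|\approx 3.2$ the terms decay geometrically, so 60 terms already agree with $f(z)$ to machine precision; at points with $|z-1|\le 1$ the partial sums would instead diverge, matching the annulus of validity.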
It is not clear what you are using $w = z-1$ for, and the link between $f(z)$ and the remainder of your argument is unclear. However, you have nearly got the right answer. Instead, you could justify the result by writing, \begin{align} f(z) &= \frac{1}{z(z-1)} \\ &= \frac{1}{(z-1)^2(1+\frac{1}{z-1})} \end{align} and when $\lvert z-1 \rvert > 1$ the second factor in the denominator can be expanded as a convergent infinite series, so that \begin{align} f(z) &= \frac{1}{(z-1)^2} \cdot \Big(1 - \frac{1}{z-1}+\frac{1}{(z-1)^2}-\cdots\Big) \\ &=\frac{1}{(z-1)^2} - \frac{1}{(z-1)^3}+\frac{1}{(z-1)^4} -\cdots \end{align} which is nearly the same as your result, derived directly from the original expression. Notice the alternating sign in the sum.
3,009,452
Let $f\_n(x)$, for all $n \ge 1$, be a sequence of non-negative continuous functions on $[0,1]$ such that $$\lim\_{n→\infty}\int^1\_0 f\_n (x)dx=0$$ Which of the following is always correct? A. $f\_n→0$ uniformly on $[0,1]$ B. $f\_n$ may not converge uniformly but converges to $0$ pointwise C. $f\_n$ will converge pointwise and the limit may be non-zero D. $f\_n$ is not guaranteed to have a pointwise limit. I can find example where all four statements are true but can't find any counterexample. Option A is certainly false: consider a function which is (geometrically) an isosceles triangle with one vertex at $0$, the other at $2/n$, and height $1$ unit. Its area is $1/n → 0$, but the convergence $f\_n → 0$ is not uniform. For the others, I am completely stuck. I need ideas to solve this question rather than a full solution. Please help!
2018/11/22
[ "https://math.stackexchange.com/questions/3009452", "https://math.stackexchange.com", "https://math.stackexchange.com/users/517603/" ]
A. You could also consider $f\_n(x) = x^n.$ B. Let $f\_n$ be the line-segment graph connecting the points $(0,1), (1/n,0), (1,0).$ This sequence does not converge to $0$ at $x=0.$ C. Let $f\_n$ be as in B. Then consider the sequence $f\_1,0,f\_2,0,f\_3,0,\dots.$ At $x=0,$ this sequence is $1,0,1,0,\dots.$ D. See C.
> > I can find example where **all four** statements are true ... > > > All four? It seems that the statements are worded in a way that (independent of the rest of the question) **exactly** one of the four statements must be true. > > ... but can't find any counterexample. > > > I even found an example for $f\_n$ that does not converge in any point $x$: Let $f^\*\_{c,w}(x)$ with $x\in [-10,10]$ be defined as the function with the graph connecting the points: $(-10,0)$, $((\frac c{2w}-\frac 1w),0)$, $((\frac c{2w}),1)$, $((\frac c{2w}+\frac 1w),0)$, $(10,0)$. Use $f\_{c,w}(x)=f^\*\_{c,w}(x)$ for $x\in[0,1]$. Now use the following sequence: $f\_{0,1}$, $f\_{1,1}$, $f\_{2,1}$, $f\_{0,2}$, $f\_{1,2}$, $f\_{2,2}$, $f\_{3,2}$, $f\_{4,2}$, $f\_{0,4}$, $f\_{1,4}$, ..., $f\_{8,4}$, $f\_{0,8}$, $f\_{2,8}$, ..., $f\_{16,8}$, ... $f\_{0,(2^k)}$, $f\_{1,(2^k)}$, ..., $f\_{(2^{k+1}-1),(2^k)}$, $f\_{(2^{k+1}),(2^k)}$, ... If I'm correct this example can be modified by using $f\_{c,w}(x)=\sqrt wf^\*\_{c,w}(x)$. The resulting sequence should meet the conditions and for *every* value of $x\in[0,1]$ the sequence $f\_1(x)$, $f\_2(x)$, $f\_3(x)$ ... should even be *unbounded*! (However maybe I made a mistake here.)
3,009,452
Let $f\_n(x)$, for all $n \ge 1$, be a sequence of non-negative continuous functions on $[0,1]$ such that $$\lim\_{n→\infty}\int^1\_0 f\_n (x)dx=0$$ Which of the following is always correct? A. $f\_n→0$ uniformly on $[0,1]$ B. $f\_n$ may not converge uniformly but converges to $0$ pointwise C. $f\_n$ will converge pointwise and the limit may be non-zero D. $f\_n$ is not guaranteed to have a pointwise limit. I can find example where all four statements are true but can't find any counterexample. Option A is certainly false: consider a function which is (geometrically) an isosceles triangle with one vertex at $0$, the other at $2/n$, and height $1$ unit. Its area is $1/n → 0$, but the convergence $f\_n → 0$ is not uniform. For the others, I am completely stuck. I need ideas to solve this question rather than a full solution. Please help!
2018/11/22
[ "https://math.stackexchange.com/questions/3009452", "https://math.stackexchange.com", "https://math.stackexchange.com/users/517603/" ]
Expanding on the comment I made. **Hint:** Think of placing the peak of your isosceles triangle alternating between two (or maybe infinitely many) pre-determined points. **Solution:** A more elaborate construction of isosceles triangles suffices to build a sequence of $f\_n$ that does not converge. Each $f\_n$ can be, for instance: non-zero only on an interval $I\_n = (a\_n - \delta\_n, a\_n + \delta\_n)$, piecewise linear, with $f\_n(x)$ being $0$ on the endpoints of the interval and $f\_n(a\_n) = 1$. Notice that we have freedom to choose $a\_n$ and $\delta\_n$. Since $\int f\_n = \delta\_n$, we just need to require $\delta\_n \to 0$ for the limit above to hold. The main point in forcing $f\_n$ not to converge pointwise is the choice of $a\_n$. If for instance $a\_{2n} = \frac13$ and $a\_{2n+1} = \frac23$, then $f\_n(\frac13)$ and $f\_n(\frac23)$ will oscillate between $0$ and $1$ infinitely many times, while for every other $x$ we eventually have $f\_n(x) = 0$. **To Think:** If you want to build a sequence of functions with infinitely many points that do not converge, you can try a different choice of $a\_n$. For instance, let $q\_i$ be an enumeration of $\mathbb{Q} \cap (0, 1)$ and $p\_i$ be the $i$-th prime number. Then, we can try defining $a\_n = q\_i$ iff $p\_i$ is the smallest prime that divides $n$.
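The alternating-bump idea in the solution above can be sketched numerically. This toy snippet (an illustration added here, not part of the original answer) uses one concrete choice of $a_n$ and $\delta_n$: peaks alternating between $1/3$ and $2/3$ with $\delta_n = 1/(n+2)$. The integrals shrink to $0$ while $f_n(1/3)$ keeps oscillating between $1$ and $0$:

```python
# Triangular bump of height 1 and half-width delta_n = 1/(n+2),
# centred at a_n = 1/3 for even n and a_n = 2/3 for odd n.
def f(n, x):
    a = 1 / 3 if n % 2 == 0 else 2 / 3
    d = 1 / (n + 2)
    return max(0.0, 1 - abs(x - a) / d)

def integral(n, steps=20000):  # midpoint rule on [0, 1]
    h = 1 / steps
    return sum(f(n, (k + 0.5) * h) for k in range(steps)) * h

# Integrals shrink to 0 (the exact value is delta_n = 1/(n+2)) ...
print([round(integral(n), 3) for n in (2, 10, 50)])
# ... but f_n(1/3) oscillates 1, 0, 1, 0, ..., so there is no pointwise limit there.
print([f(n, 1 / 3) for n in (2, 3, 4, 5)])  # -> [1.0, 0.0, 1.0, 0.0]
```

The same code with heights scaled by $\sqrt{n}$ (instead of $1$) would illustrate the unbounded variant mentioned in the other answer, since the areas $\sqrt{n}\,\delta_n$ still tend to $0$.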
> > I can find example where **all four** statements are true ... > > > All four? It seems that the statements are worded in a way that (independent of the rest of the question) **exactly** one of the four statements must be true. > > ... but can't find any counterexample. > > > I even found an example for $f\_n$ that does not converge in any point $x$: Let $f^\*\_{c,w}(x)$ with $x\in [-10,10]$ be defined as the function with the graph connecting the points: $(-10,0)$, $((\frac c{2w}-\frac 1w),0)$, $((\frac c{2w}),1)$, $((\frac c{2w}+\frac 1w),0)$, $(10,0)$. Use $f\_{c,w}(x)=f^\*\_{c,w}(x)$ for $x\in[0,1]$. Now use the following sequence: $f\_{0,1}$, $f\_{1,1}$, $f\_{2,1}$, $f\_{0,2}$, $f\_{1,2}$, $f\_{2,2}$, $f\_{3,2}$, $f\_{4,2}$, $f\_{0,4}$, $f\_{1,4}$, ..., $f\_{8,4}$, $f\_{0,8}$, $f\_{2,8}$, ..., $f\_{16,8}$, ... $f\_{0,(2^k)}$, $f\_{1,(2^k)}$, ..., $f\_{(2^{k+1}-1),(2^k)}$, $f\_{(2^{k+1}),(2^k)}$, ... If I'm correct this example can be modified by using $f\_{c,w}(x)=\sqrt wf^\*\_{c,w}(x)$. The resulting sequence should meet the conditions and for *every* value of $x\in[0,1]$ the sequence $f\_1(x)$, $f\_2(x)$, $f\_3(x)$ ... should even be *unbounded*! (However maybe I made a mistake here.)
30,672,355
I have the following html ```html <ul id="report"> <li class="suite"> <h1>Level 2</h1> <ul> <li class="test pass fast"> <h2>it first</h2> <pre style="display: none;"> <code>('hello').should.be.a('string');</code> </pre> </li> </ul> </li> </ul> ``` I want to change the pre style from display:block to display:none when the h2 tag above it is clicked. But, I cannot for the life of me figure out how to do this. I know I need to do something along the lines of: ```js function changeStyle() { pre.style.display = 'none' == pre.style.display ? 'block' : 'none'; } ``` But I cannot figure out how to attach it to the h2. I've tried: ```js var h2 = getElementsByTagName('h2'); var pre = h2.getElementById('pre'); ``` But that isn't right since pre is not a child of h2. I'm not sure what to do. I tried adding a variety of click event listeners to h2 and calling the changeStyle function. Note: I would really prefer not to have to use jQuery. Edit: Alright, so I'm not crazy. I have tried almost all of these methods before posting this question and they didn't work. There has to be something else going on here. For example, I tried Josh Beam's method and I get: "Cannot read property 'addEventListener' of undefined" error. I even tried wrapping it with ``` document.onLoad = function () { //code } ``` and it still doesn't work.
2015/06/05
[ "https://Stackoverflow.com/questions/30672355", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4966168/" ]
This will work for any H2 element inside `ul#report`: ``` var main_ul = document.querySelector("#report"); main_ul.onclick = function(event){ var elm = event.target; if(elm.tagName === "H2"){ var pre = elm.nextElementSibling; // the <pre> right after the clicked <h2> changeStyle(pre); // note: changeStyle must accept the element as a parameter } }; ```
You can make use of `getElementsByTagName` like this: ```js var flag = 0 document.getElementsByTagName('h2')[0].onclick = function() { if (flag == 1) { document.getElementsByTagName('pre')[0].style.display = "none"; flag = 0; } else { document.getElementsByTagName('pre')[0].style.display = "block"; flag = 1; } } ``` ```html <ul id="report"> <li class="suite"> <h1>Level 2</h1> <ul> <li class="test pass fast"> <h2>it first</h2> <pre style="display: none;"> <code>('hello').should.be.a('string');</code> </pre> </li> </ul> </li> </ul> ``` This will also allow you to toggle the `display`.
30,672,355
I have the following html ```html <ul id="report"> <li class="suite"> <h1>Level 2</h1> <ul> <li class="test pass fast"> <h2>it first</h2> <pre style="display: none;"> <code>('hello').should.be.a('string');</code> </pre> </li> </ul> </li> </ul> ``` I want to change the pre style from display:block to display:none when the h2 tag above it is clicked. But, I cannot for the life of me figure out how to do this. I know I need to do something along the lines of: ```js function changeStyle() { pre.style.display = 'none' == pre.style.display ? 'block' : 'none'; } ``` But I cannot figure out how to attach it to the h2. I've tried: ```js var h2 = getElementsByTagName('h2'); var pre = h2.getElementById('pre'); ``` But that isn't right since pre is not a child of h2. I'm not sure what to do. I tried adding a variety of click event listeners to h2 and calling the changeStyle function. Note: I would really prefer not to have to use jQuery. Edit: Alright, so I'm not crazy. I have tried almost all of these methods before posting this question and they didn't work. There has to be something else going on here. For example, I tried Josh Beam's method and I get: "Cannot read property 'addEventListener' of undefined" error. I even tried wrapping it with ``` document.onLoad = function () { //code } ``` and it still doesn't work.
2015/06/05
[ "https://Stackoverflow.com/questions/30672355", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4966168/" ]
If you want to make sure you're getting the sibling `pre` element, just do this (see the [jsfiddle](http://jsfiddle.net/t89vb06h/1)): ``` document.getElementsByTagName('h2')[0].addEventListener('click', function(e) { var pre = e.target.nextElementSibling; pre.style.display = pre.style.display === 'block' ? 'none' : 'block'; }); ``` This will make sure you're not selecting any other `pre` elements that might be present on the page.
You can make use of `getElementsByTagName` like this: ```js var flag = 0 document.getElementsByTagName('h2')[0].onclick = function() { if (flag == 1) { document.getElementsByTagName('pre')[0].style.display = "none"; flag = 0; } else { document.getElementsByTagName('pre')[0].style.display = "block"; flag = 1; } } ``` ```html <ul id="report"> <li class="suite"> <h1>Level 2</h1> <ul> <li class="test pass fast"> <h2>it first</h2> <pre style="display: none;"> <code>('hello').should.be.a('string');</code> </pre> </li> </ul> </li> </ul> ``` This will also allow you to toggle the `display`.
30,672,355
I have the following html ```html <ul id="report"> <li class="suite"> <h1>Level 2</h1> <ul> <li class="test pass fast"> <h2>it first</h2> <pre style="display: none;"> <code>('hello').should.be.a('string');</code> </pre> </li> </ul> </li> </ul> ``` I want to change the pre style from display:block to display:none when the h2 tag above it is clicked. But, I cannot for the life of me figure out how to do this. I know I need to do something along the lines of: ```js function changeStyle() { pre.style.display = 'none' == pre.style.display ? 'block' : 'none'; } ``` But I cannot figure out how to attach it to the h2. I've tried: ```js var h2 = getElementsByTagName('h2'); var pre = h2.getElementById('pre'); ``` But that isn't right since pre is not a child of h2. I'm not sure what to do. I tried adding a variety of click event listeners to h2 and calling the changeStyle function. Note: I would really prefer not to have to use jQuery. Edit: Alright, so I'm not crazy. I have tried almost all of these methods before posting this question and they didn't work. There has to be something else going on here. For example, I tried Josh Beam's method and I get: "Cannot read property 'addEventListener' of undefined" error. I even tried wrapping it with ``` document.onLoad = function () { //code } ``` and it still doesn't work.
2015/06/05
[ "https://Stackoverflow.com/questions/30672355", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4966168/" ]
This will work for any H2 element inside `ul#report`: ``` var main_ul = document.querySelector("#report"); main_ul.onclick = function(event){ var elm = event.target; if(elm.tagName === "H2"){ var pre = elm.nextElementSibling; // the <pre> right after the clicked <h2> changeStyle(pre); // note: changeStyle must accept the element as a parameter } }; ```
You can do something like this ``` var report = document.getElementById("report"); // <--- get the parent var h2 = report.getElementsByTagName('h2')[0]; // <--- search for h2 within parent var pre = report.getElementsByTagName('pre')[0];// <--- search for pre within parent h2.addEventListener('click', function(){ // <--- attach event to h2 pre.style.display = pre.style.display === 'none' ? '' : 'none'; // <--- toggle display }); ``` ```js var report = document.getElementById("report"); var h2 = report.getElementsByTagName('h2')[0]; var pre = report.getElementsByTagName('pre')[0]; h2.addEventListener('click', function() { pre.style.display = pre.style.display === 'none' ? '' : 'none'; }); ``` ```html <ul id="report"> <li class="suite"> <h1>Level 2</h1> <ul> <li class="test pass fast"> <h2>it first</h2> <pre style="display: none;"> <code>('hello').should.be.a('string');</code> </pre> </li> </ul> </li> </ul> ```
30,672,355
I have the following html ```html <ul id="report"> <li class="suite"> <h1>Level 2</h1> <ul> <li class="test pass fast"> <h2>it first</h2> <pre style="display: none;"> <code>('hello').should.be.a('string');</code> </pre> </li> </ul> </li> </ul> ``` I want to change the pre style from display:block to display:none when the h2 tag above it is clicked. But, I cannot for the life of me figure out how to do this. I know I need to do something along the lines of: ```js function changeStyle() { pre.style.display = 'none' == pre.style.display ? 'block' : 'none'; } ``` But I cannot figure out how to attach it to the h2. I've tried: ```js var h2 = getElementsByTagName('h2'); var pre = h2.getElementById('pre'); ``` But that isn't right since pre is not a child of h2. I'm not sure what to do. I tried adding a variety of click event listeners to h2 and calling the changeStyle function. Note: I would really prefer not to have to use jQuery. Edit: Alright, so I'm not crazy. I have tried almost all of these methods before posting this question and they didn't work. There has to be something else going on here. For example, I tried Josh Beam's method and I get: "Cannot read property 'addEventListener' of undefined" error. I even tried wrapping it with ``` document.onLoad = function () { //code } ``` and it still doesn't work.
2015/06/05
[ "https://Stackoverflow.com/questions/30672355", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4966168/" ]
If you want to make sure you're getting the sibling `pre` element, just do this (see the [jsfiddle](http://jsfiddle.net/t89vb06h/1)): ``` document.getElementsByTagName('h2')[0].addEventListener('click', function(e) { var pre = e.target.nextElementSibling; pre.style.display = pre.style.display === 'block' ? 'none' : 'block'; }); ``` This will make sure you're not selecting any other `pre` elements that might be present on the page.
You can do something like this ``` var report = document.getElementById("report"); // <--- get the parent var h2 = report.getElementsByTagName('h2')[0]; // <--- search for h2 within parent var pre = report.getElementsByTagName('pre')[0];// <--- search for pre within parent h2.addEventListener('click', function(){ // <--- attach event to h2 pre.style.display = pre.style.display === 'none' ? '' : 'none'; // <--- toggle display }); ``` ```js var report = document.getElementById("report"); var h2 = report.getElementsByTagName('h2')[0]; var pre = report.getElementsByTagName('pre')[0]; h2.addEventListener('click', function() { pre.style.display = pre.style.display === 'none' ? '' : 'none'; }); ``` ```html <ul id="report"> <li class="suite"> <h1>Level 2</h1> <ul> <li class="test pass fast"> <h2>it first</h2> <pre style="display: none;"> <code>('hello').should.be.a('string');</code> </pre> </li> </ul> </li> </ul> ```
30,672,355
I have the following html ```html <ul id="report"> <li class="suite"> <h1>Level 2</h1> <ul> <li class="test pass fast"> <h2>it first</h2> <pre style="display: none;"> <code>('hello').should.be.a('string');</code> </pre> </li> </ul> </li> </ul> ``` I want to change the pre style from display:block to display:none when the h2 tag above it is clicked. But, I cannot for the life of me figure out how to do this. I know I need to do something along the lines of: ```js function changeStyle() { pre.style.display = 'none' == pre.style.display ? 'block' : 'none'; } ``` But I cannot figure out how to attach it to the h2. I've tried: ```js var h2 = getElementsByTagName('h2'); var pre = h2.getElementById('pre'); ``` But that isn't right since pre is not a child of h2. I'm not sure what to do. I tried adding a variety of click event listeners to h2 and calling the changeStyle function. Note: I would really prefer not to have to use jQuery. Edit: Alright, so I'm not crazy. I have tried almost all of these methods before posting this question and they didn't work. There has to be something else going on here. For example, I tried Josh Beam's method and I get: "Cannot read property 'addEventListener' of undefined" error. I even tried wrapping it with ``` document.onLoad = function () { //code } ``` and it still doesn't work.
2015/06/05
[ "https://Stackoverflow.com/questions/30672355", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4966168/" ]
If you want to make sure you're getting the sibling `pre` element, just do this (see the [jsfiddle](http://jsfiddle.net/t89vb06h/1)): ``` document.getElementsByTagName('h2')[0].addEventListener('click', function(e) { var pre = e.target.nextElementSibling; pre.style.display = pre.style.display === 'block' ? 'none' : 'block'; }); ``` This will make sure you're not selecting any other `pre` elements that might be present on the page.
Will work for any H2 element inside `ul#report` ``` var main_ul = document.querySelector("#report"); main_ul.onclick = function(event){ var elm = event.target; if(elm.tagName === "H2"){ var pre = elm.nextElementSibling; // tada magic! changeStyle(pre); } }; ```
387,079
I am a PhD student in algebraic topology, and I would like to learn something about **group cohomology**. The final goal would be to present one or two seminars on this topic, in order to give my mates a gentle introduction to this subject and at the same time show them some striking result/application of this theory. Ideally, my plan for the seminar is: 1. Introduce group cohomology, with a lot of motivations and examples 2. Explain what makes group cohomology awesome 3. Focus on a specific result, and show some pretty applications of it (something that could be interesting to an algebraic topologist if possible) in order to strengthen point 2 I am not looking for books, which are already given in these questions: <https://math.stackexchange.com/questions/2697778/reference-for-group-cohomology> <https://math.stackexchange.com/questions/695613/reference-request-introduction-to-finite-group-cohomology?rq=1> So my questions are: 1. Does anyone know any **introductory papers/lecture notes** where I can find a **concise introduction** to group cohomology? I am looking for something which does not contain all the details but which gives me a general view of the main results and applications of the theory. Youtube videos/lecture series are also very welcome. Of course if you want to mention books that are not in the previous answers it is fine as well. 2. Are there any suggestions about results/applications that I can put in points 2 and 3 of the seminar? As I said before the idea is to present this material to other students of algebraic topology, so I would prefer theorems/applications that will appeal to this kind of audience. **EDIT:** an answer of this kind [References and resources for (learning) chromatic homotopy theory and related areas](https://mathoverflow.net/questions/323889/references-and-resources-for-learning-chromatic-homotopy-theory-and-related-ar) is also very welcome and pertinent! Thank you in advance, Tommaso
2021/03/21
[ "https://mathoverflow.net/questions/387079", "https://mathoverflow.net", "https://mathoverflow.net/users/169319/" ]
Brown: *Lectures on the cohomology of groups*. Adem: *Lectures on the cohomology of finite groups.* Carlson: *The cohomology of groups* (from Handbook of Algebra, Vol.1, 1996). Rotman: *Homology of groups* (chapter 9 from one of his algebra books whose name I forget).
Chapter 2 of [these notes](https://www.jmilne.org/math/CourseNotes/cft.html) by Milne have been helpful to me.
5,817,179
Why doesn't the `Map` interface in Java have a `removeAll(Collection<?> c)` method to remove keys, like it has `map.remove(Object)`? I know I can always do `map.keySet().removeAll(..)` .. but is this a reason that `Map` doesn't have `removeAll()` and encourages us to go with `map.keySet().removeAll(..)` ?
2011/04/28
[ "https://Stackoverflow.com/questions/5817179", "https://Stackoverflow.com", "https://Stackoverflow.com/users/617612/" ]
The philosophy behind the collections APIs is to be as small and simple as possible. The Collection views on Map allow you to perform this operation already, so there is no need for an extra method. The [keySet](http://download.oracle.com/javase/6/docs/api/java/util/Map.html#keySet%28%29) method returns a *view* of the Map. Operations on the key set are reflected on the map. The more general question on interface design: *Why doesn't interface X have convenient method Y?* is addressed in more depth by Martin Fowler's discussion of [MinimalInterface](http://martinfowler.com/bliki/MinimalInterface.html) vs [HumaneInterface](http://martinfowler.com/bliki/HumaneInterface.html).
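For readers less familiar with Java, the bulk-removal idea itself is tiny; here is an illustrative Python analog (illustrative only, since Java's `keySet` view removes entries through the view, which plain Python dicts don't offer):

```python
def remove_all_keys(mapping, keys):
    """Remove every key in `keys` from `mapping`, ignoring absent keys."""
    for key in keys:
        mapping.pop(key, None)  # pop with default: no error if key is missing
    return mapping

m = {"a": 1, "b": 2, "c": 3}
remove_all_keys(m, ["a", "c", "zzz"])  # "zzz" is simply ignored
```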
Because `Map` is not a `Collection` and does not extend the `Collection` interface. Map implementations USE the collection interface to provide their own functionality. Think about a situation like this: * you have a Map with a removeAll(..) method. * you call this method and the map removes... * so what should it remove? Keys, values, or pairs (entries, key:value)? A Map could provide methods: * removeAllKeys() - parameter is a collection of keys * removeAllValues() - parameter is a collection of values * removeAllEntries() - parameter is a collection of pairs; an entry is removed only if that value is mapped by that key. If the value exists in the map under a different key, or vice versa, then that entry isn't removed. But in this case you have three methods, not one. So putting a removeAll method in the Map interface would leave it unclear which types of objects should be checked and removed - keys, values, both, or pairs.
65,577,358
Currently, I can only use `jpeg` & `png` exports like: ```js stageRef.current?.getStage().toDataURL({ mimeType: 'image/jpeg', quality: 1 }) stageRef.current?.getStage().toDataURL({ mimeType: 'image/png', quality: 1 }) ``` I want to export Canvas to `svg` as well as `pdf` like [Figma](https://figma.com) does it. I found out about [Data URIs](https://developer.mozilla.org/en-US/docs/Web/HTTP/Basics_of_HTTP/Data_URIs) which led to [MIME\_types](https://developer.mozilla.org/en-US/docs/Web/HTTP/Basics_of_HTTP/MIME_types). In there, they have written `application/pdf` & `image/svg+xml` should work but when I do that I still get a `.png` image. Is there any way to achieve `.svg` & `.pdf` from Canvas in Konva?
2021/01/05
[ "https://Stackoverflow.com/questions/65577358", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6141587/" ]
`stage.toDataURL()` is using [canvas.toDataURL() API](https://developer.mozilla.org/en-US/docs/Web/API/HTMLCanvasElement/toDataURL) to do the export. Most of the browsers support only `jpg` and `png` formats. For SVG or PDF exports you have to write your own implementation. For PDF exports you may use external libraries to generate a PDF file with an image from `stage.toDataURL()` method. As a demo take a look into [Saving Konva stage to PDF demo](https://konvajs.org/docs/sandbox/Canvas_to_PDF.html). There are no built-in methods for SVG exports in Konva library. You have to write your own implementation. If you use basic shapes such as `Rect`, `Circle` and `Text` without any fancy filters, writing such conversions shouldn't be hard, because there are similar tags in SVG spec.
`toDataUrl()` exports a bitmap, rather than a vector. You can generate an SVG by using the canvas2svg package: set your Layer's context to a c2s instance, render it, and then reset the context to what it was previously, as shown [here](https://stackoverflow.com/questions/70841507/react-konva-custom-context-canvas-for-use-with-canvas2svg).
31,057,012
I have a camera app where I am trying to limit the capture length to **exactly** 15 seconds. I have tried two different approaches, and neither of them are working to my satisfaction. The first approach is to fire a repeating timer every second: ``` self.timer = [NSTimer scheduledTimerWithTimeInterval:1 target:self selector:@selector(countTime:) userInfo:[NSDate date] repeats:YES]; - (void)countTime:(NSTimer*)sender { NSDate *start = sender.userInfo; NSTimeInterval duration = [[NSDate date] timeIntervalSinceDate:start]; NSInteger time = round(duration); if (time > 15) { [self capture:nil]; // this stops capture } } ``` this gives me a 15 second video 8/10 times, with a periodic 16 second one... and I have tried a mixture of the NSTimeInterval double and the rounded integer here, with no apparent difference... The second approach is to fire a selector once after the desired duration, like so: ``` self.timer = [NSTimer scheduledTimerWithTimeInterval:15.0f target:self selector:@selector(capture:) userInfo:nil repeats:NO]; ``` this just calls the capture method - which stops camera capture - directly, and gives me the same results... Is there something that I am overlooking here? Now, because I have tested with a number of tweaked floating point values as the cap (*14.5, 15.0, 15.1, 15.5, 16.0 etc*) and I almost always see a 16 second video after a few tries, I am starting to wonder whether it's just the AVFoundation taking a second to flush the buffer... ???
2015/06/25
[ "https://Stackoverflow.com/questions/31057012", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1226095/" ]
NSTimer is not guaranteed to fire when you want it to, just after you want it to fire: From Apple's docs: > > A timer is not a real-time mechanism; it fires only when one of the run loop modes to which the timer has been added is running and able to check if the timer’s firing time has passed. Because of the various input sources a typical run loop manages, the effective resolution of the time interval for a timer is limited to on the order of 50-100 milliseconds. If a timer’s firing time occurs during a long callout or while the run loop is in a mode that is not monitoring the timer, the timer does not fire until the next time the run loop checks the timer. Therefore, the actual time at which the timer fires potentially can be a significant period of time after the scheduled firing time. See also Timer Tolerance. > > > But to answer your question, I used to work for a company that had a max 15 seconds video. I didn't write the video code but I think we used AVComposition after the fact to ensure that the video was no more than 15 seconds. And even then it could be a frame shorter sometimes. See [How do I use AVFoundation to crop a video](https://stackoverflow.com/questions/5198245/how-do-i-use-avfoundation-to-crop-a-video)
Thanks to Paul and Linuxious for their comments and answers... and Rory for thinking outside the box (intriguing option). And yes, in the end it is clear that NSTimer isn't sufficient by itself for this. In the end, I listen for the captureOutput delegate method to fire, test for the length of the asset, and trim the composition appropriately. ``` - (void)captureOutput:(AVCaptureFileOutput *)captureOutput didFinishRecordingToOutputFileAtURL:(NSURL *)outputFileURL fromConnections:(NSArray *)connections error:(NSError *)error { _isRecording = NO; AVURLAsset *videoAsset = [AVURLAsset assetWithURL:outputFileURL]; CMTime length = [videoAsset duration]; CMTimeShow(length); if(CMTimeGetSeconds(length) > 15) { NSLog(@"Capture Longer Than 15 Seconds - Attempting to Trim"); Float64 preferredDuration = 15; int32_t preferredTimeScale = 30; CMTimeRange timeRange = CMTimeRangeMake(kCMTimeZero, CMTimeMakeWithSeconds(preferredDuration, preferredTimeScale)); AVAssetExportSession *exportSession = [[AVAssetExportSession alloc] initWithAsset:videoAsset presetName:AVAssetExportPresetHighestQuality]; exportSession.outputURL = outputFileURL; exportSession.outputFileType = AVFileTypeQuickTimeMovie; exportSession.timeRange = timeRange; NSError *err = nil; [[NSFileManager defaultManager] removeItemAtURL:outputFileURL error:&err]; if (err) { NSLog(@"Error deleting File: %@", [err localizedDescription]); } else { [exportSession exportAsynchronouslyWithCompletionHandler:^{ if (exportSession.status == AVAssetExportSessionStatusCompleted) { NSLog(@"Export Completed - Passing URL to Delegate"); if ([self.delegate respondsToSelector:@selector(didFinishRecordingToOutputFileAtURL:error:)]) { [self.delegate didFinishRecordingToOutputFileAtURL:outputFileURL error:error]; } } else if(exportSession.status == AVAssetExportSessionStatusFailed) { NSLog(@"Export Error: %@", [exportSession.error localizedDescription]); if ([self.delegate 
respondsToSelector:@selector(didFinishRecordingToOutputFileAtURL:error:)]) { [self.delegate didFinishRecordingToOutputFileAtURL:outputFileURL error:exportSession.error ]; } } }]; } } } ```
56,657,176
I've spent a better part of a day trying to figure out the correct JSON body and arguments to update an OU; I'm actually trying to rename an OU. I'm close but in any case, the solution has so far escaped me. I've referenced these docs so far: * <https://developers.google.com/admin-sdk/directory/v1/reference/orgunits/update#http-request> * <https://developers.google.com/resources/api-libraries/documentation/admin/directory_v1/python/latest/admin_directory_v1.orgunits.html#update> I have tried a few variations on arguments in the object with parameters passed to AdminDirectory.Orgunits.update. Ultimately, there are no examples so I'm not 100% sure what the correct parameters are. Here's my test function thus far as well: ``` function test_renameOU(){ /* Args: customerId: string, Immutable ID of the G Suite account (required) orgUnitPath: string, Full path of the organizational unit or its ID (required) (repeated) body: object, The request body. (required) The object takes the form of: { # JSON template for Org Unit resource in Directory API. "kind": "admin#directory#orgUnit", # Kind of resource this is. "parentOrgUnitPath": "A String", # Path of parent OrgUnit "name": "A String", # Name of OrgUnit "etag": "A String", # ETag of the resource. 
"orgUnitPath": "A String", # Path of OrgUnit "parentOrgUnitId": "A String", # Id of parent OrgUnit "blockInheritance": True or False, # Should block inheritance "orgUnitId": "A String", # Id of OrgUnit "description": "A String", # Description of OrgUnit } */ /* Function to perform rename */ function renameOU(customerId, orgUnitPath, body){ Logger.log(customerId + ", " + orgUnitPath + ", " + JSON.stringify(body)) try{ var org = AdminDirectory.Orgunits.update(customerId, orgUnitPath, body) }catch(e){ Logger.log(JSON.stringify(e)); } } /* Arguments */ var customerId = 'my_customer'; var oldOUname = "Education"; var parentOrgUnitPath = "/Users/200 COGS"; var orgUnitId = "id:03ph8a2z39wdr3v"; var orgUnitPath = parentOrgUnitPath + "/" + oldOUname; var parentOrgUnitId = "id:03ph8a2z1lakohp"; var newOUname = "255 Education"; Logger.log(orgUnitPath); var body = { //# JSON template for Org Unit resource in Directory API. "kind": "admin#directory#orgUnit", //# Kind of resource this is. "parentOrgUnitPath": parentOrgUnitPath, //# Path of parent OrgUnit "name": newOUname, //# Name of OrgUnit "orgUnitPath": parentOrgUnitPath + "/" + newOUname, //# Path of OrgUnit "parentOrgUnitId": parentOrgUnitId, //# Id of parent OrgUnit "blockInheritance": false, //# Should block inheritance "orgUnitId": orgUnitId, //# Id of OrgUnit } /* Call Rename Function */ Logger.log(customerId + ", " + orgUnitId + ", " + JSON.stringify(body)) renameOU(customerId, orgUnitId, body) } ``` I expect the result of the OU to change from "/Users/200 COGS/Education" to "/Users/200 COGS/255 Education". 
The output is a parse error though: ``` [19-06-17 17:39:39:165 PDT] /Users/200 COGS/Education [19-06-17 17:39:39:166 PDT] my_customer, id:03ph8a2z39wdr3v, {"kind":"admin#directory#orgUnit","parentOrgUnitPath":"/Users/200 COGS","name":"255 Education","orgUnitPath":"/Users/200 COGS/255 Education","parentOrgUnitId":"id:03ph8a2z1lakohp","blockInheritance":false,"orgUnitId":"id:03ph8a2z39wdr3v"} [19-06-17 17:39:39:166 PDT] my_customer, id:03ph8a2z39wdr3v, {"kind":"admin#directory#orgUnit","parentOrgUnitPath":"/Users/200 COGS","name":"255 Education","orgUnitPath":"/Users/200 COGS/255 Education","parentOrgUnitId":"id:03ph8a2z1lakohp","blockInheritance":false,"orgUnitId":"id:03ph8a2z39wdr3v"} [19-06-17 17:39:39:198 PDT] {"message":"API call to directory.orgunits.update failed with error: Parse Error","name":"GoogleJsonResponseException","fileName":"GSuiteOrgUnits","lineNumber":573,"stack":"\tat GSuiteOrgUnits:573 (renameOU)\n\tat GSuiteOrgUnits:600 (test_renameOU)\n","details":{"message":"Parse Error","code":400,"errors":[{"domain":"global","reason":"parseError","message":"Parse Error"}]}} ```
2019/06/18
[ "https://Stackoverflow.com/questions/56657176", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8508242/" ]
If you use the `patch` endpoint you only need to pass in the fields you want to change: ``` AdminDirectory.Orgunits.patch({ name: 'New Name' }, customerId, ouIdOrPath); ```
It worked for me. Do not pass the org unit path in the body, as it is automatically adjusted by the API based on the "name" field ``` var orgUnit = new OrgUnit() { Name = "new name", Description = "new description" }; var request = service.Orgunits.Patch(orgUnit, customerId, orgUnitPath); request.Execute(); ```
117,093
Suppose we have a convex lens for a middle school physics experiment, a screen, and an object being imaged, under what state will we find the hyperfocal state? The screen is the imaged screen. It can be a movie screen or a white piece of paper. The lens can be imaged on the screen. In hyperfocal distance, what is the distance between the object, lens, and screen? Is it calculated using the following formula? H=f^2/Nc+f Are clear images from half of H to infinity?
2020/05/20
[ "https://photo.stackexchange.com/questions/117093", "https://photo.stackexchange.com", "https://photo.stackexchange.com/users/91249/" ]
"Hyperfocal" refers to the condition where depth of field allows the lens to be "in focus" from some minimum distance to infinity. This depends on a core assumption: the size of the acceptable "circle of confusion," which is determined by the actual aperture diameter and lens focal length, but also by the amount of enlargement the image will receive before final viewing. That is, a 250 mm lens at f/11 will have greater depth of field on an 8x10 negative than it will on an APS-C or Micro 4/3 digital sensor, because the 8x10 is likely to be viewed as a contact print, while the crop sensor image will be enlarged at least to screen viewing size (around 20:1, give or take). Once this acceptable circle of confusion is determined, it's a fairly simple calculation to determine how far in front of or behind the plane of critical focus objects can be and still produce images with this size or smaller circles of confusion. This takes the form of a table or graph that is condensed into a depth of field marking on a lens or focusing rack. Finally, one can then set a focus that, at the specified aperture setting, will produce "in focus" depth of field that just extends to infinity, and generally to half the set focus distance (many old box cameras were actually focused at about 10 feet, but had apertures that made them hyperfocal, so they'd say "five feet to infinity" or "place your subject at least five feet from the camera"). It's important to remember that "depth of field" isn't a physical condition or quality -- it's a measure of how much defocus the user is willing to accept. 
If you scan an 8x10 negative that looks razor sharp all over to the eye at high enough resolution and examine it at 1:1 on a good monitor, you'll find that even at f/32 there's still a plane of critical focus, and everything not in that plane is at least a little bit fuzzy -- but if the fuzziness isn't visible in normal viewing, we consider that "within depth of field," and if that depth of field extends just to infinity, the setup was hyperfocal.
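The formula quoted in the question can be checked numerically. A minimal sketch (assuming f and c in millimetres and N the f-number; the sample values are illustrative, not from the question):

```python
def hyperfocal_mm(f_mm, n, c_mm):
    """H = f^2 / (N * c) + f, the relation quoted in the question."""
    return f_mm ** 2 / (n * c_mm) + f_mm

def near_limit_mm(f_mm, n, c_mm):
    # When focused at H, depth of field runs from roughly H/2 to infinity.
    return hyperfocal_mm(f_mm, n, c_mm) / 2

# Example: 50 mm lens at f/8 with c = 0.03 mm (a common full-frame assumption)
h = hyperfocal_mm(50, 8, 0.03)   # about 10.47 metres
```

As the answer stresses, the result is only as meaningful as the chosen circle of confusion c, which depends on how much the image will be enlarged for viewing.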
Your question seems to be based upon an assumption that a human viewer can see the difference between "blurry" and "in focus" at the system limits of a lens system. This is usually far from the case without magnifying the results by a large factor. > > Can we use the imaging of the convex lens on the screen to explain hyperfocal distance? > > > Not very well. Why? **Because the image on your viewing screen will not be very large. The human eyes observing it will not be able to discriminate between slight blur and the most sharply focused parts of the image.** Thus pretty much *everything* will look in focus at that very small display size, even parts of the image that, according to your depth of field and hyperfocal distance formulae, should be outside the depth of field. Keep in mind that hyperfocal distance is a corollary of depth of field, which is an [illusion](https://photo.stackexchange.com/a/106134/15871) in the context of humans viewing an image. In the context of optical physics, depth of field and the corollary hyperfocal distance are determined by the limit of the system to discriminate blur that is less than a certain size. But the limits of such optical systems are usually far finer than the ability of human eyes to see the difference unless we highly magnify the image so that our eyes can perceive the system limits of the lenses and imaging medium (film, digital sensor, etc.). When we talk about a human observer viewing an image, the system limit is almost always the viewer's vision. The results of photography are meant to be observed by human eyes. The ability of the viewer's eyes to discriminate fine details is usually the weakest link in the full system that produces a perception in the mind of the viewer of a subject imaged by a lens. Photographs are thus judged in terms of depth of field and hyperfocal distance by the limits of the viewer's ability to discriminate fine details. 
If you take the exact same captured image and the same viewer looks at it at different display sizes from the same distance, the depth of field and hyperfocal distance will be different for each display size! This is because as the image is enlarged by an increasing factor, blur that was too small to be seen as blur by the viewer is eventually large enough that the viewer can tell it is blurry. For your construction to work as a way to demonstrate, based on human perception, depth of field and hyperfocal distance you need to be able to magnify the resulting image sufficiently that human observers can perceive blur at the system limits of the lens.
170,636
Is it possible to make a report which would show the assignees of a bunch of Permission Sets ? I'm guessing the only way to do it would be to build a report based upon the PermissionSetAssignment object, but I don't think that's possible. So is my only option Workbench?
2017/04/19
[ "https://salesforce.stackexchange.com/questions/170636", "https://salesforce.stackexchange.com", "https://salesforce.stackexchange.com/users/2550/" ]
No, it's obviously not your *only* option. It might be the simplest/most feasible, though. That said, it wouldn't be incredibly complicated to run a script to email a CSV to yourself. Here's a simple POC: ``` List<String> rows = new List<String> { 'Username,PermissionSet.Name' }; for (PermissionSetAssignment assignment : [ SELECT Assignee.Name, PermissionSet.Name FROM PermissionSetAssignment ]) rows.add(assignment.Assignee.Name + ',' + assignment.PermissionSet.Name); List<Messaging.EmailFileAttachment> attachments = new List<Messaging.EmailFileAttachment> { new Messaging.EmailFileAttachment() }; attachments[0].setBody(Blob.valueOf(String.join(rows, '\n'))); attachments[0].setFileName('Assignments.csv'); Messaging.SingleEmailMessage email = new Messaging.SingleEmailMessage(); email.setTargetObjectId(UserInfo.getUserId()); email.setSubject('Demo Permission CSV'); email.setPlainTextBody('See attached'); email.setFileAttachments(attachments); Messaging.sendEmail(new List<Messaging.Email> { email }); ``` Of course you can make life even easier and pull this data using Data Loader.
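The CSV assembly in the Apex POC is just a header row plus data rows joined with newlines; the same pattern in a Python sketch with made-up data (a real export should also escape commas and quotes, which the naive join does not):

```python
rows = ["Username,PermissionSet.Name"]            # header, as in the Apex POC
assignments = [("alice@example.com", "Sales_Admin"),
               ("bob@example.com", "Support_User")]  # hypothetical records
for username, perm_set in assignments:
    rows.append(username + "," + perm_set)        # one CSV row per assignment
csv_body = "\n".join(rows)
```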
If you are on Windows, you should take a look at the power query plugin for excel. You can dump a filtered set of permissionSetAssignment records to one table, the user object to another table, and do some simple vlookups to get the info you are after. One click to refresh the data.
170,636
Is it possible to make a report which would show the assignees of a bunch of Permission Sets ? I'm guessing the only way to do it would be to build a report based upon the PermissionSetAssignment object, but I don't think that's possible. So is my only option Workbench?
2017/04/19
[ "https://salesforce.stackexchange.com/questions/170636", "https://salesforce.stackexchange.com", "https://salesforce.stackexchange.com/users/2550/" ]
No, it's obviously not your *only* option. It might be the simplest/most feasible, though. That said, it wouldn't be incredibly complicated to run a script to email a CSV to yourself. Here's a simple POC: ``` List<String> rows = new List<String> { 'Username,PermissionSet.Name' }; for (PermissionSetAssignment assignment : [ SELECT Assignee.Name, PermissionSet.Name FROM PermissionSetAssignment ]) rows.add(assignment.Assignee.Name + ',' + assignment.PermissionSet.Name); List<Messaging.EmailFileAttachment> attachments = new List<Messaging.EmailFileAttachment> { new Messaging.EmailFileAttachment() }; attachments[0].setBody(Blob.valueOf(String.join(rows, '\n'))); attachments[0].setFileName('Assignments.csv'); Messaging.SingleEmailMessage email = new Messaging.SingleEmailMessage(); email.setTargetObjectId(UserInfo.getUserId()); email.setSubject('Demo Permission CSV'); email.setPlainTextBody('See attached'); email.setFileAttachments(attachments); Messaging.sendEmail(new List<Messaging.Email> { email }); ``` Of course you can make life even easier and pull this data using Data Loader.
When I run this in Developer Console "Execute Anonymous", I get an error, which is fixed if I just add 1 more line before the last sendEmail() line: The last 3 lines need to be: ``` email.setFileAttachments(attachments); email.saveAsActivity=false; //<----- this line is missing in the code above Messaging.sendEmail(new List<Messaging.Email> { email }); ``` Once I added this, it worked great!
170,636
Is it possible to make a report which would show the assignees of a bunch of Permission Sets ? I'm guessing the only way to do it would be to build a report based upon the PermissionSetAssignment object, but I don't think that's possible. So is my only option Workbench?
2017/04/19
[ "https://salesforce.stackexchange.com/questions/170636", "https://salesforce.stackexchange.com", "https://salesforce.stackexchange.com/users/2550/" ]
If you are on Windows, you should take a look at the power query plugin for excel. You can dump a filtered set of permissionSetAssignment records to one table, the user object to another table, and do some simple vlookups to get the info you are after. One click to refresh the data.
When I run this in Developer Console "Execute Anonymous", I get an error, which is fixed if I just add 1 more line before the last sendEmail() line: The last 3 lines need to be: ``` email.setFileAttachments(attachments); email.saveAsActivity=false; //<----- this line is missing in the code above Messaging.sendEmail(new List<Messaging.Email> { email }); ``` Once I added this, it worked great!
582,616
I am working on a surge protection circuit. My schematic is shown below. In this design, I used a 3-pole GDT with a 350V DC spark-over voltage ([this one](https://www.digikey.com/en/products/detail/bourns-inc/2026-35-C2LF/1220301?s=N4IgTCBcDa4AxgGwFoDMBWZBhMAZAYiALoC%2BQA)) at the beginning, followed by some varistors (V1, V2, V3, V4). Every time I plug my circuit into a 220V/50Hz AC source my fuses immediately blow. I used an oscilloscope to catch the pulse, and I saw a 400V spike appear at the moment I plug in the AC source. That spike tripped the gas discharge tube and so blew my fuse. When I removed the GDT, everything worked fine. My fuses no longer blow and I could not catch any spike larger than 330V. Does the problem come from the GDT? [![enter image description here](https://i.stack.imgur.com/PfMcm.jpg)](https://i.stack.imgur.com/PfMcm.jpg)
2021/08/22
[ "https://electronics.stackexchange.com/questions/582616", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/242736/" ]
For comparison you can [see](https://www.littelfuse.com/%7E/media/electronics/datasheets/gas_discharge_tubes/littelfuse_gdt_ac_cg3_datasheet.pdf.pdf) that a GDT for 240VAC mains has a rated DC breakdown voltage of 600VDC, compared to yours at 350VDC. The same also applies to all of your varistors; they should have a breakdown voltage somewhere around 450VDC. The MOV has a slower response than the GDT, so the GDT strikes first. But after the spike has passed, the GDT will keep conducting the mains until the mains drops to near zero volts, at each zero cross of the 50/60Hz waveform. Therefore it may not withstand such an energy dump if mounted alone; the MOV, although slower, then starts conducting and unloads the GDT. The breakdown voltages have to be: MOV lower, GDT higher.
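The arithmetic makes the margin problem concrete; a quick sketch (the 350V and 600V figures are from this answer, the 220V mains from the question):

```python
import math

V_RMS = 220.0                 # nominal mains voltage in the question
PEAK = V_RMS * math.sqrt(2)   # ~311 V: peak of an undistorted 220 VAC sine

GDT_USED = 350                # DC spark-over of the GDT in the question
GDT_RECOMMENDED = 600         # typical rating for 240 VAC mains (datasheet above)

headroom_used = GDT_USED - PEAK            # under 40 V of headroom:
                                           # a 400 V switch-on transient fires it
headroom_recommended = GDT_RECOMMENDED - PEAK  # ~289 V: transients pass harmlessly
```

With under 40 V between the normal mains peak and the 350 VDC spark-over, the switch-on spike the oscilloscope caught is more than enough to trigger the tube.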
At 400V input a 350V GDT would turn on and short out the fuses. If I remember right the GDT should be at a much higher voltage (check the regulatory requirements for your product like IEC61010)
26,113
A couple days ago I booted up Left 4 Dead, but then heard a pop and my computer turned off. It smelled like dead electronics so I unplugged the computer. I figured it was a power supply failure, so I ordered a new Corsair HX520 and tried to install it today, hoping that was the only thing that needed replacing. However my computer most emphatically did not work. Case fans turned on, a few motherboard lights and optical drive power lights turned on, but neither my GPU fan or my CPU fan turned on. Also one of my hard drives sprayed a few sparks from its underside and a bit of smoke came from it. I figure that one is dead. There was also no signal to the monitor or beep codes. I quickly turned the computer off to avoid any potential CPU overheating and tried with only 1 stick of RAM and no hard drives and got the same result. What does this mean? Does the GPU fan not turning on mean the video card is shot? What does the CPU fan not turning on mean? Is the motherboard dead as well, or the CPU, or both? Should I just spring for a new computer at this point? (edit) Also, can a power supply failure hurt your RAM?
2009/08/20
[ "https://superuser.com/questions/26113", "https://superuser.com", "https://superuser.com/users/5206/" ]
The fans dying does not necessarily mean that the CPU and GPU are dead. It looks like either the CPU or Motherboard is fried, but unfortunately unless you can see visible damage (burnt wires, blown capacitors, etc.) you are pretty much going to have to test each component individually. Have a buddy with a similar system?
Sounds like your motherboard went. Are you sure the pop was from the power supply and not a cap on the motherboard?
26,113
A couple days ago I booted up Left 4 Dead, but then heard a pop and my computer turned off. It smelled like dead electronics so I unplugged the computer. I figured it was a power supply failure, so I ordered a new Corsair HX520 and tried to install it today, hoping that was the only thing that needed replacing. However my computer most emphatically did not work. Case fans turned on, a few motherboard lights and optical drive power lights turned on, but neither my GPU fan or my CPU fan turned on. Also one of my hard drives sprayed a few sparks from its underside and a bit of smoke came from it. I figure that one is dead. There was also no signal to the monitor or beep codes. I quickly turned the computer off to avoid any potential CPU overheating and tried with only 1 stick of RAM and no hard drives and got the same result. What does this mean? Does the GPU fan not turning on mean the video card is shot? What does the CPU fan not turning on mean? Is the motherboard dead as well, or the CPU, or both? Should I just spring for a new computer at this point? (edit) Also, can a power supply failure hurt your RAM?
2009/08/20
[ "https://superuser.com/questions/26113", "https://superuser.com", "https://superuser.com/users/5206/" ]
If you have another computer or a friend's, you could test the Video card on it. But looks like your motherboard is dead though.
Sounds like your motherboard went. Are you sure the pop was from the power supply and not a cap on the motherboard?
26,113
A couple days ago I booted up Left 4 Dead, but then heard a pop and my computer turned off. It smelled like dead electronics so I unplugged the computer. I figured it was a power supply failure, so I ordered a new Corsair HX520 and tried to install it today, hoping that was the only thing that needed replacing. However my computer most emphatically did not work. Case fans turned on, a few motherboard lights and optical drive power lights turned on, but neither my GPU fan or my CPU fan turned on. Also one of my hard drives sprayed a few sparks from its underside and a bit of smoke came from it. I figure that one is dead. There was also no signal to the monitor or beep codes. I quickly turned the computer off to avoid any potential CPU overheating and tried with only 1 stick of RAM and no hard drives and got the same result. What does this mean? Does the GPU fan not turning on mean the video card is shot? What does the CPU fan not turning on mean? Is the motherboard dead as well, or the CPU, or both? Should I just spring for a new computer at this point? (edit) Also, can a power supply failure hurt your RAM?
2009/08/20
[ "https://superuser.com/questions/26113", "https://superuser.com", "https://superuser.com/users/5206/" ]
Final outcome: PSU, system drive, motherboard, CPU and GPU were all dead. The memory survived, and a tech at Fry's repair department said that if you don't see burn marks on memory, it probably is fine and can withstand power surges. Also if you try a fried CPU in a perfectly good motherboard, *it can kill the motherboard*. Bottom line is if your motherboard is fried, you better replace the CPU as well. Just trying to diagnose it can burn more perfectly good equipment.
Sounds like your motherboard went. Are you sure the pop was from the power supply and not a cap on the motherboard?
26,113
A couple days ago I booted up Left 4 Dead, but then heard a pop and my computer turned off. It smelled like dead electronics so I unplugged the computer. I figured it was a power supply failure, so I ordered a new Corsair HX520 and tried to install it today, hoping that was the only thing that needed replacing. However my computer most emphatically did not work. Case fans turned on, a few motherboard lights and optical drive power lights turned on, but neither my GPU fan or my CPU fan turned on. Also one of my hard drives sprayed a few sparks from its underside and a bit of smoke came from it. I figure that one is dead. There was also no signal to the monitor or beep codes. I quickly turned the computer off to avoid any potential CPU overheating and tried with only 1 stick of RAM and no hard drives and got the same result. What does this mean? Does the GPU fan not turning on mean the video card is shot? What does the CPU fan not turning on mean? Is the motherboard dead as well, or the CPU, or both? Should I just spring for a new computer at this point? (edit) Also, can a power supply failure hurt your RAM?
2009/08/20
[ "https://superuser.com/questions/26113", "https://superuser.com", "https://superuser.com/users/5206/" ]
I had the same problem. Because of a power surge from the power supply, my GIGABYTE motherboard fried with no sign of burning or capacitor leakage, but fortunately my CPU and RAM are working well. I bought a new power supply and an ASUS motherboard, and they work efficiently with the surviving components.
Sounds like your motherboard went. Are you sure the pop was from the power supply and not a cap on the motherboard?
26,113
A couple days ago I booted up Left 4 Dead, but then heard a pop and my computer turned off. It smelled like dead electronics so I unplugged the computer. I figured it was a power supply failure, so I ordered a new Corsair HX520 and tried to install it today, hoping that was the only thing that needed replacing. However my computer most emphatically did not work. Case fans turned on, a few motherboard lights and optical drive power lights turned on, but neither my GPU fan or my CPU fan turned on. Also one of my hard drives sprayed a few sparks from its underside and a bit of smoke came from it. I figure that one is dead. There was also no signal to the monitor or beep codes. I quickly turned the computer off to avoid any potential CPU overheating and tried with only 1 stick of RAM and no hard drives and got the same result. What does this mean? Does the GPU fan not turning on mean the video card is shot? What does the CPU fan not turning on mean? Is the motherboard dead as well, or the CPU, or both? Should I just spring for a new computer at this point? (edit) Also, can a power supply failure hurt your RAM?
2009/08/20
[ "https://superuser.com/questions/26113", "https://superuser.com", "https://superuser.com/users/5206/" ]
Final outcome: PSU, system drive, motherboard, CPU and GPU were all dead. The memory survived, and a tech at Fry's repair department said that if you don't see burn marks on memory, it probably is fine and can withstand power surges. Also if you try a fried CPU in a perfectly good motherboard, *it can kill the motherboard*. Bottom line is if your motherboard is fried, you better replace the CPU as well. Just trying to diagnose it can burn more perfectly good equipment.
The fans dying does not necessarily mean that the CPU and GPU are dead. It looks like either the CPU or Motherboard is fried, but unfortunately unless you can see visible damage (burnt wires, blown capacitors, etc.) you are pretty much going to have to test each component individually. Have a buddy with a similar system?
26,113
A couple days ago I booted up Left 4 Dead, but then heard a pop and my computer turned off. It smelled like dead electronics so I unplugged the computer. I figured it was a power supply failure, so I ordered a new Corsair HX520 and tried to install it today, hoping that was the only thing that needed replacing. However my computer most emphatically did not work. Case fans turned on, a few motherboard lights and optical drive power lights turned on, but neither my GPU fan or my CPU fan turned on. Also one of my hard drives sprayed a few sparks from its underside and a bit of smoke came from it. I figure that one is dead. There was also no signal to the monitor or beep codes. I quickly turned the computer off to avoid any potential CPU overheating and tried with only 1 stick of RAM and no hard drives and got the same result. What does this mean? Does the GPU fan not turning on mean the video card is shot? What does the CPU fan not turning on mean? Is the motherboard dead as well, or the CPU, or both? Should I just spring for a new computer at this point? (edit) Also, can a power supply failure hurt your RAM?
2009/08/20
[ "https://superuser.com/questions/26113", "https://superuser.com", "https://superuser.com/users/5206/" ]
Final outcome: PSU, system drive, motherboard, CPU and GPU were all dead. The memory survived, and a tech at Fry's repair department said that if you don't see burn marks on memory, it probably is fine and can withstand power surges. Also if you try a fried CPU in a perfectly good motherboard, *it can kill the motherboard*. Bottom line is if your motherboard is fried, you better replace the CPU as well. Just trying to diagnose it can burn more perfectly good equipment.
If you have another computer or a friend's, you could test the video card on it. But it looks like your motherboard is dead.
26,113
A couple days ago I booted up Left 4 Dead, but then heard a pop and my computer turned off. It smelled like dead electronics so I unplugged the computer. I figured it was a power supply failure, so I ordered a new Corsair HX520 and tried to install it today, hoping that was the only thing that needed replacing. However my computer most emphatically did not work. Case fans turned on, a few motherboard lights and optical drive power lights turned on, but neither my GPU fan or my CPU fan turned on. Also one of my hard drives sprayed a few sparks from its underside and a bit of smoke came from it. I figure that one is dead. There was also no signal to the monitor or beep codes. I quickly turned the computer off to avoid any potential CPU overheating and tried with only 1 stick of RAM and no hard drives and got the same result. What does this mean? Does the GPU fan not turning on mean the video card is shot? What does the CPU fan not turning on mean? Is the motherboard dead as well, or the CPU, or both? Should I just spring for a new computer at this point? (edit) Also, can a power supply failure hurt your RAM?
2009/08/20
[ "https://superuser.com/questions/26113", "https://superuser.com", "https://superuser.com/users/5206/" ]
Final outcome: PSU, system drive, motherboard, CPU and GPU were all dead. The memory survived, and a tech at Fry's repair department said that if you don't see burn marks on memory, it probably is fine and can withstand power surges. Also if you try a fried CPU in a perfectly good motherboard, *it can kill the motherboard*. Bottom line is if your motherboard is fried, you better replace the CPU as well. Just trying to diagnose it can burn more perfectly good equipment.
I had the same problem. Because of a power surge from the power supply, my GIGABYTE motherboard fried with no sign of burning or capacitor leakage, but fortunately my CPU and RAM are working well. I bought a new power supply and an ASUS motherboard, and they work efficiently with the surviving components.
169,680
I am trying to recreate this device [**from**](http://www.google.com/patents/US6899667): ![device picture](https://i.stack.imgur.com/lbEg9.jpg) ![circuit picture](https://i.stack.imgur.com/sW7In.jpg) The housing 14 is shown with a control panel 16 for purposes of illustration. In one embodiment of this invention, the control panel 16 includes only an on/off switch 22 which would turn on the circuit 20 allowing the coil 18 to produce a magnetic field at a predetermined, fixed flux density and frequency. Alternatively, the control panel 16 is provided with a control knob 24 coupled to a potentiometer 26 included within the circuit 20, as described below in connection with a discussion of FIG. 2, to permit variation of the output frequency of the circuit 20, and, hence, the frequency of the magnetic field produced by the coil 18. The control knob 24 may be adjusted manually to selected frequency settings, represented by the radial lines 28 on the control panel 16, or, alternatively, to an "automatic" setting in which a programmable microprocessor 30 within the circuit 20 is activated to sequentially vary the output frequency of the circuit 20, as described below. Referring now to FIG. 2, the circuit 20 contained within the housing 14 is shown coupled to the power supply 12 which includes an IC voltage regulator U1 and a filter capacitor C1. The power supply 12 provides a voltage output of 5 volts to an astable multivibrator 32 consisting of NAND gates U2-A, U2-B, U2-C, resistors R1 and R2, capacitor C2 and the potentiometer 26. The operating frequency of the astable multivibrator 32 is determined by the values of R1, R2, potentiometer 26 and capacitor C2, which can be varied over a range of 0.5 Hz to 45 Hz (preferably not beyond 20.1 Hz) by operation of the potentiometer 26. As schematically depicted with a phantom line in FIG. 2, the circuit 20 may optionally include a microprocessor. 
As noted above, the microprocessor 20 is operative to sequentially vary the frequency output of the astable multivibrator 32. The selected frequencies over which the output is varied are discussed below in connection with the description of a particular treatment method in accordance with this invention. The signal from the astable multivibrator 32 is input to the NAND gate U2-D which is configured as an inverter. U2-D is connected through resistor R4 to an NPN bipolar junction transistor Q1 configured as an emitter follower to serve as a level shifter, e.g. to convert the signal from 5 volts to the output voltage used in one embodiment or another. Q2 is an NPN bipolar junction transistor which is coupled to Q1 through resistors R5 and R6. It functions to invert the signal from Q1, thus producing a pulsed or time varying DC output signal in the range of 0.5 to 45 Hz, preferably to no more than 20.1 Hz. When the output signal is coupled to the coil 18, a pulsed magnetic field is produced having a flux density in the range of 0.0001 to 90 gauss depending upon the embodiment of the device 10 and the size of the coil 18 which is coupled to the device at a frequency of 0.5 to 45 Hz, and preferably to no more than 20.1 Hz. Since a DC output signal is provided to the coil 18 by the circuit 20, the north and south poles of the resulting magnetic field do not vary in position relative to the coil 18. I have replaced the astable multivibrator with a function/arb waveform generator that outputs a 0-5 volt DC pulse at 50 milliamps. Specs of the generator: output amplitude=10vp-p - output impedance=50 ohms - DC bias =+-3v. As in Fig. 2, if I put the 5 volt pulse to the base of NPN bipolar junction transistor Q1, and supply the collector with 12 volts - 1 amp, can I expect the output transistor to bring the DC pulse to 12 volts - 1 amp? Is this how the amplifier works? I've searched the internet to answer this but get confused with all the math. 
Could someone recommend the proper NPN transistor to use for this and the values of resistors R4, R5, R6? I'm trying to get an output of a 12 V - 1 A DC pulse to the coil. Please forgive my ignorance. I'm not particularly smart and I'm just trying to build one of these pulsers because I cannot afford to buy one. I understand that the transistor part of the circuit is used to amplify the DC pulse; I'm just not sure how it is done.
2015/05/08
[ "https://electronics.stackexchange.com/questions/169680", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/76095/" ]
> > I've searched the internet to answer this but get confused with all the math. Could someone recommend the proper npn transistor chip to use for this and the values of resistors R4,R5,R6? Trying to get an output of 12v - 1amp dc pulse to the coil. > > > Q1 can be almost any small NPN bipolar transistor. You MAY be able to drive Q2 directly, but Q1 does no great harm. For Q1, use say a BC337, a 2N2222 or many similar parts. Q2 can be an N-Channel MOSFET, which can give some advantages. What country are you in? (It can affect availability.) **Example MOSFETs:** The generator will drive these directly (no Q1 etc) - use, say, a 10 to 47 Ohm drive resistor (but they will drive directly with no damage). Your generator's 50 mA drive is low for fast switching, BUT your frequency is so low that it does not matter. **Place a reverse-biased diode across the inductor -** say a 1N400X (X is any value). **Without this the transistor WILL be destroyed.** STP24NF10, 100V, 26A, **$US0.54/1** in stock at Digikey. Through-hole TO220 N-Channel MOSFET. MUCH harder to damage than the SMD part below. [Pricing](http://www.digikey.com/product-detail/en/STP24NF10/497-3185-5-ND/654527) [Datasheet](http://www.st.com/web/en/resource/technical/document/datasheet/CD00002071.pdf) SMD - example only. TO220 is much better all round. e.g. IRLML6344 - widely available SOT23 surface-mount package [IRLML6344 - Digikey pricing](http://www.digikey.com/product-detail/en/IRLML6344TRPBF/IRLML6344TRPBFCT-ND/2538168) [IRLML6344 datasheet](http://www.irf.com/product-info/datasheets/data/irlml6344pbf.pdf) --- The chances of this helping with what you want it to help with are, sadly, very, very, very low indeed. 
The patent is waffle, they have no real idea about the subject, they are not doing anything that others have not done before or since. Their electronic design shows they have no real competence and adding eg "a microprocessor" as they do is just desperation and/or an attempt to meaninglessly cover all bases. By all means try it, but don't raise your hopes too high. [Method and apparatus for the treatment of physical and mental disorders with low frequency, low flux density magnetic fields US 20050027158 A1](http://www.google.tl/patents/US20050027158?hl=pt-PT) [Method and apparatus for the treatment of physical and mental disorders with low frequency, low flux density magnetic fields US 6899667 B2](http://www.google.com/patents/US6899667)
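As a rough sanity check on the drive figures above, here is some back-of-envelope arithmetic (a sketch only; the gate-charge number is an assumed typical value for a small MOSFET, not taken from the linked datasheets):

```python
# Back-of-envelope checks for driving the 12 V / 1 A coil (assumed figures).
supply_v = 12.0          # coil supply voltage
coil_i = 1.0             # target coil current

# For the coil to settle at 1 A from 12 V, its DC resistance must be about:
coil_r = supply_v / coil_i            # 12 ohms

# Gate-drive speed: assume roughly 20 nC total gate charge (assumption)
# supplied by the generator's 50 mA output.
gate_charge = 20e-9                   # coulombs
drive_i = 50e-3                       # amps
switch_time = gate_charge / drive_i   # ~0.4 microseconds

# Worst-case period at 45 Hz:
period = 1 / 45.0                     # ~22 ms

# Switching occupies a negligible fraction of each cycle, which is why
# the 50 mA drive "does not matter" at these frequencies.
fraction = switch_time / period
print(coil_r, switch_time, fraction)
```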
> > as in fig. 2 if I put the 5 volt pulse to the base of NPN bipolar junction transistor Q1, and supply the collector with 12 volts - 1 amp, can I expect the output transistor to bring the dc pulse to 12 volts - 1 amp? Is this how the amplifier works? > > > Yes. Feed the 5V DC pulse directly to resistor R4.
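To put numbers on R4, here is a rough sizing calculation (a sketch under stated assumptions: the minimum beta of 50 is a typical worst-case figure for a power NPN; the patent does not name an actual part):

```python
# Rough sizing of R4 for Q1 switching ~1 A into the coil (assumed values).
beta_min = 50        # assumed worst-case current gain of a power NPN
ic = 1.0             # collector (coil) current, amps
overdrive = 2.5      # drive the base 2-3x harder than ic/beta to saturate

ib = overdrive * ic / beta_min     # required base current: 0.05 A (50 mA)
v_pulse = 5.0                      # generator pulse level
v_be = 0.7                         # typical base-emitter drop
r4 = (v_pulse - v_be) / ib         # ~86 ohms -> nearest standard value 82
print(ib, r4)
```

Note that the required 50 mA is right at the generator's stated drive limit, which is one reason a MOSFET (needing essentially no steady gate current) is the more comfortable choice for Q2.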
73,812,056
**Scenario 1** I have the following code I am trying to mock the function `setup_logger()` but if the logger is declared outside the function I am getting an error. The source file `my_source.py:` ``` from pda.utils.logging import setup_logger logger = setup_logger() def some_method(): some_code ``` The test file `test_my_source.py:` ``` from unittest.mock import patch import pytest from src.my_source import some_method def test_some_method(): with patch('src.my_source.setup_logger') as mock: some_method() ``` In the above scenario, I am getting the below strange errors: ``` _ tests/test_my_source.py:3: in <module> from src.my_source import some_method src/my_source.py:8: in <module> logger = setup_logger() /conda_envs/test_env/lib/python3.6/site-packages/pda/utils/logging.py:79: in setup_logger base_conf = get_config() /conda_envs/test_env/lib/python3.6/site-packages/pda/utils/generics.py:285: in get_config tmp_path = os.path.join(tmp_path, name) /conda_envs/test_env/lib/python3.6/posixpath.py:80: in join a = os.fspath(a) E TypeError: expected str, bytes or os.PathLike object, not NoneType =================================================================== short test summary info ==================================================================== ERROR tests/test_my_source.py - TypeError: expected str, bytes or os.PathLike object, not NoneType ``` **Scenario 2** If I change the source code and the test then it ***works***. The source file `my_source.py"` ``` from pda.utils.logging import setup_logger def some_method(): logger = setup_logger() some_code ``` The test file `test_my_source.py:` ``` from unittest.mock import patch import pytest from src.my_source import some_method def test_some_method(): with patch('src.my_source.setup_logger') as mock: some_method() ``` Changing the source code is not an option as in scenario 2 and I have to work with scenario 1. Do I have to modify the test in scenario 1 or what can be the issue?
2022/09/22
[ "https://Stackoverflow.com/questions/73812056", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2910567/" ]
As you have lists, you cannot vectorize the operation. A list comprehension might be the fastest: ``` from itertools import chain df['out'] = [list(chain.from_iterable(x[1:])) for x in df.itertuples()] ``` Example: ``` A B C out 0 [a, b, c] [1, 4, a] [x, y] [a, b, c, 1, 4, a, x, y] 1 [e, f, g, g] [5, a] [z] [e, f, g, g, 5, a, z] ```
As an alternative to @mozway 's answer, you could try something like this: ``` df = pd.DataFrame({'A': [['a', 'b', 'c'], ['e', 'f', 'g','g']], 'B' : [['1', '4', 'a'], ['5', 'a']]}) df['C'] = df.sum(axis=1).astype(str) ``` use 'astype' as required for list contents
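For reference, a runnable sketch of what this produces on the sample data (note that `astype(str)` stores each combined list's string representation, not a flat list):

```python
import pandas as pd

df = pd.DataFrame({'A': [['a', 'b', 'c'], ['e', 'f', 'g', 'g']],
                   'B': [['1', '4', 'a'], ['5', 'a']]})

# Summing object columns row-wise concatenates the lists...
combined = df.sum(axis=1)
print(combined[0])

# ...and astype(str) then keeps each list's string form:
df['C'] = combined.astype(str)
print(df['C'][0])
```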
73,812,056
**Scenario 1** I have the following code I am trying to mock the function `setup_logger()` but if the logger is declared outside the function I am getting an error. The source file `my_source.py:` ``` from pda.utils.logging import setup_logger logger = setup_logger() def some_method(): some_code ``` The test file `test_my_source.py:` ``` from unittest.mock import patch import pytest from src.my_source import some_method def test_some_method(): with patch('src.my_source.setup_logger') as mock: some_method() ``` In the above scenario, I am getting the below strange errors: ``` _ tests/test_my_source.py:3: in <module> from src.my_source import some_method src/my_source.py:8: in <module> logger = setup_logger() /conda_envs/test_env/lib/python3.6/site-packages/pda/utils/logging.py:79: in setup_logger base_conf = get_config() /conda_envs/test_env/lib/python3.6/site-packages/pda/utils/generics.py:285: in get_config tmp_path = os.path.join(tmp_path, name) /conda_envs/test_env/lib/python3.6/posixpath.py:80: in join a = os.fspath(a) E TypeError: expected str, bytes or os.PathLike object, not NoneType =================================================================== short test summary info ==================================================================== ERROR tests/test_my_source.py - TypeError: expected str, bytes or os.PathLike object, not NoneType ``` **Scenario 2** If I change the source code and the test then it ***works***. The source file `my_source.py"` ``` from pda.utils.logging import setup_logger def some_method(): logger = setup_logger() some_code ``` The test file `test_my_source.py:` ``` from unittest.mock import patch import pytest from src.my_source import some_method def test_some_method(): with patch('src.my_source.setup_logger') as mock: some_method() ``` Changing the source code is not an option as in scenario 2 and I have to work with scenario 1. Do I have to modify the test in scenario 1 or what can be the issue?
2022/09/22
[ "https://Stackoverflow.com/questions/73812056", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2910567/" ]
As you have lists, you cannot vectorize the operation. A list comprehension might be the fastest: ``` from itertools import chain df['out'] = [list(chain.from_iterable(x[1:])) for x in df.itertuples()] ``` Example: ``` A B C out 0 [a, b, c] [1, 4, a] [x, y] [a, b, c, 1, 4, a, x, y] 1 [e, f, g, g] [5, a] [z] [e, f, g, g, 5, a, z] ```
you can use the apply method ``` df['C']=df.apply(lambda x: [' '.join(i) for i in list(x[df.columns.to_list()])], axis=1) ```
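A runnable sketch of the above on sample data. Note that it produces a list of space-joined strings per row (one string per column), which differs from the single flattened list produced by the `chain`-based approach:

```python
import pandas as pd

df = pd.DataFrame({'A': [['a', 'b', 'c'], ['e', 'f', 'g', 'g']],
                   'B': [['1', '4', 'a'], ['5', 'a']]})

# Each row becomes a list with one space-joined string per column:
df['C'] = df.apply(lambda x: [' '.join(i) for i in list(x[df.columns.to_list()])],
                   axis=1)
print(df['C'][0])  # ['a b c', '1 4 a']
```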
73,812,056
**Scenario 1** I have the following code I am trying to mock the function `setup_logger()` but if the logger is declared outside the function I am getting an error. The source file `my_source.py:` ``` from pda.utils.logging import setup_logger logger = setup_logger() def some_method(): some_code ``` The test file `test_my_source.py:` ``` from unittest.mock import patch import pytest from src.my_source import some_method def test_some_method(): with patch('src.my_source.setup_logger') as mock: some_method() ``` In the above scenario, I am getting the below strange errors: ``` _ tests/test_my_source.py:3: in <module> from src.my_source import some_method src/my_source.py:8: in <module> logger = setup_logger() /conda_envs/test_env/lib/python3.6/site-packages/pda/utils/logging.py:79: in setup_logger base_conf = get_config() /conda_envs/test_env/lib/python3.6/site-packages/pda/utils/generics.py:285: in get_config tmp_path = os.path.join(tmp_path, name) /conda_envs/test_env/lib/python3.6/posixpath.py:80: in join a = os.fspath(a) E TypeError: expected str, bytes or os.PathLike object, not NoneType =================================================================== short test summary info ==================================================================== ERROR tests/test_my_source.py - TypeError: expected str, bytes or os.PathLike object, not NoneType ``` **Scenario 2** If I change the source code and the test then it ***works***. The source file `my_source.py"` ``` from pda.utils.logging import setup_logger def some_method(): logger = setup_logger() some_code ``` The test file `test_my_source.py:` ``` from unittest.mock import patch import pytest from src.my_source import some_method def test_some_method(): with patch('src.my_source.setup_logger') as mock: some_method() ``` Changing the source code is not an option as in scenario 2 and I have to work with scenario 1. Do I have to modify the test in scenario 1 or what can be the issue?
2022/09/22
[ "https://Stackoverflow.com/questions/73812056", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2910567/" ]
As an alternative to @mozway 's answer, you could try something like this: ``` df = pd.DataFrame({'A': [['a', 'b', 'c'], ['e', 'f', 'g','g']], 'B' : [['1', '4', 'a'], ['5', 'a']]}) df['C'] = df.sum(axis=1).astype(str) ``` use 'astype' as required for list contents
you can use the apply method ``` df['C']=df.apply(lambda x: [' '.join(i) for i in list(x[df.columns.to_list()])], axis=1) ```
2,978,684
Which configuration management tool is the best for FPGA designs, specifically Xilinx FPGA's programmed with VHDL and C for the embedded (microblaze) software?
2010/06/05
[ "https://Stackoverflow.com/questions/2978684", "https://Stackoverflow.com", "https://Stackoverflow.com/users/358966/" ]
There isn't a "best", but configuration control solutions that work for software will be OK for FPGAs - the flow is very similar. I use Subversion at work and git at home, and [wrote a little on 'why' at my blog](http://parallelpoints.com/node/70). In other answers, binary files keep getting mentioned - the only binary files I deal with are compilation products (equivalent to software object and executables), so I don't keep them in the version control repository, I keep a zipfile for each release/tag that I create with all the important (and irritatingly slow to reproduce) ones in.
I suggest CM tools that support version labeling and binary files. Most Software CM applications are fine with ASCII text files. They may just store a "difference" file rather than the entire file for updates. My recommendations: PVCS, ClearCase and Subversion. DO NOT USE Microsoft SourceSafe. I don't like it because it only supports one label per revision.
2,978,684
Which configuration management tool is the best for FPGA designs, specifically Xilinx FPGA's programmed with VHDL and C for the embedded (microblaze) software?
2010/06/05
[ "https://Stackoverflow.com/questions/2978684", "https://Stackoverflow.com", "https://Stackoverflow.com/users/358966/" ]
There isn't a "best", but configuration control solutions that work for software will be OK for FPGAs - the flow is very similar. I use Subversion at work and git at home, and [wrote a little on 'why' at my blog](http://parallelpoints.com/node/70). In other answers, binary files keep getting mentioned - the only binary files I deal with are compilation products (equivalent to software object and executables), so I don't keep them in the version control repository, I keep a zipfile for each release/tag that I create with all the important (and irritatingly slow to reproduce) ones in.
We use Perforce, and its great. You can have your code that lives in Linux-land checked in side-by-side with your Specs and Docs that live in Windows-land. And you get branching, labels, etc. I've seen everything from Clearcase to RCS used, and it is really all okay for this kind of thing. The important thing is to get a good set of check-in policies established for your group, and make sure they stick to it. And have automated nightly regressions. That way, when someone breaks the rules, they can be identified and publicly shamed.
2,978,684
Which configuration management tool is the best for FPGA designs, specifically Xilinx FPGA's programmed with VHDL and C for the embedded (microblaze) software?
2010/06/05
[ "https://Stackoverflow.com/questions/2978684", "https://Stackoverflow.com", "https://Stackoverflow.com/users/358966/" ]
I don't think it much matters what revision control tool you use -- anything that you would consider good in general will probably be OK here. I personally use Git for a sizable Verilog + software project, and I'm quite happy with it. What will bite you in the ass -- no matter what version control you use -- is this: The Xilinx tools don't generally respect a clean division between "input" and "output" or between (human edited) "source" and (opaque) "binary." Many of the tools like to store some state information, like a last-run time or a hash value, in their "input" files meaning that you'll get lots of false changes. Coregen does this to its .xco files, and project navigator (the main GUI) does this to its .xise files. Also, both tools have a habit of inserting or removing lines for default-valued parameters, seemingly at random. The biggest issue I've encountered is the work-flow with Coregen: In many cases, at least one of the following is true: 1. You have to manually edit the HDL files produced by Coregen. 2. The parameters that went into Coregen are stored somewhere other than the .xco file (usually in what looks like an *output* file). 3. You have to copy-and-paste the output from Coregen into your top-level design. This means that there is no single logical source/master location for your input to the core-generating process. So even if you have the .xco file under version control, there's no expectation that the design you're running corresponds to it. If you re-generate "the same" core from its nominal inputs, you probably won't get the right outputs. And don't even think about merging.
I've seen Perforce and Subversion used in a couple of FPGA-intensive companies.
2,978,684
Which configuration management tool is the best for FPGA designs, specifically Xilinx FPGA's programmed with VHDL and C for the embedded (microblaze) software?
2010/06/05
[ "https://Stackoverflow.com/questions/2978684", "https://Stackoverflow.com", "https://Stackoverflow.com/users/358966/" ]
I don't think it much matters what revision control tool you use -- anything that you would consider good in general will probably be OK here. I personally use Git for a sizable Verilog + software project, and I'm quite happy with it. What will bite you in the ass -- no matter what version control you use -- is this: The Xilinx tools don't generally respect a clean division between "input" and "output" or between (human edited) "source" and (opaque) "binary." Many of the tools like to store some state information, like a last-run time or a hash value, in their "input" files meaning that you'll get lots of false changes. Coregen does this to its .xco files, and project navigator (the main GUI) does this to its .xise files. Also, both tools have a habit of inserting or removing lines for default-valued parameters, seemingly at random. The biggest issue I've encountered is the work-flow with Coregen: In many cases, at least one of the following is true: 1. You have to manually edit the HDL files produced by Coregen. 2. The parameters that went into Coregen are stored somewhere other than the .xco file (usually in what looks like an *output* file). 3. You have to copy-and-paste the output from Coregen into your top-level design. This means that there is no single logical source/master location for your input to the core-generating process. So even if you have the .xco file under version control, there's no expectation that the design you're running corresponds to it. If you re-generate "the same" core from its nominal inputs, you probably won't get the right outputs. And don't even think about merging.
Previously I used Subversion, but I switched to git two years ago. Git handles FPGA design files just as well as it handles every other text and binary file. Git is all you need for version controlling your files and artifacts. For building the designs, I recommend just using a single ISE project called "ise" (living in a subdirectory called "ise/"). You can take a look at my (very modest) [FPGA open-source project on github](https://github.com/nfarring/cores) for the file layout. I don't bother storing the ISE files at all since they are easy to regenerate. The only things I save are the Verilog files and some ISIM waveform config files. In other projects that use coregen I save the coregen.cgp project file and all of the \*.xco scripts for regenerating cores. Then I use a Makefile for actually running coregen on the \*.xco files. There are a few other Xilinx-specific files you should version control too: \*.ucf, \*.coe, \*.xcf, etc. I experimented with using Makefiles and the Xilinx command-line tools but found that ISE did a much better job tracking dependencies and calling the tools with the right arguments. Just don't make the mistake of trying to version control your ise/ project files or you will go mad. Xilinx has something like 300 different file types which change every release. If you want to save a file, you can try the ISE project file itself with a .xise extension. Anything that is hard to recreate, like the golden bitfile that you know works and took 6 hours to build, you might want to copy that and configuration manage it explicitly.
2,978,684
Which configuration management tool is the best for FPGA designs, specifically Xilinx FPGA's programmed with VHDL and C for the embedded (microblaze) software?
2010/06/05
[ "https://Stackoverflow.com/questions/2978684", "https://Stackoverflow.com", "https://Stackoverflow.com/users/358966/" ]
I don't think it much matters what revision control tool you use -- anything that you would consider good in general will probably be OK here. I personally use Git for a sizable Verilog + software project, and I'm quite happy with it. What will bite you in the ass -- no matter what version control you use -- is this: The Xilinx tools don't generally respect a clean division between "input" and "output" or between (human edited) "source" and (opaque) "binary." Many of the tools like to store some state information, like a last-run time or a hash value, in their "input" files meaning that you'll get lots of false changes. Coregen does this to its .xco files, and project navigator (the main GUI) does this to its .xise files. Also, both tools have a habit of inserting or removing lines for default-valued parameters, seemingly at random. The biggest issue I've encountered is the work-flow with Coregen: In many cases, at least one of the following is true: 1. You have to manually edit the HDL files produced by Coregen. 2. The parameters that went into Coregen are stored somewhere other than the .xco file (usually in what looks like an *output* file). 3. You have to copy-and-paste the output from Coregen into your top-level design. This means that there is no single logical source/master location for your input to the core-generating process. So even if you have the .xco file under version control, there's no expectation that the design you're running corresponds to it. If you re-generate "the same" core from its nominal inputs, you probably won't get the right outputs. And don't even think about merging.
We use Perforce, and it's great. You can have your code that lives in Linux-land checked in side-by-side with your Specs and Docs that live in Windows-land. And you get branching, labels, etc. I've seen everything from ClearCase to RCS used, and it is really all okay for this kind of thing. The important thing is to get a good set of check-in policies established for your group, and make sure they stick to it. And have automated nightly regressions. That way, when someone breaks the rules, they can be identified and publicly shamed.
2,978,684
Which configuration management tool is the best for FPGA designs, specifically Xilinx FPGA's programmed with VHDL and C for the embedded (microblaze) software?
2010/06/05
[ "https://Stackoverflow.com/questions/2978684", "https://Stackoverflow.com", "https://Stackoverflow.com/users/358966/" ]
I don't think it much matters what revision control tool you use -- anything that you would consider good in general will probably be OK here. I personally use Git for a sizable Verilog + software project, and I'm quite happy with it. What will bite you in the ass -- no matter what version control you use -- is this: The Xilinx tools don't generally respect a clean division between "input" and "output" or between (human edited) "source" and (opaque) "binary." Many of the tools like to store some state information, like a last-run time or a hash value, in their "input" files meaning that you'll get lots of false changes. Coregen does this to its .xco files, and project navigator (the main GUI) does this to its .xise files. Also, both tools have a habit of inserting or removing lines for default-valued parameters, seemingly at random. The biggest issue I've encountered is the work-flow with Coregen: In many cases, at least one of the following is true: 1. You have to manually edit the HDL files produced by Coregen. 2. The parameters that went into Coregen are stored somewhere other than the .xco file (usually in what looks like an *output* file). 3. You have to copy-and-paste the output from Coregen into your top-level design. This means that there is no single logical source/master location for your input to the core-generating process. So even if you have the .xco file under version control, there's no expectation that the design you're running corresponds to it. If you re-generate "the same" core from its nominal inputs, you probably won't get the right outputs. And don't even think about merging.
I have personally used Perforce, Subversion, git and ClearCase for FPGA projects. Since VHDL and C are just text files, any of them works fine. However, be sure to capture the other project and constraint files and any libraries you use. Also think about what to do with the outputs, e.g. log files and bitstreams. Both tend to be big, and the bitstreams are binaries.
2,978,684
Which configuration management tool is the best for FPGA designs, specifically Xilinx FPGA's programmed with VHDL and C for the embedded (microblaze) software?
2010/06/05
[ "https://Stackoverflow.com/questions/2978684", "https://Stackoverflow.com", "https://Stackoverflow.com/users/358966/" ]
There isn't a "best", but configuration control solutions that work for software will be OK for FPGAs - the flow is very similar. I use Subversion at work and git at home, and [wrote a little on 'why' at my blog](http://parallelpoints.com/node/70). In other answers, binary files keep getting mentioned - the only binary files I deal with are compilation products (equivalent to software object and executables), so I don't keep them in the version control repository, I keep a zipfile for each release/tag that I create with all the important (and irritatingly slow to reproduce) ones in.
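The "zipfile for each release/tag" habit described above is easy to automate. A minimal sketch follows; the `archive_release` name and the file patterns (`*.bit`, `*.mcs`, `*.log`) are illustrative assumptions, so adjust them for your own tool flow.

```python
"""Sketch: bundle slow-to-reproduce build products into a per-tag zipfile."""
import glob
import zipfile


def archive_release(tag, patterns=("*.bit", "*.mcs", "*.log"), out_dir="."):
    """Zip matching build products into <out_dir>/release-<tag>.zip."""
    zip_path = f"{out_dir}/release-{tag}.zip"
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for pattern in patterns:
            for path in sorted(glob.glob(pattern)):
                zf.write(path)
    return zip_path

# e.g. after tagging v1.2 in version control:
# archive_release("v1.2")
```

This keeps the bulky binaries out of the repository itself while still tying each bitfile to the tagged source that produced it.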
I've seen Perforce and Subversion used in a couple of FPGA-intensive companies.
2,978,684
Which configuration management tool is the best for FPGA designs, specifically Xilinx FPGA's programmed with VHDL and C for the embedded (microblaze) software?
2010/06/05
[ "https://Stackoverflow.com/questions/2978684", "https://Stackoverflow.com", "https://Stackoverflow.com/users/358966/" ]
There isn't a "best", but configuration control solutions that work for software will be OK for FPGAs - the flow is very similar. I use Subversion at work and git at home, and [wrote a little on 'why' at my blog](http://parallelpoints.com/node/70). In other answers, binary files keep getting mentioned - the only binary files I deal with are compilation products (equivalent to software object and executables), so I don't keep them in the version control repository, I keep a zipfile for each release/tag that I create with all the important (and irritatingly slow to reproduce) ones in.
Previously I used Subversion but switched to git two years ago. Git handles FPGA design files just as well as it handles every other text and binary file. Git is all you need for version controlling your files and artifacts. For building the designs, I recommend just using a single ISE project called "ise" (living in a subdirectory called "ise/"). You can take a look at my (very modest) [FPGA open-source project on github](https://github.com/nfarring/cores) for the file layout. I don't bother storing the ISE files at all since they are easy to regenerate. The only things I save are the Verilog files and some ISIM waveform config files. In other projects that use coregen I save the coregen.cgp project file and all of the \*.xco scripts for regenerating cores. Then I use a Makefile for actually running coregen on the \*.xco files. There are a few other Xilinx-specific files you should version control too: \*.ucf, \*.coe, \*.xcf, etc. I experimented with using Makefiles and the Xilinx command-line tools but found that ISE did a much better job tracking dependencies and calling the tools with the right arguments. Just don't make the mistake of trying to version control your ise/ project files or you will go mad. Xilinx has something like 300 different file types which change every release. If you want to save a file, you can try the ISE project file itself with a .xise extension. Anything that is hard to recreate, like the golden bitfile that you know works and took 6 hours to build, you might want to copy that and configuration manage it explicitly.
2,978,684
Which configuration management tool is the best for FPGA designs, specifically Xilinx FPGA's programmed with VHDL and C for the embedded (microblaze) software?
2010/06/05
[ "https://Stackoverflow.com/questions/2978684", "https://Stackoverflow.com", "https://Stackoverflow.com/users/358966/" ]
There isn't a "best", but configuration control solutions that work for software will be OK for FPGAs - the flow is very similar. I use Subversion at work and git at home, and [wrote a little on 'why' at my blog](http://parallelpoints.com/node/70). In other answers, binary files keep getting mentioned - the only binary files I deal with are compilation products (equivalent to software object and executables), so I don't keep them in the version control repository, I keep a zipfile for each release/tag that I create with all the important (and irritatingly slow to reproduce) ones in.
I have personally used Perforce, Subversion, git and ClearCase for FPGA projects. Since VHDL and C are just text files, any of them works fine. However, be sure to capture the other project and constraint files and any libraries you use. Also think about what to do with the outputs, e.g. log files and bitstreams. Both tend to be big, and the bitstreams are binaries.
2,978,684
Which configuration management tool is the best for FPGA designs, specifically Xilinx FPGA's programmed with VHDL and C for the embedded (microblaze) software?
2010/06/05
[ "https://Stackoverflow.com/questions/2978684", "https://Stackoverflow.com", "https://Stackoverflow.com/users/358966/" ]
There isn't a "best", but configuration control solutions that work for software will be OK for FPGAs - the flow is very similar. I use Subversion at work and git at home, and [wrote a little on 'why' at my blog](http://parallelpoints.com/node/70). In other answers, binary files keep getting mentioned - the only binary files I deal with are compilation products (equivalent to software object and executables), so I don't keep them in the version control repository, I keep a zipfile for each release/tag that I create with all the important (and irritatingly slow to reproduce) ones in.
I don't think it much matters what revision control tool you use -- anything that you would consider good in general will probably be OK here. I personally use Git for a sizable Verilog + software project, and I'm quite happy with it. What will bite you in the ass -- no matter what version control you use -- is this: The Xilinx tools don't generally respect a clean division between "input" and "output" or between (human edited) "source" and (opaque) "binary." Many of the tools like to store some state information, like a last-run time or a hash value, in their "input" files meaning that you'll get lots of false changes. Coregen does this to its .xco files, and project navigator (the main GUI) does this to its .xise files. Also, both tools have a habit of inserting or removing lines for default-valued parameters, seemingly at random. The biggest issue I've encountered is the work-flow with Coregen: In many cases, at least one of the following is true: 1. You have to manually edit the HDL files produced by Coregen. 2. The parameters that went into Coregen are stored somewhere other than the .xco file (usually in what looks like an *output* file). 3. You have to copy-and-paste the output from Coregen into your top-level design. This means that there is no single logical source/master location for your input to the core-generating process. So even if you have the .xco file under version control, there's no expectation that the design you're running corresponds to it. If you re-generate "the same" core from its nominal inputs, you probably won't get the right outputs. And don't even think about merging.