45,866,145 |
We are implementing a number of SDKs for our suite of hardware sensors.
Having successfully got a working C API for one of our sensors, we are now starting the arduous task of testing the SDK to ensure that we haven't introduced any fatal bugs, memory leaks or race conditions.
One of our engineers has reported that when developing the test application (a Qt Widgets application), an issue has occurred when hooking onto a callback which is executed from a separate thread within the DLL.
Here is the callback prototype:
```
#define API_CALL __cdecl
typedef struct {
// msg fields...
DWORD dwSize;
// etc...
} MSG_CONTEXT, *PMSG_CONTEXT;
typedef void (API_CALL *SYS_MSG_CALLBACK)(const PMSG_CONTEXT, LPVOID);
#define API_FUNC __declspec(dllexport)
API_FUNC void SYS_RegisterCallback(SYS_MSG_CALLBACK pHandler, LPVOID pContext);
```
And it is attached in Qt as follows:
```
static void callbackHandler(const PMSG_CONTEXT msg, LPVOID context) {
MainWindow *wnd = (MainWindow *)context;
// *wnd is valid
// Call a function here
}
MainWindow::MainWindow(QWidget *parent) {
SYS_RegisterCallback(callbackHandler, this);
}
```
My question is this: is the callback executed on the thread which registered it, or on the thread inside the DLL which invokes it? In either case, I guess it needs some kind of synchronization. Googling has resulted in a plethora of C# examples, which isn't really what's needed.
One thing under consideration is using the `SendMessage` or `PostMessage` functions rather than going down the callback route.
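In essence, the `PostMessage` idea is a hand-off queue: the DLL's thread only enqueues, and the receiving thread dequeues at a safe point in its own loop. A minimal sketch, free of Qt and Win32 and with hypothetical names (not our actual SDK types), might look like this:

```cpp
#include <mutex>
#include <queue>
#include <string>
#include <vector>

// Hypothetical stand-in for MSG_CONTEXT: whatever payload the callback delivers.
struct Message {
    std::string text;
};

// A mutex-protected hand-off queue. The callback, running on the DLL's
// thread, only ever calls post(); the GUI thread calls drain() at a safe
// point in its own event loop, so widget code never runs off-thread.
class MessageQueue {
public:
    void post(Message m) {
        std::lock_guard<std::mutex> lock(mutex_);
        pending_.push(std::move(m));
    }

    std::vector<Message> drain() {
        std::lock_guard<std::mutex> lock(mutex_);
        std::vector<Message> out;
        while (!pending_.empty()) {
            out.push_back(std::move(pending_.front()));
            pending_.pop();
        }
        return out;
    }

private:
    std::mutex mutex_;
    std::queue<Message> pending_;
};
```

In Qt specifically, a signal connected with `Qt::QueuedConnection` (or a `QMetaObject::invokeMethod` call with that connection type) provides this queue for free, with the receiver's event loop doing the draining.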
Could anyone offer any suggestions as to how cross-thread safety could be achieved using callbacks? Or is the message-pump route the way to go for a Windows-based SDK?
|
2017/08/24
|
[
"https://Stackoverflow.com/questions/45866145",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2321263/"
] |
Also looking for the same. I have been browsing around with no results. I wrote them feedback, but I doubt they will do anything, as the page seems so empty...
Although I found that Amazon Alexa has all the needed documentation to access shopping lists and todo lists: <https://developer.amazon.com/docs/custom-skills/access-the-alexa-shopping-and-to-do-lists.html>
I think it's shameful; just this weekend I gave up on using Google TTS and started using (Amazon) AWS Polly for the simplicity of installation and use, and now it seems Google has no API for shopping lists while Alexa does... I guess it's time to sell my Google Home.
|
You can get the JSON from this endpoint, assuming you're authenticated. You'll have to pass the cookie and maybe a few other headers - not sure. But, it could get the job done...
Sign into your account and go to <https://shoppinglist.google.com>. From there, open up your networking tab in your dev console, check the requests. You'll see one called `lookup`:
`https://shoppinglist.google.com/u/0/_/list/textitem/lookup`
The query params are important for auth. I don't know how auth works here or whether you can easily hit it, but the JSON is there in the request. You'll just need to authenticate and pass the right query params.
|
You can export shopping lists in a CSV format using <https://takeout.google.com>. A download link can be emailed or a file can be dropped in Drive, Dropbox etc. This can be configured to export every two months for 1 year.
The data contains the name of the item, the quantity, if it is checked or not, and any additional notes.
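Each record in that export can be pulled apart with a plain delimiter split. The sketch below is hedged: the four columns (item, quantity, checked, notes) come from the description above, but the exact header and quoting rules of the Takeout file are assumptions, and this naive splitter ignores quoted commas:

```cpp
#include <sstream>
#include <string>
#include <vector>

// Split one CSV record on commas. Deliberately naive: it does not handle
// quoted fields containing commas, which a real Takeout export might use.
std::vector<std::string> splitCsvRecord(const std::string& line) {
    std::vector<std::string> fields;
    std::stringstream stream(line);
    std::string field;
    while (std::getline(stream, field, ',')) {
        fields.push_back(field);
    }
    return fields;
}
```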
|
223,027 |
I work in embedded systems. Right now, my organization has two full-time programmers and two occasional programmers. It's rare that the same project is worked on by two programmers. All code is stored on a network drive. There are folders for the current production code, another folder for the source for every released version throughout history, and a third folder for active work. We have an automated system (Mercurial being abused) that makes backups of every changed code file in the Working folder every fifteen minutes, so we can revert to previous states.
My question is this: is it worth the trouble to set up a formal versioning system in an environment like this? Why or why not?
|
2014/01/03
|
[
"https://softwareengineering.stackexchange.com/questions/223027",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/101501/"
] |
Version control was always needed, even before you hacked together your "but, we backup really often!" kludge.
Version control lets you publish, as a unit, changes across files that belong to one logical function. If you need to review "what was necessary for case-insensitive sorting in that mask?", it tells you all relevant changes and suppresses the irrelevant ones.
Good version control keeps track of file names, metadata, and of the provenance of every individual line of code.
Version control lets you tag all changes with the reason you made them.
Version control is not about allowing more than one person to work together. It is about guaranteeing the historical record of your codebase. Secure in the knowledge that you cannot lose anything, or even forget when you did it and how, you are free to refactor, invent and create without fear. And you don't know what fearlessness is before you've experienced it.
|
As other people have said, "now" is always a good time to start using version control. There are so many benefits to using a good version control system that it's almost a no-brainer.
You mention you use Mercurial. Like any other distributed VCS, it lets you initialize your own (private) repo and work there. Why not try that? If it starts working for you, it might work for your team. DVCS is all about building from the ground up.
|
As you describe it, you already have some sort of version control, though currently there are some issues with it compared to a typical version control system:
An intentional commit in version control indicates that the developer strongly believes that the current state of the system would build successfully.
(There are exceptions, as suggested by [Jacobm001's comment](https://softwareengineering.stackexchange.com/questions/223027/at-what-point-is-version-control-needed/223028?noredirect=1#comment443727_223028). Indeed, several approaches are possible, and some teams would prefer not trying to make every commit possible to build. One approach is to have nightly builds, given that during the day, the system may receive several commits which don't build.)
Since you don't have commits, your system will often be in a state which doesn't build. This **prevents you from setting up Continuous Integration**.
By the way, a distributed version control system has a benefit here: one can do local commits as often as needed while the system is in a state where it doesn't build, and then do a public commit once it builds again.
1. Version control lets you **enforce some rules on commit**. For example, for Python files, a PEP 8 checker can be run, rejecting the commit if the committed files are not compliant.
2. **Blame** is extremely hard to do with your approach.
3. Exploring what changes were made when, and by whom, is hard too. Version control **logs**, the list of changed files and **a `diff`** are an excellent way to find exactly what was done.
4. **Any merge would be a pain** (or maybe developers wouldn't even see that their colleagues were modifying the files before they saved their changes). You stated that:

> It's rare that the same project is worked on by two programmers

Rare doesn't mean never, so merges would occur sooner or later.
5. A backup every fifteen minutes means that **developers may lose up to fifteen minutes of work**. This is always problematic: it's hard to remember exactly what changes were done meanwhile.
6. With source control you can have meaningful commit messages. With backups, all you know is that it was x minutes since the last backup.
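The hook idea in item 1 is just a mechanical check that runs before a commit is accepted and vetoes it on failure. As a toy model (entirely hypothetical, standing in for a real pre-commit hook wired into the VCS), here is PEP 8's 79-column line-length rule as a plain predicate:

```cpp
#include <cstddef>
#include <sstream>
#include <string>

// Return true if every line in the source text fits within maxColumns.
// A real pre-commit hook would run checks like this over the staged files
// and exit non-zero to abort the commit when any of them fails.
bool linesWithinLimit(const std::string& source, std::size_t maxColumns) {
    std::istringstream stream(source);
    std::string line;
    while (std::getline(stream, line)) {
        if (line.size() > maxColumns) {
            return false;
        }
    }
    return true;
}
```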
A real version control system ensures that you can always revert to the previous commit; this is a huge advantage. Reverting a backup in your system would be slightly more difficult than the **one-click rollback** most version control systems offer. Also, in your system **branching** is impossible.
There's a better way to do version control, and you should certainly consider changing the way you currently do it, especially since, as [Eric Lippert mentions](https://softwareengineering.stackexchange.com/questions/223027/at-what-point-is-version-control-needed/223028?noredirect=1#comment443907_223027), your current system is probably a lot more painful to maintain than any common version control system. Having a Git or Mercurial repository on a network drive is pretty easy, for example.
***Note:*** Even if you switch to a common version control system, you should still have a daily/weekly backup of the repositories. If you're using a distributed system it's less important though, since then every developer's working copy is also a backup.
|
There is a great deal of value in using version control even as an individual developer and it could be quite a bit simpler than the backup/file copy based system you have now.
* Right now, you have the ability to get to older version of the code, but how do you find the version you want?
* Just the ability to do a diff between revisions will be very valuable. Integration with development tools is another benefit you aren't getting from your current tools.
* Another substantial benefit is the ability to branch, and experiment with new features or designs without having to worry about breaking anything.
* As was mentioned in other responses, the ability to intentionally commit the code that you want to share with others is substantially different than just saving all versions of the code at 15 minute intervals. You are no doubt saving off multiple non-working versions of code that you or others will later need to dig through to find the previous good version that you actually need.
It is pretty simple to get a version control system up and running, particularly in an environment as straightforward as this one, so the investment required isn't very high. As I mentioned, the backup-based system you have now sounds like it could be needlessly complex and potentially fragile. **You should benefit from the years of investment the community has made in building tools like SVN, Git, or Mercurial to solve exactly the problem of maintaining multiple versions of software** and providing a good deal of additional capability that is directly useful to developers.
By setting up and using formal version control, you will develop a valuable set of skills that will serve you well throughout your career. Almost every professional software development environment uses a version control system. Knowing the ins and outs of how to set up and use a repository will help you over and over again.
I am not familiar with Mercurial, but from what I understand it is a full-blown revision control system. If you already have some familiarity with it, it might be worth starting to experiment with it as a proper VCS.
|
Just my personal view: Version control is useful for anything that takes me more than half a day or that involves a lot of trial and error – or both, of course. If it involves two or more people who are not using the same keyboard and monitor all the time, it is essential.
The cost of using a formal versioning system, beyond the initial learning curve, is negligible. Initializing a repo? Two seconds. Adding files? One second. Being able to go back to what I tried this morning and discuss what I discarded with my colleague? Worth hours or days, easily.
|
In a **professional** environment where code is written, **you should always have** source control.
There is always the danger of an interview candidate asking what you use for version control and refusing the position because of the lack of a reasonable version control system.
Also... if you manage to hire a professional, they might have a much harder time understanding and using your current versioning environment.
|
Version control really is one of the most critical pieces of a functional development team. Aside from ensuring that your code is always backed up, you get features such as commit messages that help you understand exactly what the person before you (or you yourself) did to a particular file. You can also diff files against previous versions and create tags for particular releases. Tags are a HUGE benefit in that you can create a snapshot of version x.x.x of your app's source code, which makes tracking down old bugs much, much easier.
Start reading up on different platforms and see what suits your needs best. We used SVN because there were tools integrated into our IDE to leverage it. Ironically, we don't even use those tools now; we just use TortoiseSVN to check code in and out.
To answer your question, version control is needed from the moment you write your first line of code.
|
223,027 |
I work in embedded systems. Right now, my organization has two full-time programmers and two occasional programmers. It's rare that the same project is worked on by two programmers. All code is stored on a network drive. There are folders for the current production code, another folder for the source for every released version throughout history, and a third folder for active work. We have an automated system (Mercurial being abused) that makes backups of every changed code file in Working every fifteen minutes, so we can revert to previous states.
My question is this: is it worth the trouble to set up a formal versioning system in an environment like this? Why or why not?
|
2014/01/03
|
[
"https://softwareengineering.stackexchange.com/questions/223027",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/101501/"
] |
As other people have said, "now" is always a good time to start using version control. There's so many benefits to using a good version control system it's almost a no brainer.
You mention you use Mercurial. Like other distributed vcs, you can always initialize your own (private) repo and work there. Why not try that? If it starts working for you, it might work for your team. DVCS is all about building from the ground up.
|
Version control really is one of the most critical pieces of a functional development team. Aside from the standpoint that your code is always backed up, you are exposed to features such as commit messages that help you understand exactly what the person before you (or you yourself) did to a particular file. You can also diff files with previous versions and create tags for particular releases. Tags are a HUGE benefit in that you can basically create a snapshot of version x.x.x of your app's source code. This makes tracking down old bugs much much easier.
Start reading up on different platforms and see what suits your needs best. We used SVN because there were tools integrated into our IDE to leverage SVN. Ironically we do not even use these tools now, we just use Tortoise SVN to check in and check out code.
To answer your question, version control is needed from the moment you write your first line of code.
|
223,027 |
I work in embedded systems. Right now, my organization has two full-time programmers and two occasional programmers. It's rare that the same project is worked on by two programmers. All code is stored on a network drive. There are folders for the current production code, another folder for the source for every released version throughout history, and a third folder for active work. We have an automated system (Mercurial being abused) that makes backups of every changed code file in Working every fifteen minutes, so we can revert to previous states.
My question is this: is it worth the trouble to set up a formal versioning system in an environment like this? Why or why not?
|
2014/01/03
|
[
"https://softwareengineering.stackexchange.com/questions/223027",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/101501/"
] |
As you describe it, you already have some sort of version control, though currently there are some issues with it compared to a typical version control:
An intentional commit in version control indicates that the developer strongly believes that the current state of the system will build successfully.
(There are exceptions, as suggested by [Jacobm001's comment](https://softwareengineering.stackexchange.com/questions/223027/at-what-point-is-version-control-needed/223028?noredirect=1#comment443727_223028). Indeed, several approaches are possible, and some teams would prefer not trying to make every commit possible to build. One approach is to have nightly builds, given that during the day, the system may receive several commits which don't build.)
Since you don't have commits, your system will often end up in a state which doesn't build. This **prevents you from setting up Continuous Integration**.
By the way, a distributed version control system has a benefit here: one can commit locally as often as needed, even while the system is in a state where it cannot build, and then make a public commit once the system builds again.
1. Version control lets you **enforce some rules on commit**. For example, for Python files, PEP 8 can be run, preventing the commit if the committed files are not compliant.
2. **Blame** is extremely hard to do with your approach.
3. Exploring what changes were made, when, and by whom is hard too. Version control **logs**, the list of changed files, and **a `diff`** are an excellent way to find exactly what was done.
4. **Any merge would be a pain** (or maybe developers wouldn't even see that their colleagues were modifying the files before they save the changes). You stated that:
>
> It's rare that the same project is worked on by two programmers
>
>
>
Rare doesn't mean never, so merges would occur sooner or later.
5. A backup every fifteen minutes means that **developers may lose up to fifteen minutes of work**. This is always problematic: it's hard to remember exactly what changes were done meanwhile.
6. With source control you can have meaningful commit messages. With backups all you know is that it was x minutes since last backup.
A real version control system ensures that you can always revert to the previous commit; this is a huge advantage. Reverting a backup using your system would be slightly more difficult than doing a **one-click rollback**, which you can do in most version control systems. Also, in your system **Branching** is impossible.
There's a better way to do version control, and you should certainly consider changing the way you currently do it. Especially since, like [Eric Lippert mentions](https://softwareengineering.stackexchange.com/questions/223027/at-what-point-is-version-control-needed/223028?noredirect=1#comment443907_223027), your current system is probably a lot more painful to maintain than any common version control system is. Having a Git or Mercurial repository on a network drive is pretty easy for example.
***Note:*** Even if you switch to a common version control system, you should still have a daily/weekly backup of the repositories. If you're using a distributed system it's less important though, since then every developer's working copy is also a backup.
|
Version control really is one of the most critical pieces of a functional development team. Aside from the standpoint that your code is always backed up, you are exposed to features such as commit messages that help you understand exactly what the person before you (or you yourself) did to a particular file. You can also diff files with previous versions and create tags for particular releases. Tags are a HUGE benefit in that you can basically create a snapshot of version x.x.x of your app's source code. This makes tracking down old bugs much much easier.
Start reading up on different platforms and see what suits your needs best. We used SVN because there were tools integrated into our IDE to leverage SVN. Ironically we do not even use these tools now, we just use Tortoise SVN to check in and check out code.
To answer your question, version control is needed from the moment you write your first line of code.
|
223,027 |
I work in embedded systems. Right now, my organization has two full-time programmers and two occasional programmers. It's rare that the same project is worked on by two programmers. All code is stored on a network drive. There are folders for the current production code, another folder for the source for every released version throughout history, and a third folder for active work. We have an automated system (Mercurial being abused) that makes backups of every changed code file in Working every fifteen minutes, so we can revert to previous states.
My question is this: is it worth the trouble to set up a formal versioning system in an environment like this? Why or why not?
|
2014/01/03
|
[
"https://softwareengineering.stackexchange.com/questions/223027",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/101501/"
] |
There is a great deal of value in using version control even as an individual developer, and it could be quite a bit simpler than the backup/file-copy based system you have now.
* Right now, you have the ability to get to older version of the code, but how do you find the version you want?
* Just the ability to do a diff between revisions will be very valuable. Integration with development tools is another benefit you aren't getting from your current tools.
* Another substantial benefit is the ability to branch, and experiment with new features or designs without having to worry about breaking anything.
* As was mentioned in other responses, the ability to intentionally commit the code that you want to share with others is substantially different than just saving all versions of the code at 15 minute intervals. You are no doubt saving off multiple non-working versions of code that you or others will later need to dig through to find the previous good version that you actually need.
It is pretty simple to get a version control system up and running, particularly in an environment as straightforward as this one, so the investment required isn't very high. As I mentioned, the backup-based system you have now sounds like it could be needlessly complex and potentially fragile. **You should benefit from the years of investment the community has made in building tools like SVN, Git, or Mercurial to solve exactly the problem of maintaining multiple versions of the software** and providing a good deal of additional capability that is directly useful to developers.
By setting up and using formal version control, you will develop a valuable set of skills that will serve you well throughout your career. Almost every professional software development environment uses a version control system. Knowing the ins and outs of how to set up and use a repository will help you over and over again.
I am not familiar with Mercurial, but from what I understand, it is a full-blown revision control system. If you already have some familiarity with it, it might be worth starting to experiment with it as a VCS.
|
Version control really is one of the most critical pieces of a functional development team. Aside from the standpoint that your code is always backed up, you are exposed to features such as commit messages that help you understand exactly what the person before you (or you yourself) did to a particular file. You can also diff files with previous versions and create tags for particular releases. Tags are a HUGE benefit in that you can basically create a snapshot of version x.x.x of your app's source code. This makes tracking down old bugs much much easier.
Start reading up on different platforms and see what suits your needs best. We used SVN because there were tools integrated into our IDE to leverage SVN. Ironically we do not even use these tools now, we just use Tortoise SVN to check in and check out code.
To answer your question, version control is needed from the moment you write your first line of code.
|
29,535,566 |
We have pages which have been split into multiple pages as they are too in-depth. The structure currently...
```
Page (www.domain.com/page)
```
We have split this up like so...
```
Page + Subtitle (www.new-domain.com/page-subtitle-1)
Page + Subtitle (www.new-domain.com/page-subtitle-2)
Page + Subtitle (www.new-domain.com/page-subtitle-3)
```
I need to know the correct way of adding multiple canonical tags on the original page. Is it search-engine friendly to add, say, 3 or 4 canonical tags linking to 3 or 4 separate pages?
|
2015/04/09
|
[
"https://Stackoverflow.com/questions/29535566",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
IntelliJ doesn't support Grails 3 yet - it has been requested [in their bug tracker](https://youtrack.jetbrains.com/issue/IDEA-136970).
You can easily run the app from a command line and use the IDE as an editor.
|
IntelliJ IDEA 15 supports Grails 3 and higher. You can see this in the following:
<https://www.jetbrains.com/help/idea/2016.1/getting-started-with-grails-3.html>
|
2,020,608 |
I have a problem with a JDBC application that uses the MONEY data type.
When I insert into a MONEY column:
```
insert into _money_test (amt) values ('123.45')
```
I get this exception:
```
Character to numeric conversion error
```
The same SQL works from a native Windows application using the ODBC driver.
I live in Poland and have a Polish locale; in my country a comma separates the decimal part of a number, so I tried:
```
insert into _money_test (amt) values ('123,45')
```
And it worked.
I checked that in a PreparedStatement I must use the dot separator: `123.45`.
And of course I can use:
```
insert into _money_test (amt) values (123.45)
```
But some code is "general": it imports data from a CSV file, and it seemed safest to put the number into a string literal.
How can I force JDBC to use DBMONEY (or simply the dot) in literals?
My workstation is WinXP.
I have ODBC and JDBC Informix client in version 3.50 TC5/JC5.
I have set DBMONEY to just dot:
```
DBMONEY=.
```
EDIT:
Test code in Jython:
```
import sys
import traceback
from java.sql import DriverManager
from java.lang import Class
Class.forName("com.informix.jdbc.IfxDriver")
QUERY = "insert into _money_test (amt) values ('123.45')"
def test_money(driver, db_url, usr, passwd):
try:
print("\n\n%s\n--------------" % (driver))
db = DriverManager.getConnection(db_url, usr, passwd)
c = db.createStatement()
c.execute("delete from _money_test")
c.execute(QUERY)
rs = c.executeQuery("select amt from _money_test")
while (rs.next()):
print('[%s]' % (rs.getString(1)))
rs.close()
c.close()
db.close()
except:
print("there were errors!")
s = traceback.format_exc()
sys.stderr.write("%s\n" % (s))
print(QUERY)
test_money("com.informix.jdbc.IfxDriver", 'jdbc:informix-sqli://169.0.1.225:9088/test:informixserver=ol_225;DB_LOCALE=pl_PL.CP1250;CLIENT_LOCALE=pl_PL.CP1250;charSet=CP1250', 'informix', 'passwd')
test_money("sun.jdbc.odbc.JdbcOdbcDriver", 'jdbc:odbc:test', 'informix', 'passwd')
```
Results when I run money literal with dot and comma:
```
C:\db_examples>jython ifx_jdbc_money.py
insert into _money_test (amt) values ('123,45')
com.informix.jdbc.IfxDriver
--------------
[123.45]
sun.jdbc.odbc.JdbcOdbcDriver
--------------
there were errors!
Traceback (most recent call last):
File "ifx_jdbc_money.py", line 16, in test_money
c.execute(QUERY)
SQLException: java.sql.SQLException: [Informix][Informix ODBC Driver][Informix]Character to numeric conversion error
C:\db_examples>jython ifx_jdbc_money.py
insert into _money_test (amt) values ('123.45')
com.informix.jdbc.IfxDriver
--------------
there were errors!
Traceback (most recent call last):
File "ifx_jdbc_money.py", line 16, in test_money
c.execute(QUERY)
SQLException: java.sql.SQLException: Character to numeric conversion error
sun.jdbc.odbc.JdbcOdbcDriver
--------------
[123.45]
```
|
2010/01/07
|
[
"https://Stackoverflow.com/questions/2020608",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/22595/"
] |
The [Informix JDBC data type mapping documentation](http://publib.boulder.ibm.com/infocenter/idshelp/v115/index.jsp?topic=/com.ibm.jccids.doc/com.ibm.db2.luw.apdv.java.doc/doc/rjvjdata.htm) says the following:
>
> java.math.BigDecimal MONEY(p,s)1
>
>
>
Thus, you need to use [`java.math.BigDecimal`](http://java.sun.com/javase/6/docs/api/java/math/BigDecimal.html) instead of `java.lang.String` to represent the value, [`PreparedStatement#setBigDecimal()`](http://java.sun.com/javase/6/docs/api/java/sql/PreparedStatement.html#setBigDecimal%28int,%20java.math.BigDecimal%29) to set the value and [`ResultSet#getBigDecimal()`](http://java.sun.com/javase/6/docs/api/java/sql/ResultSet.html#getBigDecimal%28java.lang.String%29) to get the value.
You can "convert" from `String` to `BigDecimal` by just passing it as [constructor](http://java.sun.com/javase/6/docs/api/java/math/BigDecimal.html#BigDecimal%28java.lang.String%29) argument. The other way round can be done by calling the [`toString()`](http://java.sun.com/javase/6/docs/api/java/math/BigDecimal.html#toString%28%29) method of `BigDecimal`.
|
I solved this problem by using PreparedStatement. I think the "Character to numeric conversion error" is a bug in the Informix JDBC driver.
In another database I often use, PostgreSQL, there is no difference whether I run the query via the native JDBC driver or via the JDBC-ODBC bridge. I found that PostgreSQL does not accept the numeric form `123.45`. PostgreSQL accepts a string literal with a dot, but the dot is handled as a thousands separator. The only correctly accepted value is a string literal where a comma separates the decimal part.
**EDIT**:
It can be solved by setting `DBMONEY=.` on the server side; then all connections (ODBC, JDBC) will work with that setting.
|
12,839,028 |
Please take a look at my fiddle: <http://jsfiddle.net/wWHz4/>
This is what I made so far with my little bit of jQuery knowledge. I want the following:
When I click\* on a button of a selected title, the other titles have to fade or toggle\* away. Then animate\* the selected title to the left (instead of a static jump), then show\* the content of that selected title in front and change\* the button name from 'more...' to 'back'. When I click\* on 'back' I want the content to fade\* away, animate the selected title back to its position\* and bring the other titles back into place\*.
|
2012/10/11
|
[
"https://Stackoverflow.com/questions/12839028",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1360601/"
] |
To toggle the text you can do it like this:
```
$('.group .title > div a.button').click(function(){
$(this).html(($(this).html() == "more...")?"back":"more...");
$(this).parent().siblings().slideToggle("slow");
$(this).parent().parent().siblings().slideToggle("slow");
});
```
Everything else is working in your demo
|
You can change the contents of the `<a>` tags using the jQuery `.html()` function [ <http://api.jquery.com/html/> ]. You can also place a `<div>` which contains the full text and toggle that div's `display` state from `none` to `block` and back to show the full text.
You can have a look at the accordion code using jQuery, which does something similar to what you want: <http://jqueryui.com/accordion/>
|
12,839,028 |
Please take a look at my fiddle: <http://jsfiddle.net/wWHz4/>
This is what I made so far with my little bit of jQuery knowledge. I want the following:
When I click\* on a button of a selected title, the other titles have to fade or toggle\* away. Then animate\* the selected title to the left (instead of a static jump), then show\* the content of that selected title in front and change\* the button name from 'more...' to 'back'. When I click\* on 'back' I want the content to fade\* away, animate the selected title back to its position\* and bring the other titles back into place\*.
|
2012/10/11
|
[
"https://Stackoverflow.com/questions/12839028",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1360601/"
] |
Replace your JS code with this:
```
$('.group .title > div a.button').click(function() {
if ($(this).parent().siblings().is(":visible")) {
$(this).text('back');
} else {
$(this).text('more');
}
$(this).parent().siblings().slideToggle("slow");
var indexcount = $(this).parent().index() + 1;
$('.content').find('.columns:nth-child('+indexcount+')').slideToggle("slow");
});
```
|
You can change the contents of the `<a>` tags using the jQuery `.html()` function [ <http://api.jquery.com/html/> ]. You can also place a `<div>` which contains the full text and toggle that div's `display` state from `none` to `block` and back to show the full text.
You can have a look at the accordion code using jQuery, which does something similar to what you want: <http://jqueryui.com/accordion/>
|
12,839,028 |
Please take a look at my fiddle: <http://jsfiddle.net/wWHz4/>
This is what I made so far with my little bit of jQuery knowledge. I want the following:
When I click\* on a button of a selected title, the other titles have to fade or toggle\* away. Then animate\* the selected title to the left (instead of a static jump), then show\* the content of that selected title in front and change\* the button name from 'more...' to 'back'. When I click\* on 'back' I want the content to fade\* away, animate the selected title back to its position\* and bring the other titles back into place\*.
|
2012/10/11
|
[
"https://Stackoverflow.com/questions/12839028",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1360601/"
] |
I got it working the way I wanted it to:
```
$('.group .title > div a.button').click(function() {
var self = this;
var speed = 500;
var indexcount = $(self).parent().index() + 1;
var subcontent = $('.content').find('.columns:nth-child(' + indexcount + ')');
if ($(self).parent().siblings().is(":visible")) {
$(self).text('back');
$(self).parent().siblings().toggle(speed);
subcontent.delay(speed).toggle("slide", speed);
} else {
$(self).text('more');
subcontent.toggle("slide", speed, function() {
$(self).parent().siblings().toggle(speed);
});
}
});
```
See fiddle: <http://jsfiddle.net/wWHz4/>
|
You can change the contents of the `<a>` tags using the jQuery `.html()` function [ <http://api.jquery.com/html/> ]. You can also place a `<div>` which contains the full text and toggle that div's `display` state from `none` to `block` and back to show the full text.
You can have a look at the accordion code using jQuery, which does something similar to what you want: <http://jqueryui.com/accordion/>
|
12,839,028 |
Please take a look at my fiddle: <http://jsfiddle.net/wWHz4/>
This is what I made so far with my little bit of jQuery knowledge. I want the following:
When I click\* on a button of a selected title, the other titles have to fade or toggle\* away. Then animate\* the selected title to the left (instead of a static jump), then show\* the content of that selected title in front and change\* the button name from 'more...' to 'back'. When I click\* on 'back' I want the content to fade\* away, animate the selected title back to its position\* and bring the other titles back into place\*.
|
2012/10/11
|
[
"https://Stackoverflow.com/questions/12839028",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1360601/"
] |
Replace your JS code with this:
```
$('.group .title > div a.button').click(function() {
if ($(this).parent().siblings().is(":visible")) {
$(this).text('back');
} else {
$(this).text('more');
}
$(this).parent().siblings().slideToggle("slow");
var indexcount = $(this).parent().index() + 1;
$('.content').find('.columns:nth-child('+indexcount+')').slideToggle("slow");
});
```
|
To toggle the text you can do it like this:
```
$('.group .title > div a.button').click(function(){
$(this).html(($(this).html() == "more...")?"back":"more...");
$(this).parent().siblings().slideToggle("slow");
$(this).parent().parent().siblings().slideToggle("slow");
});
```
Everything else is working in your demo
|
12,839,028 |
Please take a look at my fiddle: <http://jsfiddle.net/wWHz4/>
This is what I made so far with my little bit of jQuery knowledge. I want the following:
When I click\* on a button of a selected title, the other titles have to fade or toggle\* away. Then animate\* the selected title to the left (instead of a static jump), then show\* the content of that selected title in front and change\* the button name from 'more...' to 'back'. When I click\* on 'back' I want the content to fade\* away, animate the selected title back to its position\* and bring the other titles back into place\*.
|
2012/10/11
|
[
"https://Stackoverflow.com/questions/12839028",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1360601/"
] |
Replace your JS code with this:
```
$('.group .title > div a.button').click(function() {
if ($(this).parent().siblings().is(":visible")) {
$(this).text('back');
} else {
$(this).text('more');
}
$(this).parent().siblings().slideToggle("slow");
var indexcount = $(this).parent().index() + 1;
$('.content').find('.columns:nth-child('+indexcount+')').slideToggle("slow");
});
```
|
I got it working the way I wanted it to:
```
$('.group .title > div a.button').click(function() {
var self = this;
var speed = 500;
var indexcount = $(self).parent().index() + 1;
var subcontent = $('.content').find('.columns:nth-child(' + indexcount + ')');
if ($(self).parent().siblings().is(":visible")) {
$(self).text('back');
$(self).parent().siblings().toggle(speed);
subcontent.delay(speed).toggle("slide", speed);
} else {
$(self).text('more');
subcontent.toggle("slide", speed, function() {
$(self).parent().siblings().toggle(speed);
});
}
});
```
See fiddle: <http://jsfiddle.net/wWHz4/>
|
205,623 |
I need a recommendation for an email relay service to use on multiple domains with C#.
I don't want to worry about blacklisting and am looking to increase deliverability on confirmation emails etc.
Currently, I'm trialling SocketLabs (<https://www.socketlabs.com/od/signup>), which works perfectly but limits you to 500 emails/month. I don't mind paying, but the starting level is 10,000 emails @ $39, which for me is overkill. I probably need to send 2,000 max a month.
Does anyone have any recommendations or views on <http://www.jangosmtp.com/Pricing.asp> or <http://www.smtp.com/?gclid=CPDr5qjruaUCFYVO4QodUhzUBA>?
|
2010/11/24
|
[
"https://serverfault.com/questions/205623",
"https://serverfault.com",
"https://serverfault.com/users/28661/"
] |
You cannot create a file greater than 4GiB (2^32-1 bytes) on a FAT32 partition, period. So if you want to use that image file with some VM software, you are probably out of luck, as I know of no VMs which can work around the limitations of braindead filesystems.
But if you are just trying to store the image there temporarily, you can create it with `dd` by 4GiB chunks, or split an existing one with a command like this:
```
split -b 4095M /source/file /target/files
```
Note that I've used 4095M and not 4096M/4G, as the maximal file size is one byte less.
[This](http://tldp.org/LDP/abs/html/) is the guide I learned bash with. (And manpages for everything else, of course. The bash manpage looks like it was deliberately obfuscated.)
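If `split` isn't available, or you want to script the copy, the same chunking can be sketched in Python. This is a minimal illustration: the helper names are made up for the example, the 4 GiB default mirrors the FAT32 per-file limit, and a real run would point `src`/`dst_prefix` at your actual image and target drive:

```python
import shutil

def split_file(src, dst_prefix, chunk_size=4 * 1024**3 - 1,
               buf_size=1024 * 1024):
    """Split src into numbered parts, each at most chunk_size bytes
    (default: one byte under 4 GiB, the FAT32 per-file limit)."""
    parts = []
    with open(src, "rb") as f:
        index = 0
        while True:
            buf = f.read(min(buf_size, chunk_size))
            if not buf:
                break  # reached end of the source file
            part = "%s.%03d" % (dst_prefix, index)
            with open(part, "wb") as out:
                written = 0
                while buf:
                    out.write(buf)
                    written += len(buf)
                    if written >= chunk_size:
                        break  # this part is full; start the next one
                    buf = f.read(min(buf_size, chunk_size - written))
            parts.append(part)
            index += 1
    return parts

def join_files(parts, dst):
    """Reassemble the parts -- equivalent to `cat prefix.* > dst`."""
    with open(dst, "wb") as out:
        for part in parts:
            with open(part, "rb") as f:
                shutil.copyfileobj(f, out)
```

Reading in 1 MiB buffers keeps memory use flat even for multi-gigabyte images.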
|
Look into using the 'split' command to split the files. I'm not sure if you are writing directly to the device (dd if=/dev/foo of=/dev/bar) or writing to an image on a mounted filesystem.
|
205,623 |
I need a recommendation for an email relay service to use on multiple domains with C#.
I don't want to worry about blacklisting and am looking to increase deliverability on confirmation emails etc.
Currently, I'm trialling SocketLabs (<https://www.socketlabs.com/od/signup>), which works perfectly but limits you to 500 emails/month. I don't mind paying, but the starting level is 10,000 emails @ $39, which for me is overkill. I probably need to send 2,000 max a month.
Does anyone have any recommendations or views on <http://www.jangosmtp.com/Pricing.asp> or <http://www.smtp.com/?gclid=CPDr5qjruaUCFYVO4QodUhzUBA>?
|
2010/11/24
|
[
"https://serverfault.com/questions/205623",
"https://serverfault.com",
"https://serverfault.com/users/28661/"
] |
Look into using the 'split' command to split the files. I'm not sure if you are writing directly to the device (dd if=/dev/foo of=/dev/bar) or writing to an image on a mounted filesystem.
|
If the drive is empty ... reformat that drive to ext3! Unless you have other plans, just my 2pence
|
205,623 |
I need a recommendation for an email relay service to use on multiple domains with C#.
I don't want to worry about blacklisting and am looking to increase deliverability on confirmation emails etc.
Currently, I'm trialling SocketLabs (<https://www.socketlabs.com/od/signup>), which works perfectly but limits you to 500 emails/month. I don't mind paying, but the starting level is 10,000 emails @ $39, which for me is overkill. I probably need to send 2,000 max a month.
Does anyone have any recommendations or views on <http://www.jangosmtp.com/Pricing.asp> or <http://www.smtp.com/?gclid=CPDr5qjruaUCFYVO4QodUhzUBA>?
|
2010/11/24
|
[
"https://serverfault.com/questions/205623",
"https://serverfault.com",
"https://serverfault.com/users/28661/"
] |
how about
<http://michi-bs.blogspot.com/2008/06/hdd-or-partition-backup-with-dd.html>
```
# dd if=/dev/hda1 | gzip -c | split -b 2000m - /mnt/hdc1/backup.img.gz.
```
|
Look into using the 'split' command to split the files. I'm not sure if you are writing directly to the device (dd if=/dev/foo of=/dev/bar) or writing to an image on a mounted filesystem.
|
205,623 |
I need a recommendation for an email relay service to use on multiple domains with C#.
I don't want to worry about blacklisting and am looking to increase deliverability on confirmation emails etc.
Currently, I'm trialling SocketLabs (<https://www.socketlabs.com/od/signup>), which works perfectly but limits you to 500 emails/month. I don't mind paying, but the starting level is 10,000 emails @ $39, which for me is overkill. I probably need to send 2,000 max a month.
Does anyone have any recommendations or views on <http://www.jangosmtp.com/Pricing.asp> or <http://www.smtp.com/?gclid=CPDr5qjruaUCFYVO4QodUhzUBA>?
|
2010/11/24
|
[
"https://serverfault.com/questions/205623",
"https://serverfault.com",
"https://serverfault.com/users/28661/"
] |
You cannot create a file greater than 4GiB (2^32-1 bytes) on a FAT32 partition, period. So if you want to use that image file with some VM software, you are probably out of luck, as I know of no VMs which can work around the limitations of braindead filesystems.
But if you are just trying to store the image there temporarily, you can create it with `dd` by 4GiB chunks, or split an existing one with a command like this:
```
split -b 4095M /source/file /target/files
```
Note that I've used 4095M and not 4096M/4G, as the maximal file size is one byte less.
[This](http://tldp.org/LDP/abs/html/) is the guide I learned bash with. (And manpages for everything else, of course. The bash manpage looks like it was deliberately obfuscated.)
|
If the drive is empty ... reformat that drive to ext3! Unless you have other plans, just my 2pence
|
205,623 |
I need a recommendation for an email relay service to use on multiple domains with C#.
I don't want to worry about blacklisting and am looking to increase deliverability on confirmation emails etc.
Currently, I'm trialling SocketLabs (<https://www.socketlabs.com/od/signup>), which works perfectly but limits you to 500 emails/month. I don't mind paying, but the starting level is 10,000 emails @ $39, which for me is overkill. I probably need to send 2,000 max a month.
Does anyone have any recommendations or views on <http://www.jangosmtp.com/Pricing.asp> or <http://www.smtp.com/?gclid=CPDr5qjruaUCFYVO4QodUhzUBA>?
|
2010/11/24
|
[
"https://serverfault.com/questions/205623",
"https://serverfault.com",
"https://serverfault.com/users/28661/"
] |
how about
<http://michi-bs.blogspot.com/2008/06/hdd-or-partition-backup-with-dd.html>
```
# dd if=/dev/hda1 | gzip -c | split -b 2000m - /mnt/hdc1/backup.img.gz.
```
|
You cannot create a file greater than 4GiB (2^32-1 bytes) on a FAT32 partition, period. So if you want to use that image file with some VM software, you are probably out of luck, as I know of no VMs which can work around the limitations of braindead filesystems.
But if you are just trying to store the image there temporarily, you can create it with `dd` by 4GiB chunks, or split an existing one with a command like this:
```
split -b 4095M /source/file /target/files
```
Note that I've used 4095M and not 4096M/4G, as the maximal file size is one byte less.
[This](http://tldp.org/LDP/abs/html/) is the guide I learned bash with. (And manpages for everything else, of course. The bash manpage looks like it was deliberately obfuscated.)
|
205,623 |
I need a recommendation for an email relay service to use on multiple domains with C#.
I don't want to worry about blacklisting and am looking to increase deliverability on confirmation emails etc.
Currently, I'm trialling SocketLabs (<https://www.socketlabs.com/od/signup>), which works perfectly but limits you to 500 emails/month. I don't mind paying, but the starting level is 10,000 emails @ $39, which for me is overkill. I probably need to send 2,000 max a month.
Does anyone have any recommendations or views on <http://www.jangosmtp.com/Pricing.asp> or <http://www.smtp.com/?gclid=CPDr5qjruaUCFYVO4QodUhzUBA>?
|
2010/11/24
|
[
"https://serverfault.com/questions/205623",
"https://serverfault.com",
"https://serverfault.com/users/28661/"
] |
You cannot create a file greater than 4GiB (2^32-1 bytes) on a FAT32 partition, period. So if you want to use that image file with some VM software, you are probably out of luck, as I know of no VMs which can work around the limitations of braindead filesystems.
But if you are just trying to store the image there temporarily, you can create it with `dd` by 4GiB chunks, or split an existing one with a command like this:
```
split -b 4095M /source/file /target/files
```
Note that I've used 4095M and not 4096M/4G, as the maximal file size is one byte less.
[This](http://tldp.org/LDP/abs/html/) is the guide I learned bash with. (And manpages for everything else, of course. The bash manpage looks like it was deliberately obfuscated.)
|
My advice here would be to use gparted or similar partitioning or disk management software to resize the FAT32 partition and recreate the freed-up space as an ext2- or NTFS-formatted partition. Get the best of both worlds.
|
205,623 |
I need a recommendation for an email relay service to use on multiple domains with C#.
I don't want to worry about blacklisting and am looking for increased deliverability on confirmation emails etc.
Currently, I'm trialling SocketLabs (<https://www.socketlabs.com/od/signup>) which works perfectly but limits you to 500 emails/month. I don't mind paying, but the starting level is 10,000 emails @ $39, which for me is overkill. I probably need to send 2000 max a month.
Does anyone have any recommendation or views on: <http://www.jangosmtp.com/Pricing.asp> or <http://www.smtp.com/?gclid=CPDr5qjruaUCFYVO4QodUhzUBA>
|
2010/11/24
|
[
"https://serverfault.com/questions/205623",
"https://serverfault.com",
"https://serverfault.com/users/28661/"
] |
How about
<http://michi-bs.blogspot.com/2008/06/hdd-or-partition-backup-with-dd.html>
```
# dd if=/dev/hda1 | gzip -c | split -b 2000m - /mnt/hdc1/backup.img.gz.
```
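To restore from such a backup you reverse the pipeline: concatenate the chunks, decompress, and write the result back out. A hedged sketch demonstrating the same round trip on a regular file instead of a real disk (the `/tmp` paths and small sizes are purely illustrative; writing to an actual device with `dd of=/dev/hda1` is destructive, so double-check targets first):

```shell
# Make a stand-in "partition" file, back it up compressed and split:
dd if=/dev/zero of=/tmp/part.img bs=1M count=4 2>/dev/null
dd if=/tmp/part.img 2>/dev/null | gzip -c | split -b 1M - /tmp/backup.img.gz.
# Restore: concatenate the chunks in order and decompress.
cat /tmp/backup.img.gz.* | gunzip > /tmp/restored.img
cmp /tmp/part.img /tmp/restored.img && echo "restore OK"
```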
|
If the drive is empty ... reformat that drive to ext3! Unless you have other plans, just my 2pence
|
205,623 |
I need a recommendation for an email relay service to use on multiple domains with C#.
I don't want to worry about blacklisting and am looking for increased deliverability on confirmation emails etc.
Currently, I'm trialling SocketLabs (<https://www.socketlabs.com/od/signup>) which works perfectly but limits you to 500 emails/month. I don't mind paying, but the starting level is 10,000 emails @ $39, which for me is overkill. I probably need to send 2000 max a month.
Does anyone have any recommendation or views on: <http://www.jangosmtp.com/Pricing.asp> or <http://www.smtp.com/?gclid=CPDr5qjruaUCFYVO4QodUhzUBA>
|
2010/11/24
|
[
"https://serverfault.com/questions/205623",
"https://serverfault.com",
"https://serverfault.com/users/28661/"
] |
My advice here would be to use gparted or similar partitioning or disk management software to resize the FAT32 partition and recreate the freed-up space as an ext2- or NTFS-formatted partition. Get the best of both worlds.
|
If the drive is empty ... reformat that drive to ext3! Unless you have other plans, just my 2pence
|
205,623 |
I need a recommendation for an email relay service to use on multiple domains with C#.
I don't want to worry about blacklisting and am looking for increased deliverability on confirmation emails etc.
Currently, I'm trialling SocketLabs (<https://www.socketlabs.com/od/signup>) which works perfectly but limits you to 500 emails/month. I don't mind paying, but the starting level is 10,000 emails @ $39, which for me is overkill. I probably need to send 2000 max a month.
Does anyone have any recommendation or views on: <http://www.jangosmtp.com/Pricing.asp> or <http://www.smtp.com/?gclid=CPDr5qjruaUCFYVO4QodUhzUBA>
|
2010/11/24
|
[
"https://serverfault.com/questions/205623",
"https://serverfault.com",
"https://serverfault.com/users/28661/"
] |
How about
<http://michi-bs.blogspot.com/2008/06/hdd-or-partition-backup-with-dd.html>
```
# dd if=/dev/hda1 | gzip -c | split -b 2000m - /mnt/hdc1/backup.img.gz.
```
|
My advice here would be to use gparted or similar partitioning or disk management software to resize the FAT32 partition and recreate the freed-up space as an ext2- or NTFS-formatted partition. Get the best of both worlds.
|
36,969,527 |
For my Javascript course I got this code:
```
for (var i in window.navigator)
{
document.getElementById('divResult').innerHTML +=
i + ': ' + window.navigator[i] + '<br />';
}
</script>
```
The teacher (online) wants me to limit the results to a maximum of 10.
For me this is a big puzzle. From other questions about for..in, I gather it is a debatable statement. But how should I approach this for..in? As an array with i.length?
|
2016/05/01
|
[
"https://Stackoverflow.com/questions/36969527",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6277052/"
] |
Just set a counter. In each iteration you increase the counter, and when the counter reaches 10 you simply break out of your loop.
Code:
```
// Since we're going to access this div multiple times it's best to
// store it outside of the for loop.
var output = document.getElementById('divResult');
var counter = 0;
for (var elem in window.navigator) {
var value = window.navigator[elem];
output.innerHTML += counter + ': ' + elem + '=' + value + '<br />';
++counter;
if (counter == 10) {
break;
}
}
```
**Since you're new to JavaScript I would like to explain a little bit about for-in**
If you want to get a specific value from an array you access its element by index. So for example:
```
var myArray = [7, 5, 6, 6];
for (var i = 0; i < myArray.length; ++i) {
var value = myArray[i];
}
```
But now you want to loop through `window.navigator`, and this element is not an array but an object. And since an object is key-value, it does not have an index. So how do you loop through it?
Let's imagine window.navigator looks like this:
```
var navigator = {
myBrowser: 'Google Chrome',
myOtherProperty: 'otherValue',
AnotherProperty: 'anotherValue'
};
```
If we want to get the first element from our object we use
```
navigator.myBrowser
```
or
```
navigator['myBrowser'];
```
Now we want to loop through all the elements in our object. Since the normal `for` loop uses an index and objects don't have indexes, we use the `for in` loop. This loop iterates through all the properties of our object and gives us the key.
```
for (var key in navigator) {
// Here we access a property in our object by the key given by our for loop.
var value = navigator[key];
}
```
So the first iteration our key is `myBrowser` and the value is `Google Chrome`
The next iteration the key is `myOtherProperty` and the value `otherValue`.
It is usually a good idea to use [hasOwnProperty](https://developer.mozilla.org/nl/docs/Web/JavaScript/Reference/Global_Objects/Object/hasOwnProperty) if you're looping through an object:
```
for (var key in navigator) {
  if (navigator.hasOwnProperty(key)) {
var value = navigator[key];
}
}
```
Hope this helps
|
The teacher came back with an answer for my for..in array problem:
```
<script>
var navigatorArray = [];
for (var i in window.navigator) {
navigatorArray.push(window.navigator[i]);
}
navigatorArray.sort();
console.log(navigatorArray);
var htmlString = '';
for (var j = 0; j < navigatorArray.length; j++) {
htmlString += navigatorArray[j] + '<br />';
}
```
With the `.push` habit it should be possible to collect them in an array and index them.
|
39,518,160 |
I'm scraping data from a website and then want to show it as JSON in the browser. However, even though `console.log()` of the recipes array shows the data, nothing is sent to the browser. Why doesn't the JSON array show in the browser?
```
router.get('/scrape', function(req, res, next) {
res.json(scrapeUrl("http://url"));
})
function scrapeUrl(url) {
request(url, function(error, response, html){
// First we'll check to make sure no errors occurred when making the request
if(!error){
var $ = cheerio.load(html);
var recipes = [];
$('div.article-block a.picture').each(function(i, elem) {
var deepUrl = $(this).attr('href');
if(!$(this).attr('href').indexOf("tema") > -1) {
request(deepUrl, function(error, response, html){
// First we'll check to make sure no errors occurred when making the request
if(!error){
var $ = cheerio.load(html);
var image = $('div.article div.article-main-pic img').attr('src');
var title = $('div.recipe h2.fn').text();
var object = {url: deepUrl, title : title, image : image};
recipes.push(object);
}
});
}
});
return recipes;
}
});
}
```
|
2016/09/15
|
[
"https://Stackoverflow.com/questions/39518160",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4034437/"
] |
We can also try the `git status` command, and if the output is:
`fatal: Not a git repository (or any of the parent directories): .git`
then the directory is not a git repository.
|
Adding to @Walk's comments,
`git status` will throw a fatal message if it's not a repo.
`echo $?` helps capture whether it errored or not.
If it's a git repo then `echo $?` will be 0; otherwise the value will be 128.
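As a sketch, the exit status can also drive a branch directly, without inspecting `$?` by hand (nothing below modifies any repository; the messages printed are illustrative):

```shell
# Branch on the exit status of `git status`:
# 0 means we are inside a repo, non-zero (typically 128) means we are not.
if git status >/dev/null 2>&1; then
    echo "inside a git repo"
else
    echo "not a git repo"
fi
```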
|
39,518,160 |
I'm scraping data from a website and then want to show it as JSON in the browser. However, even though `console.log()` of the recipes array shows the data, nothing is sent to the browser. Why doesn't the JSON array show in the browser?
```
router.get('/scrape', function(req, res, next) {
res.json(scrapeUrl("http://url"));
})
function scrapeUrl(url) {
request(url, function(error, response, html){
// First we'll check to make sure no errors occurred when making the request
if(!error){
var $ = cheerio.load(html);
var recipes = [];
$('div.article-block a.picture').each(function(i, elem) {
var deepUrl = $(this).attr('href');
if(!$(this).attr('href').indexOf("tema") > -1) {
request(deepUrl, function(error, response, html){
// First we'll check to make sure no errors occurred when making the request
if(!error){
var $ = cheerio.load(html);
var image = $('div.article div.article-main-pic img').attr('src');
var title = $('div.recipe h2.fn').text();
var object = {url: deepUrl, title : title, image : image};
recipes.push(object);
}
});
}
});
return recipes;
}
});
}
```
|
2016/09/15
|
[
"https://Stackoverflow.com/questions/39518160",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4034437/"
] |
Any directory in the system could be a git working copy. You can use any directory as if it contained a `.git` subdirectory by setting the `GIT_DIR` and `GIT_WORK_TREE` environment variables to point at an actual `.git` directory and the root of your working copy, or use the `--git-dir` and `--work-tree` options instead. See the [git man page](https://www.kernel.org/pub/software/scm/git/docs/#_the_git_repository) for more details.
|
In unix/linux systems, you can run `git status` and check the exit code with `echo $?`. Anything other than 0 tells you that you aren't in a git repo.
|
39,518,160 |
I'm scraping data from a website and then want to show it as JSON in the browser. However, even though `console.log()` of the recipes array shows the data, nothing is sent to the browser. Why doesn't the JSON array show in the browser?
```
router.get('/scrape', function(req, res, next) {
res.json(scrapeUrl("http://url"));
})
function scrapeUrl(url) {
request(url, function(error, response, html){
// First we'll check to make sure no errors occurred when making the request
if(!error){
var $ = cheerio.load(html);
var recipes = [];
$('div.article-block a.picture').each(function(i, elem) {
var deepUrl = $(this).attr('href');
if(!$(this).attr('href').indexOf("tema") > -1) {
request(deepUrl, function(error, response, html){
// First we'll check to make sure no errors occurred when making the request
if(!error){
var $ = cheerio.load(html);
var image = $('div.article div.article-main-pic img').attr('src');
var title = $('div.recipe h2.fn').text();
var object = {url: deepUrl, title : title, image : image};
recipes.push(object);
}
});
}
});
return recipes;
}
});
}
```
|
2016/09/15
|
[
"https://Stackoverflow.com/questions/39518160",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4034437/"
] |
We can also try the `git status` command, and if the output is:
`fatal: Not a git repository (or any of the parent directories): .git`
then the directory is not a git repository.
|
Consider `git rev-parse --is-inside-work-tree` as well here (with the appropriate `-C` first as well if/as desired) to check whether this is a *working tree*. This allows you to distinguish between the "working tree" and "bare repository" cases:
```
$ git -C .git rev-parse && echo $?
0
$ git -C .git rev-parse --is-inside-work-tree && echo $?
false
0
```
As always, be aware that if you're not in a Git directory at all, you get an error:
```
$ git -C / rev-parse --is-inside-work-tree
fatal: not a git repository (or any of the parent directories): .git
```
which you may wish to discard (`git ... 2>/dev/null`). Since `--is-inside-work-tree` prints `true` or `false` you'll want to capture or test stdout.
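Capturing that stdout cleanly (and discarding the fatal message) might look like the following sketch; the three-way `case` distinguishes a working tree, the inside of a `.git` directory, and a non-repo. The echoed labels are illustrative:

```shell
# Capture stdout of --is-inside-work-tree, silencing the fatal error
# printed when we are not in a repository at all.
state=$(git rev-parse --is-inside-work-tree 2>/dev/null)
case "$state" in
    true)  echo "inside a working tree" ;;
    false) echo "inside a repo, but not its working tree (e.g. .git)" ;;
    *)     echo "not a git repository at all" ;;
esac
```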
|
39,518,160 |
I'm scraping data from a website and then want to show it as JSON in the browser. However, even though `console.log()` of the recipes array shows the data, nothing is sent to the browser. Why doesn't the JSON array show in the browser?
```
router.get('/scrape', function(req, res, next) {
res.json(scrapeUrl("http://url"));
})
function scrapeUrl(url) {
request(url, function(error, response, html){
// First we'll check to make sure no errors occurred when making the request
if(!error){
var $ = cheerio.load(html);
var recipes = [];
$('div.article-block a.picture').each(function(i, elem) {
var deepUrl = $(this).attr('href');
if(!$(this).attr('href').indexOf("tema") > -1) {
request(deepUrl, function(error, response, html){
// First we'll check to make sure no errors occurred when making the request
if(!error){
var $ = cheerio.load(html);
var image = $('div.article div.article-main-pic img').attr('src');
var title = $('div.recipe h2.fn').text();
var object = {url: deepUrl, title : title, image : image};
recipes.push(object);
}
});
}
});
return recipes;
}
});
}
```
|
2016/09/15
|
[
"https://Stackoverflow.com/questions/39518160",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4034437/"
] |
Any directory in the system could be a git working copy. You can use any directory as if it contained a `.git` subdirectory by setting the `GIT_DIR` and `GIT_WORK_TREE` environment variables to point at an actual `.git` directory and the root of your working copy, or use the `--git-dir` and `--work-tree` options instead. See the [git man page](https://www.kernel.org/pub/software/scm/git/docs/#_the_git_repository) for more details.
|
Adding to @Walk's comments,
`git status` will throw a fatal message if it's not a repo.
`echo $?` helps capture whether it errored or not.
If it's a git repo then `echo $?` will be 0; otherwise the value will be 128.
|
39,518,160 |
I'm scraping data from a website and then want to show it as JSON in the browser. However, even though `console.log()` of the recipes array shows the data, nothing is sent to the browser. Why doesn't the JSON array show in the browser?
```
router.get('/scrape', function(req, res, next) {
res.json(scrapeUrl("http://url"));
})
function scrapeUrl(url) {
request(url, function(error, response, html){
// First we'll check to make sure no errors occurred when making the request
if(!error){
var $ = cheerio.load(html);
var recipes = [];
$('div.article-block a.picture').each(function(i, elem) {
var deepUrl = $(this).attr('href');
if(!$(this).attr('href').indexOf("tema") > -1) {
request(deepUrl, function(error, response, html){
// First we'll check to make sure no errors occurred when making the request
if(!error){
var $ = cheerio.load(html);
var image = $('div.article div.article-main-pic img').attr('src');
var title = $('div.recipe h2.fn').text();
var object = {url: deepUrl, title : title, image : image};
recipes.push(object);
}
});
}
});
return recipes;
}
});
}
```
|
2016/09/15
|
[
"https://Stackoverflow.com/questions/39518160",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4034437/"
] |
Use `git -C <path> rev-parse`. It will return 0 if the directory at `<path>` is a git repository and an error code otherwise.
**Further Reading**:
* [`rev-parse`](https://git-scm.com/docs/git-rev-parse)
* [`-C <path>`](https://git-scm.com/docs/git#Documentation/git.txt--Cltpathgt)
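A minimal sketch wrapping this in a helper; the `is_git_repo` name and the `/tmp/somewhere` path are purely illustrative:

```shell
# Probe an arbitrary path without changing the current directory.
is_git_repo() {
    git -C "$1" rev-parse >/dev/null 2>&1
}

if is_git_repo /tmp/somewhere; then
    echo "git repository"
else
    echo "not a git repository"
fi
```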
|
In unix/linux systems, you can run `git status` and check the exit code with `echo $?`. Anything other than 0 tells you that you aren't in a git repo.
|
39,518,160 |
I'm scraping data from a website and then want to show it as JSON in the browser. However, even though `console.log()` of the recipes array shows the data, nothing is sent to the browser. Why doesn't the JSON array show in the browser?
```
router.get('/scrape', function(req, res, next) {
res.json(scrapeUrl("http://url"));
})
function scrapeUrl(url) {
request(url, function(error, response, html){
// First we'll check to make sure no errors occurred when making the request
if(!error){
var $ = cheerio.load(html);
var recipes = [];
$('div.article-block a.picture').each(function(i, elem) {
var deepUrl = $(this).attr('href');
if(!$(this).attr('href').indexOf("tema") > -1) {
request(deepUrl, function(error, response, html){
// First we'll check to make sure no errors occurred when making the request
if(!error){
var $ = cheerio.load(html);
var image = $('div.article div.article-main-pic img').attr('src');
var title = $('div.recipe h2.fn').text();
var object = {url: deepUrl, title : title, image : image};
recipes.push(object);
}
});
}
});
return recipes;
}
});
}
```
|
2016/09/15
|
[
"https://Stackoverflow.com/questions/39518160",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4034437/"
] |
Adding to @Walk's comments,
`git status` will throw a fatal message if it's not a repo.
`echo $?` helps capture whether it errored or not.
If it's a git repo then `echo $?` will be 0; otherwise the value will be 128.
|
In unix/linux systems, you can run `git status` and check the exit code with `echo $?`. Anything other than 0 tells you that you aren't in a git repo.
|
39,518,160 |
I'm scraping data from a website and then want to show it as JSON in the browser. However, even though `console.log()` of the recipes array shows the data, nothing is sent to the browser. Why doesn't the JSON array show in the browser?
```
router.get('/scrape', function(req, res, next) {
res.json(scrapeUrl("http://url"));
})
function scrapeUrl(url) {
request(url, function(error, response, html){
// First we'll check to make sure no errors occurred when making the request
if(!error){
var $ = cheerio.load(html);
var recipes = [];
$('div.article-block a.picture').each(function(i, elem) {
var deepUrl = $(this).attr('href');
if(!$(this).attr('href').indexOf("tema") > -1) {
request(deepUrl, function(error, response, html){
// First we'll check to make sure no errors occurred when making the request
if(!error){
var $ = cheerio.load(html);
var image = $('div.article div.article-main-pic img').attr('src');
var title = $('div.recipe h2.fn').text();
var object = {url: deepUrl, title : title, image : image};
recipes.push(object);
}
});
}
});
return recipes;
}
});
}
```
|
2016/09/15
|
[
"https://Stackoverflow.com/questions/39518160",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4034437/"
] |
Use `git -C <path> rev-parse`. It will return 0 if the directory at `<path>` is a git repository and an error code otherwise.
**Further Reading**:
* [`rev-parse`](https://git-scm.com/docs/git-rev-parse)
* [`-C <path>`](https://git-scm.com/docs/git#Documentation/git.txt--Cltpathgt)
|
Adding to @Walk's comments,
`git status` will throw a fatal message if it's not a repo.
`echo $?` helps capture whether it errored or not.
If it's a git repo then `echo $?` will be 0; otherwise the value will be 128.
|
39,518,160 |
I'm scraping data from a website and then want to show it as JSON in the browser. However, even though `console.log()` of the recipes array shows the data, nothing is sent to the browser. Why doesn't the JSON array show in the browser?
```
router.get('/scrape', function(req, res, next) {
res.json(scrapeUrl("http://url"));
})
function scrapeUrl(url) {
request(url, function(error, response, html){
// First we'll check to make sure no errors occurred when making the request
if(!error){
var $ = cheerio.load(html);
var recipes = [];
$('div.article-block a.picture').each(function(i, elem) {
var deepUrl = $(this).attr('href');
if(!$(this).attr('href').indexOf("tema") > -1) {
request(deepUrl, function(error, response, html){
// First we'll check to make sure no errors occurred when making the request
if(!error){
var $ = cheerio.load(html);
var image = $('div.article div.article-main-pic img').attr('src');
var title = $('div.recipe h2.fn').text();
var object = {url: deepUrl, title : title, image : image};
recipes.push(object);
}
});
}
});
return recipes;
}
});
}
```
|
2016/09/15
|
[
"https://Stackoverflow.com/questions/39518160",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4034437/"
] |
Consider `git rev-parse --is-inside-work-tree` as well here (with the appropriate `-C` first as well if/as desired) to check whether this is a *working tree*. This allows you to distinguish between the "working tree" and "bare repository" cases:
```
$ git -C .git rev-parse && echo $?
0
$ git -C .git rev-parse --is-inside-work-tree && echo $?
false
0
```
As always, be aware that if you're not in a Git directory at all, you get an error:
```
$ git -C / rev-parse --is-inside-work-tree
fatal: not a git repository (or any of the parent directories): .git
```
which you may wish to discard (`git ... 2>/dev/null`). Since `--is-inside-work-tree` prints `true` or `false` you'll want to capture or test stdout.
|
In unix/linux systems, you can run `git status` and check the exit code with `echo $?`. Anything other than 0 tells you that you aren't in a git repo.
|
39,518,160 |
I'm scraping data from a website and then want to show it as JSON in the browser. However, even though `console.log()` of the recipes array shows the data, nothing is sent to the browser. Why doesn't the JSON array show in the browser?
```
router.get('/scrape', function(req, res, next) {
res.json(scrapeUrl("http://url"));
})
function scrapeUrl(url) {
request(url, function(error, response, html){
// First we'll check to make sure no errors occurred when making the request
if(!error){
var $ = cheerio.load(html);
var recipes = [];
$('div.article-block a.picture').each(function(i, elem) {
var deepUrl = $(this).attr('href');
if(!$(this).attr('href').indexOf("tema") > -1) {
request(deepUrl, function(error, response, html){
// First we'll check to make sure no errors occurred when making the request
if(!error){
var $ = cheerio.load(html);
var image = $('div.article div.article-main-pic img').attr('src');
var title = $('div.recipe h2.fn').text();
var object = {url: deepUrl, title : title, image : image};
recipes.push(object);
}
});
}
});
return recipes;
}
});
}
```
|
2016/09/15
|
[
"https://Stackoverflow.com/questions/39518160",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4034437/"
] |
We can also try the `git status` command, and if the output is:
`fatal: Not a git repository (or any of the parent directories): .git`
then the directory is not a git repository.
|
Any directory in the system could be a git working copy. You can use any directory as if it contained a `.git` subdirectory by setting the `GIT_DIR` and `GIT_WORK_TREE` environment variables to point at an actual `.git` directory and the root of your working copy, or use the `--git-dir` and `--work-tree` options instead. See the [git man page](https://www.kernel.org/pub/software/scm/git/docs/#_the_git_repository) for more details.
|
39,518,160 |
I'm scraping data from a website and then want to show it as JSON in the browser. However, even though `console.log()` of the recipes array shows the data, nothing is sent to the browser. Why doesn't the JSON array show in the browser?
```
router.get('/scrape', function(req, res, next) {
res.json(scrapeUrl("http://url"));
})
function scrapeUrl(url) {
request(url, function(error, response, html){
// First we'll check to make sure no errors occurred when making the request
if(!error){
var $ = cheerio.load(html);
var recipes = [];
$('div.article-block a.picture').each(function(i, elem) {
var deepUrl = $(this).attr('href');
if(!$(this).attr('href').indexOf("tema") > -1) {
request(deepUrl, function(error, response, html){
// First we'll check to make sure no errors occurred when making the request
if(!error){
var $ = cheerio.load(html);
var image = $('div.article div.article-main-pic img').attr('src');
var title = $('div.recipe h2.fn').text();
var object = {url: deepUrl, title : title, image : image};
recipes.push(object);
}
});
}
});
return recipes;
}
});
}
```
|
2016/09/15
|
[
"https://Stackoverflow.com/questions/39518160",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4034437/"
] |
Use `git -C <path> rev-parse`. It will return 0 if the directory at `<path>` is a git repository and an error code otherwise.
**Further Reading**:
* [`rev-parse`](https://git-scm.com/docs/git-rev-parse)
* [`-C <path>`](https://git-scm.com/docs/git#Documentation/git.txt--Cltpathgt)
|
Consider `git rev-parse --is-inside-work-tree` as well here (with the appropriate `-C` first as well if/as desired) to check whether this is a *working tree*. This allows you to distinguish between the "working tree" and "bare repository" cases:
```
$ git -C .git rev-parse && echo $?
0
$ git -C .git rev-parse --is-inside-work-tree && echo $?
false
0
```
As always, be aware that if you're not in a Git directory at all, you get an error:
```
$ git -C / rev-parse --is-inside-work-tree
fatal: not a git repository (or any of the parent directories): .git
```
which you may wish to discard (`git ... 2>/dev/null`). Since `--is-inside-work-tree` prints `true` or `false` you'll want to capture or test stdout.
|
37,105,299 |
I'm having a problem with my new table filtering function. The problem happens when selecting an offer to filter by: rather than considering all the filterable data inside the table, the filter only operates on the visible rows, excluding the data hidden by pagination.
On top of that, when I click "more" to show more rows, the table starts showing data outside the current filter, which is not good.
I also have another filtering function to filter by "Free Handsets", which has been combined with my pagination method (code below).
How can I merge this (dropdown) filter with my "Free Handsets" (checkbox) filter and pagination method, so that when I select an option to filter by, the filter deals with all the data inside the table and not just the visible rows displayed by pagination?
<https://jsfiddle.net/51Le6o06/48/>
The fiddle above shows both filtering functions working side by side but they don't function well together.
As you can see in the above jsfiddle, the dropdown filter collects its options from the HTML and then presents them in the dropdown menu, so all options are present to be filtered by; they're just hidden by pagination.
Here is the jQuery and Javascript for each of the functions.
This is the new filter that doesn't function well.
```
$(document).ready(function() {
$('.filter-gift').each(filterItems);
});
function filterItems(e) {
var items = [];
var table = '';
tableId = $(this).parent().parent().attr('tag')
var listItems = "";
listItems += "<option value=''> -Select- </option>";
$('div[tag="' + tableId + '"] table.internalActivities .information').each(function (i) {
var itm = $(this)[0].innerText;
if ($.inArray(itm, items) == -1) {
items.push($(this)[0].innerText);
listItems += "<option value='" + i + "'>" + $(this)[0].innerText + "</option>";
}
});
$('div[tag="' + tableId+ '"] .filter-gift').html(listItems);
$('.filter-gift').change(function () {
if($(this).val()!= "") {
var tableIdC = $(this).parent().parent().attr('tag');
var text = $('div[tag="' + tableIdC + '"] select option:selected')[0].text.replace(/(\r\n|\n|\r| |)/gm, "");;
$('div[tag="' + tableIdC + '"] .product-information-row').each(function (i) {
if ($(this).text().replace(/(\r\n|\n|\r| |)/gm, "") == text) {
$(this).show();
$(this).prev().show();
$(this).next().show();
}
else {
$(this).hide();
$(this).prev().hide();
$(this).next().hide();
}
});
} else {
$(this).parent().parent().find('table tr').show();
}
});
}
```
This is the filter and pagination function I want to merge with the above function (working).
```
jQuery.fn.sortPaging = function(options) {
var defaults = {
pageRows: 2
};
var settings = $.extend(true, defaults, options);
return this.each(function() {
var container = $(this);
var tableBody = container.find('.internalActivities > tbody');
var dataRows = [];
var currentPage = 1;
var maxPages = 1;
var buttonMore = container.find('.seeMoreRecords');
var buttonLess = container.find('.seeLessRecords');
var buttonFree = container.find('.filter-free');
var tableRows = [];
var maxFree = 0;
var filterFree = buttonFree.is(':checked');
function displayRows() {
tableBody.empty();
var displayed = 0;
$.each(dataRows, function(i, ele) {
if( !filterFree || (filterFree && ele.isFree) ) {
tableBody.append(ele.thisRow).append(ele.nextRow);
displayed++;
if( displayed >= currentPage*settings.pageRows ) {
return false;
};
};
});
};
function checkButtons() {
buttonLess.toggleClass('element_invisible', currentPage<=1);
buttonMore.toggleClass('element_invisible', filterFree ? currentPage>=maxFreePages : currentPage>=maxPages);
};
function showMore() {
currentPage++;
displayRows();
checkButtons();
};
function showLess() {
currentPage--;
displayRows();
checkButtons();
};
function changedFree() {
filterFree = buttonFree.is(':checked');
if( filterFree && currentPage>maxFreePages ) {
currentPage=maxFreePages;
};
displayRows();
checkButtons();
};
tableBody.find('.product-data-row').each(function(i, j) {
var thisRow = $(this);
var nextRow = thisRow.next();
var amount = parseFloat(thisRow.find('.amount').text().replace(/£/, ''));
var isFree = thisRow.find('.free').length;
maxFree += isFree;
dataRows.push({
amount: amount,
thisRow: thisRow,
nextRow: nextRow,
isFree: isFree
});
})
dataRows.sort(function(a, b) {
return a.amount - b.amount;
});
maxPages = Math.ceil(dataRows.length/settings.pageRows);
maxFreePages = Math.ceil(maxFree/settings.pageRows);
tableRows = tableBody.find("tr");
buttonMore.on('click', showMore);
buttonLess.on('click', showLess);
buttonFree.on('change', changedFree);
displayRows();
checkButtons();
})
};
$('.sort_paging').sortPaging();
```
**Goals**
* Make filter work with pagination.
* Make filter work simultaneously with "Free Handset" filter.
|
2016/05/08
|
[
"https://Stackoverflow.com/questions/37105299",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4690404/"
] |
Your code is needlessly complicated. Try to break it down into the necessary steps. About your essential problem: do everything in one go (read below to fully understand my approach):
```
function onFilterChange() {
filterProducts();
resetPagination();
showNextPage();
}
```
Also improve your data structure:
If you use HTML as your data source, use attributes on your main objects; that will make it easy to find them. Use multiple tbody tags to group your trs:
```
<tbody freeTv='false' freeHandset='false' cost='200'>
<tr>
<td>content of product 1</td>
</tr>
<tr>
<td>description of product 1</td>
</tr>
</tbody>
<tbody freeTv='true' freeHandset='false' cost='300'>
<tr>
<td>content of product 2</td>
</tr>
<tr>
<td>description of product 2</td>
</tr>
</tbody>
```
I prefer to add classes to my elements instead of removing/adding the whole element. Be aware, though, that this will make a mess if you are planning to use nth-child CSS styling. If you don't need that, it's a great, well-debuggable way to add interaction:
```
function filterProducts() {
$('tbody').addClass('filtered');
// ... some domain-specific magic here ...
$('tbody[freeTv="true"]').removeClass('filtered');
}
```
Now you just need a `.filtered` CSS definition like:
```
.filtered { display: none; }
```
For pagination you can act in a similar fashion. First hide everything (again with CSS `.paged { display: none; }`):
```
function resetPagination() {
$('tbody').addClass('paged');
$('tbody.filtered').removeClass('paged');
}
```
Then show the ones you want (the first 10 paged ones):
```
function showNextPage() {
$('tbody.paged').slice(0, 10).removeClass('paged');
}
```
<https://jsfiddle.net/g9zt0fan/>
|
I solved the problem myself by starting from scratch and using the datatables library. I'm still working on it but the code is a lot simpler to deal with.
The only problem I face now is changing the pagination style.
<https://jsfiddle.net/6k0bshb6/16/>
```
// This function is for displaying data from HTML "data-child-value" tag in the Child Row.
function format(value) {
return '<div>Hidden Value: ' + value + '</div>';
}
// Initialization of dataTable and settings.
$(document).ready(function () {
var dataTable = $('#example').DataTable({
bLengthChange: false,
"pageLength": 5,
"pagingType": "simple",
"order": [[ 7, "asc" ]],
"columnDefs": [
{
"targets": [ 5 ],
"visible": false,
"searchable": true
},
{
"targets": [ 6 ],
"visible": false,
"searchable": true
},
{
"targets": [ 7 ],
"visible": false,
"searchable": true
}
],
// Dropdown filter function for dataTable from hidden column number 5 for filtering gifts.
initComplete: function () {
this.api().columns(5).every(function () {
var column = this;
var select = $('<select><option value="">Show all</option></select>')
.appendTo($("#control-panel").find("div").eq(1))
.on('change', function () {
var val = $.fn.dataTable.util.escapeRegex(
$(this).val());
column.search(val ? '^' + val + '$' : '', true, false)
.draw();
});
column.data().unique().sort().each(function (d, j) {
select.append('<option value="' + d + '">' + d + '</option>')
});
});
}
});
// This function is for handling Child Rows.
$('#example').on('click', 'td.details-control', function () {
var tr = $(this).closest('tr');
var row = dataTable.row(tr);
if (row.child.isShown()) {
// This row is already open - close it
row.child.hide();
tr.removeClass('shown');
} else {
// Open this row
row.child(format(tr.data('child-value'))).show();
tr.addClass('shown');
}
});
// Checkbox filter function below is for filtering hidden column 6 to show Free Handsets only.
$('#checkbox-filter').on('change', function() {
dataTable.draw();
});
$.fn.dataTable.ext.search.push(
function( settings, data, dataIndex ) {
var target = '£0.00';
var position = data[6]; // use data for the position column
if($('#checkbox-filter').is(":checked")) {
if (target === position) {
return true;
}
return false;
}
return true;
}
);
});
```
|
79,378 |
I may not understand how DDoS attacks work at this point, but I'm just wondering if this would work for a game server?
**IF each time a new connection is attempted and the packet or connection type is not similar to that of a normal player, then block the connection?**
I don't see any reason for the server to allow connections that are not needed to play the game. Only 1 port would be open. The only downside I see is that the CPU would be put under a lot of stress with large attacks.
But for the most part, wouldn't this work well?
I may be completely wrong and off-base, so I'm just posting here to find out.
|
2015/01/17
|
[
"https://security.stackexchange.com/questions/79378",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/56920/"
] |
A DDoS attack does not necessarily saturate the network capacity; any resource whose exhaustion leaves the application unable to serve requests can be used to achieve a DoS.
As you say:
>
> The only downside I see is that the CPU would be put under a lot of stress with large attacks.
>
>
>
The CPU is definitely a resource that can be exhausted to achieve a DoS. In the case of a game server this is probably a particularly common scenario given that the game server probably performs complex server side operations.
>
> not similar to that of a normal player
>
>
>
I suspect that determining what is *not* normal is another very challenging and subjective task. I imagine game developers expend considerable effort developing reasonable rate limits for computationally expensive operations to balance DoS prevention with not impacting legitimate game play. However, I doubt they get this balance right 100% of the time, just like developers aren't always able to prevent vulnerabilities and bugs 100% of the time.
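As an illustration of the rate-limit idea mentioned above, here is a minimal token-bucket sketch in JavaScript (the bucket capacity and refill rate are arbitrary assumptions, not values from any real game server):

```javascript
// Minimal token-bucket rate limiter: a client may perform an expensive
// operation only while it still has tokens; tokens refill over time up
// to a fixed capacity, bounding both burst size and sustained rate.
class TokenBucket {
  constructor(capacity, refillPerSecond) {
    this.capacity = capacity;
    this.tokens = capacity;
    this.refillPerSecond = refillPerSecond;
    this.lastRefill = Date.now();
  }

  tryConsume() {
    const now = Date.now();
    const elapsed = (now - this.lastRefill) / 1000;
    this.tokens = Math.min(this.capacity, this.tokens + elapsed * this.refillPerSecond);
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;  // allow the expensive operation
    }
    return false;   // reject: client is over its rate limit
  }
}

// Usage: allow a burst of at most 3 requests, refilling 1 token/second.
const bucket = new TokenBucket(3, 1);
const results = [1, 2, 3, 4].map(() => bucket.tryConsume());
console.log(results); // → [ true, true, true, false ]
```

Tuning `capacity` and `refillPerSecond` per operation is exactly the balancing act the answer describes: too tight and legitimate play is impacted, too loose and the CPU can still be exhausted.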
|
DDoS attacks are not just against server resources; they are against bandwidth too.
The most recent attacks had bandwidth consumption of up to 1 Tbps.
Blocking IPs is not going to help, as the firewall still has to drop a large amount of packets/connections from multiple sources, and within a short amount of time the server runs out of its allocated/available bandwidth.
Gaming servers have to accept traffic on certain ports across the internet to make sure users can access them regardless of their location. Most gaming servers have dedicated servers per region.
Script kiddies do not actually log on to gaming servers; they are simply trying to overwhelm the server/bandwidth so legitimate users won't be able to access them.
|
79,378 |
I may not understand how DDoS attacks work at this point, but I'm just wondering if this would work for a game server?
**IF each time a new connection is attempted and the packet or connection type is not similar to that of a normal player, then block the connection?**
I don't see any reason for the server to allow connections that are not needed to play the game. Only 1 port would be open. The only downside I see is that the CPU would be put under a lot of stress with large attacks.
But for the most part, wouldn't this work well?
I may be completely wrong and off-base, so I'm just posting here to find out.
|
2015/01/17
|
[
"https://security.stackexchange.com/questions/79378",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/56920/"
] |
A DDoS attack does not necessarily saturate the network capacity; any resource whose exhaustion leaves the application unable to serve requests can be used to achieve a DoS.
As you say:
>
> The only downside I see is that the CPU would be put under a lot of stress with large attacks.
>
>
>
The CPU is definitely a resource that can be exhausted to achieve a DoS. In the case of a game server this is probably a particularly common scenario given that the game server probably performs complex server side operations.
>
> not similar to that of a normal player
>
>
>
I suspect that determining what is *not* normal is another very challenging and subjective task. I imagine game developers expend considerable effort developing reasonable rate limits for computationally expensive operations to balance DoS prevention with not impacting legitimate game play. However, I doubt they get this balance right 100% of the time, just like developers aren't always able to prevent vulnerabilities and bugs 100% of the time.
|
If you read Stevens, you'll realise the network socket is defined in part by the source socket port and the receiver socket. So having one open port is not a panacea for the DDoS. At this point, what you're trying to balance is the rate at which you can identify and remove resource hogs vs the rate of incoming requests.
It is the classification of the incoming requests that is the tricky part.
|
79,378 |
I may not understand how DDoS attacks work at this point, but I'm just wondering if this would work for a game server?
**IF each time a new connection is attempted and the packet or connection type is not similar to that of a normal player, then block the connection?**
I don't see any reason for the server to allow connections that are not needed to play the game. Only 1 port would be open. The only downside I see is that the CPU would be put under a lot of stress with large attacks.
But for the most part, wouldn't this work well?
I may be completely wrong and off-base, so I'm just posting here to find out.
|
2015/01/17
|
[
"https://security.stackexchange.com/questions/79378",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/56920/"
] |
A DDoS attack does not necessarily saturate the network capacity; any resource whose exhaustion leaves the application unable to serve requests can be used to achieve a DoS.
As you say:
>
> The only downside I see is that the CPU would be put under a lot of stress with large attacks.
>
>
>
The CPU is definitely a resource that can be exhausted to achieve a DoS. In the case of a game server this is probably a particularly common scenario given that the game server probably performs complex server side operations.
>
> not similar to that of a normal player
>
>
>
I suspect that determining what is *not* normal is another very challenging and subjective task. I imagine game developers expend considerable effort developing reasonable rate limits for computationally expensive operations to balance DoS prevention with not impacting legitimate game play. However, I doubt they get this balance right 100% of the time, just like developers aren't always able to prevent vulnerabilities and bugs 100% of the time.
|
TL;DR
Powerful hardware is probably the only defense for full-scale DDoS attacks.
Blocking users' IPs can help, but it can also backfire, since actual players might be used in a DDoS against you without knowing it.
Keep server connection handling fast and lightweight, to avoid adding extra ways they can attack.
First of all, the server must not allow a connection to do anything until it has been authenticated and shown to be a valid user. There is no reason the server should do anything for non-valid connections. Your players must have accounts on your server, and they must log in with a password. The server should not do anything with a connection until they can log in successfully. Any expensive handling should only happen after the login has been completed. You could take this one step further, and block the IP address of anybody that makes too many bad login attempts, but this probably won't help much for the reasons below...
Trying to block connections in software will not stop a true DDoS attack. At best it would work in a specific scenario where you have angered a forum of thousands of members, who decided to retaliate by spending all day trying to log in to your server. Plausible, but not an efficient way to attack. Blocking them would reduce the load on your game server if you have a somewhat expensive authentication scheme.
A true DDoS, however, ignores your login protocol. It operates at the lower network layers, before your application ever knows a packet has been received. These types of attacks cannot be mitigated other than by hardware that is powerful enough to withstand more abuse than your attackers can deal out. Special hardware firewalls might help here. The firewall itself can block connections based on its own logic, but it still has limits. A more expensive firewall might have higher limits, but still probably can't withstand 10 million attackers. Now if your game server is a server cluster in dozens of locations, each with its own expensive router and firewall... it stands a much better chance overall. That might be more cost than any game server is worth, but even modestly priced routers have some traffic filtering.
Blocking connections in your code when your firewall can't might prevent your code from suffering a secondary hit on the processor and memory, but neither can help with network saturation. (You can close your front door to stop people from trampling all over your house, but that does not stop them from crowding your driveway. You could add a gate, but they can swarm the gate. You could build a bigger gate, but enough of them can swarm that too.)
Whether in software or through a firewall, blocking users by IP does have a downside. Imagine if a troll who hates your game decided to create a webpage with a script that makes login attempts to your game. If they upload this page to one of your game's forums, any legitimate players who pass by that page will unknowingly be making login attempts. Not only does this cause problems for your server handling these requests... but if your server blocks them, then you are potentially blocking legitimate players!
So, in closing, make sure you have a proper and efficient login protocol, as good of a router/firewall as you can afford, and that you are very careful about how you block IP addresses, if at all.
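The "block after too many bad login attempts" idea from the answer above can be sketched as a simple per-IP failure counter (the threshold, the IP addresses, and the in-memory data structures are illustrative assumptions; a real server would also expire old entries):

```javascript
// Track failed login attempts per IP and block an IP once it crosses
// a threshold. Kept deliberately minimal: no expiry, no persistence.
const MAX_FAILURES = 3; // assumed threshold
const failures = new Map();
const blocked = new Set();

function recordFailedLogin(ip) {
  const count = (failures.get(ip) || 0) + 1;
  failures.set(ip, count);
  if (count >= MAX_FAILURES) blocked.add(ip);
}

function isBlocked(ip) {
  return blocked.has(ip);
}

// Usage: three bad attempts from the same IP trigger a block.
recordFailedLogin('203.0.113.7');
recordFailedLogin('203.0.113.7');
console.log(isBlocked('203.0.113.7')); // → false
recordFailedLogin('203.0.113.7');
console.log(isBlocked('203.0.113.7')); // → true
```

As the answer warns, this mitigates only application-level abuse; it does nothing against attacks that saturate the network below your application, and blocking by IP can catch legitimate players.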
|
79,378 |
I may not understand how DDoS attacks work at this point, but I'm just wondering if this would work for a game server?
**IF each time a new connection is attempted and the packet or connection type is not similar to that of a normal player, then block the connection?**
I don't see any reason for the server to allow connections that are not needed to play the game. Only 1 port would be open. The only downside I see is that the CPU would be put under a lot of stress with large attacks.
But for the most part, wouldn't this work well?
I may be completely wrong and off-base, so I'm just posting here to find out.
|
2015/01/17
|
[
"https://security.stackexchange.com/questions/79378",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/56920/"
] |
DDoS attacks are not just against server resources; they are against bandwidth too.
The most recent attacks had bandwidth consumption of up to 1 Tbps.
Blocking IPs is not going to help, as the firewall still has to drop a large amount of packets/connections from multiple sources, and within a short amount of time the server runs out of its allocated/available bandwidth.
Gaming servers have to accept traffic on certain ports across the internet to make sure users can access them regardless of their location. Most gaming servers have dedicated servers per region.
Script kiddies do not actually log on to gaming servers; they are simply trying to overwhelm the server/bandwidth so legitimate users won't be able to access them.
|
If you read Stevens, you'll realise the network socket is defined in part by the source socket port and the receiver socket. So having one open port is not a panacea for the DDoS. At this point, what you're trying to balance is the rate at which you can identify and remove resource hogs vs the rate of incoming requests.
It is the classification of the incoming requests that is the tricky part.
|
79,378 |
I may not understand how DDoS attacks work at this point, but I'm just wondering if this would work for a game server?
**IF each time a new connection is attempted and the packet or connection type is not similar to that of a normal player, then block the connection?**
I don't see any reason for the server to allow connections that are not needed to play the game. Only 1 port would be open. The only downside I see is that the CPU would be put under a lot of stress with large attacks.
But for the most part, wouldn't this work well?
I may be completely wrong and off-base, so I'm just posting here to find out.
|
2015/01/17
|
[
"https://security.stackexchange.com/questions/79378",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/56920/"
] |
TL;DR
Powerful hardware is probably the only defense for full-scale DDoS attacks.
Blocking users' IPs can help, but it can also backfire, since actual players might be used in a DDoS against you without knowing it.
Keep server connection handling fast and lightweight, to avoid adding extra ways they can attack.
First of all, the server must not allow a connection to do anything until it has been authenticated and shown to be a valid user. There is no reason the server should do anything for non-valid connections. Your players must have accounts on your server, and they must log in with a password. The server should not do anything with a connection until they can log in successfully. Any expensive handling should only happen after the login has been completed. You could take this one step further, and block the IP address of anybody that makes too many bad login attempts, but this probably won't help much for the reasons below...
Trying to block connections in software will not stop a true DDoS attack. At best it would work in a specific scenario where you have angered a forum of thousands of members, who decided to retaliate by spending all day trying to log in to your server. Plausible, but not an efficient way to attack. Blocking them would reduce the load on your game server if you have a somewhat expensive authentication scheme.
A true DDoS, however, ignores your login protocol. It operates at the lower network layers, before your application ever knows a packet has been received. These types of attacks cannot be mitigated other than by hardware that is powerful enough to withstand more abuse than your attackers can deal out. Special hardware firewalls might help here. The firewall itself can block connections based on its own logic, but it still has limits. A more expensive firewall might have higher limits, but still probably can't withstand 10 million attackers. Now if your game server is a server cluster in dozens of locations, each with its own expensive router and firewall... it stands a much better chance overall. That might be more cost than any game server is worth, but even modestly priced routers have some traffic filtering.
Blocking connections in your code when your firewall can't might prevent your code from suffering a secondary hit on the processor and memory, but neither can help with network saturation. (You can close your front door to stop people from trampling all over your house, but that does not stop them from crowding your driveway. You could add a gate, but they can swarm the gate. You could build a bigger gate, but enough of them can swarm that too.)
Whether in software or through a firewall, blocking users by IP does have a downside. Imagine if a troll who hates your game decided to create a webpage with a script that makes login attempts to your game. If they upload this page to one of your game's forums, any legitimate players who pass by that page will unknowingly be making login attempts. Not only does this cause problems for your server handling these requests... but if your server blocks them, then you are potentially blocking legitimate players!
So, in closing, make sure you have a proper and efficient login protocol, as good of a router/firewall as you can afford, and that you are very careful about how you block IP addresses, if at all.
|
If you read Stevens, you'll realise the network socket is defined in part by the source socket port and the receiver socket. So having one open port is not a panacea for the DDoS. At this point, what you're trying to balance is the rate at which you can identify and remove resource hogs vs the rate of incoming requests.
It is the classification of the incoming requests that is the tricky part.
|
7,347,560 |
I just saw this in a project I downloaded from Code Project:
```
base.DialogResult = this.Result != null;
```
I don't consider myself new to C# but this one is new to me. Can anyone tell me what's going on with this statement?
**Edit** Great answers, thanks. I've just never used that before.
|
2011/09/08
|
[
"https://Stackoverflow.com/questions/7347560",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/698699/"
] |
If you add parens it's easier to read (and understand). The logical comparison operator `!=` takes precedence over the assignment operator `=`:
```
base.DialogResult = (this.Result != null);
```
The same statement, even more verbose:
```
if (this.Result != null)
base.DialogResult = true;
else
base.DialogResult = false;
```
|
`this.Result != null` evaluates to a boolean, `true` or `false`.
The result of the evaluation is set in the `DialogResult` member of the base class.
Not strange at all, it's just an assignment.
|
7,347,560 |
I just saw this in a project I downloaded from Code Project:
```
base.DialogResult = this.Result != null;
```
I don't consider myself new to C# but this one is new to me. Can anyone tell me what's going on with this statement?
**Edit** Great answers, thanks. I've just never used that before.
|
2011/09/08
|
[
"https://Stackoverflow.com/questions/7347560",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/698699/"
] |
If you add parens it's easier to read (and understand). The logical comparison operator `!=` takes precedence over the assignment operator `=`:
```
base.DialogResult = (this.Result != null);
```
The same statement, even more verbose:
```
if (this.Result != null)
base.DialogResult = true;
else
base.DialogResult = false;
```
|
The `!=` (not equal) operator has precedence over the `=` (assignment) operator.
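Since the C# snippet is not runnable here, an analogous JavaScript sketch demonstrates the same precedence behavior (the variable names are illustrative, not from the original project):

```javascript
// The comparison (result !== null) is evaluated first because the
// inequality operator binds tighter than assignment; the boolean
// outcome is then assigned, exactly as in the C# one-liner above.
let result = { value: 42 };        // pretend dialog result object
const dialogResult = result !== null;
console.log(dialogResult);         // → true

result = null;
const dialogResultAfter = result !== null;
console.log(dialogResultAfter);    // → false
```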
|
7,347,560 |
I just saw this in a project I downloaded from Code Project:
```
base.DialogResult = this.Result != null;
```
I don't consider myself new to C# but this one is new to me. Can anyone tell me what's going on with this statement?
**Edit** Great answers, thanks. I've just never used that before.
|
2011/09/08
|
[
"https://Stackoverflow.com/questions/7347560",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/698699/"
] |
If you add parens it's easier to read (and understand). The logical comparison operator `!=` takes precedence over the assignment operator `=`:
```
base.DialogResult = (this.Result != null);
```
The same statement, even more verbose:
```
if (this.Result != null)
base.DialogResult = true;
else
base.DialogResult = false;
```
|
That's simple; basically it assigns the result of the expression
```
this.Result != null
```
to
```
base.DialogResult
```
The expression uses the inequality operator, so it returns either true or false, depending on whether `this.Result` is null or not.
|
7,347,560 |
I just saw this in a project I downloaded from Code Project:
```
base.DialogResult = this.Result != null;
```
I don't consider myself new to C# but this one is new to me. Can anyone tell me what's going on with this statement?
**Edit** Great answers, thanks. I've just never used that before.
|
2011/09/08
|
[
"https://Stackoverflow.com/questions/7347560",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/698699/"
] |
If you add parens it's easier to read (and understand). The logical comparison operator `!=` takes precedence over the assignment operator `=`:
```
base.DialogResult = (this.Result != null);
```
The same statement, even more verbose:
```
if (this.Result != null)
base.DialogResult = true;
else
base.DialogResult = false;
```
|
That means:
```
bool g = (this.Result != null);
this.DialogResult = g;
```
|
7,347,560 |
I just saw this in a project I downloaded from Code Project:
```
base.DialogResult = this.Result != null;
```
I don't consider myself new to C# but this one is new to me. Can anyone tell me what's going on with this statement?
**Edit** Great answers, thanks. I've just never used that before.
|
2011/09/08
|
[
"https://Stackoverflow.com/questions/7347560",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/698699/"
] |
`this.Result != null` evaluates to a boolean, `true` or `false`.
The result of the evaluation is set in the `DialogResult` member of the base class.
Not strange at all, it's just an assignment.
|
That means:
```
bool g = (this.Result != null);
this.DialogResult = g;
```
|
7,347,560 |
I just saw this in a project I downloaded from Code Project:
```
base.DialogResult = this.Result != null;
```
I don't consider myself new to C# but this one is new to me. Can anyone tell me what's going on with this statement?
**Edit** Great answers, thanks. I've just never used that before.
|
2011/09/08
|
[
"https://Stackoverflow.com/questions/7347560",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/698699/"
] |
The `!=` (not equal) operator has precedence over the `=` (assignment) operator.
|
That means:
```
bool g = (this.Result != null);
this.DialogResult = g;
```
|
7,347,560 |
I just saw this in a project I downloaded from Code Project:
```
base.DialogResult = this.Result != null;
```
I don't consider myself new to C# but this one is new to me. Can anyone tell me what's going on with this statement?
**Edit** Great answers, thanks. I've just never used that before.
|
2011/09/08
|
[
"https://Stackoverflow.com/questions/7347560",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/698699/"
] |
That's simple; basically it assigns the result of the expression
```
this.Result != null
```
to
```
base.DialogResult
```
The expression uses the inequality operator, so it returns either true or false, depending on whether `this.Result` is null or not.
|
That means:
```
bool g = (this.Result != null);
this.DialogResult = g;
```
|
31,082,000 |
I am developing a very basic calendar with Angular and Node and I haven't found any code on this.
Workflow is the following: create an event, input the recipient's e-mail address, validate the event.
This triggers an e-mail sent to the recipient. The mail should be in the Outlook meeting request format (not an attached object).
This means that when it is received in Outlook, the meeting is automatically added to the calendar.
Is this possible? If yes, is it possible with only JavaScript on the Node side?
|
2015/06/26
|
[
"https://Stackoverflow.com/questions/31082000",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3584541/"
] |
For those still looking for an answer, here's how I managed to get the perfect solution for me.
I used iCalToolkit to create a calendar object.
It's important to make sure all the relevant fields are set up (organizer and attendees with RSVP).
Initially I was using the Postmark API service to send my emails, but this solution only worked by sending an .ics file attachment.
I switched to the Postmark SMTP service where you can embed the iCal data inside the message and for that I used nodemailer.
This is what it looks like :
```
var icalToolkit = require('ical-toolkit');
var postmark = require('postmark');
var client = new postmark.Client('xxxxxxxKeyxxxxxxxxxxxx');
var nodemailer = require('nodemailer');
var smtpTransport = require('nodemailer-smtp-transport');
//Create a iCal object
var builder = icalToolkit.createIcsFileBuilder();
builder.method = meeting.method;
//Add the event data
var icsFileContent = builder.toString();
var smtpOptions = {
host:'smtp.postmarkapp.com',
port: 2525,
secureConnection: true,
auth:{
user:'xxxxxxxKeyxxxxxxxxxxxx',
pass:'xxxxxxxPassxxxxxxxxxxx'
}
};
var transporter = nodemailer.createTransport(smtpTransport(smtpOptions));
var mailOptions = {
from: '[email protected]',
to: meeting.events[0].attendees[i].email,
subject: 'Meeting to attend',
html: "Anything here",
text: "Anything here",
alternatives: [{
contentType: 'text/calendar; charset="utf-8"; method=REQUEST',
content: icsFileContent.toString()
}]
};
//send mail with defined transport object
transporter.sendMail(mailOptions, function(error, info){
if(error){
console.log(error);
}
else{
console.log('Message sent: ' + info.response);
}
});
```
This sends a real meeting request with the Accept, decline and Reject button.
It's really unbelievable how much work you need to go through for such a trivial piece of functionality, and how poorly all of this is documented.
Hope this helps.
|
It should be possible as long as you can use SOAP in Node and also if you can use NTLM authentication for Exchange with Node. I believe there are modules for each.
I found this [blog](https://www.howtoforge.com/talking-soap-with-exchange) very helpful when designing a similar system using PHP
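To make the SOAP suggestion concrete, here is a minimal sketch of building a SOAP envelope in Node (the body content and the endpoint URL are placeholders; the actual EWS operation XML and the NTLM handshake are beyond this sketch and are covered in the linked blog):

```javascript
// Build a generic SOAP 1.1 envelope around an arbitrary XML body.
// The real EWS operation (e.g. a CreateItem request) would replace
// the placeholder comment below.
function buildSoapEnvelope(body) {
  return [
    '<?xml version="1.0" encoding="utf-8"?>',
    '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">',
    '  <soap:Body>',
    '    ' + body,
    '  </soap:Body>',
    '</soap:Envelope>',
  ].join('\n');
}

const envelope = buildSoapEnvelope('<!-- EWS CreateItem request goes here -->');
console.log(envelope.includes('<soap:Body>')); // → true

// Sending it would look roughly like this (endpoint is a placeholder):
// const https = require('https');
// const req = https.request('https://exchange.example.com/EWS/Exchange.asmx',
//   { method: 'POST', headers: { 'Content-Type': 'text/xml; charset=utf-8' } });
// req.end(envelope);
```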
|
31,082,000 |
I am developing a very basic calendar with Angular and Node and I haven't found any code on this.
Workflow is the following: create an event, input the recipient's e-mail address, validate the event.
This triggers an e-mail sent to the recipient. The mail should be in the Outlook meeting request format (not an attached object).
This means that when it is received in Outlook, the meeting is automatically added to the calendar.
Is this possible? If yes, is it possible with only JavaScript on the Node side?
|
2015/06/26
|
[
"https://Stackoverflow.com/questions/31082000",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3584541/"
] |
If you do not want to use the SMTP server approach from the earlier accepted solution, an Exchange-focused solution is available. What's wrong with the currently accepted answer? It does not create a meeting in the sender's calendar, and you do not have ownership of the meeting item for further modification by the sender in Outlook/OWA.
Here is a code snippet in JavaScript using the npm package `ews-javascript-api`:
```js
var ews = require("ews-javascript-api");
var credentials = require("../credentials");
ews.EwsLogging.DebugLogEnabled = false;
var exch = new ews.ExchangeService(ews.ExchangeVersion.Exchange2013);
exch.Credentials = new ews.ExchangeCredentials(credentials.userName, credentials.password);
exch.Url = new ews.Uri("https://outlook.office365.com/Ews/Exchange.asmx");
var appointment = new ews.Appointment(exch);
appointment.Subject = "Dentist Appointment";
appointment.Body = new ews.TextBody("The appointment is with Dr. Smith.");
appointment.Start = new ews.DateTime("20170502T130000");
appointment.End = appointment.Start.Add(1, "h");
appointment.Location = "Conf Room";
appointment.RequiredAttendees.Add("[email protected]");
appointment.RequiredAttendees.Add("[email protected]");
appointment.OptionalAttendees.Add("[email protected]");
appointment.Save(ews.SendInvitationsMode.SendToAllAndSaveCopy).then(function () {
console.log("done - check email");
}, function (error) {
console.log(error);
});
```
|
It should be possible as long as you can use SOAP in Node and also if you can use NTLM authentication for Exchange with Node. I believe there are modules for each.
I found this [blog](https://www.howtoforge.com/talking-soap-with-exchange) very helpful when designing a similar system using PHP
|
31,082,000 |
I am developing a very basic calendar with Angular and Node and I haven't found any code on this.
Workflow is the following: create an event, input the recipient's e-mail address, validate the event.
This triggers an e-mail sent to the recipient. The mail should be in the Outlook meeting request format (not an attached object).
This means that when it is received in Outlook, the meeting is automatically added to the calendar.
Is this possible? If yes, is it possible with only JavaScript on the Node side?
|
2015/06/26
|
[
"https://Stackoverflow.com/questions/31082000",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3584541/"
] |
Instead of using 'ical-generator', I used 'ical-toolkit' to build a calendar invite event.
Using this, the invite gets embedded directly in the email instead of being an attached .ics file object.
Here is a sample code:
```
const icalToolkit = require("ical-toolkit");
const { v4: uuidv4 } = require("uuid"); // npm "uuid" package; uuidv4() is used below for the event uid
//Create a builder
var builder = icalToolkit.createIcsFileBuilder();
builder.method = "REQUEST"; // The method of the request that you want, could be REQUEST, PUBLISH, etc
//Add events
builder.events.push({
start: new Date(2020, 9, 28, 10, 30), // month is 0-based (9 = October); a leading zero (09) is a syntax error in strict mode
end: new Date(2020, 9, 28, 12, 30),
timestamp: new Date(),
summary: "My Event",
uid: uuidv4(), // a random UUID
categories: [{ name: "MEETING" }],
attendees: [
{
rsvp: true,
name: "Akarsh ****",
email: "Akarsh **** <akarsh.***@abc.com>"
},
{
rsvp: true,
name: "**** RANA",
email: "**** RANA <****[email protected]>"
}
],
organizer: {
name: "A****a N****i",
email: "A****a N****i <a****a.r.n****[email protected]>"
}
});
//Try to build
var icsFileContent = builder.toString();
//Check if there was an error (only required if you configured the builder to return errors; otherwise an error will be thrown)
if (icsFileContent instanceof Error) {
console.log("Returned Error, you can also configure to throw errors!");
//handle error
}
var mailOptions = { // Set the values you want. In the alternative section, set the calender file
from: obj.from,
to: obj.to,
cc: obj.cc,
subject: result.email.subject,
html: result.email.html,
text: result.email.text,
alternatives: [
{
contentType: 'text/calendar; charset="utf-8"; method=REQUEST',
content: icsFileContent.toString()
}
]
}
//send mail with defined transport object
transporter.sendMail(mailOptions, function(error, info){
if(error){
console.log(error);
}
else{
console.log('Message sent: ' + info.response);
}
});
```
|
It should be possible as long as you can use SOAP in Node and also if you can use NTLM authentication for Exchange with Node. I believe there are modules for each.
I found this [blog](https://www.howtoforge.com/talking-soap-with-exchange) very helpful when designing a similar system using PHP
|
31,082,000 |
I am developing a very basic calendar with Angular and Node and I haven't found any code on this.
The workflow is the following: create an event, input the recipient's e-mail address, validate the event.
This triggers an e-mail sent to the recipient. The mail should be in the outlook meeting request format (not an attached object).
This means that when received in outlook the meeting is automatically added in the calendar.
Is this possible? If yes is it possible with only javascript on Node side?
|
2015/06/26
|
[
"https://Stackoverflow.com/questions/31082000",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3584541/"
] |
For those still looking for an answer, here's how I managed to get the perfect solution for me.
I used iCalToolkit to create a calendar object.
It's important to make sure all the relevant fields are set up (organizer and attendees with RSVP).
Initially I was using the Postmark API service to send my emails, but this solution only worked by sending an .ics file attachment.
I switched to the Postmark SMTP service where you can embed the iCal data inside the message and for that I used nodemailer.
This is what it looks like :
```
var icalToolkit = require('ical-toolkit');
var postmark = require('postmark');
var client = new postmark.Client('xxxxxxxKeyxxxxxxxxxxxx');
var nodemailer = require('nodemailer');
var smtpTransport = require('nodemailer-smtp-transport');
//Create an iCal object
var builder = icalToolkit.createIcsFileBuilder();
builder.method = meeting.method;
//Add the event data
var icsFileContent = builder.toString();
var smtpOptions = {
host:'smtp.postmarkapp.com',
port: 2525,
secureConnection: true,
auth:{
user:'xxxxxxxKeyxxxxxxxxxxxx',
pass:'xxxxxxxPassxxxxxxxxxxx'
}
};
var transporter = nodemailer.createTransport(smtpTransport(smtpOptions));
var mailOptions = {
from: '[email protected]',
to: meeting.events[0].attendees[i].email,
subject: 'Meeting to attend',
html: "Anything here",
text: "Anything here",
alternatives: [{
contentType: 'text/calendar; charset="utf-8"; method=REQUEST',
content: icsFileContent.toString()
}]
};
//send mail with defined transport object
transporter.sendMail(mailOptions, function(error, info){
if(error){
console.log(error);
}
else{
console.log('Message sent: ' + info.response);
}
});
```
This sends a real meeting request with the Accept, Tentative and Decline buttons.
It's really unbelievable the amount of work you need to go through for such a trivial piece of functionality, and how poorly all of this is documented.
Hope this helps.
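A side note if you ever assemble the iCal body by hand instead of via iCalToolkit: RFC 5545 requires content lines longer than 75 octets to be folded, with each continuation line starting with a single space. A simplified sketch (it counts characters rather than octets):

```js
// RFC 5545 line folding: split long content lines, prefixing each
// continuation line with one space. Simplified: chars, not octets.
function foldLine(line, width = 75) {
  if (line.length <= width) return line;
  const parts = [line.slice(0, width)];
  let rest = line.slice(width);
  // continuation chunks are width - 1 chars, leaving room for the leading space
  while (rest.length > 0) {
    parts.push(" " + rest.slice(0, width - 1));
    rest = rest.slice(width - 1);
  }
  return parts.join("\r\n");
}
```

Libraries handle this for you; it mostly matters if a hand-built `DESCRIPTION` or `ATTENDEE` line gets long enough that strict clients reject it.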
|
If you do not want to use the SMTP server approach from the earlier accepted solution, there is an Exchange-focused solution available. What's wrong with the currently accepted answer? It does not create a meeting in the sender's calendar, and you do not have ownership of the meeting item for further modification by the sender in Outlook/OWA.
Here is a code snippet in JavaScript using the npm package `ews-javascript-api`:
```js
var ews = require("ews-javascript-api");
var credentials = require("../credentials");
ews.EwsLogging.DebugLogEnabled = false;
var exch = new ews.ExchangeService(ews.ExchangeVersion.Exchange2013);
exch.Credentials = new ews.ExchangeCredentials(credentials.userName, credentials.password);
exch.Url = new ews.Uri("https://outlook.office365.com/Ews/Exchange.asmx");
var appointment = new ews.Appointment(exch);
appointment.Subject = "Dentist Appointment";
appointment.Body = new ews.TextBody("The appointment is with Dr. Smith.");
appointment.Start = new ews.DateTime("20170502T130000");
appointment.End = appointment.Start.Add(1, "h");
appointment.Location = "Conf Room";
appointment.RequiredAttendees.Add("[email protected]");
appointment.RequiredAttendees.Add("[email protected]");
appointment.OptionalAttendees.Add("[email protected]");
appointment.Save(ews.SendInvitationsMode.SendToAllAndSaveCopy).then(function () {
console.log("done - check email");
}, function (error) {
console.log(error);
});
```
|
31,082,000 |
I am developing a very basic calendar with Angular and Node and I haven't found any code on this.
The workflow is the following: create an event, input the recipient's e-mail address, validate the event.
This triggers an e-mail sent to the recipient. The mail should be in the outlook meeting request format (not an attached object).
This means that when received in outlook the meeting is automatically added in the calendar.
Is this possible? If yes is it possible with only javascript on Node side?
|
2015/06/26
|
[
"https://Stackoverflow.com/questions/31082000",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3584541/"
] |
For those still looking for an answer, here's how I managed to get the perfect solution for me.
I used iCalToolkit to create a calendar object.
It's important to make sure all the relevant fields are set up (organizer and attendees with RSVP).
Initially I was using the Postmark API service to send my emails, but this solution only worked by sending an .ics file attachment.
I switched to the Postmark SMTP service where you can embed the iCal data inside the message and for that I used nodemailer.
This is what it looks like :
```
var icalToolkit = require('ical-toolkit');
var postmark = require('postmark');
var client = new postmark.Client('xxxxxxxKeyxxxxxxxxxxxx');
var nodemailer = require('nodemailer');
var smtpTransport = require('nodemailer-smtp-transport');
//Create an iCal object
var builder = icalToolkit.createIcsFileBuilder();
builder.method = meeting.method;
//Add the event data
var icsFileContent = builder.toString();
var smtpOptions = {
host:'smtp.postmarkapp.com',
port: 2525,
secureConnection: true,
auth:{
user:'xxxxxxxKeyxxxxxxxxxxxx',
pass:'xxxxxxxPassxxxxxxxxxxx'
}
};
var transporter = nodemailer.createTransport(smtpTransport(smtpOptions));
var mailOptions = {
from: '[email protected]',
to: meeting.events[0].attendees[i].email,
subject: 'Meeting to attend',
html: "Anything here",
text: "Anything here",
alternatives: [{
contentType: 'text/calendar; charset="utf-8"; method=REQUEST',
content: icsFileContent.toString()
}]
};
//send mail with defined transport object
transporter.sendMail(mailOptions, function(error, info){
if(error){
console.log(error);
}
else{
console.log('Message sent: ' + info.response);
}
});
```
This sends a real meeting request with the Accept, Tentative and Decline buttons.
It's really unbelievable the amount of work you need to go through for such a trivial piece of functionality, and how poorly all of this is documented.
Hope this helps.
|
Instead of using 'ical-generator', I used 'ical-toolkit' to build a calendar invite event.
With this, the invite gets embedded directly in the email instead of being attached as an .ics file object.
Here is some sample code:
```
const icalToolkit = require("ical-toolkit");
//Create a builder
var builder = icalToolkit.createIcsFileBuilder();
builder.method = "REQUEST"; // The method of the request that you want, could be REQUEST, PUBLISH, etc
//Add events
builder.events.push({
  start: new Date(2020, 9, 28, 10, 30), // months are 0-indexed, so 9 = October
  end: new Date(2020, 9, 28, 12, 30),
timestamp: new Date(),
summary: "My Event",
uid: uuidv4(), // a random UUID
categories: [{ name: "MEETING" }],
attendees: [
{
rsvp: true,
name: "Akarsh ****",
email: "Akarsh **** <akarsh.***@abc.com>"
},
{
rsvp: true,
name: "**** RANA",
email: "**** RANA <****[email protected]>"
}
],
organizer: {
name: "A****a N****i",
email: "A****a N****i <a****a.r.n****[email protected]>"
}
});
//Try to build
var icsFileContent = builder.toString();
//Check if there was an error (only required if you configured the builder to return errors; otherwise the error will be thrown)
if (icsFileContent instanceof Error) {
console.log("Returned Error, you can also configure to throw errors!");
//handle error
}
var mailOptions = { // Set the values you want. In the alternatives section, set the calendar file
from: obj.from,
to: obj.to,
cc: obj.cc,
subject: result.email.subject,
html: result.email.html,
text: result.email.text,
alternatives: [
{
contentType: 'text/calendar; charset="utf-8"; method=REQUEST',
content: icsFileContent.toString()
}
]
}
//send mail with defined transport object
transporter.sendMail(mailOptions, function(error, info){
if(error){
console.log(error);
}
else{
console.log('Message sent: ' + info.response);
}
});
```
|
31,082,000 |
I am developing a very basic calendar with Angular and Node and I haven't found any code on this.
The workflow is the following: create an event, input the recipient's e-mail address, validate the event.
This triggers an e-mail sent to the recipient. The mail should be in the outlook meeting request format (not an attached object).
This means that when received in outlook the meeting is automatically added in the calendar.
Is this possible? If yes is it possible with only javascript on Node side?
|
2015/06/26
|
[
"https://Stackoverflow.com/questions/31082000",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3584541/"
] |
For those still looking for an answer, here's how I managed to get the perfect solution for me.
I used iCalToolkit to create a calendar object.
It's important to make sure all the relevant fields are set up (organizer and attendees with RSVP).
Initially I was using the Postmark API service to send my emails, but this solution only worked by sending an .ics file attachment.
I switched to the Postmark SMTP service where you can embed the iCal data inside the message and for that I used nodemailer.
This is what it looks like :
```
var icalToolkit = require('ical-toolkit');
var postmark = require('postmark');
var client = new postmark.Client('xxxxxxxKeyxxxxxxxxxxxx');
var nodemailer = require('nodemailer');
var smtpTransport = require('nodemailer-smtp-transport');
//Create an iCal object
var builder = icalToolkit.createIcsFileBuilder();
builder.method = meeting.method;
//Add the event data
var icsFileContent = builder.toString();
var smtpOptions = {
host:'smtp.postmarkapp.com',
port: 2525,
secureConnection: true,
auth:{
user:'xxxxxxxKeyxxxxxxxxxxxx',
pass:'xxxxxxxPassxxxxxxxxxxx'
}
};
var transporter = nodemailer.createTransport(smtpTransport(smtpOptions));
var mailOptions = {
from: '[email protected]',
to: meeting.events[0].attendees[i].email,
subject: 'Meeting to attend',
html: "Anything here",
text: "Anything here",
alternatives: [{
contentType: 'text/calendar; charset="utf-8"; method=REQUEST',
content: icsFileContent.toString()
}]
};
//send mail with defined transport object
transporter.sendMail(mailOptions, function(error, info){
if(error){
console.log(error);
}
else{
console.log('Message sent: ' + info.response);
}
});
```
This sends a real meeting request with the Accept, Tentative and Decline buttons.
It's really unbelievable the amount of work you need to go through for such a trivial piece of functionality, and how poorly all of this is documented.
Hope this helps.
|
Please check the following sample:
```js
const options = {
authProvider,
};
const client = Client.init(options);
const onlineMeeting = {
startDateTime: '2019-07-12T14:30:34.2444915-07:00',
endDateTime: '2019-07-12T15:00:34.2464912-07:00',
subject: 'User Token Meeting'
};
await client.api('/me/onlineMeetings')
.post(onlineMeeting);
```
More Information: <https://learn.microsoft.com/en-us/graph/api/application-post-onlinemeetings?view=graph-rest-1.0&tabs=http>
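Note that `/me/onlineMeetings` creates a Teams online meeting. If what you want is a regular calendar invitation with attendees, the Graph `/me/events` endpoint takes a JSON event payload instead; a sketch of building one (the helper and all field values are illustrative, and posting it still requires an authenticated Graph client):

```js
// Build a Microsoft Graph event payload suitable for POST /me/events.
// All values here are illustrative.
function buildGraphEvent({ subject, startIso, endIso, timeZone, attendeeEmails }) {
  return {
    subject,
    start: { dateTime: startIso, timeZone },
    end: { dateTime: endIso, timeZone },
    attendees: attendeeEmails.map((address) => ({
      emailAddress: { address },
      type: "required", // recipients get the usual Accept/Decline buttons
    })),
  };
}
```

With the same `client` as above, the call would then be something like `await client.api('/me/events').post(buildGraphEvent({...}))`.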
|
31,082,000 |
I am developing a very basic calendar with Angular and Node and I haven't found any code on this.
The workflow is the following: create an event, input the recipient's e-mail address, validate the event.
This triggers an e-mail sent to the recipient. The mail should be in the outlook meeting request format (not an attached object).
This means that when received in outlook the meeting is automatically added in the calendar.
Is this possible? If yes is it possible with only javascript on Node side?
|
2015/06/26
|
[
"https://Stackoverflow.com/questions/31082000",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3584541/"
] |
If you do not want to use the SMTP server approach from the earlier accepted solution, there is an Exchange-focused solution available. What's wrong with the currently accepted answer? It does not create a meeting in the sender's calendar, and you do not have ownership of the meeting item for further modification by the sender in Outlook/OWA.
Here is a code snippet in JavaScript using the npm package `ews-javascript-api`:
```js
var ews = require("ews-javascript-api");
var credentials = require("../credentials");
ews.EwsLogging.DebugLogEnabled = false;
var exch = new ews.ExchangeService(ews.ExchangeVersion.Exchange2013);
exch.Credentials = new ews.ExchangeCredentials(credentials.userName, credentials.password);
exch.Url = new ews.Uri("https://outlook.office365.com/Ews/Exchange.asmx");
var appointment = new ews.Appointment(exch);
appointment.Subject = "Dentist Appointment";
appointment.Body = new ews.TextBody("The appointment is with Dr. Smith.");
appointment.Start = new ews.DateTime("20170502T130000");
appointment.End = appointment.Start.Add(1, "h");
appointment.Location = "Conf Room";
appointment.RequiredAttendees.Add("[email protected]");
appointment.RequiredAttendees.Add("[email protected]");
appointment.OptionalAttendees.Add("[email protected]");
appointment.Save(ews.SendInvitationsMode.SendToAllAndSaveCopy).then(function () {
console.log("done - check email");
}, function (error) {
console.log(error);
});
```
|
Instead of using 'ical-generator', I used 'ical-toolkit' to build a calendar invite event.
With this, the invite gets embedded directly in the email instead of being attached as an .ics file object.
Here is some sample code:
```
const icalToolkit = require("ical-toolkit");
//Create a builder
var builder = icalToolkit.createIcsFileBuilder();
builder.method = "REQUEST"; // The method of the request that you want, could be REQUEST, PUBLISH, etc
//Add events
builder.events.push({
  start: new Date(2020, 9, 28, 10, 30), // months are 0-indexed, so 9 = October
  end: new Date(2020, 9, 28, 12, 30),
timestamp: new Date(),
summary: "My Event",
uid: uuidv4(), // a random UUID
categories: [{ name: "MEETING" }],
attendees: [
{
rsvp: true,
name: "Akarsh ****",
email: "Akarsh **** <akarsh.***@abc.com>"
},
{
rsvp: true,
name: "**** RANA",
email: "**** RANA <****[email protected]>"
}
],
organizer: {
name: "A****a N****i",
email: "A****a N****i <a****a.r.n****[email protected]>"
}
});
//Try to build
var icsFileContent = builder.toString();
//Check if there was an error (only required if you configured the builder to return errors; otherwise the error will be thrown)
if (icsFileContent instanceof Error) {
console.log("Returned Error, you can also configure to throw errors!");
//handle error
}
var mailOptions = { // Set the values you want. In the alternatives section, set the calendar file
from: obj.from,
to: obj.to,
cc: obj.cc,
subject: result.email.subject,
html: result.email.html,
text: result.email.text,
alternatives: [
{
contentType: 'text/calendar; charset="utf-8"; method=REQUEST',
content: icsFileContent.toString()
}
]
}
//send mail with defined transport object
transporter.sendMail(mailOptions, function(error, info){
if(error){
console.log(error);
}
else{
console.log('Message sent: ' + info.response);
}
});
```
|
31,082,000 |
I am developing a very basic calendar with Angular and Node and I haven't found any code on this.
The workflow is the following: create an event, input the recipient's e-mail address, validate the event.
This triggers an e-mail sent to the recipient. The mail should be in the outlook meeting request format (not an attached object).
This means that when received in outlook the meeting is automatically added in the calendar.
Is this possible? If yes is it possible with only javascript on Node side?
|
2015/06/26
|
[
"https://Stackoverflow.com/questions/31082000",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3584541/"
] |
If you do not want to use the SMTP server approach from the earlier accepted solution, there is an Exchange-focused solution available. What's wrong with the currently accepted answer? It does not create a meeting in the sender's calendar, and you do not have ownership of the meeting item for further modification by the sender in Outlook/OWA.
Here is a code snippet in JavaScript using the npm package `ews-javascript-api`:
```js
var ews = require("ews-javascript-api");
var credentials = require("../credentials");
ews.EwsLogging.DebugLogEnabled = false;
var exch = new ews.ExchangeService(ews.ExchangeVersion.Exchange2013);
exch.Credentials = new ews.ExchangeCredentials(credentials.userName, credentials.password);
exch.Url = new ews.Uri("https://outlook.office365.com/Ews/Exchange.asmx");
var appointment = new ews.Appointment(exch);
appointment.Subject = "Dentist Appointment";
appointment.Body = new ews.TextBody("The appointment is with Dr. Smith.");
appointment.Start = new ews.DateTime("20170502T130000");
appointment.End = appointment.Start.Add(1, "h");
appointment.Location = "Conf Room";
appointment.RequiredAttendees.Add("[email protected]");
appointment.RequiredAttendees.Add("[email protected]");
appointment.OptionalAttendees.Add("[email protected]");
appointment.Save(ews.SendInvitationsMode.SendToAllAndSaveCopy).then(function () {
console.log("done - check email");
}, function (error) {
console.log(error);
});
```
|
Please check the following sample:
```js
const options = {
authProvider,
};
const client = Client.init(options);
const onlineMeeting = {
startDateTime: '2019-07-12T14:30:34.2444915-07:00',
endDateTime: '2019-07-12T15:00:34.2464912-07:00',
subject: 'User Token Meeting'
};
await client.api('/me/onlineMeetings')
.post(onlineMeeting);
```
More Information: <https://learn.microsoft.com/en-us/graph/api/application-post-onlinemeetings?view=graph-rest-1.0&tabs=http>
|
31,082,000 |
I am developing a very basic calendar with Angular and Node and I haven't found any code on this.
The workflow is the following: create an event, input the recipient's e-mail address, validate the event.
This triggers an e-mail sent to the recipient. The mail should be in the outlook meeting request format (not an attached object).
This means that when received in outlook the meeting is automatically added in the calendar.
Is this possible? If yes is it possible with only javascript on Node side?
|
2015/06/26
|
[
"https://Stackoverflow.com/questions/31082000",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3584541/"
] |
Instead of using 'ical-generator', I used 'ical-toolkit' to build a calendar invite event.
With this, the invite gets embedded directly in the email instead of being attached as an .ics file object.
Here is some sample code:
```
const icalToolkit = require("ical-toolkit");
//Create a builder
var builder = icalToolkit.createIcsFileBuilder();
builder.method = "REQUEST"; // The method of the request that you want, could be REQUEST, PUBLISH, etc
//Add events
builder.events.push({
  start: new Date(2020, 9, 28, 10, 30), // months are 0-indexed, so 9 = October
  end: new Date(2020, 9, 28, 12, 30),
timestamp: new Date(),
summary: "My Event",
uid: uuidv4(), // a random UUID
categories: [{ name: "MEETING" }],
attendees: [
{
rsvp: true,
name: "Akarsh ****",
email: "Akarsh **** <akarsh.***@abc.com>"
},
{
rsvp: true,
name: "**** RANA",
email: "**** RANA <****[email protected]>"
}
],
organizer: {
name: "A****a N****i",
email: "A****a N****i <a****a.r.n****[email protected]>"
}
});
//Try to build
var icsFileContent = builder.toString();
//Check if there was an error (only required if you configured the builder to return errors; otherwise the error will be thrown)
if (icsFileContent instanceof Error) {
console.log("Returned Error, you can also configure to throw errors!");
//handle error
}
var mailOptions = { // Set the values you want. In the alternatives section, set the calendar file
from: obj.from,
to: obj.to,
cc: obj.cc,
subject: result.email.subject,
html: result.email.html,
text: result.email.text,
alternatives: [
{
contentType: 'text/calendar; charset="utf-8"; method=REQUEST',
content: icsFileContent.toString()
}
]
}
//send mail with defined transport object
transporter.sendMail(mailOptions, function(error, info){
if(error){
console.log(error);
}
else{
console.log('Message sent: ' + info.response);
}
});
```
|
Please check the following sample:
```js
const options = {
authProvider,
};
const client = Client.init(options);
const onlineMeeting = {
startDateTime: '2019-07-12T14:30:34.2444915-07:00',
endDateTime: '2019-07-12T15:00:34.2464912-07:00',
subject: 'User Token Meeting'
};
await client.api('/me/onlineMeetings')
.post(onlineMeeting);
```
More Information: <https://learn.microsoft.com/en-us/graph/api/application-post-onlinemeetings?view=graph-rest-1.0&tabs=http>
|
294,462 |
I recently bought a 4K monitor and love the screen real estate. But the standard fonts are becoming a bit too small to read. So I plan on writing a script in Keyboard Maestro to change the default PageZoom in Safari to 125% when I press a button.
I have figured out that I can set the pagezoom with this command in the Terminal:
`defaults write com.apple.Safari DefaultPageZoom "1.25"`
This works because `defaults read com.apple.Safari DefaultPageZoom` reports the set value back. AND the Safari preferences also show the value set.
But the page in Safari doesn't change. However, when I change the PageZoom in the Preferences manually, the page DOES change.
I've tried reloading the page and changing the window size after setting the PageZoom in the Terminal, but nothing works.
What do I need to do to make the `defaults write` setting become active?
I don't want to use the CMD+ and CMD- keys all the time.
In the end I want Keyboard Maestro to trigger this script when I plug in a device that signals I'm using this monitor.
|
2017/08/07
|
[
"https://apple.stackexchange.com/questions/294462",
"https://apple.stackexchange.com",
"https://apple.stackexchange.com/users/130171/"
] |
No, it is not bad to have sound check enabled in iTunes.
Having it enabled does not make the sound quality of songs worse.
Sound Check works simply by dynamically raising or lowering the playback volume, just as you yourself can raise and lower the playback volume in iTunes without affecting sound quality.
iTunes raises/lowers the playback volume according to the audio normalisation ID3 tag within the audio file. These tags are created automatically when you rip a CD with iTunes, and automatically added when you import files lacking these tags into iTunes.
The normalisation tag simply tells what the average volume of the song is. Note that iTunes recognizes albums, so that when playing back an album, it will adjust according to the average track volume of the album as a whole, instead of changing the playback volume for each song individually. Therefore your album listening experience is not harmed.
iTunes automatically ensures that the volume is not raised so high that clipping occurs or compression effects are introduced. Therefore your sound quality is not harmed.
All in all this means that enabling Sound Check is safe and does not change the audio content of your songs, thus not reducing their quality. It simply changes the playback volume slightly so that you get a pleasurable listening experience without having to tweak your playback volume every time you select a new album.
|
I was experiencing distortion on both headphones and via airplay (when playing music from my iPhone). When investigating, I discovered that sound check was on. I turned it off and the distortion went away. The tracks where I noticed this were:
Field of Dreams, opening credits (track 1).
Star Trek V, opening credits (track 1).
Note: Both albums were ripped from CD using iTunes, 256 kbps AAC and played via iTunes Match.
|
294,462 |
I recently bought a 4K monitor and love the screen real estate. But the standard fonts are becoming a bit too small to read. So I plan on writing a script in Keyboard Maestro to change the default PageZoom in Safari to 125% when I press a button.
I have figured out that I can set the pagezoom with this command in the Terminal:
`defaults write com.apple.Safari DefaultPageZoom "1.25"`
This works because `defaults read com.apple.Safari DefaultPageZoom` reports the set value back. AND the Safari preferences also show the value set.
But the page in Safari doesn't change. However, when I change the PageZoom in the Preferences manually, the page DOES change.
I've tried reloading the page and changing the window size after setting the PageZoom in the Terminal, but nothing works.
What do I need to do to make the `defaults write` setting become active?
I don't want to use the CMD+ and CMD- keys all the time.
In the end I want Keyboard Maestro to trigger this script when I plug in a device that signals I'm using this monitor.
|
2017/08/07
|
[
"https://apple.stackexchange.com/questions/294462",
"https://apple.stackexchange.com",
"https://apple.stackexchange.com/users/130171/"
] |
No, it is not bad to have sound check enabled in iTunes.
Having it enabled does not make the sound quality of songs worse.
Sound Check works simply by dynamically raising or lowering the playback volume, just as you yourself can raise and lower the playback volume in iTunes without affecting sound quality.
iTunes raises/lowers the playback volume according to the audio normalisation ID3 tag within the audio file. These tags are created automatically when you rip a CD with iTunes, and automatically added when you import files lacking these tags into iTunes.
The normalisation tag simply tells what the average volume of the song is. Note that iTunes recognizes albums, so that when playing back an album, it will adjust according to the average track volume of the album as a whole, instead of changing the playback volume for each song individually. Therefore your album listening experience is not harmed.
iTunes automatically ensures that the volume is not raised so high that clipping occurs or compression effects are introduced. Therefore your sound quality is not harmed.
All in all this means that enabling Sound Check is safe and does not change the audio content of your songs, thus not reducing their quality. It simply changes the playback volume slightly so that you get a pleasurable listening experience without having to tweak your playback volume every time you select a new album.
|
I hate this option. It was on by default and I was wondering why the audio quality was so poor; after a while I found that option, and since I turned it off I have had a better life!
|
294,462 |
I recently bought a 4K monitor and love the screen real estate. But the standard fonts are becoming a bit too small to read. So I plan on writing a script in Keyboard Maestro to change the default PageZoom in Safari to 125% when I press a button.
I have figured out that I can set the pagezoom with this command in the Terminal:
`defaults write com.apple.Safari DefaultPageZoom "1.25"`
This works because `defaults read com.apple.Safari DefaultPageZoom` reports the set value back. AND the Safari preferences also show the value set.
But page in Safari doesn't change. However, when I change the PageZoom in the Preferences manually the page DOES change.
I've tried reloading the page and changing the window size after setting the PageZoom in the Terminal, but nothing works.
What do I need to do to make the `defaults write` setting become active?
I don't want to use the CMD+ and CMD- keys all the time.
In the end I want Keyboard Maestro to trigger this script when I plug in a device that signals I'm using this monitor.
|
2017/08/07
|
[
"https://apple.stackexchange.com/questions/294462",
"https://apple.stackexchange.com",
"https://apple.stackexchange.com/users/130171/"
] |
I was experiencing distortion on both headphones and via airplay (when playing music from my iPhone). When investigating, I discovered that sound check was on. I turned it off and the distortion went away. The tracks where I noticed this were:
Field of Dreams, opening credits (track 1).
Star Trek V, opening credits (track 1).
Note: Both albums were ripped from CD using iTunes, 256 kbps AAC and played via iTunes Match.
|
I hate this option. It was on by default and I was wondering why the audio quality was so poor; after a while I found that option, and since I turned it off I have had a better life!
|
58,139,849 |
I'm trying to insert a list of users into the Sembast database in Flutter. But this does not work - I always get the following error:
```
Exception has occurred.
_TypeError (type 'List<dynamic>' is not a subtype of type 'List<Map<String, dynamic>>')
```
Adding just one user works for me; I only have a problem adding a list of users.
```
Future insertAll(List<Users> users) async {
print(jsonEncode(users));
await _usersStore.addAll(
await _db, jsonDecode(jsonEncode(users)));
}
```
The print gives me the following: [{"id":"f20ce2fb-d0db-11e9-9e8b-06ba1e206a58", "name":"Max", "lastName":"Mustermann"}]
|
2019/09/27
|
[
"https://Stackoverflow.com/questions/58139849",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9020285/"
] |
I think the issue is just that the result of `jsonDecode` is not of the expected type. You should consider casting it using a function such as:
```dart
/// Properly casts a decoded JSON list of objects
List<Map<String, dynamic>> asMapList(dynamic list) {
return (list as List)?.cast<Map<String, dynamic>>();
}
```
A simple test like this should work (assuming db is an opened database):
```dart
test('jsonDecode', () async {
var store = intMapStoreFactory.store();
var list = [
{'test': 'value'}
];
var toAdd = jsonDecode(jsonEncode(list));
try {
// This fails
await store.addAll(db, toAdd);
fail('should fail');
} catch (e) {
expect(e, isNot(const TypeMatcher<TestFailure>()));
print(e);
}
// This works!
await store.addAll(db, asMapList(toAdd));
});
```
|
What if you use a loop and insert one user at a time?
I know it may not be what you are looking for, but it is a way to keep working until you find a better solution! (Note that Dart's `forEach` does not await async callbacks, so a plain `for` loop is safer here.)
```
Future insertAll(List<Users> users) async {
print(jsonEncode(users));
for (final user in users) {
  // insert one user here, e.g. await _usersStore.add(...)
}
}
```
|
58,139,849 |
I'm trying to insert a list of users into the Sembast database in Flutter. But this does not work - I always get the following error:
```
Exception has occurred.
_TypeError (type 'List<dynamic>' is not a subtype of type 'List<Map<String, dynamic>>')
```
Adding just one user works for me; I only have a problem adding a list of users.
```
Future insertAll(List<Users> users) async {
print(jsonEncode(users));
await _usersStore.addAll(
await _db, jsonDecode(jsonEncode(users)));
}
```
The print gives me the following: [{"id":"f20ce2fb-d0db-11e9-9e8b-06ba1e206a58", "name":"Max", "lastName":"Mustermann"}]
|
2019/09/27
|
[
"https://Stackoverflow.com/questions/58139849",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9020285/"
] |
I think the issue is just that the result of `jsonDecode` is not of the expected type. You should consider casting it using a function such as:
```dart
/// Properly casts a decoded JSON list of objects
List<Map<String, dynamic>> asMapList(dynamic list) {
return (list as List)?.cast<Map<String, dynamic>>();
}
```
A simple test like this should work (assuming db is an opened database):
```dart
test('jsonDecode', () async {
var store = intMapStoreFactory.store();
var list = [
{'test': 'value'}
];
var toAdd = jsonDecode(jsonEncode(list));
try {
// This fails
await store.addAll(db, toAdd);
fail('should fail');
} catch (e) {
expect(e, isNot(const TypeMatcher<TestFailure>()));
print(e);
}
// This works!
await store.addAll(db, asMapList(toAdd));
});
```
|
1 - You could re-type your `User` class; the error occurs because `Users` is typed as a `List<dynamic>` and you are passing a map.
2 - You could add a field in the `User` class: `List<Map<dynamic, dynamic>>`,
or simply use the `.add()` method.
I strongly recommend using `.add()`.
|
51,746,995 |
Is it possible to add some metrics to a Java system that will return the version number as a string for the application that is monitored?
I am aiming for a dashboard where each pod, running a Java application inside a Docker container, in a Kubernetes cluster is monitored and the current version of each Java application is viewed.
If it isn't possible, do you have an idea on how to get that information from the Java application and make it available in a Grafana dashboard?
|
2018/08/08
|
[
"https://Stackoverflow.com/questions/51746995",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2731545/"
] |
In your application, you can make available a Gauge metric that uses labels to export e.g. a version number or commit/build hash and then set the value of the gauge to `1`.
For example, this is how the `redis_exporter` exports information about a redis instance:
```
# HELP redis_instance_info Information about the Redis instance
# TYPE redis_instance_info gauge
redis_instance_info{addr="redis://localhost:6379",os="Linux 4.4.0-62-generic x86_64",redis_build_id="687a2a319020fa42",redis_mode="standalone",redis_version="3.0.6",role="master"} 1
```
You can see the version and a couple of other attributes exported as labels of the metric `redis_instance_info`.
|
Expanding on the idea of @Oliver, I'm adding a sample Java class which exposes the currently used application version, git branch and git commit id to the Prometheus metrics endpoint after the application is started/ready.
This assumes that you have a Spring Boot app with Prometheus metrics enabled as an actuator endpoint.
```java
@Component
@RequiredArgsConstructor
public class PrometheusCustomMetricsService implements ApplicationListener<ApplicationReadyEvent> {
private static final int FIXED_VALUE = 1;
@Value("${info.project.version}")
private String applicationVersion;
private final MeterRegistry meterRegistry;
private final GitProperties gitProperties;
@Override
public void onApplicationEvent(@NonNull ApplicationReadyEvent event) {
registerApplicationInfoGauge();
}
private void registerApplicationInfoGauge() {
Tag versionTag = new ImmutableTag("application_version", applicationVersion);
Tag branchTag = new ImmutableTag("branch", gitProperties.getBranch());
Tag commitIdTag = new ImmutableTag("commit_id", gitProperties.getShortCommitId());
meterRegistry.gauge("application_info",
List.of(versionTag, branchTag, commitIdTag),
new AtomicInteger(FIXED_VALUE));
}
}
```
Your custom metric should show up something like this in the prometheus endpoint:
```
application_info{application_version="1.2.3",branch="SOME-BRANCH-123",commit_id="123456"} NaN
```
I wouldn't worry about the value of the application\_info gauge being `NaN` as we don't need the value and only use it as a way to send over the tags.
|
9,079 |
Am I missing some dependency on the server? cURL is installed, php-xml is installed, etc.
I am trying to export an invoice to PDF and getting this error:
Fatal error: Class 'DOMDocument' not found in /MY-PATH/htdocs/system/expressionengine/third\_party/store/vendor/dompdf/include/dompdf.cls.php on line 481
Hope you can help - Thanks
|
2013/04/06
|
[
"https://expressionengine.stackexchange.com/questions/9079",
"https://expressionengine.stackexchange.com",
"https://expressionengine.stackexchange.com/users/855/"
] |
`DOMDocument` is a built in PHP class. It sounds like the PHP DOM extension is disabled on your server - you will probably need to talk to your host about enabling it (it is enabled by default in PHP and I've never seen a host without it before).
To check this, you can visit Tools > Utilities > PHP Info in the EE control panel, and search for *DOM/XML* which should say *enabled* next to it.
|
Hmmm strange. Have you checked that the file it mentions is actually there?
If it is and it has read access then my guess would be that it's more of a memory issue. PDF generation can require a surprisingly large amount of memory (a min of 256MB) and I've had similar problems in the past. Make sure that you have at least 256MB as your memory limit in Tools > Utilities > PHP Info > memory\_limit. Try increasing this to at least 256MB and seeing if that makes a difference.
What type of environment are you running on btw? Is this shared hosting?
|
9,079 |
Am I missing some dependency on the server? cURL is installed, php-xml is installed, etc.
I am trying to export an invoice to PDF and getting this error:
Fatal error: Class 'DOMDocument' not found in /MY-PATH/htdocs/system/expressionengine/third\_party/store/vendor/dompdf/include/dompdf.cls.php on line 481
Hope you can help - Thanks
|
2013/04/06
|
[
"https://expressionengine.stackexchange.com/questions/9079",
"https://expressionengine.stackexchange.com",
"https://expressionengine.stackexchange.com/users/855/"
] |
Hmmm strange. Have you checked that the file it mentions is actually there?
If it is and it has read access then my guess would be that it's more of a memory issue. PDF generation can require a surprisingly large amount of memory (a min of 256MB) and I've had similar problems in the past. Make sure that you have at least 256MB as your memory limit in Tools > Utilities > PHP Info > memory\_limit. Try increasing this to at least 256MB and seeing if that makes a difference.
What type of environment are you running on btw? Is this shared hosting?
|
If you use CentOS, these lines are the solution:
Fatal error: Class 'DOMDocument' not found in /var/www/html/ include/
dompdf.cls.php on line 177
Solved the problem by:
To install PHP-XML run the following command: yum install php-xml
To find which package provides DOM/XML run the following command: yum whatprovides php-dom
To restart Apache run the following command: service httpd restart
|
9,079 |
Am I missing some dependency on the server? cURL is installed, php-xml is installed, etc.
I am trying to export an invoice to PDF and getting this error:
Fatal error: Class 'DOMDocument' not found in /MY-PATH/htdocs/system/expressionengine/third\_party/store/vendor/dompdf/include/dompdf.cls.php on line 481
Hope you can help - Thanks
|
2013/04/06
|
[
"https://expressionengine.stackexchange.com/questions/9079",
"https://expressionengine.stackexchange.com",
"https://expressionengine.stackexchange.com/users/855/"
] |
Hmmm strange. Have you checked that the file it mentions is actually there?
If it is and it has read access then my guess would be that it's more of a memory issue. PDF generation can require a surprisingly large amount of memory (a min of 256MB) and I've had similar problems in the past. Make sure that you have at least 256MB as your memory limit in Tools > Utilities > PHP Info > memory\_limit. Try increasing this to at least 256MB and seeing if that makes a difference.
What type of environment are you running on btw? Is this shared hosting?
|
On Ubuntu:
sudo apt-get install php5-dom
In my case
sudo apt-get install php5.6-dom
|
9,079 |
Am I missing some dependency on the server? cURL is installed, php-xml is installed, etc.
I am trying to export an invoice to PDF and getting this error:
Fatal error: Class 'DOMDocument' not found in /MY-PATH/htdocs/system/expressionengine/third\_party/store/vendor/dompdf/include/dompdf.cls.php on line 481
Hope you can help - Thanks
|
2013/04/06
|
[
"https://expressionengine.stackexchange.com/questions/9079",
"https://expressionengine.stackexchange.com",
"https://expressionengine.stackexchange.com/users/855/"
] |
`DOMDocument` is a built in PHP class. It sounds like the PHP DOM extension is disabled on your server - you will probably need to talk to your host about enabling it (it is enabled by default in PHP and I've never seen a host without it before).
To check this, you can visit Tools > Utilities > PHP Info in the EE control panel, and search for *DOM/XML* which should say *enabled* next to it.
|
If you use CentOS, these lines are the solution:
Fatal error: Class 'DOMDocument' not found in /var/www/html/ include/
dompdf.cls.php on line 177
Solved the problem by:
To install PHP-XML run the following command: yum install php-xml
To find which package provides DOM/XML run the following command: yum whatprovides php-dom
To restart Apache run the following command: service httpd restart
|
9,079 |
Am I missing some dependency on the server? cURL is installed, php-xml is installed, etc.
I am trying to export an invoice to PDF and getting this error:
Fatal error: Class 'DOMDocument' not found in /MY-PATH/htdocs/system/expressionengine/third\_party/store/vendor/dompdf/include/dompdf.cls.php on line 481
Hope you can help - Thanks
|
2013/04/06
|
[
"https://expressionengine.stackexchange.com/questions/9079",
"https://expressionengine.stackexchange.com",
"https://expressionengine.stackexchange.com/users/855/"
] |
`DOMDocument` is a built in PHP class. It sounds like the PHP DOM extension is disabled on your server - you will probably need to talk to your host about enabling it (it is enabled by default in PHP and I've never seen a host without it before).
To check this, you can visit Tools > Utilities > PHP Info in the EE control panel, and search for *DOM/XML* which should say *enabled* next to it.
|
On Ubuntu:
sudo apt-get install php5-dom
In my case
sudo apt-get install php5.6-dom
|
45,364,539 |
I use two variables in which memory is allocated dynamically, and I print the memory locations, but they are not consecutive. Why?
```
#include <stdio.h>
#include <stdlib.h>
int main()
{
int *a = malloc(sizeof(int));
int *b = malloc(sizeof(int));
printf("\n a=%p \t b=%p \n",a,b);
}
```
The answers I get (in Linux) are
1st time:
```
a=0x20a0010 b=0x20a0030
```
2nd time:
```
a=0x657010 b=0x657030
```
3rd time:
```
a=0x139e010 b=0x139e030
```
Why do the memory locations of `a` and `b` differ by exactly the amount shown in the 1st, 2nd and 3rd runs?
Is this related to paging memory?
My processor is 64 bit.
|
2017/07/28
|
[
"https://Stackoverflow.com/questions/45364539",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6234260/"
] |
The gap between two consecutive allocations is not related to paging. Your allocations are so small that they reside in the data segment. Libc handles these internally - the space outside your `sizeof int` bytes generally contains pointers to the previous and the next block of data and the size of the allocation - after all `free` will just get a pointer and will need to figure out how much memory to deallocate.
Additionally, both of these pointers are aligned to a 16-byte boundary. [C11 7.22.3](http://port70.net/~nsz/c/c11/n1570.html#7.22.3) says that
>
> The pointer returned if the allocation succeeds is **suitably aligned** so that it may be assigned to **a pointer to any type of object with a fundamental alignment requirement** and then used to access such an object or an array of such objects in the space allocated (until the space is explicitly deallocated).
>
>
>
Thus even though you're using them for `int` the C standard requires that the pointer returned be aligned for any data type - which on your implementation is 16 bytes.
If however you allocate a very *large* object, glibc will map entire pages using `mmap` instead. The returned pointer (on my 64-bit computer) then lands exactly 16 bytes past the start of a 4K page:
```
#include <stdio.h>
#include <stdlib.h>
int main()
{
int *a = malloc(12345678);
int *b = malloc(12345678);
printf("\n a=%p \t b=%p \n",a,b);
}
```
when run
```
% ./a.out
a=0x7fb65e7b7010 b=0x7fb65dbf0010
```
One can see the `mmap` calls with `strace ./a.out` - there among other system calls there are
```
mmap(NULL, 12349440, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fb65e7b7000
mmap(NULL, 12349440, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fb65dbf0000
```
---
As for why the addresses keep changing from one execution to another - this is due to [address space layout randomization, or ASLR](https://en.wikipedia.org/wiki/Address_space_layout_randomization) - a security mechanism which makes it harder for evil crackers to **predictably** exploit *undefined behaviour* in your code.
---
P.S. If you really need to dynamically allocate space for 2 `int`s at consecutive addresses, allocate an array.
|
The operating system handles memory allocation, and there is no guarantee that this memory is contiguous when dynamically allocating two consecutive variables. I should also mention that this is the outcome of a defense mechanism known as [ASLR](https://stackoverflow.com/questions/3605222/what-is-address-space-layout-randomization). ASLR defends against buffer overflows by randomizing the location of a process during its execution, this may include the [stack, heap, and libraries](https://www.howtogeek.com/278056/what-is-aslr-and-how-does-it-keep-your-computer-secure/). This is why you notice these addresses changing. By the standard, you're only guaranteed the following.
>
> ISO C11 7.22.3.4 Malloc
>
>
> 1) **Synopsis**
>
>
>
```
#include <stdlib.h>
void* malloc(size_t size);
```
>
> 2) **Description**
> The malloc function allocates space for an object whose size is specified by size and whose value is indeterminate.
>
>
> 3) **Return**
> The malloc function returns either a null pointer or a pointer to the allocated space.
>
>
>
|
45,364,539 |
I use two variables in which memory is allocated dynamically, and I print the memory locations, but they are not consecutive. Why?
```
#include <stdio.h>
#include <stdlib.h>
int main()
{
int *a = malloc(sizeof(int));
int *b = malloc(sizeof(int));
printf("\n a=%p \t b=%p \n",a,b);
}
```
The answers I get (in Linux) are
1st time:
```
a=0x20a0010 b=0x20a0030
```
2nd time:
```
a=0x657010 b=0x657030
```
3rd time:
```
a=0x139e010 b=0x139e030
```
Why do the memory locations of `a` and `b` differ by exactly the amount shown in the 1st, 2nd and 3rd runs?
Is this related to paging memory?
My processor is 64 bit.
|
2017/07/28
|
[
"https://Stackoverflow.com/questions/45364539",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6234260/"
] |
The operating system handles memory allocation, and there is no guarantee that this memory is contiguous when dynamically allocating two consecutive variables. I should also mention that this is the outcome of a defense mechanism known as [ASLR](https://stackoverflow.com/questions/3605222/what-is-address-space-layout-randomization). ASLR defends against buffer overflows by randomizing the location of a process during its execution, this may include the [stack, heap, and libraries](https://www.howtogeek.com/278056/what-is-aslr-and-how-does-it-keep-your-computer-secure/). This is why you notice these addresses changing. By the standard, you're only guaranteed the following.
>
> ISO C11 7.22.3.4 Malloc
>
>
> 1) **Synopsis**
>
>
>
```
#include <stdlib.h>
void* malloc(size_t size);
```
>
> 2) **Description**
> The malloc function allocates space for an object whose size is specified by size and whose value is indeterminate.
>
>
> 3) **Return**
> The malloc function returns either a null pointer or a pointer to the allocated space.
>
>
>
|
As noted at [GNU Examples of malloc](https://www.gnu.org/software/libc/manual/html_node/Malloc-Examples.html)
>
> Note that the memory located after the end of the block is likely to
> be in use for something else; perhaps a block already allocated by
> another call to malloc.
>
>
>
This actually means that, for each call to `malloc`, the allocator, depending on its memory management algorithm, finds the most suitable free block for the caller.
For example:
```
void* p_1 = malloc(4);
void* p_2 = malloc(4);
[oooo][xxxx][oooo][oooo]
^ ^
p_1 p_2
```
|
45,364,539 |
I use two variables in which memory is allocated dynamically, and I print the memory locations, but they are not consecutive. Why?
```
#include <stdio.h>
#include <stdlib.h>
int main()
{
int *a = malloc(sizeof(int));
int *b = malloc(sizeof(int));
printf("\n a=%p \t b=%p \n",a,b);
}
```
The answers I get (in Linux) are
1st time:
```
a=0x20a0010 b=0x20a0030
```
2nd time:
```
a=0x657010 b=0x657030
```
3rd time:
```
a=0x139e010 b=0x139e030
```
Why do the memory locations of `a` and `b` differ by exactly the amount shown in the 1st, 2nd and 3rd runs?
Is this related to paging memory?
My processor is 64 bit.
|
2017/07/28
|
[
"https://Stackoverflow.com/questions/45364539",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6234260/"
] |
The gap between two consecutive allocations is not related to paging. Your allocations are so small that they reside in the data segment. Libc handles these internally - the space outside your `sizeof int` bytes generally contains pointers to the previous and the next block of data and the size of the allocation - after all `free` will just get a pointer and will need to figure out how much memory to deallocate.
Additionally, both of these pointers are aligned to a 16-byte boundary. [C11 7.22.3](http://port70.net/~nsz/c/c11/n1570.html#7.22.3) says that
>
> The pointer returned if the allocation succeeds is **suitably aligned** so that it may be assigned to **a pointer to any type of object with a fundamental alignment requirement** and then used to access such an object or an array of such objects in the space allocated (until the space is explicitly deallocated).
>
>
>
Thus even though you're using them for `int` the C standard requires that the pointer returned be aligned for any data type - which on your implementation is 16 bytes.
If however you allocate a very *large* object, glibc will map entire pages using `mmap` instead. The returned pointer (on my 64-bit computer) then lands exactly 16 bytes past the start of a 4K page:
```
#include <stdio.h>
#include <stdlib.h>
int main()
{
int *a = malloc(12345678);
int *b = malloc(12345678);
printf("\n a=%p \t b=%p \n",a,b);
}
```
when run
```
% ./a.out
a=0x7fb65e7b7010 b=0x7fb65dbf0010
```
One can see the `mmap` calls with `strace ./a.out` - there among other system calls there are
```
mmap(NULL, 12349440, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fb65e7b7000
mmap(NULL, 12349440, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fb65dbf0000
```
---
As for why the addresses keep changing from one execution to another - this is due to [address space layout randomization, or ASLR](https://en.wikipedia.org/wiki/Address_space_layout_randomization) - a security mechanism which makes it harder for evil crackers to **predictably** exploit *undefined behaviour* in your code.
---
P.S. If you really need to dynamically allocate space for 2 `int`s at consecutive addresses, allocate an array.
|
As noted at [GNU Examples of malloc](https://www.gnu.org/software/libc/manual/html_node/Malloc-Examples.html)
>
> Note that the memory located after the end of the block is likely to
> be in use for something else; perhaps a block already allocated by
> another call to malloc.
>
>
>
This actually means that, for each call to `malloc`, the allocator, depending on its memory management algorithm, finds the most suitable free block for the caller.
For example:
```
void* p_1 = malloc(4);
void* p_2 = malloc(4);
[oooo][xxxx][oooo][oooo]
^ ^
p_1 p_2
```
|
22,607,657 |
I have an Android program (Java + HTML in a WebView). I can call from the JavaScript to the Java code. But the other way around stopped working (after updating in Eclipse).
So this is what I'm trying to do
* Make a webview (worked)
* calling in javascript to AndroidFunction.test(); (worked)
* the Java test() function calls webView.loadUrl("javascript:helloBack()");
(! not working anymore)
I tried to get it to work with the WebView in the MainActivity, but it didn't work.
MainActivity.java
```
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.main);
final WebView webView = (WebView)findViewById(R.id.webView);
webView.getSettings().setJavaScriptEnabled(true);
webView.setWebChromeClient(new WebChromeClient());
WebSettings webSettings = webView.getSettings();
webSettings.setJavaScriptEnabled(true);
javascr = new Javascript(this, webView);
webView.addJavascriptInterface(javascr, "AndroidFunction");
webView.loadUrl("file:///android_asset/www/index.html");
....
}
```
Javascript.java
```
public class Javascript {
Context cont;
WebView webView;
Javascript(Context c, WebView w) {
cont = c;
webView = w;
}
// function called in the javascript by AndroidFunction.test();
public void test() {
// Breaking point!!!
webView.loadUrl("javascript:helloBack()");
}
```
Error:
```
03-24 11:47:50.103: W/WebView(21026): at com.android.org.chromium.base.SystemMessageHandler.handleMessage(SystemMessageHandler.java:27)
03-24 11:47:50.103: W/WebView(21026): java.lang.Throwable: A WebView method was called on thread 'JavaBridge'. All WebView methods must be called on the same thread. (Expected Looper Looper{41ab68f8} called on Looper{41bb70a8}, FYI main Looper is Looper{41ab68f8})
03-24 11:47:50.103: W/WebView(21026): at android.webkit.WebView.checkThread(WebView.java:2063)
03-24 11:47:50.103: W/WebView(21026): at android.webkit.WebView.loadUrl(WebView.java:794)
03-24 11:47:50.103: W/WebView(21026): at com.example.hellobt.Javascript.test(Javascript.java:24)
03-24 11:47:50.103: W/WebView(21026): at com.android.org.chromium.base.SystemMessageHandler.nativeDoRunLoopOnce(Native Method)
03-24 11:47:50.103: W/WebView(21026): at com.android.org.chromium.base.SystemMessageHandler.handleMessage(SystemMessageHandler.java:27)
03-24 11:47:50.103: W/WebView(21026): at android.os.Handler.dispatchMessage(Handler.java:102)
03-24 11:47:50.103: W/WebView(21026): at android.os.Looper.loop(Looper.java:137)
03-24 11:47:50.103: W/WebView(21026): at android.os.HandlerThread.run(HandlerThread.java:61)
```
Thanks for the answer. I edited the function in my Javascript file like this:
```
private void test(final String s) {
webView.post(new Runnable() {
public void run() {
webView.loadUrl("javascript:" + s + ";");
}
});
System.out.println("javscript done..");
}
```
|
2014/03/24
|
[
"https://Stackoverflow.com/questions/22607657",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1876718/"
] |
The JavaScript method is executed on a background (i.e. non-UI) thread. You need to call all Android View related methods on the UI thread. You can achieve what you need with:
```
mWebView.post(new Runnable() {
@Override
public void run() {
        mWebView.loadUrl(...);
}
});
```
Which will post the task to run on the UI thread.
|
In my case nothing was shown in a WebView, so I prefer another way:
```
runOnUiThread(new Runnable() {
@Override
public void run() {
final WebView webView = (WebView) findViewById(R.id.map);
webView.loadDataWithBaseURL(...);
}
});
```
|
22,607,657 |
I have an Android program (Java + HTML in a WebView). I can call from the JavaScript to the Java code. But the other way around stopped working (after updating in Eclipse).
So this is what I'm trying to do
* Make a webview (worked)
* calling in javascript to AndroidFunction.test(); (worked)
* the Java test() function calls webView.loadUrl("javascript:helloBack()");
(! not working anymore)
I tried to get it to work with the WebView in the MainActivity, but it didn't work.
MainActivity.java
```
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.main);
final WebView webView = (WebView)findViewById(R.id.webView);
webView.getSettings().setJavaScriptEnabled(true);
webView.setWebChromeClient(new WebChromeClient());
WebSettings webSettings = webView.getSettings();
webSettings.setJavaScriptEnabled(true);
javascr = new Javascript(this, webView);
webView.addJavascriptInterface(javascr, "AndroidFunction");
webView.loadUrl("file:///android_asset/www/index.html");
....
}
```
Javascript.java
```
public class Javascript {
Context cont;
WebView webView;
Javascript(Context c, WebView w) {
cont = c;
webView = w;
}
// function called in the javascript by AndroidFunction.test();
public void test() {
// Breaking point!!!
webView.loadUrl("javascript:helloBack()");
}
```
Error:
```
03-24 11:47:50.103: W/WebView(21026): at com.android.org.chromium.base.SystemMessageHandler.handleMessage(SystemMessageHandler.java:27)
03-24 11:47:50.103: W/WebView(21026): java.lang.Throwable: A WebView method was called on thread 'JavaBridge'. All WebView methods must be called on the same thread. (Expected Looper Looper{41ab68f8} called on Looper{41bb70a8}, FYI main Looper is Looper{41ab68f8})
03-24 11:47:50.103: W/WebView(21026): at android.webkit.WebView.checkThread(WebView.java:2063)
03-24 11:47:50.103: W/WebView(21026): at android.webkit.WebView.loadUrl(WebView.java:794)
03-24 11:47:50.103: W/WebView(21026): at com.example.hellobt.Javascript.test(Javascript.java:24)
03-24 11:47:50.103: W/WebView(21026): at com.android.org.chromium.base.SystemMessageHandler.nativeDoRunLoopOnce(Native Method)
03-24 11:47:50.103: W/WebView(21026): at com.android.org.chromium.base.SystemMessageHandler.handleMessage(SystemMessageHandler.java:27)
03-24 11:47:50.103: W/WebView(21026): at android.os.Handler.dispatchMessage(Handler.java:102)
03-24 11:47:50.103: W/WebView(21026): at android.os.Looper.loop(Looper.java:137)
03-24 11:47:50.103: W/WebView(21026): at android.os.HandlerThread.run(HandlerThread.java:61)
```
Thanks for the answer. I edited the function in my Javascript file like this:
```
private void test(final String s) {
webView.post(new Runnable() {
public void run() {
webView.loadUrl("javascript:" + s + ";");
}
});
System.out.println("javscript done..");
}
```
|
2014/03/24
|
[
"https://Stackoverflow.com/questions/22607657",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1876718/"
] |
The JavaScript method is executed on a background (i.e. non-UI) thread. You need to call all Android View related methods on the UI thread. You can achieve what you need with:
```
mWebView.post(new Runnable() {
@Override
public void run() {
        mWebView.loadUrl(...);
}
});
```
Which will post the task to run on the UI thread.
|
This can be overcome by using the post method. Please go through the code below.
```
m_targetView.post(new Runnable() {
@Override
public void run() {
m_targetView.loadUrl(".....");
}
});
```
|
22,607,657 |
I have an Android program (Java + HTML in a WebView). I can call from the JavaScript to the Java code. But the other way around stopped working (after updating in Eclipse).
So this is what I'm trying to do
* Make a webview (worked)
* calling in javascript to AndroidFunction.test(); (worked)
* the Java test() function calls webView.loadUrl("javascript:helloBack()");
(! not working anymore)
I tried to get it to work with the WebView in the MainActivity, but it didn't work.
MainActivity.java
```
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.main);
final WebView webView = (WebView)findViewById(R.id.webView);
webView.getSettings().setJavaScriptEnabled(true);
webView.setWebChromeClient(new WebChromeClient());
WebSettings webSettings = webView.getSettings();
webSettings.setJavaScriptEnabled(true);
javascr = new Javascript(this, webView);
webView.addJavascriptInterface(javascr, "AndroidFunction");
webView.loadUrl("file:///android_asset/www/index.html");
....
}
```
Javascript.java
```
public class Javascript {
Context cont;
WebView webView;
Javascript(Context c, WebView w) {
cont = c;
webView = w;
}
// function called in the javascript by AndroidFunction.test();
public void test() {
// Breaking point!!!
webView.loadUrl("javascript:helloBack()");
}
```
Error:
```
03-24 11:47:50.103: W/WebView(21026): at com.android.org.chromium.base.SystemMessageHandler.handleMessage(SystemMessageHandler.java:27)
03-24 11:47:50.103: W/WebView(21026): java.lang.Throwable: A WebView method was called on thread 'JavaBridge'. All WebView methods must be called on the same thread. (Expected Looper Looper{41ab68f8} called on Looper{41bb70a8}, FYI main Looper is Looper{41ab68f8})
03-24 11:47:50.103: W/WebView(21026): at android.webkit.WebView.checkThread(WebView.java:2063)
03-24 11:47:50.103: W/WebView(21026): at android.webkit.WebView.loadUrl(WebView.java:794)
03-24 11:47:50.103: W/WebView(21026): at com.example.hellobt.Javascript.test(Javascript.java:24)
03-24 11:47:50.103: W/WebView(21026): at com.android.org.chromium.base.SystemMessageHandler.nativeDoRunLoopOnce(Native Method)
03-24 11:47:50.103: W/WebView(21026): at com.android.org.chromium.base.SystemMessageHandler.handleMessage(SystemMessageHandler.java:27)
03-24 11:47:50.103: W/WebView(21026): at android.os.Handler.dispatchMessage(Handler.java:102)
03-24 11:47:50.103: W/WebView(21026): at android.os.Looper.loop(Looper.java:137)
03-24 11:47:50.103: W/WebView(21026): at android.os.HandlerThread.run(HandlerThread.java:61)
```
Thanks for the answer. I edited the function in my Javascript file like this:
```
private void test(final String s) {
webView.post(new Runnable() {
public void run() {
webView.loadUrl("javascript:" + s + ";");
}
});
System.out.println("javscript done..");
}
```
|
2014/03/24
|
[
"https://Stackoverflow.com/questions/22607657",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1876718/"
] |
The JavaScript method is executed on a background (i.e. non-UI) thread. You need to call all Android View related methods on the UI thread. You can achieve what you need with:
```
mWebView.post(new Runnable() {
@Override
public void run() {
        mWebView.loadUrl(...);
}
});
```
Which will post the task to run on the UI thread.
|
**Java** version: you must use the **Runnable** interface and post to a **Handler**.
```
webView.post(new Runnable() {
@Override
public void run() {
webView.loadUrl("file:///android_asset/www/index.html");
}
});
```
**Kotlin** version:
```
webView.post {
    webView.loadUrl("file:///android_asset/www/index.html")
}
```
|
22,607,657 |
I have an Android program (Java + HTML in a WebView). I can call from the JavaScript to the Java code. But the other way around stopped working (after updating in Eclipse).
So this is what I'm trying to do
* Make a webview (worked)
* calling in javascript to AndroidFunction.test(); (worked)
* the Java test() function calls webView.loadUrl("javascript:helloBack()");
(! not working anymore)
I tried to get it to work with the WebView in the MainActivity, but it didn't work.
MainActivity.java
```
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.main);
final WebView webView = (WebView)findViewById(R.id.webView);
webView.getSettings().setJavaScriptEnabled(true);
webView.setWebChromeClient(new WebChromeClient());
WebSettings webSettings = webView.getSettings();
webSettings.setJavaScriptEnabled(true);
javascr = new Javascript(this, webView);
webView.addJavascriptInterface(javascr, "AndroidFunction");
webView.loadUrl("file:///android_asset/www/index.html");
....
}
```
Javascript.java
```
public class Javascript {
Context cont;
WebView webView;
Javascript(Context c, WebView w) {
cont = c;
webView = w;
}
// function called in the javascript by AndroidFunction.test();
public void test() {
// Breaking point!!!
webView.loadUrl("javascript:helloBack()");
}
```
Error:
```
03-24 11:47:50.103: W/WebView(21026): at com.android.org.chromium.base.SystemMessageHandler.handleMessage(SystemMessageHandler.java:27)
03-24 11:47:50.103: W/WebView(21026): java.lang.Throwable: A WebView method was called on thread 'JavaBridge'. All WebView methods must be called on the same thread. (Expected Looper Looper{41ab68f8} called on Looper{41bb70a8}, FYI main Looper is Looper{41ab68f8})
03-24 11:47:50.103: W/WebView(21026): at android.webkit.WebView.checkThread(WebView.java:2063)
03-24 11:47:50.103: W/WebView(21026): at android.webkit.WebView.loadUrl(WebView.java:794)
03-24 11:47:50.103: W/WebView(21026): at com.example.hellobt.Javascript.test(Javascript.java:24)
03-24 11:47:50.103: W/WebView(21026): at com.android.org.chromium.base.SystemMessageHandler.nativeDoRunLoopOnce(Native Method)
03-24 11:47:50.103: W/WebView(21026): at com.android.org.chromium.base.SystemMessageHandler.handleMessage(SystemMessageHandler.java:27)
03-24 11:47:50.103: W/WebView(21026): at android.os.Handler.dispatchMessage(Handler.java:102)
03-24 11:47:50.103: W/WebView(21026): at android.os.Looper.loop(Looper.java:137)
03-24 11:47:50.103: W/WebView(21026): at android.os.HandlerThread.run(HandlerThread.java:61)
```
Thanks for the answer. I edited the function in my Javascript file like this:
```
private void test(final String s) {
webView.post(new Runnable() {
public void run() {
webView.loadUrl("javascript:" + s + ";");
}
});
System.out.println("javscript done..");
}
```
|
2014/03/24
|
[
"https://Stackoverflow.com/questions/22607657",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1876718/"
] |
In my case nothing was shown in a WebView, so I prefer another way:
```
runOnUiThread(new Runnable() {
@Override
public void run() {
final WebView webView = (WebView) findViewById(R.id.map);
webView.loadDataWithBaseURL(...);
}
});
```
|
This can be overcome by using the `post` method. Please go through the code below.
```
m_targetView.post(new Runnable() {
@Override
public void run() {
m_targetView.loadUrl(".....");
}
});
```
|
22,607,657 |
I have an Android program (Java + HTML in a WebView). I can call from the JavaScript to the Java code, but the other way around stopped working (after updating in Eclipse).
So this is what I'm trying to do
* Make a webview (worked)
* calling in javascript to AndroidFunction.test(); (worked)
* the java test() function call webView.loadUrl("javascript:helloBack()");
(! not working anymore)
I tried to let it work with the WebView in the MainActivity, but it didn't work.
MainActivity.java
```
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.main);
final WebView webView = (WebView)findViewById(R.id.webView);
webView.getSettings().setJavaScriptEnabled(true);
webView.setWebChromeClient(new WebChromeClient());
WebSettings webSettings = webView.getSettings();
webSettings.setJavaScriptEnabled(true);
javascr = new Javascript(this, webView);
webView.addJavascriptInterface(javascr, "AndroidFunction");
webView.loadUrl("file:///android_asset/www/index.html");
....
}
```
Javascript.java
```
public class Javascript {
Context cont;
WebView webView;
Javascript(Context c, WebView w) {
cont = c;
webView = w;
}
// function called in the javascript by AndroidFunction.test();
public void test() {
// Breaking point!!!
webView.loadUrl("javascript:helloBack()");
}
```
Error:
```
03-24 11:47:50.103: W/WebView(21026): at com.android.org.chromium.base.SystemMessageHandler.handleMessage(SystemMessageHandler.java:27)
03-24 11:47:50.103: W/WebView(21026): java.lang.Throwable: A WebView method was called on thread 'JavaBridge'. All WebView methods must be called on the same thread. (Expected Looper Looper{41ab68f8} called on Looper{41bb70a8}, FYI main Looper is Looper{41ab68f8})
03-24 11:47:50.103: W/WebView(21026): at android.webkit.WebView.checkThread(WebView.java:2063)
03-24 11:47:50.103: W/WebView(21026): at android.webkit.WebView.loadUrl(WebView.java:794)
03-24 11:47:50.103: W/WebView(21026): at com.example.hellobt.Javascript.test(Javascript.java:24)
03-24 11:47:50.103: W/WebView(21026): at com.android.org.chromium.base.SystemMessageHandler.nativeDoRunLoopOnce(Native Method)
03-24 11:47:50.103: W/WebView(21026): at com.android.org.chromium.base.SystemMessageHandler.handleMessage(SystemMessageHandler.java:27)
03-24 11:47:50.103: W/WebView(21026): at android.os.Handler.dispatchMessage(Handler.java:102)
03-24 11:47:50.103: W/WebView(21026): at android.os.Looper.loop(Looper.java:137)
03-24 11:47:50.103: W/WebView(21026): at android.os.HandlerThread.run(HandlerThread.java:61)
```
Thanks for the answer. I edited the function in my Javascript file like this:
```
private void test(final String s) {
webView.post(new Runnable() {
public void run() {
webView.loadUrl("javascript:" + s + ";");
}
});
System.out.println("javscript done..");
}
```
|
2014/03/24
|
[
"https://Stackoverflow.com/questions/22607657",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1876718/"
] |
In my case nothing was shown in a WebView, so I prefer another way:
```
runOnUiThread(new Runnable() {
@Override
public void run() {
final WebView webView = (WebView) findViewById(R.id.map);
webView.loadDataWithBaseURL(...);
}
});
```
|
**Java** version: you must use a **Runnable** and post it to the view's **Handler**.
```
webView.post(new Runnable() {
@Override
public void run() {
webView.loadUrl("file:///android_asset/www/index.html");
}
});
```
**Kotlin** version:
```
webView.post {
    webView.loadUrl("file:///android_asset/www/index.html")
}
```
|
22,607,657 |
I have an Android program (Java + HTML in a WebView). I can call from the JavaScript to the Java code, but the other way around stopped working (after updating in Eclipse).
So this is what I'm trying to do
* Make a webview (worked)
* calling in javascript to AndroidFunction.test(); (worked)
* the java test() function call webView.loadUrl("javascript:helloBack()");
(! not working anymore)
I tried to let it work with the WebView in the MainActivity, but it didn't work.
MainActivity.java
```
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.main);
final WebView webView = (WebView)findViewById(R.id.webView);
webView.getSettings().setJavaScriptEnabled(true);
webView.setWebChromeClient(new WebChromeClient());
WebSettings webSettings = webView.getSettings();
webSettings.setJavaScriptEnabled(true);
javascr = new Javascript(this, webView);
webView.addJavascriptInterface(javascr, "AndroidFunction");
webView.loadUrl("file:///android_asset/www/index.html");
....
}
```
Javascript.java
```
public class Javascript {
Context cont;
WebView webView;
Javascript(Context c, WebView w) {
cont = c;
webView = w;
}
// function called in the javascript by AndroidFunction.test();
public void test() {
// Breaking point!!!
webView.loadUrl("javascript:helloBack()");
}
```
Error:
```
03-24 11:47:50.103: W/WebView(21026): at com.android.org.chromium.base.SystemMessageHandler.handleMessage(SystemMessageHandler.java:27)
03-24 11:47:50.103: W/WebView(21026): java.lang.Throwable: A WebView method was called on thread 'JavaBridge'. All WebView methods must be called on the same thread. (Expected Looper Looper{41ab68f8} called on Looper{41bb70a8}, FYI main Looper is Looper{41ab68f8})
03-24 11:47:50.103: W/WebView(21026): at android.webkit.WebView.checkThread(WebView.java:2063)
03-24 11:47:50.103: W/WebView(21026): at android.webkit.WebView.loadUrl(WebView.java:794)
03-24 11:47:50.103: W/WebView(21026): at com.example.hellobt.Javascript.test(Javascript.java:24)
03-24 11:47:50.103: W/WebView(21026): at com.android.org.chromium.base.SystemMessageHandler.nativeDoRunLoopOnce(Native Method)
03-24 11:47:50.103: W/WebView(21026): at com.android.org.chromium.base.SystemMessageHandler.handleMessage(SystemMessageHandler.java:27)
03-24 11:47:50.103: W/WebView(21026): at android.os.Handler.dispatchMessage(Handler.java:102)
03-24 11:47:50.103: W/WebView(21026): at android.os.Looper.loop(Looper.java:137)
03-24 11:47:50.103: W/WebView(21026): at android.os.HandlerThread.run(HandlerThread.java:61)
```
Thanks for the answer. I edited the function in my Javascript file like this:
```
private void test(final String s) {
webView.post(new Runnable() {
public void run() {
webView.loadUrl("javascript:" + s + ";");
}
});
System.out.println("javscript done..");
}
```
|
2014/03/24
|
[
"https://Stackoverflow.com/questions/22607657",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1876718/"
] |
**Java** version: you must use a **Runnable** and post it to the view's **Handler**.
```
webView.post(new Runnable() {
@Override
public void run() {
webView.loadUrl("file:///android_asset/www/index.html");
}
});
```
**Kotlin** version:
```
webView.post {
    webView.loadUrl("file:///android_asset/www/index.html")
}
```
|
This can be overcome by using the `post` method. Please go through the code below.
```
m_targetView.post(new Runnable() {
@Override
public void run() {
m_targetView.loadUrl(".....");
}
});
```
|
36,101,259 |
I am evaluating biicode in my organization.
I started this activity last September but did not continue because of other pressing concerns. I have now resumed it.
It seems biicode has shut down their operations. None of their help links seem to be working, and both the login and signup pages are dead.
Is there anyone using biicode nowadays or is it dead?
|
2016/03/19
|
[
"https://Stackoverflow.com/questions/36101259",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6086316/"
] |
Yes, biicode is closed. While you are evaluating options you can take a look at the [conan project](https://github.com/conan-io/conan) and [conan.io](https://www.conan.io/ "conan.io"). It's a fully open source project with a lot of community contributions right now.
Conan takes a more direct (and easier) approach to library dependency management than biicode, supporting both binary packages and building from source.
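For reference, a minimal `conanfile.txt` sketch of how a dependency is declared in conan (the package name, version, and generator here are illustrative; check conan.io for current recipes):

```
[requires]
zlib/1.2.11

[generators]
cmake
```

Running `conan install .` in the same directory would then fetch the binary package (or build it from source) and emit the CMake integration files.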
|
Biicode as a company has shutdown. The central biicode servers have been closed, and will no longer operate. The current pages, blogs, etc, that can be seen are in fact static pages captured and hosted in github, thats why it is impossible to login/register. There are no support people (in fact no employees at all) since July 2015. If you still have interest, it is an OSS project (MIT), included the server, if you want to run biicode, you have to run your own server.
|
29,550,984 |
I have the following code to set an alarm when the main activity starts and cancel when logging out
```
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
...
// Schedule alarm
Intent intent = new Intent(this, MyReceiver.class);
alarmIntent = PendingIntent.getBroadcast(this, 0, intent, PendingIntent.FLAG_UPDATE_CURRENT);
alarmManager = (AlarmManager) getSystemService(Context.ALARM_SERVICE);
if (PendingIntent.getBroadcast(this, 0, intent, PendingIntent.FLAG_NO_CREATE) == null) {
alarmManager.setInexactRepeating(AlarmManager.ELAPSED_REALTIME, 0, AlarmManager.INTERVAL_DAY, alarmIntent);
}
}
private void logOut() {
if (alarmManager != null) {
alarmIntent.cancel();
alarmManager.cancel(alarmIntent);
}
Intent i = new Intent(MainActivity.this, LoginActivity.class);
i.addFlags(Intent.FLAG_ACTIVITY_CLEAR_TASK);
startActivity(i);
finish();
}
```
But when I log out, log back in, and enter the main activity the code `alarmManager.setInexactRepeating()` is never reached.
|
2015/04/09
|
[
"https://Stackoverflow.com/questions/29550984",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/548852/"
] |
To make dynamic changes to the sticky element, the documentation says it needs to be unstuck first and then updated:
```
$("#headerbg2").unstick() & $("#headerbg2").sticky('update')
```
This also requires changing `(!$("#contact").css("float") === "none")` to either `($("#contact").css("float") != "none")` or an `else { .. }` branch.
**Demo** with dynamic window resize
<http://jsfiddle.net/w2jv7szg/>
**Code**
```
$(document).ready(function () {
$("#headerbg1").sticky({
topSpacing: 0
});
function checkForFloat() {
if ($("#contact").css("float") === "none") {
$("#headerbg2").sticky({
topSpacing: 50
});
} else {
$("#headerbg2").sticky({
topSpacing: 120
});
}
}
setTimeout(checkForFloat, 1000);
window.addEventListener('resize', function (event) {
$("#headerbg2").unstick()
if ($("#contact").css("float") === "none") {
$("#headerbg2").sticky({
topSpacing: 50
});
$("#headerbg2").sticky('update')
} else {
$("#headerbg2").sticky({
topSpacing: 120
});
$("#headerbg2").sticky('update')
}
})
});
```
|
Try this instead:
```
$(document).ready(function(){
function checkForFloat()
{
setTimeout(checkForFloat, 100);
if($("#contact").css("float") === "none") {
$("#headerbg2").sticky({topSpacing:180});
}
else if ($("#contact").css("float") !== "none") {
$("#headerbg2").sticky({topSpacing:120});
}
}
});
```
Changed the "!" position.
|
12,171,156 |
The Google Maps API states that the KML (or KMZ) file can be hosted on a publicly accessible web server. This does imply that the document should be available via HTTP(S), but the protocol is not actually stated. Can you please confirm whether the protocol must be HTTP/HTTPS, or whether FTP could be used as an interim option during prototyping?
|
2012/08/29
|
[
"https://Stackoverflow.com/questions/12171156",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1632066/"
] |
Can't confirm, but if they say that your KMZ can be hosted on a publicly accessible **web server** then it means **HTTP(S)** and **not FTP**. Can you set up your FTP location so it can also be accessed via an HTTP URL?
|
If you enter an FTP URL into the Maps search box, you get
```
We could not understand the location ftp://test...
```
which suggests it doesn't understand FTP URLs. If it understood the URL, the error would be a "can't read file at ...".
Google Maps and the API use the same system for KML rendering.
|
1,517,556 |
Prove that for all odd prime numbers $p\_1$ and $p\_2$, there exist prime numbers (excluding 2) $p\_3$ and $p\_4$ such that $$p\_3 + p\_4 = p\_1 + p\_2 + 2.$$
Hints would be appreciated.
|
2015/11/07
|
[
"https://math.stackexchange.com/questions/1517556",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/85798/"
] |
If this is true then the Goldbach conjecture is true. It is famous, and unproved.
|
For $p\_1=2,p\_2=7$, we have $p\_1+p\_2+2=11$, but there are no pairs of primes $(p\_3,p\_4)$ such that $$11=p\_3+p\_4.$$
|
19,498,300 |
I have a problem with `print` in Python 3.x.
If you remember, in Python 2.x you could write code like this:
```
var = 224
print "The var is %d" %(var)
```
and it would print out:
```
The var is 224
```
But in Python 3.x it doesn't work, so if anyone knows why, please help.
|
2013/10/21
|
[
"https://Stackoverflow.com/questions/19498300",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2903699/"
] |
```
var = 224
print("The var is %d" % var)
```
You definitely have to treat `print` as a function in Python 3.
Try it out at: <http://ideone.com/U95q0L>
You could also, for a simpler solution, without interpolation, do this:
```
print("The var is", var)
```
Also included that on Ideone.
|
In Python 2, `print` is a keyword which is used in a print statement. In Python 3, `print` is a function name and is used just as any other function is.
In particular, `print` requires parentheses around its argument list:
```
print ("The var is %d" %(var))
```
Ref: <http://docs.python.org/3.0/whatsnew/3.0.html#print-is-a-function>
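For completeness, a small self-contained Python 3 snippet contrasting old-style `%` interpolation (which still works) with the newer `str.format` and f-string styles; all three produce the same string:

```python
var = 224

# Old-style % interpolation still works in Python 3,
# but print is now a function and needs parentheses.
msg_percent = "The var is %d" % var

# str.format and f-strings are the more idiomatic Python 3 options.
msg_format = "The var is {}".format(var)
msg_fstring = f"The var is {var}"

print(msg_percent)
print(msg_format)
print(msg_fstring)
```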
|
52,331,848 |
I'm trying to create a Kubernetes cluster from kops using Terraform. The following code is part of the infrastructure. I'm trying to build the `name` by concatenating two variables, and I'm getting an "illegal char" error on line two; the error happens because of how I'm concatenating the variables into the name. Is this possible in Terraform?
```
resource "aws_autoscaling_group" "master-kubernetes" {
name = "master-"${var.zone}".masters."${var.cluster_name}""
launch_configuration = "${aws_launch_configuration.master-kubernetes.id}"
max_size = 1
min_size = 1
vpc_zone_identifier = ["${aws_subnet.subnet-kubernetes.id}"]
```
|
2018/09/14
|
[
"https://Stackoverflow.com/questions/52331848",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9776078/"
] |
Try this:
```
resource "aws_autoscaling_group" "master-kubernetes" {
name = "master-${var.zone}.masters.${var.cluster_name}"
# ... other params ...
}
```
|
Rather than concatenating at the resource level, use locals: first define a variable in `locals`, then use it at the resource level.
**Locals declaration**
```
locals {
rds_instance_name = "${var.env}-${var.rds_name}"
}
```
**Resource Level declaration**
```
resource "aws_db_instance" "default_mssql" {
count = var.db_create ? 1 : 0
name = local.rds_instance_name
........
}
```
This is as simple as it needs to be.
|
52,331,848 |
I'm trying to create a Kubernetes cluster from kops using Terraform. The following code is part of the infrastructure. I'm trying to build the `name` by concatenating two variables, and I'm getting an "illegal char" error on line two; the error happens because of how I'm concatenating the variables into the name. Is this possible in Terraform?
```
resource "aws_autoscaling_group" "master-kubernetes" {
name = "master-"${var.zone}".masters."${var.cluster_name}""
launch_configuration = "${aws_launch_configuration.master-kubernetes.id}"
max_size = 1
min_size = 1
vpc_zone_identifier = ["${aws_subnet.subnet-kubernetes.id}"]
```
|
2018/09/14
|
[
"https://Stackoverflow.com/questions/52331848",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9776078/"
] |
With Terraform 0.12.x and later you can use the [`format` function](https://www.terraform.io/docs/configuration/functions/format.html) to do this more cleanly:
```
resource "aws_autoscaling_group" "master-kubernetes" {
name = format("master-%s.masters.%s", var.zone, var.cluster_name)
}
```
|
Rather than concatenating at the resource level, use locals: first define a variable in `locals`, then use it at the resource level.
**Locals declaration**
```
locals {
rds_instance_name = "${var.env}-${var.rds_name}"
}
```
**Resource Level declaration**
```
resource "aws_db_instance" "default_mssql" {
count = var.db_create ? 1 : 0
name = local.rds_instance_name
........
}
```
This is as simple as it needs to be.
|
16,818,201 |
I have a tuple that looks like this:
```
('1 2130 0 279 90 92 193 1\n', '1 186 0 299 14 36 44 1\n')
```
And I want to split it so that each column will be separate, so I can access it in an easier way.
So for an example:
```
tuple[0][2]
```
would return `0`
```
tuple[1][3]
```
would return `299`
The second part of my question is: what is the equivalent of `.rstrip()` so I can get rid of the `\n`?
|
2013/05/29
|
[
"https://Stackoverflow.com/questions/16818201",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1708551/"
] |
You could use a list comprehension:
```
rows = [row.split() for row in your_tuple]
```
As for `.rstrip()`, you don't need it. `.split()` (with no argument!) takes care of that for you:
```
>>> ' a b c \t\n\n '.split()
['a', 'b', 'c']
```
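A runnable sketch of the above, using the question's sample tuple. Note that `split()` yields strings, so the element at `[1][3]` comes back as `'299'`, not the integer `299`; an explicit `int()` conversion is shown for when numbers are needed:

```python
data = ('1 2130 0 279 90 92 193 1\n', '1 186 0 299 14 36 44 1\n')

# split() with no argument also takes care of the trailing '\n'
rows = [row.split() for row in data]

# The elements are strings; convert them if numeric values are wanted.
nums = [[int(v) for v in row] for row in rows]

print(rows[0][2])
print(nums[1][3])
```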
|
Apply `str.split` to each item of the tuple (in Python 3, wrap the `map` call in `list(...)` so the result can be indexed):
```
>>> tup = ('1 2130 0 279 90 92 193 1\n', '1 186 0 299 14 36 44 1\n')
>>> t = map(str.split, tup)
>>> t
[['1', '2130', '0', '279', '90', '92', '193', '1'], ['1', '186', '0', '299', '14', '36', '44', '1']]
>>> t[0][2]
'0'
>>> t[1][3]
'299'
```
|
16,818,201 |
I have a tuple that looks like this:
```
('1 2130 0 279 90 92 193 1\n', '1 186 0 299 14 36 44 1\n')
```
And I want to split it so that each column will be separate, so I can access it in an easier way.
So for an example:
```
tuple[0][2]
```
would return `0`
```
tuple[1][3]
```
would return `299`
The second part of my question is: what is the equivalent of `.rstrip()` so I can get rid of the `\n`?
|
2013/05/29
|
[
"https://Stackoverflow.com/questions/16818201",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1708551/"
] |
You could use a list comprehension:
```
rows = [row.split() for row in your_tuple]
```
As for `.rstrip()`, you don't need it. `.split()` (with no argument!) takes care of that for you:
```
>>> ' a b c \t\n\n '.split()
['a', 'b', 'c']
```
|
```
>>> data = ('1 2130 0 279 90 92 193 1\n', '1 186 0 299 14 36 44 1\n')
>>> [x.split() for x in data]
[['1', '2130', '0', '279', '90', '92', '193', '1'],
['1', '186', '0', '299', '14', '36', '44', '1']]
```
If you want integer values:
```
>>> [[int(y) for y in x.split()] for x in data]
[[1, 2130, 0, 279, 90, 92, 193, 1], [1, 186, 0, 299, 14, 36, 44, 1]]
>>> res = [[int(y) for y in x.split()] for x in data]
>>> res[0][1]
2130
```
|
16,818,201 |
I have a tuple that looks like this:
```
('1 2130 0 279 90 92 193 1\n', '1 186 0 299 14 36 44 1\n')
```
And I want to split it so that each column will be separate, so I can access it in an easier way.
So for an example:
```
tuple[0][2]
```
would return `0`
```
tuple[1][3]
```
would return `299`
The second part of my question is: what is the equivalent of `.rstrip()` so I can get rid of the `\n`?
|
2013/05/29
|
[
"https://Stackoverflow.com/questions/16818201",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1708551/"
] |
You could use a list comprehension:
```
rows = [row.split() for row in your_tuple]
```
As for `.rstrip()`, you don't need it. `.split()` (with no argument!) takes care of that for you:
```
>>> ' a b c \t\n\n '.split()
['a', 'b', 'c']
```
|
You don't want to `split()` or `rstrip()` the tuple. The things in the tuple are strings and these are perfectly splittable and strippable, and so what you want is an easy way to apply those operations to *each string* in the tuple. (Actually, you don't need the `rstrip()` as the `split()` will take care of the newline for you.) This is where list comprehensions come in:
```
data = ('1 2130 0 279 90 92 193 1\n', '1 186 0 299 14 36 44 1\n')
data = [x.split() for x in data]
```
Now this ends up with the tuple (and its elements) being a list. This is fine most of the time, but if you really need it to be a tuple, not a list, try this:
```
data = tuple(tuple(x.split()) for x in data)
```
|
16,818,201 |
I have a tuple that looks like this:
```
('1 2130 0 279 90 92 193 1\n', '1 186 0 299 14 36 44 1\n')
```
And I want to split it so that each column will be separate, so I can access it in an easier way.
So for an example:
```
tuple[0][2]
```
would return `0`
```
tuple[1][3]
```
would return `299`
The second part of my question is: what is the equivalent of `.rstrip()` so I can get rid of the `\n`?
|
2013/05/29
|
[
"https://Stackoverflow.com/questions/16818201",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1708551/"
] |
You could use a list comprehension:
```
rows = [row.split() for row in your_tuple]
```
As for `.rstrip()`, you don't need it. `.split()` (with no argument!) takes care of that for you:
```
>>> ' a b c \t\n\n '.split()
['a', 'b', 'c']
```
|
Use a list comprehension:
```
rows = [row.split() for row in tuple]
```
You can use:
```
each_item[:-1]
```
To get rid of the `\n`
|
16,818,201 |
I have a tuple that looks like this:
```
('1 2130 0 279 90 92 193 1\n', '1 186 0 299 14 36 44 1\n')
```
And I want to split it so that each column will be separate, so I can access it in an easier way.
So for an example:
```
tuple[0][2]
```
would return `0`
```
tuple[1][3]
```
would return `299`
The second part of my question is: what is the equivalent of `.rstrip()` so I can get rid of the `\n`?
|
2013/05/29
|
[
"https://Stackoverflow.com/questions/16818201",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1708551/"
] |
Apply `str.split` to each item of the tuple (in Python 3, wrap the `map` call in `list(...)` so the result can be indexed):
```
>>> tup = ('1 2130 0 279 90 92 193 1\n', '1 186 0 299 14 36 44 1\n')
>>> t = map(str.split, tup)
>>> t
[['1', '2130', '0', '279', '90', '92', '193', '1'], ['1', '186', '0', '299', '14', '36', '44', '1']]
>>> t[0][2]
'0'
>>> t[1][3]
'299'
```
|
Use a list comprehension:
```
rows = [row.split() for row in tuple]
```
You can use:
```
each_item[:-1]
```
To get rid of the `\n`
|
16,818,201 |
I have a tuple that looks like this:
```
('1 2130 0 279 90 92 193 1\n', '1 186 0 299 14 36 44 1\n')
```
And I want to split it so that each column will be separate, so I can access it in an easier way.
So for an example:
```
tuple[0][2]
```
would return `0`
```
tuple[1][3]
```
would return `299`
The second part of my question is: what is the equivalent of `.rstrip()` so I can get rid of the `\n`?
|
2013/05/29
|
[
"https://Stackoverflow.com/questions/16818201",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1708551/"
] |
```
>>> data = ('1 2130 0 279 90 92 193 1\n', '1 186 0 299 14 36 44 1\n')
>>> [x.split() for x in data]
[['1', '2130', '0', '279', '90', '92', '193', '1'],
['1', '186', '0', '299', '14', '36', '44', '1']]
```
If you want integer values:
```
>>> [[int(y) for y in x.split()] for x in data]
[[1, 2130, 0, 279, 90, 92, 193, 1], [1, 186, 0, 299, 14, 36, 44, 1]]
>>> res = [[int(y) for y in x.split()] for x in data]
>>> res[0][1]
2130
```
|
Use a list comprehension:
```
rows = [row.split() for row in tuple]
```
You can use:
```
each_item[:-1]
```
To get rid of the `\n`
|
16,818,201 |
I have a tuple that looks like this:
```
('1 2130 0 279 90 92 193 1\n', '1 186 0 299 14 36 44 1\n')
```
And I want to split it so that each column will be separate, so I can access it in an easier way.
So for an example:
```
tuple[0][2]
```
would return `0`
```
tuple[1][3]
```
would return `299`
The second part of my question is: what is the equivalent of `.rstrip()` so I can get rid of the `\n`?
|
2013/05/29
|
[
"https://Stackoverflow.com/questions/16818201",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1708551/"
] |
You don't want to `split()` or `rstrip()` the tuple. The things in the tuple are strings and these are perfectly splittable and strippable, and so what you want is an easy way to apply those operations to *each string* in the tuple. (Actually, you don't need the `rstrip()` as the `split()` will take care of the newline for you.) This is where list comprehensions come in:
```
data = ('1 2130 0 279 90 92 193 1\n', '1 186 0 299 14 36 44 1\n')
data = [x.split() for x in data]
```
Now this ends up with the tuple (and its elements) being a list. This is fine most of the time, but if you really need it to be a tuple, not a list, try this:
```
data = tuple(tuple(x.split()) for x in data)
```
|
Use a list comprehension:
```
rows = [row.split() for row in tuple]
```
You can use:
```
each_item[:-1]
```
To get rid of the `\n`
|
17,457,270 |
I am trying to create a table with 3 columns (url, price, and timestamps)... I tried both "change" and "up".
```
class Shp < ActiveRecord::Base
def change
create_table :shps do |t|
t.string :url
t.float :price
t.timestamps
end
end
end
```
Running db:migrate seems to do nothing, as when I do
```
ActiveRecord::Base.connection.column_names("shps")
```
I get a table with default columns only.
```
=> [#<ActiveRecord::ConnectionAdapters::SQLiteColumn:0x8fc8a34 @name="id", @sql_type="INTEGER", @null=false, @limit=nil, @precision=nil, @scale=nil, @type=:integer, @default=nil, @primary=nil, @coder=nil>, #<ActiveRecord::ConnectionAdapters::SQLiteColumn:0x8fc878c @name="created_at", @sql_type="datetime", @null=false, @limit=nil, @precision=nil, @scale=nil, @type=:datetime, @default=nil, @primary=nil, @coder=nil>, #<ActiveRecord::ConnectionAdapters::SQLiteColumn:0x8fc8444 @name="updated_at", @sql_type="datetime", @null=false, @limit=nil, @precision=nil, @scale=nil, @type=:datetime, @default=nil, @primary=nil, @coder=nil>]
```
|
2013/07/03
|
[
"https://Stackoverflow.com/questions/17457270",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1650757/"
] |
Based on the comments, I think this is what you're asking about.
```
int main(void)
{
char *array1 = "12345";
char *array2 = "abcde";
char *array3 = "67890";
char *array4 = "fghij";
char *array_2d[8];
array_2d[0] = array1;
array_2d[1] = array2;
array_2d[2] = array3;
array_2d[3] = array4;
array1 = "ABCDE";
array2 = "GHIJK";
array3 = "LMNOP";
array4 = "QRSTU";
array_2d[4] = array1;
array_2d[5] = array2;
array_2d[6] = array3;
array_2d[7] = array4;
int i,j;
for(i = 0; i<=7 ; i++ ) {
for(j = 0; j<=4 ; j++) {
printf("%c", array_2d[i][j]);
}
printf("\n");
}
}
```
Reassigning `array1` doesn't affect what `array_2d[0]` points to. The original assignment just copies the pointer; it doesn't make them aliases for each other.
|
Like others have said, it is not completely clear to me what is being asked, but it is possible to make copies of the strings as follows (this is just one possible way):
```
char *array_2d[8];
array_2d[0] = strdup( array1 );
...
array_2d[4] = strdup( array_2d[1] );
strupr( array_2d[4] );
...
```
This will result in putting separately allocated strings in each location. [strdup](http://linux.die.net/man/3/strdup) makes a copy of string. Note that you have to call free() (e.g., `free( array_2d[0] );`) to free that memory.
Another thing to note: The OP seems to indicate that you plan to modify the array (string) pointed to by `array1`. You cannot change that memory (it is a string constant). You can change what `array1` points to, though.
|
48,967,983 |
I am trying to upload files using PHP on my local server and save them in the current working directory. The code works fine for any file type smaller than 2 MB, but above this size it gives errors. I have also modified the upload and post sizes in the php.ini file, but it was of no help. It gives 4 errors:
>
> 1. Undefined index: temp in C:\wamp64\www\project\upload.php on line 3
> 2. file\_get\_contents(C:\wamp64\www\project\project): failed to open stream: No such file or directory in C:\wamp64\www\project\upload.php
> on line 5
> 3. Undefined index: temp in C:\wamp64\www\project\upload.php on line 6
> 4. Undefined index: temp in C:\wamp64\www\project\upload.php on line 18
>
>
>
At the end it shows "Sorry, there was an error uploading your file.".
I am saving the uploaded file in the same location where upload.php is located.
```
Upload.php:
<?php
$target_dir = getcwd();
$target_file = $target_dir .'\project'. basename($_FILES["temp"]["name"]);
$uploadOk = 1;
echo file_get_contents($target_file);
if ($_FILES["temp"]["size"] > 5000000000)
{
echo "Sorry, your file is too large.";
$uploadOk = 0;
}
if ($uploadOk == 0)
{
echo "Sorry, your file was not uploaded.";
}
else
{
if (move_uploaded_file($_FILES["temp"]["tmp_name"], $target_file))
{
echo "The file ". basename( $_FILES["temp"]["name"]). " has been
uploaded.";
}
else
{
echo "Sorry, there was an error uploading your file.";
}
}
?>
```
|
2018/02/24
|
[
"https://Stackoverflow.com/questions/48967983",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8401485/"
] |
Try the following points (check whether each one applies):
**1- Ensure that the form's `method` attribute is `POST` and its `enctype` is `multipart/form-data`.**
**2- The line `echo file_get_contents($target_file);` is misplaced: it runs before the file is uploaded. I suggest moving it to the end of your code.**
**3- Add this condition at the beginning of your code**
```
if (isset($_POST["submit"]))
{
//Your Code here;
}
```
***and ensure the `name` attribute is set on the form's submit button***
4- Ensure the `project` folder exists in the same folder that contains the `upload.php` file
5-
>
> **Very Important:**
>
>
>
Ensure you add `\\` ***before and after*** `project` in the `$target_file` variable, so it becomes
```
$target_file = $target_dir .'\\project\\'. basename($_FILES["temp"]["name"]);
```
**the full code work with me**
```
<?php
if (isset($_POST["submit"]))
{
$target_dir = getcwd();
$target_file = $target_dir .'\\project\\'. basename($_FILES["temp"]["name"]);
$uploadOk = 1;
echo file_get_contents($target_file);
if ($_FILES["temp"]["size"] > 5000000000)
{
echo "Sorry, your file is too large.";
$uploadOk = 0;
}
if ($uploadOk == 0)
{
echo "Sorry, your file was not uploaded.";
}
else
{
if (move_uploaded_file($_FILES["temp"]["tmp_name"], $target_file))
{
echo "The file ". basename( $_FILES["temp"]["name"]). " has been
uploaded.";
}
else
{
echo "Sorry, there was an error uploading your file.";
}
}
}
?>
<html>
<head>
<title></title>
</head>
<body>
<form action="first.php" method='post' enctype="multipart/form-data">
<input type="file" name="temp" />
<input type="submit" name="submit" value="upload" />
</form>
</body>
</html>
```
I hope I helped you
Thanks
|
Look at your `php.ini` file for these 2 parameters; WAMPServer comes configured with
```
upload_max_filesize = 2M
post_max_size = 8M
```
Increase `upload_max_filesize` to a little bigger than the largest file you want to allow to be uploaded. Or a lot larger if you intend to upload more than one file at a time.
Then increase `post_max_size` to a number LARGER than what you set `upload_max_filesize` to, so for example
```
upload_max_filesize = 80M
post_max_size = 85M
```
`post_max_size` must be larger than `upload_max_filesize`, as the file is transported in the POST buffer along with all the other fields that may exist on your form
>
> NOTE: use the wampmanager icon to edit `php.ini`; that will ensure you edit the correct `php.ini` (there are 2).
>
>
>
```
wampmanager -> php -> php.ini
```
>
> Also remember to restart Apache once you have edited and saved `php.ini`
>
>
>
```
wampmanager -> Apache -> Service administration 'wampapache' -> Restart Service
```
|
30,537,193 |
I cloned fresh repo of my code.
Then I made necessary changes in repo followed by these three steps
```
git add --all
git commit
git push -u origin master
```
It asks for my username and password, which I enter.
But after the push, the repo shows the changes as authored by my colleague rather than by me. Why is that?
I can't figure it out.
|
2015/05/29
|
[
"https://Stackoverflow.com/questions/30537193",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3818223/"
] |
Check what is the email in the config
```
git config --global user.email
```
If it is not yours, change it and commit again:
```
$ git config --global user.name "John Doe"
$ git config --global user.email [email protected]
```
|
Does it locally show the author of your commit as your colleague when you do `git log`?
You can check "who you're committing as" by running these commands:
```
git config user.name
git config --global user.name
```
And [ensure your email is properly associated with your GitHub account](https://help.github.com/articles/setting-your-username-in-git/). You can check your local email address settings the same way:
```
git config user.email
git config --global user.email
```
|
30,285 |
Let $M^m$ and $N^n$ be compact, oriented smooth manifolds without boundary. Then what is the degree of the map
$$ f: M\times N \to N \times M$$
given by $f(x,y) = (y,x)$? I have the feeling it should be $(-1)^{mn}$ (would fit in with some proof I have to give), but the result I come up with is $1$.
My reasoning: If $w\_1, \dots, w\_m$ and $v\_1, \dots, v\_n$ are positively oriented ordered bases for $T\_pM$ and $T\_qN$, then $w\_1, \dots, w\_m, v\_1, \dots, v\_n$ is a positively oriented ordered basis on $T\_{(p,q)}(M \times N)$ (where $T\_{(p,q)}(M \times N) \simeq T\_pM \oplus T\_qN$).
Similarly, $v\_1, \dots, v\_n, w\_1, \dots, w\_m$ is a positively oriented basis for $T\_{(q,p)}(N \times M)$. Now $df(p,q)(w\_1, \dots, w\_m, v\_1, \dots, v\_n) = (v\_1, \dots, v\_n, w\_1, \dots, w\_m)$, so $f$ is order preserving. And since it is a diffeomorphism, its degree is $1$.
Now since the above would contradict another statement I have to prove, I must be making a mistake. I would be very glad if someone could help me out here. :-)
Best Regards,
S.L.
|
2011/04/01
|
[
"https://math.stackexchange.com/questions/30285",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/3208/"
] |
When you say $df(p,q)(w\_1,\ldots, w\_m,v\_1,\ldots, v\_n) = (v\_1,\ldots, v\_n,w\_1,\ldots, w\_m)$, what exactly do you mean? My guess is that you're looking at the image of an ordered basis. I think it might be better if you look at each individual basis vector in $T\_{(p,q)}(M\times N)$ and see what its image is instead of trying to take them all at once, so that you can be sure of what you're doing. Then, the question is not whether the map is order-preserving, but whether the image of your oriented basis has the right or wrong orientation. You can check this by checking the sign of the permutation that brings it back to your original choice of ordered basis.
|
Since $f$ is one-to-one, its degree is $-1$ or $1$ (also, because $f\circ f=\mathrm{Id}$, we have $(\mathrm{deg}\,f)^2=1$).
If $\{e\_1,...,e\_{n+m}\}$ denotes the canonical basis of $\mathbb{R}^{n+m}$, then
$\det D(x,y)f=\det(e\_{n+1},...,e\_{n+m},e\_1,...,e\_n)=(-1)^{mn}\det(e\_1,...,e\_n,e\_{n+1},...,e\_{n+m})=(-1)^{mn}$.
So $\mathrm{deg}\,f=(-1)^{mn}$.
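As a quick sanity check, take $m=n=1$: the swap $(x,y)\mapsto(y,x)$ on $\mathbb{R}^2$ has matrix $\begin{pmatrix}0 & 1\\ 1 & 0\end{pmatrix}$, whose determinant is $-1=(-1)^{1\cdot 1}$, matching the formula.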
|
28,053,951 |
New to Angular, may be using promises wrong. I have a factory returning a promise:
```
.factory('myData', ['$http', '$q', function($http, $q) {
var deferred = $q.defer();
$http.get('/path/to/endpoint')
.success(function(data) {
deferred.resolve(data);
})
.error(function(err) {
deferred.reject(err.what())
});
return deferred.promise;
}])
```
Now if I inject my factory somewhere I can use the promise:
```
.controller('myController', ['$scope', 'myData', function($scope, myData) {
myData.then(function(result) {
$scope.data = result;
});
}]);
```
This is fine, but I'm starting to use `myData` in several places and I don't want to be writing a new `.then` in every directive and controller I use the data in. After the promise is resolved I don't care about it anymore, is there any way to make `myData` return a promise if it's unresolved but return just the result after it's finished resolving?
To word it another way, can `myData` simple be the `.then` `result` after resolution, or do I have to write a new `.then` every time?
|
2015/01/20
|
[
"https://Stackoverflow.com/questions/28053951",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3316036/"
] |
On working with promises
------------------------
First of all, your `myData` service can just return the call:
```
.factory('myData', ['$http', function($http) {
return $http.get('/path/to/endpoint').then(function(req){
return req.data;
});
}]);
```
Unwrapping values
-----------------
So you know how to work with promises in Angular...
But, you want something better, you want the promise to automatically unwrap with Angular's digests. This is tricky but it can be done. Note that it can be confusing in code and I don't really recommend it but here's the general idea:
Automatic unwrapping
--------------------
```
.factory('myData', ['$http', function($http) {
var result = []; // initial result to return
var p = $http.get('/path/to/endpoint').then(function(res){
result.push.apply(result, res.data); // add all the items
});
result.then = p.then.bind(p); // allow hooking on it (bind keeps the promise as `this`)
return result; // return the array, initially empty
}]);
```
This would let you do something like:
```
.controller('myController', ['$scope', 'myData', function($scope, myData) {
$scope.data = myData;
}]);
```
Which will put an empty array there and will replace it (also causing a digest) whenever the real data arrives, you can still do `myData.then` in order to check if it's done loading yourself so you can use the old syntax if you need to be sure.
Is it a good idea?
------------------
Note that **Angular used to do this** until version 1.2 (removed completely in 1.3) but stopped doing so because automatic unwrapping was considered too magical. You can formulate your own decisions but take note that the Angular core team decided this was not a good idea. Note that things like ngResource still do this.
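For reference, the same trick can be sketched outside Angular with plain Promises (the names here are illustrative, not part of any API):

```javascript
// Sketch of the automatic-unwrapping idea with plain Promises (no Angular).
// The "service" returns an array immediately and fills it in place when the
// fake request resolves; a `then` hook is grafted on for explicit waits.
function makeUnwrappedData(fetcher) {
  const result = [];                       // handed out synchronously, empty
  const loaded = fetcher().then(items => {
    result.push(...items);                 // mutate the array callers hold
  });
  // Delegate to the underlying promise rather than copying `then` directly,
  // which would lose its `this` binding.
  result.then = (onFulfilled, onRejected) =>
    loaded.then(() => { if (onFulfilled) onFulfilled(result); }, onRejected);
  return result;
}

// Usage: the array is empty at first and filled after resolution.
const data = makeUnwrappedData(() => Promise.resolve(['a', 'b']));
console.log(data.length);                  // 0 -- not loaded yet
data.then(d => console.log(d.length));     // prints 2 once resolved
```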
|
Yes: resolve the promise in your service/factory and have your controllers reference the resolved value, instead of each controller handling the promise itself. Hope that makes sense.
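One common way to do this (a sketch with plain Promises; the function names are made up) is to cache the promise in the service, so the request fires once and every consumer shares the same resolution:

```javascript
// Cache the promise so the underlying request happens only once; every
// caller gets the same promise and therefore the same resolved value.
function makeDataService(fetcher) {
  let cached = null;            // the single shared promise
  return function getData() {
    if (!cached) {
      cached = fetcher();       // first call kicks off the request
    }
    return cached;              // later calls reuse the same promise
  };
}

// Usage with a stand-in for $http.get
const getData = makeDataService(() => Promise.resolve([1, 2, 3]));
getData().then(data => console.log(data.length)); // 3
getData().then(data => console.log(data.length)); // 3, no second request
```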
|