qid | question | date | metadata | response_j | response_k
---|---|---|---|---|---
289,429 | I just added a second user to my Exchange 2010 box; it is in coexistence with Exchange 2003. My account is already set up and working with a personal archive folder.
The user I just set up, however, is unable to see the archive in Outlook. It is visible in OWA but not in Outlook. I have created a test profile on my PC with the user's account and there is still no archive; if I jump back to my profile on the same box the archive is there, so I know it is not an Office version issue.
UPDATE:
I have deleted all profiles from Outlook (one of which worked with the archive); now new profiles, including my own, no longer show the archive. I think I have broken something in Exchange. I get an Autodiscover certificate error which I am in the process of fixing; perhaps the two problems are related. Also, OWA on this server runs on a custom SSL port. | 2011/07/12 | [
"https://serverfault.com/questions/289429",
"https://serverfault.com",
"https://serverfault.com/users/87374/"
] | Change the configuration of the `origin` remote. See the **REMOTES** section of the `git-push(1)` man page for details. | Instead, save your identity in a configuration file using the git config command.
$ git config user.name "Jon Loeliger"
$ git config user.email "[email protected]"
You can also tell Git your name and email address using the GIT\_AUTHOR\_NAME and
GIT\_AUTHOR\_EMAIL environment variables. If set, these variables override all configuration
settings. |
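To make the precedence concrete: `git config` writes the identity into the repository's `.git/config`, and `git config --get` reads it back. This is a minimal sketch, assuming the `git` CLI is on your PATH; the name and email are the sample values from the answer above.

```python
import subprocess
import tempfile

# Create a throwaway repository so nothing touches your real config.
repo = tempfile.mkdtemp()

def git(*args):
    """Run a git subcommand inside the throwaway repo, return stdout."""
    return subprocess.run(("git",) + args, cwd=repo, check=True,
                          capture_output=True, text=True).stdout.strip()

git("init")
git("config", "user.name", "Jon Loeliger")
git("config", "user.email", "jdl@example.com")

# git config --get reads the value back from .git/config.
name = git("config", "--get", "user.name")
print(name)  # Jon Loeliger
```

If `GIT_AUTHOR_NAME`/`GIT_AUTHOR_EMAIL` are set in the environment, commits use those values instead of the configured ones, as the text notes.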
289,429 | I just added a second user to my Exchange 2010 box; it is in coexistence with Exchange 2003. My account is already set up and working with a personal archive folder.
The user I just set up, however, is unable to see the archive in Outlook. It is visible in OWA but not in Outlook. I have created a test profile on my PC with the user's account and there is still no archive; if I jump back to my profile on the same box the archive is there, so I know it is not an Office version issue.
UPDATE:
I have deleted all profiles from Outlook (one of which worked with the archive); now new profiles, including my own, no longer show the archive. I think I have broken something in Exchange. I get an Autodiscover certificate error which I am in the process of fixing; perhaps the two problems are related. Also, OWA on this server runs on a custom SSL port. | 2011/07/12 | [
"https://serverfault.com/questions/289429",
"https://serverfault.com",
"https://serverfault.com/users/87374/"
] | If I understood the situation, the following commands should set the information that you want in your Git configuration.
```
git config --global user.name "Your Name Comes Here"
git config --global user.email [email protected]
``` | Instead, save your identity in a configuration file using the git config command.
$ git config user.name "Jon Loeliger"
$ git config user.email "[email protected]"
You can also tell Git your name and email address using the GIT\_AUTHOR\_NAME and
GIT\_AUTHOR\_EMAIL environment variables. If set, these variables override all configuration
settings. |
1,911,426 | On one customer's computer (Windows Vista) almost all forms of my app are oddly unaligned.
Doing some investigation I noticed that every component with an akRight anchor acts oddly, being positioned much too far to the right.
Searching on SO I found [this](https://stackoverflow.com/questions/1355258/delphi-7-forms-anchors-not-working-in-vista) issue which is similar to mine but not exactly the same.
Since I cannot reproduce the issue on my computer and my access to my customer's computer is restricted to a few minutes via remote desktop, I'd like to know if the issue described here could be solved by the same fix. | 2009/12/16 | [
"https://Stackoverflow.com/questions/1911426",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/19224/"
] | Aren't you experiencing the problem described in the following question?
[Why do my Borland C++Builder 5 forms with right-anchored controls appear incorrectly on Vista?](https://stackoverflow.com/questions/525517/why-do-my-borland-cbuilder-5-forms-with-right-anchored-controls-appear-incorrec)
Maybe the answer is of some help. | I've also experienced similar issues. After much frustration I pretty much gave up using anchors and started using a combination of the Align, AlignWithMargins and Margins properties. |
35,813,854 | I want to join more than two collections in MongoDB using the aggregate `$lookup`. Is it possible to join? Give me some examples.
Here I have three collections:
`users`:
```
{
"_id" : ObjectId("5684f3c454b1fd6926c324fd"),
"email" : "[email protected]",
"userId" : "AD",
"userName" : "admin"
}
```
`userinfo`:
```
{
"_id" : ObjectId("56d82612b63f1c31cf906003"),
"userId" : "AD",
"phone" : "0000000000"
}
```
`userrole`:
```
{
"_id" : ObjectId("56d82612b63f1c31cf906003"),
"userId" : "AD",
"role" : "admin"
}
``` | 2016/03/05 | [
"https://Stackoverflow.com/questions/35813854",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5492366/"
] | You can actually chain multiple $lookup stages. Based on the names of the collections shared by profesor79, you can do this:
```
db.sivaUserInfo.aggregate([
{
$lookup: {
from: "sivaUserRole",
localField: "userId",
foreignField: "userId",
as: "userRole"
}
},
{
$unwind: "$userRole"
},
{
$lookup: {
from: "sivaUserInfo",
localField: "userId",
foreignField: "userId",
as: "userInfo"
}
},
{
$unwind: "$userInfo"
}
])
```
This will return the following structure:
```
{
"_id" : ObjectId("56d82612b63f1c31cf906003"),
"userId" : "AD",
"phone" : "0000000000",
"userRole" : {
"_id" : ObjectId("56d82612b63f1c31cf906003"),
"userId" : "AD",
"role" : "admin"
},
"userInfo" : {
"_id" : ObjectId("56d82612b63f1c31cf906003"),
"userId" : "AD",
"phone" : "0000000000"
}
}
```
Maybe this could be considered an anti-pattern because MongoDB wasn't meant to be relational, but it is useful. | According to the [documentation](https://docs.mongodb.org/manual/reference/operator/aggregation/lookup/), $lookup can join only one external collection.
What you could do is combine `userInfo` and `userRole` in one collection, as the provided example is based on a relational DB schema. Mongo is a NoSQL database, and this requires a different approach to document management.
Please find below a 2-step query, which combines userInfo with userRole, creating a temporary collection used in the last query to display the combined data.
In the last query there is an option to use $out and create a new collection with the merged data for later use.
>
> create collections
>
>
>
```
db.sivaUser.insert(
{
"_id" : ObjectId("5684f3c454b1fd6926c324fd"),
"email" : "[email protected]",
"userId" : "AD",
"userName" : "admin"
})
//"userinfo"
db.sivaUserInfo.insert(
{
"_id" : ObjectId("56d82612b63f1c31cf906003"),
"userId" : "AD",
"phone" : "0000000000"
})
//"userrole"
db.sivaUserRole.insert(
{
"_id" : ObjectId("56d82612b63f1c31cf906003"),
"userId" : "AD",
"role" : "admin"
})
```
>
> "join" them all :-)
>
>
>
```
db.sivaUserInfo.aggregate([
{$lookup:
{
from: "sivaUserRole",
localField: "userId",
foreignField: "userId",
as: "userRole"
}
},
{
$unwind:"$userRole"
},
{
$project:{
"_id":1,
"userId" : 1,
"phone" : 1,
"role" :"$userRole.role"
}
},
{
$out:"sivaUserTmp"
}
])
db.sivaUserTmp.aggregate([
{$lookup:
{
from: "sivaUser",
localField: "userId",
foreignField: "userId",
as: "user"
}
},
{
$unwind:"$user"
},
{
$project:{
"_id":1,
"userId" : 1,
"phone" : 1,
"role" :1,
"email" : "$user.email",
"userName" : "$user.userName"
}
}
])
``` |
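The `$lookup`/`$unwind` semantics used in both answers above can be sanity-checked without a running mongod. This is a rough plain-Python sketch of the same behavior; the document contents mirror the sivaUserInfo/sivaUserRole samples, and the helper functions are illustrative, not MongoDB's API.

```python
# Sample documents, copied from the answer's collections (ids shortened).
siva_user_info = [{"userId": "AD", "phone": "0000000000"}]
siva_user_role = [{"userId": "AD", "role": "admin"}]

def lookup(docs, foreign, local_field, foreign_field, as_name):
    # $lookup: for each doc, attach the foreign docs whose foreign_field
    # equals the doc's local_field, as an array under as_name.
    return [
        {**d, as_name: [f for f in foreign
                        if f.get(foreign_field) == d.get(local_field)]}
        for d in docs
    ]

def unwind(docs, field):
    # $unwind: emit one output document per element of the array field.
    return [{**d, field: item} for d in docs for item in d[field]]

stage = lookup(siva_user_info, siva_user_role, "userId", "userId", "userRole")
joined = unwind(stage, "userRole")
print(joined[0]["userRole"]["role"])  # admin
```

Chaining a second `lookup`/`unwind` pair over `joined` reproduces the multi-collection join the question asks about.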
35,813,854 | I want to join more than two collections in MongoDB using the aggregate `$lookup`. Is it possible to join? Give me some examples.
Here I have three collections:
`users`:
```
{
"_id" : ObjectId("5684f3c454b1fd6926c324fd"),
"email" : "[email protected]",
"userId" : "AD",
"userName" : "admin"
}
```
`userinfo`:
```
{
"_id" : ObjectId("56d82612b63f1c31cf906003"),
"userId" : "AD",
"phone" : "0000000000"
}
```
`userrole`:
```
{
"_id" : ObjectId("56d82612b63f1c31cf906003"),
"userId" : "AD",
"role" : "admin"
}
``` | 2016/03/05 | [
"https://Stackoverflow.com/questions/35813854",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5492366/"
] | The join feature is supported by **MongoDB 3.2** and later versions. You can perform joins using an **aggregate** query.
You can do it using the example below:
```
db.users.aggregate([
// Join with user_info table
{
$lookup:{
from: "userinfo", // other table name
localField: "userId", // name of users table field
foreignField: "userId", // name of userinfo table field
as: "user_info" // alias for userinfo table
}
},
{ $unwind:"$user_info" }, // $unwind used for getting data in object or for one record only
// Join with user_role table
{
$lookup:{
from: "userrole",
localField: "userId",
foreignField: "userId",
as: "user_role"
}
},
{ $unwind:"$user_role" },
// define some conditions here
{
$match:{
$and:[{"userName" : "admin"}]
}
},
// define which fields you want to fetch
{
$project:{
_id : 1,
email : 1,
userName : 1,
userPhone : "$user_info.phone",
role : "$user_role.role",
}
}
]);
```
This will give a result like this:
```
{
"_id" : ObjectId("5684f3c454b1fd6926c324fd"),
"email" : "[email protected]",
"userName" : "admin",
"userPhone" : "0000000000",
"role" : "admin"
}
```
Hope this will help you or someone else.
Thanks | According to the [documentation](https://docs.mongodb.org/manual/reference/operator/aggregation/lookup/), $lookup can join only one external collection.
What you could do is combine `userInfo` and `userRole` in one collection, as the provided example is based on a relational DB schema. Mongo is a NoSQL database, and this requires a different approach to document management.
Please find below a 2-step query, which combines userInfo with userRole, creating a temporary collection used in the last query to display the combined data.
In the last query there is an option to use $out and create a new collection with the merged data for later use.
>
> create collections
>
>
>
```
db.sivaUser.insert(
{
"_id" : ObjectId("5684f3c454b1fd6926c324fd"),
"email" : "[email protected]",
"userId" : "AD",
"userName" : "admin"
})
//"userinfo"
db.sivaUserInfo.insert(
{
"_id" : ObjectId("56d82612b63f1c31cf906003"),
"userId" : "AD",
"phone" : "0000000000"
})
//"userrole"
db.sivaUserRole.insert(
{
"_id" : ObjectId("56d82612b63f1c31cf906003"),
"userId" : "AD",
"role" : "admin"
})
```
>
> "join" them all :-)
>
>
>
```
db.sivaUserInfo.aggregate([
{$lookup:
{
from: "sivaUserRole",
localField: "userId",
foreignField: "userId",
as: "userRole"
}
},
{
$unwind:"$userRole"
},
{
$project:{
"_id":1,
"userId" : 1,
"phone" : 1,
"role" :"$userRole.role"
}
},
{
$out:"sivaUserTmp"
}
])
db.sivaUserTmp.aggregate([
{$lookup:
{
from: "sivaUser",
localField: "userId",
foreignField: "userId",
as: "user"
}
},
{
$unwind:"$user"
},
{
$project:{
"_id":1,
"userId" : 1,
"phone" : 1,
"role" :1,
"email" : "$user.email",
"userName" : "$user.userName"
}
}
])
``` |
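The `$match` and `$project` stages in the first answer filter the unwound documents and reshape nested fields (`"$user_info.phone"` becomes a top-level `userPhone`). Here is a hedged Python sketch of that reshaping; the second sample document is invented to show the filter actually dropping something.

```python
# Documents as they look after the two $lookup + $unwind stages above.
# The "guest" document is a made-up extra row for the $match demo.
docs = [
    {"_id": "5684...", "email": "admin@admin.com", "userName": "admin",
     "user_info": {"phone": "0000000000"}, "user_role": {"role": "admin"}},
    {"_id": "9999...", "email": "guest@example.com", "userName": "guest",
     "user_info": {"phone": "1234"}, "user_role": {"role": "guest"}},
]

# $match: keep only documents satisfying the condition.
matched = [d for d in docs if d["userName"] == "admin"]

# $project: keep selected fields, pulling nested values up to the top level.
projected = [
    {"_id": d["_id"], "email": d["email"], "userName": d["userName"],
     "userPhone": d["user_info"]["phone"], "role": d["user_role"]["role"]}
    for d in matched
]
print(projected[0]["userPhone"])  # 0000000000
```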
35,813,854 | I want to join more than two collections in MongoDB using the aggregate `$lookup`. Is it possible to join? Give me some examples.
Here I have three collections:
`users`:
```
{
"_id" : ObjectId("5684f3c454b1fd6926c324fd"),
"email" : "[email protected]",
"userId" : "AD",
"userName" : "admin"
}
```
`userinfo`:
```
{
"_id" : ObjectId("56d82612b63f1c31cf906003"),
"userId" : "AD",
"phone" : "0000000000"
}
```
`userrole`:
```
{
"_id" : ObjectId("56d82612b63f1c31cf906003"),
"userId" : "AD",
"role" : "admin"
}
``` | 2016/03/05 | [
"https://Stackoverflow.com/questions/35813854",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5492366/"
] | According to the [documentation](https://docs.mongodb.org/manual/reference/operator/aggregation/lookup/), $lookup can join only one external collection.
What you could do is combine `userInfo` and `userRole` in one collection, as the provided example is based on a relational DB schema. Mongo is a NoSQL database, and this requires a different approach to document management.
Please find below a 2-step query, which combines userInfo with userRole, creating a temporary collection used in the last query to display the combined data.
In the last query there is an option to use $out and create a new collection with the merged data for later use.
>
> create collections
>
>
>
```
db.sivaUser.insert(
{
"_id" : ObjectId("5684f3c454b1fd6926c324fd"),
"email" : "[email protected]",
"userId" : "AD",
"userName" : "admin"
})
//"userinfo"
db.sivaUserInfo.insert(
{
"_id" : ObjectId("56d82612b63f1c31cf906003"),
"userId" : "AD",
"phone" : "0000000000"
})
//"userrole"
db.sivaUserRole.insert(
{
"_id" : ObjectId("56d82612b63f1c31cf906003"),
"userId" : "AD",
"role" : "admin"
})
```
>
> "join" them all :-)
>
>
>
```
db.sivaUserInfo.aggregate([
{$lookup:
{
from: "sivaUserRole",
localField: "userId",
foreignField: "userId",
as: "userRole"
}
},
{
$unwind:"$userRole"
},
{
$project:{
"_id":1,
"userId" : 1,
"phone" : 1,
"role" :"$userRole.role"
}
},
{
$out:"sivaUserTmp"
}
])
db.sivaUserTmp.aggregate([
{$lookup:
{
from: "sivaUser",
localField: "userId",
foreignField: "userId",
as: "user"
}
},
{
$unwind:"$user"
},
{
$project:{
"_id":1,
"userId" : 1,
"phone" : 1,
"role" :1,
"email" : "$user.email",
"userName" : "$user.userName"
}
}
])
``` | First add the collections and then apply a lookup on these collections. Don't use `$unwind`,
as unwind will simply separate all the documents of each collection. So apply a simple lookup and then use `$project` for projection.
Here is mongoDB query:
```
db.userInfo.aggregate([
{
$lookup: {
from: "userRole",
localField: "userId",
foreignField: "userId",
as: "userRole"
}
},
{
$lookup: {
from: "userInfo",
localField: "userId",
foreignField: "userId",
as: "userInfo"
}
},
{$project: {
"_id":0,
"userRole._id":0,
"userInfo._id":0
}
} ])
```
Here is the output:
```
/* 1 */ {
"userId" : "AD",
"phone" : "0000000000",
"userRole" : [
{
"userId" : "AD",
"role" : "admin"
}
],
"userInfo" : [
{
"userId" : "AD",
"phone" : "0000000000"
}
] }
```
Thanks. |
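Without `$unwind`, the joined field stays an array, which is exactly the bracketed shape in the output above; the `$project: {"_id": 0, ...}` stage is an exclusion projection. A hedged Python sketch of both ideas (data from the sample documents; the helpers are mine, and this version only drops top-level `_id`, not the nested ones):

```python
user_info = [{"_id": "56d8...", "userId": "AD", "phone": "0000000000"}]
user_role = [{"_id": "56d8...", "userId": "AD", "role": "admin"}]

def lookup(docs, foreign, local_field, foreign_field, as_name):
    # $lookup with no $unwind afterwards: the joined field remains a list.
    return [{**d, as_name: [f for f in foreign
                            if f[foreign_field] == d[local_field]]}
            for d in docs]

def exclude(doc, *fields):
    # Exclusion projection: drop the named top-level fields, keep the rest.
    return {k: v for k, v in doc.items() if k not in fields}

joined = lookup(user_info, user_role, "userId", "userId", "userRole")
result = [exclude(d, "_id") for d in joined]
print(isinstance(result[0]["userRole"], list))  # True
```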
35,813,854 | I want to join more than two collections in MongoDB using the aggregate `$lookup`. Is it possible to join? Give me some examples.
Here I have three collections:
`users`:
```
{
"_id" : ObjectId("5684f3c454b1fd6926c324fd"),
"email" : "[email protected]",
"userId" : "AD",
"userName" : "admin"
}
```
`userinfo`:
```
{
"_id" : ObjectId("56d82612b63f1c31cf906003"),
"userId" : "AD",
"phone" : "0000000000"
}
```
`userrole`:
```
{
"_id" : ObjectId("56d82612b63f1c31cf906003"),
"userId" : "AD",
"role" : "admin"
}
``` | 2016/03/05 | [
"https://Stackoverflow.com/questions/35813854",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5492366/"
] | According to the [documentation](https://docs.mongodb.org/manual/reference/operator/aggregation/lookup/), $lookup can join only one external collection.
What you could do is combine `userInfo` and `userRole` in one collection, as the provided example is based on a relational DB schema. Mongo is a NoSQL database, and this requires a different approach to document management.
Please find below a 2-step query, which combines userInfo with userRole, creating a temporary collection used in the last query to display the combined data.
In the last query there is an option to use $out and create a new collection with the merged data for later use.
>
> create collections
>
>
>
```
db.sivaUser.insert(
{
"_id" : ObjectId("5684f3c454b1fd6926c324fd"),
"email" : "[email protected]",
"userId" : "AD",
"userName" : "admin"
})
//"userinfo"
db.sivaUserInfo.insert(
{
"_id" : ObjectId("56d82612b63f1c31cf906003"),
"userId" : "AD",
"phone" : "0000000000"
})
//"userrole"
db.sivaUserRole.insert(
{
"_id" : ObjectId("56d82612b63f1c31cf906003"),
"userId" : "AD",
"role" : "admin"
})
```
>
> "join" them all :-)
>
>
>
```
db.sivaUserInfo.aggregate([
{$lookup:
{
from: "sivaUserRole",
localField: "userId",
foreignField: "userId",
as: "userRole"
}
},
{
$unwind:"$userRole"
},
{
$project:{
"_id":1,
"userId" : 1,
"phone" : 1,
"role" :"$userRole.role"
}
},
{
$out:"sivaUserTmp"
}
])
db.sivaUserTmp.aggregate([
{$lookup:
{
from: "sivaUser",
localField: "userId",
foreignField: "userId",
as: "user"
}
},
{
$unwind:"$user"
},
{
$project:{
"_id":1,
"userId" : 1,
"phone" : 1,
"role" :1,
"email" : "$user.email",
"userName" : "$user.userName"
}
}
])
``` | The first lookup finds all the products where p.cid = categories.\_id; similarly, the second lookup
finds all products where p.sid = subcategories.\_id.
```
let dataQuery :any = await ProductModel.aggregate([ { $lookup:{
from :"categories",
localField:"cid",
foreignField :"_id",
as :"products"
}
},
{
$unwind: "$products"
},
{ $lookup:{
from :"subcategories",
localField:"sid",
foreignField :"_id",
as :"productList"
}
},
{
$unwind: "$productList"
},
{
$project:{
productList:0
}
}
]);
``` |
35,813,854 | I want to join more than two collections in MongoDB using the aggregate `$lookup`. Is it possible to join? Give me some examples.
Here I have three collections:
`users`:
```
{
"_id" : ObjectId("5684f3c454b1fd6926c324fd"),
"email" : "[email protected]",
"userId" : "AD",
"userName" : "admin"
}
```
`userinfo`:
```
{
"_id" : ObjectId("56d82612b63f1c31cf906003"),
"userId" : "AD",
"phone" : "0000000000"
}
```
`userrole`:
```
{
"_id" : ObjectId("56d82612b63f1c31cf906003"),
"userId" : "AD",
"role" : "admin"
}
``` | 2016/03/05 | [
"https://Stackoverflow.com/questions/35813854",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5492366/"
] | The join feature is supported by **MongoDB 3.2** and later versions. You can perform joins using an **aggregate** query.
You can do it using the example below:
```
db.users.aggregate([
// Join with user_info table
{
$lookup:{
from: "userinfo", // other table name
localField: "userId", // name of users table field
foreignField: "userId", // name of userinfo table field
as: "user_info" // alias for userinfo table
}
},
{ $unwind:"$user_info" }, // $unwind used for getting data in object or for one record only
// Join with user_role table
{
$lookup:{
from: "userrole",
localField: "userId",
foreignField: "userId",
as: "user_role"
}
},
{ $unwind:"$user_role" },
// define some conditions here
{
$match:{
$and:[{"userName" : "admin"}]
}
},
// define which fields you want to fetch
{
$project:{
_id : 1,
email : 1,
userName : 1,
userPhone : "$user_info.phone",
role : "$user_role.role",
}
}
]);
```
This will give a result like this:
```
{
"_id" : ObjectId("5684f3c454b1fd6926c324fd"),
"email" : "[email protected]",
"userName" : "admin",
"userPhone" : "0000000000",
"role" : "admin"
}
```
Hope this will help you or someone else.
Thanks | You can actually chain multiple $lookup stages. Based on the names of the collections shared by profesor79, you can do this:
```
db.sivaUserInfo.aggregate([
{
$lookup: {
from: "sivaUserRole",
localField: "userId",
foreignField: "userId",
as: "userRole"
}
},
{
$unwind: "$userRole"
},
{
$lookup: {
from: "sivaUserInfo",
localField: "userId",
foreignField: "userId",
as: "userInfo"
}
},
{
$unwind: "$userInfo"
}
])
```
This will return the following structure:
```
{
"_id" : ObjectId("56d82612b63f1c31cf906003"),
"userId" : "AD",
"phone" : "0000000000",
"userRole" : {
"_id" : ObjectId("56d82612b63f1c31cf906003"),
"userId" : "AD",
"role" : "admin"
},
"userInfo" : {
"_id" : ObjectId("56d82612b63f1c31cf906003"),
"userId" : "AD",
"phone" : "0000000000"
}
}
```
Maybe this could be considered an anti-pattern because MongoDB wasn't meant to be relational, but it is useful. |
35,813,854 | I want to join more than two collections in MongoDB using the aggregate `$lookup`. Is it possible to join? Give me some examples.
Here I have three collections:
`users`:
```
{
"_id" : ObjectId("5684f3c454b1fd6926c324fd"),
"email" : "[email protected]",
"userId" : "AD",
"userName" : "admin"
}
```
`userinfo`:
```
{
"_id" : ObjectId("56d82612b63f1c31cf906003"),
"userId" : "AD",
"phone" : "0000000000"
}
```
`userrole`:
```
{
"_id" : ObjectId("56d82612b63f1c31cf906003"),
"userId" : "AD",
"role" : "admin"
}
``` | 2016/03/05 | [
"https://Stackoverflow.com/questions/35813854",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5492366/"
] | You can actually chain multiple $lookup stages. Based on the names of the collections shared by profesor79, you can do this:
```
db.sivaUserInfo.aggregate([
{
$lookup: {
from: "sivaUserRole",
localField: "userId",
foreignField: "userId",
as: "userRole"
}
},
{
$unwind: "$userRole"
},
{
$lookup: {
from: "sivaUserInfo",
localField: "userId",
foreignField: "userId",
as: "userInfo"
}
},
{
$unwind: "$userInfo"
}
])
```
This will return the following structure:
```
{
"_id" : ObjectId("56d82612b63f1c31cf906003"),
"userId" : "AD",
"phone" : "0000000000",
"userRole" : {
"_id" : ObjectId("56d82612b63f1c31cf906003"),
"userId" : "AD",
"role" : "admin"
},
"userInfo" : {
"_id" : ObjectId("56d82612b63f1c31cf906003"),
"userId" : "AD",
"phone" : "0000000000"
}
}
```
Maybe this could be considered an anti-pattern because MongoDB wasn't meant to be relational, but it is useful. | First add the collections and then apply a lookup on these collections. Don't use `$unwind`,
as unwind will simply separate all the documents of each collection. So apply a simple lookup and then use `$project` for projection.
Here is mongoDB query:
```
db.userInfo.aggregate([
{
$lookup: {
from: "userRole",
localField: "userId",
foreignField: "userId",
as: "userRole"
}
},
{
$lookup: {
from: "userInfo",
localField: "userId",
foreignField: "userId",
as: "userInfo"
}
},
{$project: {
"_id":0,
"userRole._id":0,
"userInfo._id":0
}
} ])
```
Here is the output:
```
/* 1 */ {
"userId" : "AD",
"phone" : "0000000000",
"userRole" : [
{
"userId" : "AD",
"role" : "admin"
}
],
"userInfo" : [
{
"userId" : "AD",
"phone" : "0000000000"
}
] }
```
Thanks. |
35,813,854 | I want to join more than two collections in MongoDB using the aggregate `$lookup`. Is it possible to join? Give me some examples.
Here I have three collections:
`users`:
```
{
"_id" : ObjectId("5684f3c454b1fd6926c324fd"),
"email" : "[email protected]",
"userId" : "AD",
"userName" : "admin"
}
```
`userinfo`:
```
{
"_id" : ObjectId("56d82612b63f1c31cf906003"),
"userId" : "AD",
"phone" : "0000000000"
}
```
`userrole`:
```
{
"_id" : ObjectId("56d82612b63f1c31cf906003"),
"userId" : "AD",
"role" : "admin"
}
``` | 2016/03/05 | [
"https://Stackoverflow.com/questions/35813854",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5492366/"
] | You can actually chain multiple $lookup stages. Based on the names of the collections shared by profesor79, you can do this:
```
db.sivaUserInfo.aggregate([
{
$lookup: {
from: "sivaUserRole",
localField: "userId",
foreignField: "userId",
as: "userRole"
}
},
{
$unwind: "$userRole"
},
{
$lookup: {
from: "sivaUserInfo",
localField: "userId",
foreignField: "userId",
as: "userInfo"
}
},
{
$unwind: "$userInfo"
}
])
```
This will return the following structure:
```
{
"_id" : ObjectId("56d82612b63f1c31cf906003"),
"userId" : "AD",
"phone" : "0000000000",
"userRole" : {
"_id" : ObjectId("56d82612b63f1c31cf906003"),
"userId" : "AD",
"role" : "admin"
},
"userInfo" : {
"_id" : ObjectId("56d82612b63f1c31cf906003"),
"userId" : "AD",
"phone" : "0000000000"
}
}
```
Maybe this could be considered an anti-pattern because MongoDB wasn't meant to be relational, but it is useful. | The first lookup finds all the products where p.cid = categories.\_id; similarly, the second lookup
finds all products where p.sid = subcategories.\_id.
```
let dataQuery :any = await ProductModel.aggregate([ { $lookup:{
from :"categories",
localField:"cid",
foreignField :"_id",
as :"products"
}
},
{
$unwind: "$products"
},
{ $lookup:{
from :"subcategories",
localField:"sid",
foreignField :"_id",
as :"productList"
}
},
{
$unwind: "$productList"
},
{
$project:{
productList:0
}
}
]);
``` |
35,813,854 | I want to join more than two collections in MongoDB using the aggregate `$lookup`. Is it possible to join? Give me some examples.
Here I have three collections:
`users`:
```
{
"_id" : ObjectId("5684f3c454b1fd6926c324fd"),
"email" : "[email protected]",
"userId" : "AD",
"userName" : "admin"
}
```
`userinfo`:
```
{
"_id" : ObjectId("56d82612b63f1c31cf906003"),
"userId" : "AD",
"phone" : "0000000000"
}
```
`userrole`:
```
{
"_id" : ObjectId("56d82612b63f1c31cf906003"),
"userId" : "AD",
"role" : "admin"
}
``` | 2016/03/05 | [
"https://Stackoverflow.com/questions/35813854",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5492366/"
] | The join feature is supported by **MongoDB 3.2** and later versions. You can perform joins using an **aggregate** query.
You can do it using the example below:
```
db.users.aggregate([
// Join with user_info table
{
$lookup:{
from: "userinfo", // other table name
localField: "userId", // name of users table field
foreignField: "userId", // name of userinfo table field
as: "user_info" // alias for userinfo table
}
},
{ $unwind:"$user_info" }, // $unwind used for getting data in object or for one record only
// Join with user_role table
{
$lookup:{
from: "userrole",
localField: "userId",
foreignField: "userId",
as: "user_role"
}
},
{ $unwind:"$user_role" },
// define some conditions here
{
$match:{
$and:[{"userName" : "admin"}]
}
},
// define which fields you want to fetch
{
$project:{
_id : 1,
email : 1,
userName : 1,
userPhone : "$user_info.phone",
role : "$user_role.role",
}
}
]);
```
This will give a result like this:
```
{
"_id" : ObjectId("5684f3c454b1fd6926c324fd"),
"email" : "[email protected]",
"userName" : "admin",
"userPhone" : "0000000000",
"role" : "admin"
}
```
Hope this will help you or someone else.
Thanks | First add the collections and then apply a lookup on these collections. Don't use `$unwind`,
as unwind will simply separate all the documents of each collection. So apply a simple lookup and then use `$project` for projection.
Here is mongoDB query:
```
db.userInfo.aggregate([
{
$lookup: {
from: "userRole",
localField: "userId",
foreignField: "userId",
as: "userRole"
}
},
{
$lookup: {
from: "userInfo",
localField: "userId",
foreignField: "userId",
as: "userInfo"
}
},
{$project: {
"_id":0,
"userRole._id":0,
"userInfo._id":0
}
} ])
```
Here is the output:
```
/* 1 */ {
"userId" : "AD",
"phone" : "0000000000",
"userRole" : [
{
"userId" : "AD",
"role" : "admin"
}
],
"userInfo" : [
{
"userId" : "AD",
"phone" : "0000000000"
}
] }
```
Thanks. |
35,813,854 | I want to join more than two collections in MongoDB using the aggregate `$lookup`. Is it possible to join? Give me some examples.
Here I have three collections:
`users`:
```
{
"_id" : ObjectId("5684f3c454b1fd6926c324fd"),
"email" : "[email protected]",
"userId" : "AD",
"userName" : "admin"
}
```
`userinfo`:
```
{
"_id" : ObjectId("56d82612b63f1c31cf906003"),
"userId" : "AD",
"phone" : "0000000000"
}
```
`userrole`:
```
{
"_id" : ObjectId("56d82612b63f1c31cf906003"),
"userId" : "AD",
"role" : "admin"
}
``` | 2016/03/05 | [
"https://Stackoverflow.com/questions/35813854",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5492366/"
] | The join feature is supported by **MongoDB 3.2** and later versions. You can perform joins using an **aggregate** query.
You can do it using the example below:
```
db.users.aggregate([
// Join with user_info table
{
$lookup:{
from: "userinfo", // other table name
localField: "userId", // name of users table field
foreignField: "userId", // name of userinfo table field
as: "user_info" // alias for userinfo table
}
},
{ $unwind:"$user_info" }, // $unwind used for getting data in object or for one record only
// Join with user_role table
{
$lookup:{
from: "userrole",
localField: "userId",
foreignField: "userId",
as: "user_role"
}
},
{ $unwind:"$user_role" },
// define some conditions here
{
$match:{
$and:[{"userName" : "admin"}]
}
},
// define which fields you want to fetch
{
$project:{
_id : 1,
email : 1,
userName : 1,
userPhone : "$user_info.phone",
role : "$user_role.role",
}
}
]);
```
This will give a result like this:
```
{
"_id" : ObjectId("5684f3c454b1fd6926c324fd"),
"email" : "[email protected]",
"userName" : "admin",
"userPhone" : "0000000000",
"role" : "admin"
}
```
Hope this will help you or someone else.
Thanks | The first lookup finds all the products where p.cid = categories.\_id; similarly, the second lookup
finds all products where p.sid = subcategories.\_id.
```
let dataQuery :any = await ProductModel.aggregate([ { $lookup:{
from :"categories",
localField:"cid",
foreignField :"_id",
as :"products"
}
},
{
$unwind: "$products"
},
{ $lookup:{
from :"subcategories",
localField:"sid",
foreignField :"_id",
as :"productList"
}
},
{
$unwind: "$productList"
},
{
$project:{
productList:0
}
}
]);
``` |
35,813,854 | I want to join more than two collections in MongoDB using the aggregate `$lookup`. Is it possible to join? Give me some examples.
Here I have three collections:
`users`:
```
{
"_id" : ObjectId("5684f3c454b1fd6926c324fd"),
"email" : "[email protected]",
"userId" : "AD",
"userName" : "admin"
}
```
`userinfo`:
```
{
"_id" : ObjectId("56d82612b63f1c31cf906003"),
"userId" : "AD",
"phone" : "0000000000"
}
```
`userrole`:
```
{
"_id" : ObjectId("56d82612b63f1c31cf906003"),
"userId" : "AD",
"role" : "admin"
}
``` | 2016/03/05 | [
"https://Stackoverflow.com/questions/35813854",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5492366/"
] | First add the collections and then apply a lookup on these collections. Don't use `$unwind`,
as unwind will simply separate all the documents of each collection. So apply a simple lookup and then use `$project` for projection.
Here is mongoDB query:
```
db.userInfo.aggregate([
{
$lookup: {
from: "userRole",
localField: "userId",
foreignField: "userId",
as: "userRole"
}
},
{
$lookup: {
from: "userInfo",
localField: "userId",
foreignField: "userId",
as: "userInfo"
}
},
{$project: {
"_id":0,
"userRole._id":0,
"userInfo._id":0
}
} ])
```
Here is the output:
```
/* 1 */ {
"userId" : "AD",
"phone" : "0000000000",
"userRole" : [
{
"userId" : "AD",
"role" : "admin"
}
],
"userInfo" : [
{
"userId" : "AD",
"phone" : "0000000000"
}
] }
```
Thanks. | The first lookup finds all the products where p.cid = categories.\_id; similarly, the second lookup
finds all products where p.sid = subcategories.\_id.
```
let dataQuery :any = await ProductModel.aggregate([ { $lookup:{
from :"categories",
localField:"cid",
foreignField :"_id",
as :"products"
}
},
{
$unwind: "$products"
},
{ $lookup:{
from :"subcategories",
localField:"sid",
foreignField :"_id",
as :"productList"
}
},
{
$unwind: "$productList"
},
{
$project:{
productList:0
}
}
]);
``` |
126,378 | I just reinstalled Windows 7 on my Dell Inspiron 15R SE 7520 and I'm trying to get the discrete GPU selected. But even though I have the High Performance option selected in Catalyst Control Center, I can't get Skyrim to run on the discrete GPU. How can I get it working?
EDIT:
I forgot to mention this is a hybrid GPU; it needs the on-chip Intel one to perform at any level... IMHO, a crappy design. | 2013/08/03 | [
"https://gaming.stackexchange.com/questions/126378",
"https://gaming.stackexchange.com",
"https://gaming.stackexchange.com/users/53082/"
] | No guarantee this will work (I don't have an onboard card to test), but I'd assume the following should:
* Go to `[my documents]\My Games\Skyrim` and open `SkyrimPrefs.ini` with your favorite text editor.
* Look for a line starting with `iAdapter=`. By default it should read `iAdapter=0`. Change it to `iAdapter=1`.
Try launching the game again, and avoid going into the launcher's settings. | Don't blame your onboard GPU; it comes in handy if you're on the road and don't want to waste a lot of power.
But concerning your question: start your Skyrim launcher; there should be an 'Options' entry which leads you to the graphics settings.
The first dropdown list there should be 'Graphics Adapter'. If this isn't the case, try this [link to a Dell forum thread](http://en.community.dell.com/support-forums/laptop/f/3518/p/19474269/20221062.aspx#20221062 "this link") about a similar problem with an Inspiron 14Z.
\*add\*:
Also, one tip from there was to ensure you don't have an old GPU that lacks the power, so try uninstalling your Radeon driver and starting the game. If the performance is even worse, you know what to do. Use [fraps](http://fraps.com) to measure the frames per second and see whether there is a performance drop. |
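If editing `SkyrimPrefs.ini` by hand feels error-prone, the `iAdapter` change from the first answer can be scripted. A minimal sketch, assuming the key looks like the answer describes; the `[Display]` section and the surrounding keys here are illustrative, so check your own file before running anything against it.

```python
import re

# Illustrative SkyrimPrefs.ini fragment; a real file has many more keys,
# and the section containing iAdapter may differ on your install.
ini_text = "[Display]\niAdapter=0\niSize W=1366\niSize H=768\n"

# Rewrite any iAdapter=<n> line to iAdapter=1 (the second adapter).
patched = re.sub(r"(?m)^iAdapter=\d+$", "iAdapter=1", ini_text)
print("iAdapter=1" in patched)  # True
```

Reading the real file, applying the substitution, and writing it back would follow the same pattern.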
126,378 | I just reinstalled Windows 7 on my Dell Inspiron 15R SE 7520 and I'm trying to get the discrete GPU selected. But even though I have the High Performance option selected in Catalyst Control Center, I can't get Skyrim to run on the discrete GPU. How can I get it working?
EDIT:
I forgot to mention this is a hybrid GPU; it needs the on-chip Intel one to perform at any level... IMHO, a crappy design. | 2013/08/03 | [
"https://gaming.stackexchange.com/questions/126378",
"https://gaming.stackexchange.com",
"https://gaming.stackexchange.com/users/53082/"
] | No guarantee this will work (I don't have an onboard card to test), but I'd assume the following should:
* Go to `[my documents]\My Games\Skyrim` and open `SkyrimPrefs.ini` with your favorite text editor.
* Look for a line starting with `iAdapter=`. It should currently read `iAdapter=0`; change it to `iAdapter=1`.
Try launching the game again and avoid going into the launcher's settings. | Go to *Catalyst Control Center->Power->PowerPlay* and make sure it's ***enabled***. Then go to *Graphics Switching* and assign *High Performance* to the `game.exe` file. Done.
 |
126,378 | I just reinstalled Windows 7 on my Dell Inspiron 15R SE 7520 and I'm trying to get the discrete GPU selected. But even though I have the High Performance option selected in Catalyst Control Center, I can't get Skyrim to run on the discrete GPU. How can I get it working?
EDIT:
I forgot to mention this is a hybrid GPU; it needs the on-chip Intel one to perform at any level... IMHO, a crappy design. | 2013/08/03 | [
"https://gaming.stackexchange.com/questions/126378",
"https://gaming.stackexchange.com",
"https://gaming.stackexchange.com/users/53082/"
] | Install GPU-Z, run the program and select the graphics card you want to use. You can set the program to open when Windows starts up. That way you don't have to manually open it every time you want to play. | Don't blame your onboard GPU; it comes in handy if you're on the road and don't want to waste a lot of power.
But concerning your question: Start your Skyrim launcher and then there should be the bullet point 'Options' which leads you to the graphics settings.
The first dropdown list there should be 'Graphics Adapter' if this isn't the case, try this [link to a dell forum thread](http://en.community.dell.com/support-forums/laptop/f/3518/p/19474269/20221062.aspx#20221062 "this link") about a similar problem with a Inspiron 14Z.
\*add\*:
Also one tip from there was to ensure you don't have an old GPU that lacks the power: try uninstalling your Radeon driver and starting the game. If the performance is even worse, you know what to do. Use [fraps](http://fraps.com) to measure the frames per second and see whether there is a performance drop. |
126,378 | I just reinstalled Windows 7 on my Dell Inspiron 15R SE 7520 and I'm trying to get the discrete GPU selected. But even though I have the High Performance option selected in Catalyst Control Center, I can't get Skyrim to run on the discrete GPU. How can I get it working?
EDIT:
I forgot to mention this is a hybrid GPU; it needs the on-chip Intel one to perform at any level... IMHO, a crappy design. | 2013/08/03 | [
"https://gaming.stackexchange.com/questions/126378",
"https://gaming.stackexchange.com",
"https://gaming.stackexchange.com/users/53082/"
] | Install GPU-Z, run the program and select the graphics card you want to use. You can set the program to open when Windows starts up. That way you don't have to manually open it every time you want to play. | Go to *Catalyst Control Center->Power->PowerPlay* and make sure it's ***enabled***. Then go to *Graphics Switching* and assign *High Performance* to the `game.exe` file. Done.
 |
13,797 | I would like to force an update of App Store applications from the command line.
How can I do this? | 2011/05/09 | [
"https://apple.stackexchange.com/questions/13797",
"https://apple.stackexchange.com",
"https://apple.stackexchange.com/users/1916/"
] | Apple doesn't supply a command-line helper or any scriptable Cocoa classes you can latch on to for automation. Unlike the system-wide software update, which allows scripted updates and installs out of the box, the App Store can only be automated by scripting the process of clicking buttons with a mouse using Automator.
However, the app store has been reverse engineered and released open source [as well as a binary form](https://github.com/argon/mas/blob/master/ghreleases):
* <https://github.com/argon/mas>
The install is quick and it appears to be quite reliable on the current version of OS X 10.11:
```
brew install argon/mas/mas
```
With the source released, I would expect some other implementations of this tool to pop up, perhaps even one scripted with python.
If someone is logged into the Mac (the window manager is running), you can use Automator and the "watch me do" function to automate updates, though storing your store password in the script is fairly insecure.
Here are two tutorials to get you started if this meets your needs.
<http://www.tuaw.com/2009/01/19/mac-automation-creating-watch-me-do-workflows/>
<http://automator.us/leopard/features/virtual-user.html>
Once you have a working script, you can use the command line `open` command to kick it off.
If the App Store app ever exposes that function to scripting you will have more options from the command line. It would be easy to use `sdef`, `sdp` and `gen_bridge_metadata` to [dump the entire scriptable dictionary and script things using ruby](http://www.macruby.org/trac/wiki/MacRubyTutorial) from the command line, but at present the best option would be to use the `mas` command line tool. | The App Store is simply not suitable for administration. Barely a quasi-package manager, it is not nearly as useful or reliable as real package managers like pkgsrc, FreeBSD ports, aptitude, RPM, MacPorts or even softwareupdate. In my experience, it is unpredictable and a beard for commercial developers to hawk their wares. So there is really only one rational and responsible way, as a competent administrator, to work with the App Store:
```
sudo launchctl unload -w /System/Library/LaunchAgents/com.apple.store_helper.plist
sudo launchctl unload -w /System/Library/LaunchAgents/com.apple.storeagent.plist
sudo mkdir /System/Library/LaunchAgents\ \(disabled\)/
sudo mv /System/Library/LaunchAgents/com.apple.store* /System/Library/LaunchAgents\ \(disabled\)/
```
And just put it out of your mind, it won't trouble you any longer. ;-)
---
Use ARD instead. Though not a package manager, it manages packages, installations, updates, and upgrades; it will do what you want, save you time, and will not let you down:
For Apple Remote Desktop 3, for 10.9:
Check out the admin guide first to convince yourself that this is the way to go:
```
curl -Ok http://images.apple.com/ca/fr/remotedesktop/pdf/ARD3_AdminGuide.pdf
open ARD3_AdminGuide.pdf
```
Then install:
```
curl -Ok http://supportdownload.apple.com/download.info.apple.com/Apple_Support_Area/Apple_Software_Updates/Mac_OS_X/downloads/031-2845.20140313.rerft/RemoteDesktopAdmin372.dmg
hdiutil attach -quiet -noverify -nobrowse -noautoopen RemoteDesktopAdmin372.dmg
sudo installer -pkg /Volumes/Apple\ Remote\ Desktop\ 3.7.2\ Admin\ Update/RemoteDesktopAdmin372.pkg -target /
```
but that might throw a funny error if not running 10.9, or if no previous version of ARD is installed, and if it does, try:
```
pkgutil --expand /Volumes/Apple\ Remote\ Desktop\ 3.7.2\ Admin\ Update/RemoteDesktopAdmin372.pkg ARDexpanded/
```
or to equal effect (either/or here, don't need to use both pkgutil and xar... I'm just being thorough):
```
mkdir ARDexpanded
cd ARDexpanded
xar -xf /Volumes/Apple\ Remote\ Desktop\ 3.7.2\ Admin\ Update/RemoteDesktopAdmin372.pkg
```
And we no longer need the disk image attached, so eject it:
```
hdiutil detach -quiet /Volumes/Apple\ Remote\ Desktop\ 3.7.2\ Admin\ Update/
```
And now what you'll see if you
```
cd ARDexpanded/RemoteDesktopAdmin372.pkg/
ls
```
is
```
Bom PackageInfo Payload Scripts
```
What's in the Payload file, which is a cpio archive compressed with gzip, is what you're after. So with a few piped commands we can get to the app bundle:
```
cat Payload | gzip -d - | cpio -id
ls
```
returns:
```
Applications Bom Library PackageInfo Payload Scripts
```
And you're nearly done.
```
cp -R Applications/Remote\ Desktop.app /Applications/
```
Now you have installed Apple Remote Desktop Admin 3.7.2
So all that's left to do is purchase your license:
```
open http://store.apple.com/us_smb_78313/product/D6020Z/A/apple-remote-desktop-3-volume-licenses-20-seats-price-is-per-seat
```
Launch /Applications/Remote\ Desktop.app and serialize. And get some work done.
---
For 10.6 Snow Leopard, you'll need a slightly earlier version of ARD:
```
curl -Ok http://images.apple.com/ca/fr/remotedesktop/pdf/ARD3_AdminGuide.pdf
curl -Ok http://supportdownload.apple.com/download.info.apple.com/Apple_Support_Area/Apple_Software_Updates/Mac_OS_X/downloads/041-6789.20120917.xD6TR/RemoteDesktopAdmin353.dmg
hdiutil attach -quiet -noverify -nobrowse -noautoopen RemoteDesktopAdmin353.dmg
sudo installer -pkg /Volumes/Apple\ Remote\ Desktop\ 3.5.3\ Admin\ Update/RemoteDesktopAdmin353.pkg -target /
```
and if it throws back at you this:
```
installer: Cannot install on volume / because it is disabled.
installer: This update could not find Remote Desktop on this volume.
```
then try:
```
pkgutil --expand /Volumes/Apple\ Remote\ Desktop\ 3.5.3\ Admin\ Update/RemoteDesktopAdmin353.pkg ARD353
hdiutil detach -quiet /Volumes/Apple\ Remote\ Desktop\ 3.5.3\ Admin\ Update
```
drill down to the Payload:
```
cd ARD353/RemoteDesktopAdmin353.pkg/
ls
```
returns:
```
Bom PackageInfo Payload Scripts
```
So run:
```
cat Payload | gzip -d - | cpio -id
ls
```
returns:
```
Applications Bom Library PackageInfo Payload Scripts
```
And you're nearly done:
```
cp -R Applications/Remote\ Desktop.app /Applications/
```
purchase your license:
```
open http://store.apple.com/us_smb_78313/product/D6020Z/A/apple-remote-desktop-3-volume-licenses-20-seats-price-is-per-seat
```
Launch /Applications/Remote\ Desktop.app and serialize. And get something done. |
13,797 | I would like to force an update of App Store applications from the command line.
How can I do this? | 2011/05/09 | [
"https://apple.stackexchange.com/questions/13797",
"https://apple.stackexchange.com",
"https://apple.stackexchange.com/users/1916/"
] | Apple doesn't supply a command-line helper or any scriptable Cocoa classes you can latch on to for automation. Unlike the system-wide software update, which allows scripted updates and installs out of the box, the App Store can only be automated by scripting the process of clicking buttons with a mouse using Automator.
However, the app store has been reverse engineered and released open source [as well as a binary form](https://github.com/argon/mas/blob/master/ghreleases):
* <https://github.com/argon/mas>
The install is quick and it appears to be quite reliable on the current version of OS X 10.11:
```
brew install argon/mas/mas
```
With the source released, I would expect some other implementations of this tool to pop up, perhaps even one scripted with python.
If someone is logged into the Mac (the window manager is running), you can use Automator and the "watch me do" function to automate updates, though storing your store password in the script is fairly insecure.
Here are two tutorials to get you started if this meets your needs.
<http://www.tuaw.com/2009/01/19/mac-automation-creating-watch-me-do-workflows/>
<http://automator.us/leopard/features/virtual-user.html>
Once you have a working script, you can use the command line `open` command to kick it off.
If the App Store app ever exposes that function to scripting you will have more options from the command line. It would be easy to use `sdef`, `sdp` and `gen_bridge_metadata` to [dump the entire scriptable dictionary and script things using ruby](http://www.macruby.org/trac/wiki/MacRubyTutorial) from the command line, but at present the best option would be to use the `mas` command line tool. | You can use the `softwareupdate` tool.
```
sudo softwareupdate -l
```
Lists all available updates.
```
sudo softwareupdate -ia
```
Installs all available updates. |
13,797 | I would like to force an update of App Store applications from the command line.
How can I do this? | 2011/05/09 | [
"https://apple.stackexchange.com/questions/13797",
"https://apple.stackexchange.com",
"https://apple.stackexchange.com/users/1916/"
] | The App Store is simply not suitable for administration. Barely a quasi-package manager, it is not nearly as useful or reliable as real package managers like pkgsrc, FreeBSD ports, aptitude, RPM, MacPorts or even softwareupdate. In my experience, it is unpredictable and a beard for commercial developers to hawk their wares. So there is really only one rational and responsible way, as a competent administrator, to work with the App Store:
```
sudo launchctl unload -w /System/Library/LaunchAgents/com.apple.store_helper.plist
sudo launchctl unload -w /System/Library/LaunchAgents/com.apple.storeagent.plist
sudo mkdir /System/Library/LaunchAgents\ \(disabled\)/
sudo mv /System/Library/LaunchAgents/com.apple.store* /System/Library/LaunchAgents\ \(disabled\)/
```
And just put it out of your mind, it won't trouble you any longer. ;-)
---
Use ARD instead. Though not a package manager, it manages packages, installations, updates, and upgrades; it will do what you want, save you time, and will not let you down:
For Apple Remote Desktop 3, for 10.9:
Check out the admin guide first to convince yourself that this is the way to go:
```
curl -Ok http://images.apple.com/ca/fr/remotedesktop/pdf/ARD3_AdminGuide.pdf
open ARD3_AdminGuide.pdf
```
Then install:
```
curl -Ok http://supportdownload.apple.com/download.info.apple.com/Apple_Support_Area/Apple_Software_Updates/Mac_OS_X/downloads/031-2845.20140313.rerft/RemoteDesktopAdmin372.dmg
hdiutil attach -quiet -noverify -nobrowse -noautoopen RemoteDesktopAdmin372.dmg
sudo installer -pkg /Volumes/Apple\ Remote\ Desktop\ 3.7.2\ Admin\ Update/RemoteDesktopAdmin372.pkg -target /
```
but that might throw a funny error if not running 10.9, or if no previous version of ARD is installed, and if it does, try:
```
pkgutil --expand /Volumes/Apple\ Remote\ Desktop\ 3.7.2\ Admin\ Update/RemoteDesktopAdmin372.pkg ARDexpanded/
```
or to equal effect (either/or here, don't need to use both pkgutil and xar... I'm just being thorough):
```
mkdir ARDexpanded
cd ARDexpanded
xar -xf /Volumes/Apple\ Remote\ Desktop\ 3.7.2\ Admin\ Update/RemoteDesktopAdmin372.pkg
```
And we no longer need the disk image attached, so eject it:
```
hdiutil detach -quiet /Volumes/Apple\ Remote\ Desktop\ 3.7.2\ Admin\ Update/
```
And now what you'll see if you
```
cd ARDexpanded/RemoteDesktopAdmin372.pkg/
ls
```
is
```
Bom PackageInfo Payload Scripts
```
What's in the Payload file, which is a cpio archive compressed with gzip, is what you're after. So with a few piped commands we can get to the app bundle:
```
cat Payload | gzip -d - | cpio -id
ls
```
returns:
```
Applications Bom Library PackageInfo Payload Scripts
```
And you're nearly done.
```
cp -R Applications/Remote\ Desktop.app /Applications/
```
Now you have installed Apple Remote Desktop Admin 3.7.2
So all that's left to do is purchase your license:
```
open http://store.apple.com/us_smb_78313/product/D6020Z/A/apple-remote-desktop-3-volume-licenses-20-seats-price-is-per-seat
```
Launch /Applications/Remote\ Desktop.app and serialize. And get some work done.
---
For 10.6 Snow Leopard, you'll need a slightly earlier version of ARD:
```
curl -Ok http://images.apple.com/ca/fr/remotedesktop/pdf/ARD3_AdminGuide.pdf
curl -Ok http://supportdownload.apple.com/download.info.apple.com/Apple_Support_Area/Apple_Software_Updates/Mac_OS_X/downloads/041-6789.20120917.xD6TR/RemoteDesktopAdmin353.dmg
hdiutil attach -quiet -noverify -nobrowse -noautoopen RemoteDesktopAdmin353.dmg
sudo installer -pkg /Volumes/Apple\ Remote\ Desktop\ 3.5.3\ Admin\ Update/RemoteDesktopAdmin353.pkg -target /
```
and if it throws back at you this:
```
installer: Cannot install on volume / because it is disabled.
installer: This update could not find Remote Desktop on this volume.
```
then try:
```
pkgutil --expand /Volumes/Apple\ Remote\ Desktop\ 3.5.3\ Admin\ Update/RemoteDesktopAdmin353.pkg ARD353
hdiutil detach -quiet /Volumes/Apple\ Remote\ Desktop\ 3.5.3\ Admin\ Update
```
drill down to the Payload:
```
cd ARD353/RemoteDesktopAdmin353.pkg/
ls
```
returns:
```
Bom PackageInfo Payload Scripts
```
So run:
```
cat Payload | gzip -d - | cpio -id
ls
```
returns:
```
Applications Bom Library PackageInfo Payload Scripts
```
And you're nearly done:
```
cp -R Applications/Remote\ Desktop.app /Applications/
```
purchase your license:
```
open http://store.apple.com/us_smb_78313/product/D6020Z/A/apple-remote-desktop-3-volume-licenses-20-seats-price-is-per-seat
```
Launch /Applications/Remote\ Desktop.app and serialize. And get something done. | You can use the `softwareupdate` tool.
```
sudo softwareupdate -l
```
Lists all available updates.
```
sudo softwareupdate -ia
```
Installs all available updates. |
4,244,896 | I'm trying to access a property of an object using a dynamic name. Is this possible?
```
const something = { bar: "Foobar!" };
const foo = 'bar';
something.foo; // The idea is to access something.bar, getting "Foobar!"
``` | 2010/11/22 | [
"https://Stackoverflow.com/questions/4244896",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/471628/"
] | There are [two ways to access properties](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Property_Accessors) of an object:
* Dot notation: `something.bar`
* Bracket notation: `something['bar']`
The value between the brackets can be any expression. Therefore, if the property name is stored in a variable, you have to use bracket notation:
```js
var something = {
bar: 'foo'
};
var foo = 'bar';
// both x = something[foo] and something[foo] = x work as expected
console.log(something[foo]);
console.log(something.bar)
``` | You can do it like this using Lodash get
```
_.get(object, 'a[0].b.c');
``` |
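If you'd rather not pull in Lodash for a single call, the same deep-lookup idea can be sketched in plain JavaScript. This is only a minimal sketch: the helper name `getPath` and its fallback parameter are my own assumptions, not Lodash API.

```javascript
// Minimal deep lookup by path, e.g. getPath(obj, "a[0].b.c").
// Returns the fallback (default undefined) when any step is missing.
function getPath(obj, path, fallback) {
  // Normalize "a[0].b.c" into ["a", "0", "b", "c"]
  const keys = path.replace(/\[(\w+)\]/g, '.$1').split('.').filter(Boolean);
  let current = obj;
  for (const key of keys) {
    if (current == null) return fallback;
    current = current[key];
  }
  return current === undefined ? fallback : current;
}

const data = { a: [{ b: { c: 42 } }] };
console.log(getPath(data, 'a[0].b.c'));        // 42
console.log(getPath(data, 'a[0].b.x', 'n/a')); // n/a
```

Unlike Lodash's `_.get`, this sketch does no caching and only handles simple dot/bracket paths.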
4,244,896 | I'm trying to access a property of an object using a dynamic name. Is this possible?
```
const something = { bar: "Foobar!" };
const foo = 'bar';
something.foo; // The idea is to access something.bar, getting "Foobar!"
``` | 2010/11/22 | [
"https://Stackoverflow.com/questions/4244896",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/471628/"
] | To access a property dynamically, simply use [square brackets](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Property_accessors) `[]` as follows:
```js
const something = { bar: "Foobar!" };
const userInput = 'bar';
console.log(something[userInput])
```
The problem
-----------
There's a major gotcha in that solution! (I'm surprised other answers have not brought this up yet.) Often you only want to access properties that you've put onto that object yourself; you don't want to grab inherited properties.
Here's an illustration of this issue. Here we have an innocent-looking program, but it has a subtle bug - can you spot it?
```js
const agesOfUsers = { sam: 16, sally: 22 }
const username = prompt('Enter a username:')
if (agesOfUsers[username] !== undefined) {
console.log(`${username} is ${agesOfUsers[username]} years old`)
} else {
console.log(`${username} is not found`)
}
```
When prompted for a username, if you supply "toString" as a username, it'll give you the following message: "toString is function toString() { [native code] } years old". The issue is that `agesOfUsers` is an object, and as such, automatically inherits certain properties like `.toString()` from the base Object class. You can look [here](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object) for a full list of properties that all objects inherit.
Solutions
=========
1. Use a [Map data structure](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Map) instead. The stored contents of a map don't suffer from prototype issues, so they provide a clean solution to this problem.
```js
const agesOfUsers = new Map()
agesOfUsers.set('sam', 16)
agesOfUsers.set('sally', 2)
console.log(agesOfUsers.get('sam')) // 16
```
2. Use an object with a null prototype, instead of the default prototype. You can use `Object.create(null)` to create such an object. This sort of object does not suffer from these prototype issues, because you've explicitly created it in a way that it does not inherit anything.
```js
const agesOfUsers = Object.create(null)
agesOfUsers.sam = 16
agesOfUsers.sally = 22;
console.log(agesOfUsers['sam']) // 16
console.log(agesOfUsers['toString']) // undefined - toString was not inherited
```
3. You can use `Object.hasOwn(yourObj, attrName)` to first check if the dynamic key you wish to access is directly on the object and not inherited (learn more [here](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/hasOwn)). This is a relatively newer feature, so check the compatibility tables before dropping it into your code. Before `Object.hasOwn(yourObj, attrName)` came around, you would achieve this same effect via `Object.prototype.hasOwnProperty.call(yourObj, attrName)`. Sometimes, you might see code using `yourObj.hasOwnProperty(attrName)` too, which sometimes works but it has some pitfalls that you can read about [here](https://stackoverflow.com/questions/12017693/why-use-object-prototype-hasownproperty-callmyobj-prop-instead-of-myobj-hasow).
```js
// Try entering the property name "toString",
// you'll see it gets handled correctly.
const user = { name: 'sam', age: 16 }
const propName = prompt('Enter a property name:')
if (Object.hasOwn(user, propName)) {
console.log(`${propName} = ${user[propName]}`)
} else {
console.log(`${propName} is not found`)
}
```
4. If you know the key you're trying to use will never be the name of an inherited property (e.g. maybe they're numbers, or they all have the same prefix, etc.), you can choose to use the original solution. | You can dynamically access the property of an object using bracket notation. This would look like `obj[yourKey]`; **however, JavaScript objects are really not designed to be dynamically updated or read**. They are intended to be **defined on initialisation**.
In case you want to dynamically assign and access `key value` pairs you should use a map instead.
```js
const yourKey = 'yourKey';
// initialise it with the value
const map1 = new Map([
['yourKey', 'yourValue']
]);
// initialise empty then dynamically assign
const map2 = new Map();
map2.set(yourKey, 'yourValue');
console.log(map1.get(yourKey));
console.log(map2.get(yourKey));
``` |
4,244,896 | I'm trying to access a property of an object using a dynamic name. Is this possible?
```
const something = { bar: "Foobar!" };
const foo = 'bar';
something.foo; // The idea is to access something.bar, getting "Foobar!"
``` | 2010/11/22 | [
"https://Stackoverflow.com/questions/4244896",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/471628/"
] | Following is an ES6 example of how you can access the property of an object using a property name that has been dynamically generated by concatenating two strings.
```
var suffix = " name";
var person = {
["first" + suffix]: "Nicholas",
["last" + suffix]: "Zakas"
};
console.log(person["first name"]); // "Nicholas"
console.log(person["last name"]); // "Zakas"
```
This is called [computed property names](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Object_initializer#Computed_property_names) | I asked a question that kinda duplicates this topic a while back, and after extensive research, and seeing a lot of information missing that should be here, I feel I have something valuable to add to this older post.
* Firstly I want to address that there are several ways to obtain the value of a property and store it in a dynamic variable. The first, most popular, and easiest way IMHO would be:
```
let propertyValue = element.style['enter-a-property'];
```
However, I rarely go this route because it doesn't work on property values assigned via style-sheets. To give you an example, I'll demonstrate with a bit of pseudo code.
```
let elem = document.getElementById('someDiv');
let cssProp = elem.style['width'];
```
Using the code example above: if the width property of the div element stored in the 'elem' variable was styled in a CSS style-sheet, and not styled inside of its HTML tag, you are without a doubt going to get a return value of undefined stored inside the cssProp variable. The undefined value occurs because the styles written inside a CSS style-sheet need to be computed in order to get the correct value; therefore, you must use a method that will compute the value of the property whose value lies within the style-sheet.
* Hence the getComputedStyle() method!
```
function getCssProp(){
let ele = document.getElementById("test");
let cssProp = window.getComputedStyle(ele,null).getPropertyValue("width");
}
```
[W3Schools getComputedValue Doc](https://www.w3schools.com/jsref/jsref_getcomputedstyle.asp) This gives a good example, and lets you play with it, however, this link [Mozilla CSS getComputedValue doc](https://developer.mozilla.org/en-US/docs/Web/CSS/computed_value) talks about the getComputedValue function in detail, and should be read by any aspiring developer who isn't totally clear on this subject.
* As a side note, the getComputedStyle() method only gets; it does not set. This, obviously, is a major downside, however there is a method that gets from CSS style-sheets, as well as sets values, though it is not standard Javascript.
The JQuery method...
```
$(selector).css(property,value)
```
...does get, and does set. It is what I use; the only downside is you've got to know jQuery, but this is honestly one of the very many good reasons that every JavaScript developer should learn jQuery: it just makes life easy, and offers methods, like this one, which are not available with standard JavaScript.
Hope this helps someone!!! |
4,244,896 | I'm trying to access a property of an object using a dynamic name. Is this possible?
```
const something = { bar: "Foobar!" };
const foo = 'bar';
something.foo; // The idea is to access something.bar, getting "Foobar!"
``` | 2010/11/22 | [
"https://Stackoverflow.com/questions/4244896",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/471628/"
] | To access a property dynamically, simply use [square brackets](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Property_accessors) `[]` as follows:
```js
const something = { bar: "Foobar!" };
const userInput = 'bar';
console.log(something[userInput])
```
The problem
-----------
There's a major gotcha in that solution! (I'm surprised other answers have not brought this up yet.) Often you only want to access properties that you've put onto that object yourself; you don't want to grab inherited properties.
Here's an illustration of this issue. Here we have an innocent-looking program, but it has a subtle bug - can you spot it?
```js
const agesOfUsers = { sam: 16, sally: 22 }
const username = prompt('Enter a username:')
if (agesOfUsers[username] !== undefined) {
console.log(`${username} is ${agesOfUsers[username]} years old`)
} else {
console.log(`${username} is not found`)
}
```
When prompted for a username, if you supply "toString" as a username, it'll give you the following message: "toString is function toString() { [native code] } years old". The issue is that `agesOfUsers` is an object, and as such, automatically inherits certain properties like `.toString()` from the base Object class. You can look [here](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object) for a full list of properties that all objects inherit.
Solutions
=========
1. Use a [Map data structure](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Map) instead. The stored contents of a map don't suffer from prototype issues, so they provide a clean solution to this problem.
```js
const agesOfUsers = new Map()
agesOfUsers.set('sam', 16)
agesOfUsers.set('sally', 2)
console.log(agesOfUsers.get('sam')) // 16
```
2. Use an object with a null prototype, instead of the default prototype. You can use `Object.create(null)` to create such an object. This sort of object does not suffer from these prototype issues, because you've explicitly created it in a way that it does not inherit anything.
```js
const agesOfUsers = Object.create(null)
agesOfUsers.sam = 16
agesOfUsers.sally = 22;
console.log(agesOfUsers['sam']) // 16
console.log(agesOfUsers['toString']) // undefined - toString was not inherited
```
3. You can use `Object.hasOwn(yourObj, attrName)` to first check if the dynamic key you wish to access is directly on the object and not inherited (learn more [here](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/hasOwn)). This is a relatively newer feature, so check the compatibility tables before dropping it into your code. Before `Object.hasOwn(yourObj, attrName)` came around, you would achieve this same effect via `Object.prototype.hasOwnProperty.call(yourObj, attrName)`. Sometimes, you might see code using `yourObj.hasOwnProperty(attrName)` too, which sometimes works but it has some pitfalls that you can read about [here](https://stackoverflow.com/questions/12017693/why-use-object-prototype-hasownproperty-callmyobj-prop-instead-of-myobj-hasow).
```js
// Try entering the property name "toString",
// you'll see it gets handled correctly.
const user = { name: 'sam', age: 16 }
const propName = prompt('Enter a property name:')
if (Object.hasOwn(user, propName)) {
console.log(`${propName} = ${user[propName]}`)
} else {
console.log(`${propName} is not found`)
}
```
4. If you know the key you're trying to use will never be the name of an inherited property (e.g. maybe they're numbers, or they all have the same prefix, etc), you can choose to use the original solution. | Others have already mentioned 'dot' and 'square' syntaxes so I want to cover accessing functions and sending parameters in a similar fashion.
**Code** *[jsfiddle](https://jsfiddle.net/sb2ofndy/)*
```
var obj = {method:function(p1,p2,p3){console.log("method:",arguments)}}
var str = "method('p1', 'p2', 'p3');"
var match = str.match(/^\s*(\S+)\((.*)\);\s*$/);
var func = match[1]
var parameters = match[2].split(',');
for(var i = 0; i < parameters.length; ++i) {
// clean up param begninning
parameters[i] = parameters[i].replace(/^\s*['"]?/,'');
// clean up param end
parameters[i] = parameters[i].replace(/['"]?\s*$/,'');
}
obj[func](parameters); // sends parameters as array
obj[func].apply(this, parameters); // sends parameters as individual values
``` |
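A regex-based parser like the one in this answer is fragile if the call string comes from untrusted input; a safer pattern is to dispatch only through an explicit allowlist of method names. A minimal sketch of that idea (the `dispatch` helper and its error handling are my own additions, not part of the answer above):

```javascript
const obj = {
  method(p1, p2, p3) { return [p1, p2, p3].join('|'); }
};

// Only dispatch to names we explicitly allow, and only when the
// name resolves to an actual function on the target object.
function dispatch(target, allowed, name, args) {
  if (!allowed.includes(name) || typeof target[name] !== 'function') {
    throw new Error(`Unknown method: ${name}`);
  }
  return target[name](...args); // spread sends individual values
}

console.log(dispatch(obj, ['method'], 'method', ['p1', 'p2', 'p3'])); // p1|p2|p3
```

This also avoids the array-vs-individual-arguments ambiguity: the spread operator always passes the parameters as separate values.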
4,244,896 | I'm trying to access a property of an object using a dynamic name. Is this possible?
```
const something = { bar: "Foobar!" };
const foo = 'bar';
something.foo; // The idea is to access something.bar, getting "Foobar!"
``` | 2010/11/22 | [
"https://Stackoverflow.com/questions/4244896",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/471628/"
] | You can do it like this using Lodash get
```
_.get(object, 'a[0].b.c');
] | You can use a `getter` in JavaScript
[getter Docs](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Functions/get)
Check inside the object whether the property in question exists;
if it does not exist, **take it from the window**:
```
const something = {
  get: (n) => something[n] || window[n]
};
``` |
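Note that `window` only exists in browsers. A portable sketch of the same lookup-with-fallback idea, using `globalThis` so it also runs under Node (the `lookup` helper name and the sample values are my own assumptions):

```javascript
// Look a property up on a local object first, then fall back
// to the global object (globalThis works in browsers and Node).
globalThis.fallbackValue = 'from-global';

const store = { bar: 'Foobar!' };

function lookup(name) {
  return store[name] !== undefined ? store[name] : globalThis[name];
}

console.log(lookup('bar'));           // Foobar!
console.log(lookup('fallbackValue')); // from-global
```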
4,244,896 | I'm trying to access a property of an object using a dynamic name. Is this possible?
```
const something = { bar: "Foobar!" };
const foo = 'bar';
something.foo; // The idea is to access something.bar, getting "Foobar!"
``` | 2010/11/22 | [
"https://Stackoverflow.com/questions/4244896",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/471628/"
] | Following is an ES6 example of how you can access the property of an object using a property name that has been dynamically generated by concatenating two strings.
```
var suffix = " name";
var person = {
["first" + suffix]: "Nicholas",
["last" + suffix]: "Zakas"
};
console.log(person["first name"]); // "Nicholas"
console.log(person["last name"]); // "Zakas"
```
This is called [computed property names](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Object_initializer#Computed_property_names) | I came across a case where *I thought* I wanted to pass the "address" of an object property as data to another function and populate the object (with AJAX), do lookup from address array, and display in that other function. I couldn't use dot notation without doing string acrobatics so I thought an array might be nice to pass instead. I ended-up doing something different anyway, but seemed related to this post.
Here's a sample of a language file object like the one I wanted data from:
```
const locs = {
"audioPlayer": {
"controls": {
"start": "start",
"stop": "stop"
},
"heading": "Use controls to start and stop audio."
}
}
```
I wanted to be able to pass an array such as: ["audioPlayer", "controls", "stop"] to access the language text, "stop" in this case.
I created this little function that looks-up the "least specific" (first) address parameter, and reassigns the returned object to itself. Then it is ready to look-up the next-most-specific address parameter if one exists.
```
function getText(selectionArray, obj) {
selectionArray.forEach(key => {
obj = obj[key];
});
return obj;
}
```
usage:
```
/* returns 'stop' */
console.log(getText(["audioPlayer", "controls", "stop"], locs));
/* returns 'Use controls to start and stop audio.' */
console.log(getText(["audioPlayer", "heading"], locs));
``` |
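As a side note (not from the answer above): the same lookup loop can be written with `Array.prototype.reduce`. A minimal sketch, reusing the language object from the answer:

```javascript
// Sketch: same lookup as getText above, using reduce instead of forEach.
const locs = {
  audioPlayer: {
    controls: { start: 'start', stop: 'stop' },
    heading: 'Use controls to start and stop audio.',
  },
};

const getText = (selectionArray, obj) =>
  selectionArray.reduce((acc, key) => acc[key], obj);

console.log(getText(['audioPlayer', 'controls', 'stop'], locs)); // "stop"
console.log(getText(['audioPlayer', 'heading'], locs));          // "Use controls to start and stop audio."
```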
4,244,896 | I'm trying to access a property of an object using a dynamic name. Is this possible?
```
const something = { bar: "Foobar!" };
const foo = 'bar';
something.foo; // The idea is to access something.bar, getting "Foobar!"
``` | 2010/11/22 | [
"https://Stackoverflow.com/questions/4244896",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/471628/"
] | This is my solution:
```
function resolve(path, obj) {
return path.split('.').reduce(function(prev, curr) {
return prev ? prev[curr] : null
}, obj || self)
}
```
Usage examples:
```
resolve("document.body.style.width")
// or
resolve("style.width", document.body)
// or even use array indexes
// (someObject has been defined in the question)
resolve("part.0.size", someObject)
// returns null when intermediate properties are not defined:
resolve('properties.that.do.not.exist', {hello:'world'})
``` | ES5 // Check Deeply Nested Variables
------------------------------------
This simple piece of code can check for deeply nested variable / value existence without having to check each variable along the way...
```
var getValue = function( s, context ){
return Function.call( context || null, 'return ' + s )();
}
```
Ex. - a deeply nested array of objects:
```
a = [
{
b : [
{
a : 1,
b : [
{
c : 1,
d : 2 // we want to check for this
}
]
}
]
}
]
```
Instead of:
```
if(a && a[0] && a[0].b && a[0].b[0] && a[0].b[0].b && a[0].b[0].b[0] && a[0].b[0].b[0].d && a[0].b[0].b[0].d == 2 ) // true
```
We can now:
```
if( getValue('a[0].b[0].b[0].d') == 2 ) // true
```
Cheers! |
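As a hedged aside (not part of the original answer): in modern JavaScript (ES2020+), optional chaining performs the same deep-existence check without string evaluation. A minimal sketch:

```javascript
// Sketch: optional chaining (?.) short-circuits to undefined when any
// intermediate property is missing, replacing the long && chain.
const a = [{ b: [{ a: 1, b: [{ c: 1, d: 2 }] }] }];

// Equivalent to: a && a[0] && a[0].b && ... && a[0].b[0].b[0].d == 2
const found = a?.[0]?.b?.[0]?.b?.[0]?.d === 2;

// A missing path yields undefined instead of throwing:
const missing = a?.[9]?.b?.[0]?.x;

console.log(found, missing); // true undefined
```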
4,244,896 | I'm trying to access a property of an object using a dynamic name. Is this possible?
```
const something = { bar: "Foobar!" };
const foo = 'bar';
something.foo; // The idea is to access something.bar, getting "Foobar!"
``` | 2010/11/22 | [
"https://Stackoverflow.com/questions/4244896",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/471628/"
] | In JavaScript we can access object properties with:
* dot notation - `foo.bar`
* square brackets - `foo[someVar]` or `foo["string"]`
But only the second case allows accessing properties dynamically:
```
var foo = { pName1 : 1, pName2 : [1, {foo : bar }, 3] , ...}
var name = "pName"
var num = 1;
foo[name + num]; // 1
// --
var a = 2;
var b = 1;
var c = "foo";
foo[name + a][b][c]; // bar
``` | To access a property dynamically, simply use [square brackets](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Property_accessors) `[]` as follows:
```js
const something = { bar: "Foobar!" };
const userInput = 'bar';
console.log(something[userInput])
```
The problem
-----------
There's a major gotcha in that solution! (I'm surprised other answers have not brought this up yet.) Often you only want to access properties that you've put onto that object yourself; you don't want to grab inherited properties.
Here's an illustration of this issue. Here we have an innocent-looking program, but it has a subtle bug - can you spot it?
```js
const agesOfUsers = { sam: 16, sally: 22 }
const username = prompt('Enter a username:')
if (agesOfUsers[username] !== undefined) {
console.log(`${username} is ${agesOfUsers[username]} years old`)
} else {
console.log(`${username} is not found`)
}
```
When prompted for a username, if you supply "toString" as a username, it'll give you the following message: "toString is function toString() { [native code] } years old". The issue is that `agesOfUsers` is an object, and as such, automatically inherits certain properties like `.toString()` from the base Object class. You can look [here](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object) for a full list of properties that all objects inherit.
Solutions
=========
1. Use a [Map data structure](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Map) instead. The stored contents of a map don't suffer from prototype issues, so they provide a clean solution to this problem.
```js
const agesOfUsers = new Map()
agesOfUsers.set('sam', 16)
agesOfUsers.set('sally', 22)
console.log(agesOfUsers.get('sam')) // 16
```
2. Use an object with a null prototype, instead of the default prototype. You can use `Object.create(null)` to create such an object. This sort of object does not suffer from these prototype issues, because you've explicitly created it in a way that it does not inherit anything.
```js
const agesOfUsers = Object.create(null)
agesOfUsers.sam = 16
agesOfUsers.sally = 22;
console.log(agesOfUsers['sam']) // 16
console.log(agesOfUsers['toString']) // undefined - toString was not inherited
```
3. You can use `Object.hasOwn(yourObj, attrName)` to first check if the dynamic key you wish to access is directly on the object and not inherited (learn more [here](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/hasOwn)). This is a relatively newer feature, so check the compatibility tables before dropping it into your code. Before `Object.hasOwn(yourObj, attrName)` came around, you would achieve this same effect via `Object.prototype.hasOwnProperty.call(yourObj, attrName)`. Sometimes, you might see code using `yourObj.hasOwnProperty(attrName)` too, which sometimes works but it has some pitfalls that you can read about [here](https://stackoverflow.com/questions/12017693/why-use-object-prototype-hasownproperty-callmyobj-prop-instead-of-myobj-hasow).
```js
// Try entering the property name "toString",
// you'll see it gets handled correctly.
const user = { name: 'sam', age: 16 }
const propName = prompt('Enter a property name:')
if (Object.hasOwn(user, propName)) {
console.log(`${propName} = ${user[propName]}`)
} else {
console.log(`${propName} is not found`)
}
```
4. If you know the key you're trying to use will never be the name of an inherited property (e.g. maybe they're numbers, or they all have the same prefix, etc), you can choose to use the original solution. |
4,244,896 | I'm trying to access a property of an object using a dynamic name. Is this possible?
```
const something = { bar: "Foobar!" };
const foo = 'bar';
something.foo; // The idea is to access something.bar, getting "Foobar!"
``` | 2010/11/22 | [
"https://Stackoverflow.com/questions/4244896",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/471628/"
] | There are [two ways to access properties](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Property_Accessors) of an object:
* Dot notation: `something.bar`
* Bracket notation: `something['bar']`
The value between the brackets can be any expression. Therefore, if the property name is stored in a variable, you have to use bracket notation:
```js
var something = {
bar: 'foo'
};
var foo = 'bar';
// both x = something[foo] and something[foo] = x work as expected
console.log(something[foo]);
console.log(something.bar)
``` | Finding an object property by reference, without strings.
Note: make sure the object you pass in is cloned; I use cloneDeep from lodash for that.
If the object looks like:
```
const obj = { data: ['an Object', { person: { name: { first: 'nick', last: 'gray' } } }] }
```
The path looks like:
```
const objectPath = ['data', 1, 'person', 'name', 'last']
```
Then call the method below and it will return the sub-object at the given path:
```
const child = findObjectByPath(obj, objectPath)
alert(child) // alerts "gray"
const findObjectByPath = (objectIn: any, path: any[]) => {
let obj = objectIn
for (let i = 0; i <= path.length - 1; i++) {
const item = path[i]
// keep going up to the next parent
obj = obj[item] // this is by reference
}
return obj
}
``` |
4,244,896 | I'm trying to access a property of an object using a dynamic name. Is this possible?
```
const something = { bar: "Foobar!" };
const foo = 'bar';
something.foo; // The idea is to access something.bar, getting "Foobar!"
``` | 2010/11/22 | [
"https://Stackoverflow.com/questions/4244896",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/471628/"
] | This is my solution:
```
function resolve(path, obj) {
return path.split('.').reduce(function(prev, curr) {
return prev ? prev[curr] : null
}, obj || self)
}
```
Usage examples:
```
resolve("document.body.style.width")
// or
resolve("style.width", document.body)
// or even use array indexes
// (someObject has been defined in the question)
resolve("part.0.size", someObject)
// returns null when intermediate properties are not defined:
resolve('properties.that.do.not.exist', {hello:'world'})
``` | You can do it like this using Lodash get
```
_.get(object, 'a[0].b.c');
``` |
33,489,517 | I'm trying to build a speech recognition app in Windows 10 (using Cortana) in Visual C#.
This is part of my code for speech recognition using the old System.Speech.Recognition, which works great, but it only supports English.
```
SpeechSynthesizer sSynth = new SpeechSynthesizer();
PromptBuilder pBuilder = new PromptBuilder();
SpeechRecognitionEngine sRecognize = new SpeechRecognitionEngine();
Choices sList = new Choices();
private void Form1_Load(object sender, EventArgs e)
{
}
private void button1_Click(object sender, EventArgs e)
{
pBuilder.ClearContent();
pBuilder.AppendText(textBox2.Text);
sSynth.Speak(pBuilder);
}
private void button2_Click(object sender, EventArgs e)
{
button2.Enabled = false;
button3.Enabled = true;
sList.Add(new string[] { "who are you", "play a song" });
Grammar gr = new Grammar(new GrammarBuilder(sList));
try
{
sRecognize.RequestRecognizerUpdate();
sRecognize.LoadGrammar(gr);
sRecognize.SpeechRecognized += sRecognize_SpeechRecognized;
sRecognize.SetInputToDefaultAudioDevice();
sRecognize.RecognizeAsync(RecognizeMode.Multiple);
}
catch (Exception ex)
{
MessageBox.Show(ex.Message, "Error");
}
}
private void sRecognize_SpeechRecognized(object sender, SpeechRecognizedEventArgs e)
{
textBox1.Text = textBox1.Text + " " + e.Result.Text.ToString() + "\r\n";
}
```
How can I do it using the new speech recognition in Windows 10? | 2015/11/03 | [
"https://Stackoverflow.com/questions/33489517",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1430781/"
] | Use [Microsoft Speech Platform SDK v11.0](https://msdn.microsoft.com/en-us/library/office/hh361572%28v=office.14%29.aspx) (*Microsoft.Speech.Recognition*).
It works like System.Speech, but you can use the Italian language (separate install) and also use [SRGS Grammar](https://msdn.microsoft.com/en-us/library/office/hh361653%28v=office.14%29.aspx). I work with both Kinect (*SetInputToAudioStream*) and the default input device (*SetInputToDefaultAudioDevice*) without hassle.
Also it works offline, so no need to be online as with Cortana.
With the SRGS grammar you can get a decent level of complexity for your commands
**UPDATE**
Here is how I initialize the recognizer
```
private RecognizerInfo GetRecognizer(string culture, string recognizerId)
{
try
{
foreach (var recognizer in SpeechRecognitionEngine.InstalledRecognizers())
{
if (!culture.Equals(recognizer.Culture.Name, StringComparison.OrdinalIgnoreCase)) continue;
if (!string.IsNullOrWhiteSpace(recognizerId))
{
string value;
recognizer.AdditionalInfo.TryGetValue(recognizerId, out value);
if ("true".Equals(value, StringComparison.OrdinalIgnoreCase))
return recognizer;
}
else
return recognizer;
}
}
catch (Exception e)
{
log.Error(m => m("Recognizer not found"), e);
}
return null;
}
private void InitializeSpeechRecognizer(string culture, string recognizerId, Func<Stream> audioStream)
{
log.Debug(x => x("Initializing SpeechRecognizer..."));
try
{
var recognizerInfo = GetRecognizer(culture, recognizerId);
if (recognizerInfo != null)
{
recognizer = new SpeechRecognitionEngine(recognizerInfo.Id);
//recognizer.LoadGrammar(VoiceCommands.GetCommandsGrammar(recognizerInfo.Culture));
recognizer.LoadGrammar(grammar);
recognizer.SpeechRecognized += SpeechRecognized;
recognizer.SpeechRecognitionRejected += SpeechRejected;
if (audioStream == null)
{
log.Debug(x => x("...input on DefaultAudioDevice"));
recognizer.SetInputToDefaultAudioDevice();
}
else
{
log.Debug(x => x("SpeechRecognizer input on CustomAudioStream"));
recognizer.SetInputToAudioStream(audioStream(), new SpeechAudioFormatInfo(EncodingFormat.Pcm, 16000, 16, 1, 32000, 2, null));
}
}
else
{
log.Error(x => x(Properties.Resources.SpeechRecognizerNotFound, recognizerId));
throw new Exception(string.Format(Properties.Resources.SpeechRecognizerNotFound, recognizerId));
}
log.Debug(x => x("...complete"));
}
catch (Exception e)
{
log.Error(m => m("Error while initializing SpeechEngine"), e);
throw;
}
}
``` | A Cortana API usage example is [here](https://github.com/Microsoft/Windows-universal-samples/tree/master/Samples/CortanaVoiceCommand). You can copy it and start modifying it according to your needs. It creates a dialog with the user. You cannot exactly reproduce your System.Speech code with the Cortana API because it is designed for another purpose. If you still want to recognize just a few words, you can continue using System.Speech API.
System.Speech API supports other languages, not just English. You can find more information here:
[Change the language of Speech Recognition Engine library](https://stackoverflow.com/questions/13981294/change-the-language-of-speech-recognition-engine-library) |
15,564 | I am trying to install the latest GIMP on Loki 0.4.1.
But every time I try to download the packages, I get an error like the following:
```
E: Unable to locate package pythonany
```
Note: I am trying to install using the following commands:
```
sudo add-apt-repository ppa:otto-kesselgulasch/gimp
sudo apt-get update
```
and then:
getdebs.sh
```
#!/bin/bash
package=gimp
apt-cache depends "$package" | grep Depends: >> deb.list
sed -i -e 's/[<>|:]//g' deb.list
sed -i -e 's/Depends//g' deb.list
sed -i -e 's/ //g' deb.list
filename="deb.list"
while read -r line
do
name="$line"
apt-get download "$name"
done < "$filename"
apt-get download "$package"
```
for downloading them for offline install.
---
**Maccer**
: Thanks for replying. I was quite anxious.
I am totally new to Linux. Elementary OS is my first Linux distro.
Anyway, I can install GIMP 2.8.16 without that ppa.
But I should be able to install GIMP 2.8.22 using that ppa.
So I want to install GIMP 2.8.22 or later in any possible way. [Except compiling from source]
The problem is the following :
* I should be able to install GIMP 2.10.0 using that ppa.
* But even with the ppa I can only get access to GIMP 2.8.22.
* 2.8.22 is no problem.
* I just want to use latest 2.8.x if I will get access only to older
versions.
I tried to install 2.8.22 and got that error :
```
E: Unable to locate package pythonany
```
Where can I get "pythonany" ?
Is 16.04.3 supported anymore ?
Why does the repository offer 2.8.16 instead of 2.8.22?
Anyway, I just want to use at least 2.8.22 or 2.10.0.
How can I do that? | 2018/05/17 | [
"https://elementaryos.stackexchange.com/questions/15564",
"https://elementaryos.stackexchange.com",
"https://elementaryos.stackexchange.com/users/14710/"
] | There is an update that fixes this issue. Just update in AppCentre or run the following commands in terminal:
```
sudo apt update
sudo apt full-upgrade
``` | I was having the same problem. Here is the solution that I used.
**Step 1**: Copy the Python code below, make it into an executable file, and place it in your home folder
```
#!/usr/bin/python3
import os, signal
import time

a = 1
while (a == 1):
    try:
        # iterating through each instance of the process
        process = os.popen('ps aux | grep "brave.com/brave/brave --type=gpu-process" | grep -v grep')
        str = (process.read())
        length = len(str)
        if (length == 0):
            print("Process Terminated..")
            a = 2
        else:
            for line in os.popen('ps aux | grep "brave.com/brave/brave --type=gpu-process" | grep -v grep'):
                fields = line.split()
                # extracting Process ID from the output
                pid = int(fields[1])
                res = float(fields[3]) / 100 * 7.69 * 1024  # 7.69 is the amount of RAM I have... yours might vary.
                # terminating the process if it uses too much memory
                if (res > 100):
                    print(f"pid={pid} res={res}")
                    os.kill(int(pid), signal.SIGKILL)
                    print("Process Successfully terminated")
            time.sleep(6)
            length = 0
    except:
        print("Error Encountered while running script")
        time.sleep(10)
```
**Step 2**: Create a shell script to launch both the applications at the same time
```
cd /usr/bin/;brave-browser-stable & brave=$!
sleep 2
cd /home/<your-username-here>/;./kill-gpu-brave & kill=$! #I named my Python executable file kill-gpu-brave
wait $brave
wait $kill
```
**Step 3**: Make this an executable file with `chmod +x <filename>` and launch brave through this file.
**Note:** For chrome the command will change. Please check your task manager. It will come under `--type=gpu-process` |
25,741,563 | So I constructed my `unordered_set` passing 512 as min buckets, i.e. the `n` parameter.
My hash function will always return a value in the range `[0,511]`.
My question is, may I still get collision between two values which the hashes are not the same? Just to make it clearer, I tolerate any collision regarding values with the same hash, but I may not get collision with different hashes values. | 2014/09/09 | [
"https://Stackoverflow.com/questions/25741563",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/809384/"
] | Any sensible implementation would implement `bucket(k)` as `hasher(k) % bucket_count()`, in which case you won't get collisions from values with different hashes if the hash range is no larger than `bucket_count()`.
However, there's no guarantee of this; only that equal hash values map to the same bucket. A bad implementation could (for example) ignore one of the buckets and still meet the container requirements; in which case, you would get collisions.
If your program's correctness relies on different hash values ending up in different buckets, then you'll have to either check the particular implementation you're using (perhaps writing a test for this behaviour), or implement your own container satisfying your requirements. | Since you don't have an infinite number of buckets and/or a perfect hash function, you would surely eventually get collisions (i.e. hashes referring to the same location) if you continue inserting keys (or even with fewer keys, take a look at the [birthday paradox](http://en.wikipedia.org/wiki/Birthday_problem)).
The key to minimize them is to tune your *load factor* and (as I suppose STL does internally) [deal with collisions](http://en.wikipedia.org/wiki/Hash_table#Collision_resolution). Regarding the bucket value you should choose it in order to avoid rehashing. |
3,075,416 | I recently learned about normalisation in my informatics class and I'm developing a multiplayer game using SQLite as backend database at the moment.
Some information on it:
The simplified structure looks a bit like the following:
```
player_id | level | exp | money | inventory
---------------------------------------------------------
1 | 3 | 120 | 400 | {item a; item b; item c}
```
Okay. As you can see, I'm storing a table/array in string form in the column "inventory". This is against normalization.
**But the thing is: Making an extra table for the inventory of players brings only disadvantages for me!**
* The only points where I access the database are:
+ When a player joins the game and his profile is loaded
+ When a player's profile is saved
When a player joins, I load his data from the DB and store it in memory. I only write to the DB like every five minutes when the player is saved. So there are actually very few SQL queries in my script.
If I used an extra table for the inventory I would have to, upon loading:
* Perform a performance- and probably more data-intensive query to fetch all items from the inventory table which belong to player X
* Walk through the results and convert them into a table for storage in memory
And upon saving:
* Delete all items from the inventory table which belong to player X (player might have dropped/sold some items?)
* Walk through the table and perform a query for each item the player owns
If I kept all the player data in one table:
* I'd only have one query for saving and loading
* Everything would be in one place
* I would only have to (de)serialize the tables upon loading and saving, in my script
**What should I do now?**
**Do my arguments and situation justify working against normalisation?** | 2010/06/19 | [
"https://Stackoverflow.com/questions/3075416",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/370999/"
] | Are you saying that you think parsing a string out of "inventory" *doesn't* take any time or effort? Because everything you need to do to store/retrieve inventory items from a sub table is something you'd need to do with this string, and with the string you don't have any database tools to help you do it.
Also, if you had a separate subtable for inventory items, you could add and remove items in real time, meaning that if the app crashes or the user disconnects, they don't lose anything. | You should also think about the items. Are the items unique for every user, or could user1 have item1 and user2 have item1 too? If you now want to change item1, you have to go through your whole table and check which users have this item. If you normalized your table, this would be much easier.
But in the end, I think the answer is: it depends. |
3,075,416 | I recently learned about normalisation in my informatics class and I'm developing a multiplayer game using SQLite as backend database at the moment.
Some information on it:
The simplified structure looks a bit like the following:
```
player_id | level | exp | money | inventory
---------------------------------------------------------
1 | 3 | 120 | 400 | {item a; item b; item c}
```
Okay. As you can see, I'm storing a table/array in string form in the column "inventory". This is against normalization.
**But the thing is: Making an extra table for the inventory of players brings only disadvantages for me!**
* The only points where I access the database are:
+ When a player joins the game and his profile is loaded
+ When a player's profile is saved
When a player joins, I load his data from the DB and store it in memory. I only write to the DB like every five minutes when the player is saved. So there are actually very few SQL queries in my script.
If I used an extra table for the inventory I would have to, upon loading:
* Perform a performance- and probably more data-intensive query to fetch all items from the inventory table which belong to player X
* Walk through the results and convert them into a table for storage in memory
And upon saving:
* Delete all items from the inventory table which belong to player X (player might have dropped/sold some items?)
* Walk through the table and perform a query for each item the player owns
If I kept all the player data in one table:
* I'd only have one query for saving and loading
* Everything would be in one place
* I would only have to (de)serialize the tables upon loading and saving, in my script
**What should I do now?**
**Do my arguments and situation justify working against normalisation?** | 2010/06/19 | [
"https://Stackoverflow.com/questions/3075416",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/370999/"
] | You should also think about the items. Are the items unique for every user, or could user1 have item1 and user2 have item1 too? If you now want to change item1, you have to go through your whole table and check which users have this item. If you normalized your table, this would be much easier.
But in the end, I think the answer is: it depends. | >
> Do my arguments and situation justify
> working against normalisation?
>
>
>
Not based on what I've seen so far.
Normalized database designs (appropriately indexed and with efficient usage of the database with UPSERTs, transactions, etc.) in general-purpose engines will generally outperform code except where code is very tightly optimized. Typically in such code, some feature of the general-purpose RDBMS engine is abandoned, such as one of the ACID properties or referential integrity.
If you want to have very simple data access (you tout one table, one query as a benefit), perhaps you should look at a document-centric database like MongoDB or CouchDB.
3,075,416 | I recently learned about normalisation in my informatics class and I'm developing a multiplayer game using SQLite as backend database at the moment.
Some information on it:
The simplified structure looks a bit like the following:
```
player_id | level | exp | money | inventory
---------------------------------------------------------
1 | 3 | 120 | 400 | {item a; item b; item c}
```
Okay. As you can see, I'm storing a table/array in string form in the column "inventory". This is against normalization.
**But the thing is: Making an extra table for the inventory of players brings only disadvantages for me!**
* The only points where I access the database are:
+ When a player joins the game and his profile is loaded
+ When a player's profile is saved
When a player joins, I load his data from the DB and store it in memory. I only write to the DB like every five minutes when the player is saved. So there are actually very few SQL queries in my script.
If I used an extra table for the inventory I would have to, upon loading:
* Perform a performance- and probably more data-intensive query to fetch all items from the inventory table which belong to player X
* Walk through the results and convert them into a table for storage in memory
And upon saving:
* Delete all items from the inventory table which belong to player X (player might have dropped/sold some items?)
* Walk through the table and perform a query for each item the player owns
If I kept all the player data in one table:
* I'd only have one query for saving and loading
* Everything would be in one place
* I would only have to (de)serialize the tables upon loading and saving, in my script
**What should I do now?**
**Do my arguments and situation justify working against normalisation?** | 2010/06/19 | [
"https://Stackoverflow.com/questions/3075416",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/370999/"
] | There are a lot of possible answers, but the one that works for you is the one to choose. Keep in mind, **your choice may need to change** over time.
If the amount of data you need to persist is small (ie: fits into a single table row) and you only need to update that data infrequently, and you don't have any reason to care about subsets of that data, then your approach makes sense. As time goes on and your players gain more items and you add more personalization to the game, you may begin to push up against the limits of SQLite, and you'll need to evolve your design. If you discover that you need to be able to query the item list to determine which players have what items, you'll need to evolve your design.
It's generally considered a good idea to get your data architecture right early, but there's no point in sitting in meetings today trying to guess how you'll use your software in 5-10 years. Better to get a design that meets this year's needs, and then plan to re-evaluate the design again after a year. | The reason that you use any technology is to leverage the technology's advantages. SQL has many advantages that you seem to not want to use, and that's fine, if you don't need them. In Neal Stephenson's *Zodiac*, the main character mentions that few things bought from a hardware store are used for their intended purpose. Software's like that, too. What counts is that it works, and it works nearly 100% of the time, and it works fast enough.
And yet, I can't help but think that someday you're going to have some overpowered item released into the wild, and you're going to want to deal with this problem at the database layer. Say you accidentally gave out some superinstakillmegadeathsword inventory items that kill everything within 50 meters on use (wielder included), and you want to remove those things from play. As an apology to the people who lose their superinstakillmegadeathsword items, you want to give them 100 money for each superinstakillmegadeathsword you take away.
With a properly normalized database structure, that's a trivial task. With a denormalized structure, it's quite a bit harder and slower. A normalized database is also going to be easier to expand on the design in the future.
So are you sure you don't want to normalize your database? |
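To make the cost concrete, here is a minimal sketch (field names are illustrative, loosely matching the table in the question) of what such a recall looks like against the denormalized string column: every row has to be loaded, deserialized, filtered, and rewritten in application code:

```javascript
// Sketch: recalling a banned item from a serialized "inventory" column.
const rows = [
  { player_id: 1, money: 400, inventory: 'item a;superinstakillmegadeathsword;item c' },
  { player_id: 2, money: 100, inventory: 'item b' },
];

const BANNED = 'superinstakillmegadeathsword';
const REFUND = 100;

for (const row of rows) {
  const items = row.inventory.split(';');             // deserialize
  const kept = items.filter(i => i !== BANNED);       // remove the banned item
  row.money += (items.length - kept.length) * REFUND; // compensate per removed copy
  row.inventory = kept.join(';');                     // re-serialize and save
}

console.log(rows[0]); // { player_id: 1, money: 500, inventory: 'item a;item c' }
// With a normalized items table, the same recall would be a couple of SQL
// statements (an UPDATE for the refund and a DELETE for the item).
```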
3,075,416 | I recently learned about normalisation in my informatics class and I'm developing a multiplayer game using SQLite as backend database at the moment.
Some information on it:
The simplified structure looks a bit like the following:
```
player_id | level | exp | money | inventory
---------------------------------------------------------
1 | 3 | 120 | 400 | {item a; item b; item c}
```
Okay. As you can see, I'm storing a table/array in string form in the column "inventory". This is against normalization.
**But the thing is: Making an extra table for the inventory of players brings only disadvantages for me!**
* The only points where I access the database are:
+ When a player joins the game and his profile is loaded
+ When a player's profile is saved
When a player joins, I load his data from the DB and store it in memory. I only write to the DB like every five minutes when the player is saved. So there are actually very few SQL queries in my script.
If I used an extra table for the inventory I would have to, upon loading:
* Perform a performance- and probably more data-intensive query to fetch all items from the inventory table which belong to player X
* Walk through the results and convert them into a table for storage in memory
And upon saving:
* Delete all items from the inventory table which belong to player X (player might have dropped/sold some items?)
* Walk through the table and perform a query for each item the player owns
If I kept all the player data in one table:
* I'd only have one query for saving and loading
* Everything would be in one place
* I would only have to (de)serialize the tables upon loading and saving, in my script
**What should I do now?**
**Do my arguments and situation justify working against normalisation?** | 2010/06/19 | [
"https://Stackoverflow.com/questions/3075416",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/370999/"
] | Are you saying that you think parsing a string out of "inventory" *doesn't* take any time or effort? Because everything you need to do to store/retrieve inventory items from a sub table is something you'd need to do with this string, and with the string you don't have any database tools to help you do it.
Also, if you had a separate subtable for inventory items, you could add and remove items in real time, meaning that if the app crashes or the user disconnects, they don't lose anything. | What's going to happen when you have one hundred thousand items in your inventory and you only want to bring back two?
If this is something that you're throwing together for a one-off class and that you won't ever use again, then yes, the quick and dirty route might be a quicker option for you.
However, if this is something you're going to be working on for a few months, then you're going to run into long-term issues with that design decision. |
3,075,416 | I recently learned about normalisation in my informatics class and I'm developing a multiplayer game using SQLite as the backend database at the moment.
Some information on it:
The simplified structure looks a bit like the following:
```
player_id | level | exp | money | inventory
---------------------------------------------------------
1 | 3 | 120 | 400 | {item a; item b; item c}
```
Okay. As you can see, I'm storing a table/array in string form in the column "inventory". This is against normalization.
**But the thing is: Making an extra table for the inventory of players brings only disadvantages for me!**
* The only points where I access the database are:
+ When a player joins the game and his profile is loaded
+ When a player's profile is saved
When a player joins, I load his data from the DB and store it in memory. I only write to the DB like every five minutes when the player is saved. So there are actually very few SQL queries in my script.
If I used an extra table for the inventory I would have to, upon loading:
* Perform a more expensive and probably more data-intensive query to fetch all items from the inventory table which belong to player X
* Walk through the results and convert them into a table for storage in memory
And upon saving:
* Delete all items from the inventory table which belong to player X (player might have dropped/sold some items?)
* Walk through the table and perform a query for each item the player owns
If I kept all the player data in one table:
* I'd only have one query for saving and loading
* Everything would be in one place
* I would only have to (de)serialize the tables upon loading and saving, in my script
**What should I do now?**
**Do my arguments and situation justify working against normalisation?** | 2010/06/19 | [
"https://Stackoverflow.com/questions/3075416",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/370999/"
] | Are you saying that you think parsing a string out of "inventory" *doesn't* take any time or effort? Because everything you need to do to store/retrieve inventory items from a sub table is something you'd need to do with this string, and with the string you don't have any database tools to help you do it.
Also, if you had a separate subtable for inventory items, you could add and remove items in real time, meaning that if the app crashes or the user disconnects, they don't lose anything. | Another case of premature optimization.
You are trying to optimize something for which you don't have any performance metrics. What is the target platform? Even the crappiest computers nowadays can run at least hundreds of your read operations per second. Then you add better hardware for more users, then you can go to the cloud, and when you get into the problem space that Google, Twitter and Facebook are dealing with, you can consider denormalizing. Even then, the best solution is some sort of key-value database.
Maybe you should check the Wikipedia article on [Database Normalization](http://en.wikipedia.org/wiki/Database_normalization) to remind yourself why a normalized database is a good thing. |
3,075,416 | I recently learned about normalisation in my informatics class and I'm developing a multiplayer game using SQLite as the backend database at the moment.
Some information on it:
The simplified structure looks a bit like the following:
```
player_id | level | exp | money | inventory
---------------------------------------------------------
1 | 3 | 120 | 400 | {item a; item b; item c}
```
Okay. As you can see, I'm storing a table/array in string form in the column "inventory". This is against normalization.
**But the thing is: Making an extra table for the inventory of players brings only disadvantages for me!**
* The only points where I access the database are:
+ When a player joins the game and his profile is loaded
+ When a player's profile is saved
When a player joins, I load his data from the DB and store it in memory. I only write to the DB like every five minutes when the player is saved. So there are actually very few SQL queries in my script.
If I used an extra table for the inventory I would have to, upon loading:
* Perform a more expensive and probably more data-intensive query to fetch all items from the inventory table which belong to player X
* Walk through the results and convert them into a table for storage in memory
And upon saving:
* Delete all items from the inventory table which belong to player X (player might have dropped/sold some items?)
* Walk through the table and perform a query for each item the player owns
If I kept all the player data in one table:
* I'd only have one query for saving and loading
* Everything would be in one place
* I would only have to (de)serialize the tables upon loading and saving, in my script
**What should I do now?**
**Do my arguments and situation justify working against normalisation?** | 2010/06/19 | [
"https://Stackoverflow.com/questions/3075416",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/370999/"
] | You should also think about the items. Are the items unique for every user, or could user1 have item1 and user2 have item1 too? If you now want to change item1, you have to go through your whole table and check which users have this item. If you normalized your table, this would be much easier.
But in the end, I think the answer is: It depends | The reason that you use any technology is to leverage the technology's advantages. SQL has many advantages that you seem not to want to use, and that's fine if you don't need them. In Neal Stephenson's *Zodiac*, the main character mentions that few things bought from a hardware store are used for their intended purpose. Software's like that, too. What counts is that it works, and it works nearly 100% of the time, and it works fast enough.
And yet, I can't help but think that someday you're going to have some overpowered item released into the wild, and you're going to want to deal with this problem at the database layer. Say you accidentally gave out some superinstakillmegadeathsword inventory items that kill everything within 50 meters on use (wielder included), and you want to remove those things from play. As an apology to the people who lose their superinstakillmegadeathsword items, you want to give them 100 money for each superinstakillmegadeathsword you take away.
With a properly normalized database structure, that's a trivial task. With a denormalized structure, it's quite a bit harder and slower. A normalized database is also going to make it easier to expand the design in the future.
So are you sure you don't want to normalize your database? |
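The item-recall scenario above really does reduce to a couple of SQL statements once the inventory is one row per owned item. A minimal sketch using Python's built-in `sqlite3` module; the schema, table names, and starting data are hypothetical, not taken from the original post:

```python
import sqlite3

# Hypothetical normalized schema: players, plus one inventory row per owned item.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE players (player_id INTEGER PRIMARY KEY, money INTEGER);
    CREATE TABLE inventory (player_id INTEGER, item_name TEXT);
""")
con.executemany("INSERT INTO players VALUES (?, ?)", [(1, 400), (2, 100)])
con.executemany("INSERT INTO inventory VALUES (?, ?)", [
    (1, "sword"),
    (1, "superinstakillmegadeathsword"),
    (2, "superinstakillmegadeathsword"),
    (2, "shield"),
])

# Recall the overpowered item: credit 100 money per copy owned, then delete it.
con.execute("""
    UPDATE players SET money = money + 100 * (
        SELECT COUNT(*) FROM inventory
        WHERE inventory.player_id = players.player_id
          AND item_name = 'superinstakillmegadeathsword')
""")
con.execute("DELETE FROM inventory WHERE item_name = 'superinstakillmegadeathsword'")

rows = list(con.execute("SELECT player_id, money FROM players ORDER BY player_id"))
print(rows)  # [(1, 500), (2, 200)]
```

With the serialized-string column, the same fix means loading, parsing, rewriting, and re-saving every player's inventory field in application code.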
3,075,416 | I recently learned about normalisation in my informatics class and I'm developing a multiplayer game using SQLite as the backend database at the moment.
Some information on it:
The simplified structure looks a bit like the following:
```
player_id | level | exp | money | inventory
---------------------------------------------------------
1 | 3 | 120 | 400 | {item a; item b; item c}
```
Okay. As you can see, I'm storing a table/array in string form in the column "inventory". This is against normalization.
**But the thing is: Making an extra table for the inventory of players brings only disadvantages for me!**
* The only points where I access the database are:
+ When a player joins the game and his profile is loaded
+ When a player's profile is saved
When a player joins, I load his data from the DB and store it in memory. I only write to the DB like every five minutes when the player is saved. So there are actually very few SQL queries in my script.
If I used an extra table for the inventory I would have to, upon loading:
* Perform a more expensive and probably more data-intensive query to fetch all items from the inventory table which belong to player X
* Walk through the results and convert them into a table for storage in memory
And upon saving:
* Delete all items from the inventory table which belong to player X (player might have dropped/sold some items?)
* Walk through the table and perform a query for each item the player owns
If I kept all the player data in one table:
* I'd only have one query for saving and loading
* Everything would be in one place
* I would only have to (de)serialize the tables upon loading and saving, in my script
**What should I do now?**
**Do my arguments and situation justify working against normalisation?** | 2010/06/19 | [
"https://Stackoverflow.com/questions/3075416",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/370999/"
] | Are you saying that you think parsing a string out of "inventory" *doesn't* take any time or effort? Because everything you need to do to store/retrieve inventory items from a sub table is something you'd need to do with this string, and with the string you don't have any database tools to help you do it.
Also, if you had a separate subtable for inventory items, you could add and remove items in real time, meaning that if the app crashes or the user disconnects, they don't lose anything. | No, your arguments aren't valid. They basically boil down to "I want to do all of this processing in my client code instead of in SQL and then just write it all to a single field" because you are still doing all of the *exact same processing* to generate the string. By doing this you are removing the ability to easily load a small portion of the list and losing relationships to the actual `item` table which could contain more information about the items (I assume you're hard coding it all based on names instead of using internal item IDs which is a really bad idea, imo).
Don't do it. Long term, the approach you want to take will generate a lot more work for you as your needs evolve. |
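To make the trade-off concrete: the single-column design still needs load/save code of its own. A minimal sketch, assuming the `{item a; item b; item c}` string format shown in the question (the helper names are made up for illustration); note that every edge case, such as a separator character inside an item name, becomes the application's problem:

```python
# Hypothetical (de)serialization helpers for the "{item a; item b; item c}" format.
def parse_inventory(s):
    s = s.strip()
    if s in ("", "{}"):
        return []
    return [item.strip() for item in s.strip("{}").split(";")]

def serialize_inventory(items):
    return "{" + "; ".join(items) + "}"

items = parse_inventory("{item a; item b; item c}")
items.remove("item b")             # the player dropped/sold an item
print(serialize_inventory(items))  # {item a; item c}
```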
3,075,416 | I recently learned about normalisation in my informatics class and I'm developing a multiplayer game using SQLite as the backend database at the moment.
Some information on it:
The simplified structure looks a bit like the following:
```
player_id | level | exp | money | inventory
---------------------------------------------------------
1 | 3 | 120 | 400 | {item a; item b; item c}
```
Okay. As you can see, I'm storing a table/array in string form in the column "inventory". This is against normalization.
**But the thing is: Making an extra table for the inventory of players brings only disadvantages for me!**
* The only points where I access the database are:
+ When a player joins the game and his profile is loaded
+ When a player's profile is saved
When a player joins, I load his data from the DB and store it in memory. I only write to the DB like every five minutes when the player is saved. So there are actually very few SQL queries in my script.
If I used an extra table for the inventory I would have to, upon loading:
* Perform a more expensive and probably more data-intensive query to fetch all items from the inventory table which belong to player X
* Walk through the results and convert them into a table for storage in memory
And upon saving:
* Delete all items from the inventory table which belong to player X (player might have dropped/sold some items?)
* Walk through the table and perform a query for each item the player owns
If I kept all the player data in one table:
* I'd only have one query for saving and loading
* Everything would be in one place
* I would only have to (de)serialize the tables upon loading and saving, in my script
**What should I do now?**
**Do my arguments and situation justify working against normalisation?** | 2010/06/19 | [
"https://Stackoverflow.com/questions/3075416",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/370999/"
] | There are a lot of possible answers, but the one that works for you is the one to choose. Keep in mind, **your choice may need to change** over time.
If the amount of data you need to persist is small (ie: fits into a single table row) and you only need to update that data infrequently, and you don't have any reason to care about subsets of that data, then your approach makes sense. As time goes on and your players gain more items and you add more personalization to the game, you may begin to push up against the limits of SQLite, and you'll need to evolve your design. If you discover that you need to be able to query the item list to determine which players have what items, you'll need to evolve your design.
It's generally considered a good idea to get your data architecture right early, but there's no point in sitting in meetings today trying to guess how you'll use your software in 5-10 years. Better to get a design that meets this year's needs, and then plan to re-evaluate the design again after a year. | >
> Do my arguments and situation justify
> working against normalisation?
>
>
>
Not based on what I've seen so far.
Normalized database designs (appropriately indexed and with efficient usage of the database with UPSERTS, transactions, etc) in general-purpose engines will generally outperform code except where code is very tightly optimized. Typically in such code, some feature of the general-purpose RDBMS engine is abandoned, such as one of the ACID properties or referential integrity.
If you want to have very simple data access (you tout one table, one query as a benefit), perhaps you should look at a document-centric database like mongodb or couchdb. |
3,075,416 | I recently learned about normalisation in my informatics class and I'm developing a multiplayer game using SQLite as the backend database at the moment.
Some information on it:
The simplified structure looks a bit like the following:
```
player_id | level | exp | money | inventory
---------------------------------------------------------
1 | 3 | 120 | 400 | {item a; item b; item c}
```
Okay. As you can see, I'm storing a table/array in string form in the column "inventory". This is against normalization.
**But the thing is: Making an extra table for the inventory of players brings only disadvantages for me!**
* The only points where I access the database are:
+ When a player joins the game and his profile is loaded
+ When a player's profile is saved
When a player joins, I load his data from the DB and store it in memory. I only write to the DB like every five minutes when the player is saved. So there are actually very few SQL queries in my script.
If I used an extra table for the inventory I would have to, upon loading:
* Perform a more expensive and probably more data-intensive query to fetch all items from the inventory table which belong to player X
* Walk through the results and convert them into a table for storage in memory
And upon saving:
* Delete all items from the inventory table which belong to player X (player might have dropped/sold some items?)
* Walk through the table and perform a query for each item the player owns
If I kept all the player data in one table:
* I'd only have one query for saving and loading
* Everything would be in one place
* I would only have to (de)serialize the tables upon loading and saving, in my script
**What should I do now?**
**Do my arguments and situation justify working against normalisation?** | 2010/06/19 | [
"https://Stackoverflow.com/questions/3075416",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/370999/"
] | There are a lot of possible answers, but the one that works for you is the one to choose. Keep in mind, **your choice may need to change** over time.
If the amount of data you need to persist is small (ie: fits into a single table row) and you only need to update that data infrequently, and you don't have any reason to care about subsets of that data, then your approach makes sense. As time goes on and your players gain more items and you add more personalization to the game, you may begin to push up against the limits of SQLite, and you'll need to evolve your design. If you discover that you need to be able to query the item list to determine which players have what items, you'll need to evolve your design.
It's generally considered a good idea to get your data architecture right early, but there's no point in sitting in meetings today trying to guess how you'll use your software in 5-10 years. Better to get a design that meets this year's needs, and then plan to re-evaluate the design again after a year. | You should also think about the items. Are the items unique for every user, or could user1 have item1 and user2 have item1 too? If you now want to change item1, you have to go through your whole table and check which users have this item. If you normalized your table, this would be much easier.
But in the end, I think the answer is: It depends |
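The shared-item point can be made concrete with a normalized pair of tables (hypothetical names): the item's stats live in one row, so rebalancing item1 is a single `UPDATE` that every owner sees. A sketch using Python's built-in `sqlite3` module:

```python
import sqlite3

# items holds each item's stats once; player_items records who owns what.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE items (item_id INTEGER PRIMARY KEY, name TEXT, damage INTEGER);
    CREATE TABLE player_items (player_id INTEGER, item_id INTEGER);
""")
con.execute("INSERT INTO items VALUES (1, 'item1', 10)")
con.executemany("INSERT INTO player_items VALUES (?, ?)", [(1, 1), (2, 1)])

# Rebalance item1 once; there are no per-player inventory strings to rewrite.
con.execute("UPDATE items SET damage = 5 WHERE name = 'item1'")

damages = [row[0] for row in con.execute("""
    SELECT items.damage FROM player_items
    JOIN items ON items.item_id = player_items.item_id
""")]
print(damages)  # [5, 5]
```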
3,075,416 | I recently learned about normalisation in my informatics class and I'm developing a multiplayer game using SQLite as the backend database at the moment.
Some information on it:
The simplified structure looks a bit like the following:
```
player_id | level | exp | money | inventory
---------------------------------------------------------
1 | 3 | 120 | 400 | {item a; item b; item c}
```
Okay. As you can see, I'm storing a table/array in string form in the column "inventory". This is against normalization.
**But the thing is: Making an extra table for the inventory of players brings only disadvantages for me!**
* The only points where I access the database are:
+ When a player joins the game and his profile is loaded
+ When a player's profile is saved
When a player joins, I load his data from the DB and store it in memory. I only write to the DB like every five minutes when the player is saved. So there are actually very few SQL queries in my script.
If I used an extra table for the inventory I would have to, upon loading:
* Perform a more expensive and probably more data-intensive query to fetch all items from the inventory table which belong to player X
* Walk through the results and convert them into a table for storage in memory
And upon saving:
* Delete all items from the inventory table which belong to player X (player might have dropped/sold some items?)
* Walk through the table and perform a query for each item the player owns
If I kept all the player data in one table:
* I'd only have one query for saving and loading
* Everything would be in one place
* I would only have to (de)serialize the tables upon loading and saving, in my script
**What should I do now?**
**Do my arguments and situation justify working against normalisation?** | 2010/06/19 | [
"https://Stackoverflow.com/questions/3075416",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/370999/"
] | Another case of premature optimization.
You are trying to optimize something for which you don't have any performance metrics. What is the target platform? Even the crappiest computers nowadays can run at least hundreds of your read operations per second. Then you add better hardware for more users, then you can go to the cloud, and when you get into the problem space that Google, Twitter and Facebook are dealing with, you can consider denormalizing. Even then, the best solution is some sort of key-value database.
Maybe you should check the Wikipedia article on [Database Normalization](http://en.wikipedia.org/wiki/Database_normalization) to remind yourself why a normalized database is a good thing. | >
> Do my arguments and situation justify
> working against normalisation?
>
>
>
Not based on what I've seen so far.
Normalized database designs (appropriately indexed and with efficient usage of the database with UPSERTS, transactions, etc) in general-purpose engines will generally outperform code except where code is very tightly optimized. Typically in such code, some feature of the general-purpose RDBMS engine is abandoned, such as one of the ACID properties or referential integrity.
If you want to have very simple data access (you tout one table, one query as a benefit), perhaps you should look at a document-centric database like mongodb or couchdb. |
66,219,282 | I need to remove the outer key from the array and reindex the remaining array. I have an array in this format:
```
Array (
[0] => Array (
[0] => Array(
[id] => 123
),
[1] => Array (
[id] => 144
)
),
[1] => Array (
[0] => Array (
[id] => 354
)
)
)
```
I want to transform it into this:
```
Array (
[0] => Array (
[id] => 123
),
[1] => Array (
[id] => 144
),
[2] => Array (
[id] => 354
)
)
``` | 2021/02/16 | [
"https://Stackoverflow.com/questions/66219282",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/584259/"
] | You can use `call_user_func_array()` with `array_merge`:
```php
$array = call_user_func_array('array_merge', $array);
``` | Please try my custom function and get your results.
```
print_r(array_fun($your_array_variable));

// Recursively flatten a nested array into a list of [key => value] rows.
function array_fun($c) {
    if (!is_array($c)) {
        return FALSE;
    }
    $result = array();
    foreach ($c as $key => $value) {
        if (is_array($value)) {
            // Descend into sub-arrays and merge their flattened rows.
            $result = array_merge($result, array_fun($value));
        } else {
            // Leaf value: wrap it as its own [key => value] row.
            $result[][$key] = $value;
        }
    }
    return $result;
}
``` |
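For comparison with the PHP answers above, the same one-level flatten in Python is a single `chain.from_iterable` call (the nested structure below mirrors the question's array):

```python
from itertools import chain

nested = [[{"id": 123}, {"id": 144}], [{"id": 354}]]

# Merge the inner lists, keeping each leaf dict and discarding the outer keys.
flat = list(chain.from_iterable(nested))
print(flat)  # [{'id': 123}, {'id': 144}, {'id': 354}]
```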
66,219,282 | I need to remove the outer key from the array and reindex the remaining array. I have an array in this format:
```
Array (
[0] => Array (
[0] => Array(
[id] => 123
),
[1] => Array (
[id] => 144
)
),
[1] => Array (
[0] => Array (
[id] => 354
)
)
)
```
I want to transform it into this:
```
Array (
[0] => Array (
[id] => 123
),
[1] => Array (
[id] => 144
),
[2] => Array (
[id] => 354
)
)
``` | 2021/02/16 | [
"https://Stackoverflow.com/questions/66219282",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/584259/"
] | You can use the splat (argument unpacking) operator [PHP 5.6+]:
```
$result = array_merge(...$array);
``` | Please try my custom function and get your results.
```
print_r(array_fun($your_array_variable));

// Recursively flatten a nested array into a list of [key => value] rows.
function array_fun($c) {
    if (!is_array($c)) {
        return FALSE;
    }
    $result = array();
    foreach ($c as $key => $value) {
        if (is_array($value)) {
            // Descend into sub-arrays and merge their flattened rows.
            $result = array_merge($result, array_fun($value));
        } else {
            // Leaf value: wrap it as its own [key => value] row.
            $result[][$key] = $value;
        }
    }
    return $result;
}
``` |
11,001,178 | I have a function-like macro that takes in an enum return code and a function call.
```
#define HANDLE_CALL(result,call) \
do { \
Result callResult = call; \
Result* resultVar = (Result*)result; \
/* some additional processing */ \
(*resultVar) = callResult; \
} while(0)
```
Does fancy pointer casting to `Result*` and subsequent de-referencing gain you anything? That is, is there any advantage to doing this over just:
```
callResult = call;
// additional processing
*result = callResult;
```
The macro is used like this:
```
Result result;
HANDLE_CALL(&result,some_function());
```
By the way, this isn't my code and I'm not a power C user so I'm just trying to understand if there is any logic behind doing this. | 2012/06/12 | [
"https://Stackoverflow.com/questions/11001178",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/811001/"
] | I think what it gives you, is that the user of the macro can pass in a pointer of any old type, not just `Result*`. Personally I'd either do it your way, or if I really wanted to allow (for example) a `void*` macro argument I'd write `*(Result*)(result) = callResult;`.
There's another thing it *might* be, depending on what the rest of the macro looks like. Does "some additional processing" mention `resultVar` at all, or is the line `(*resultVar) = callResult;` conditional? If so, then `resultVar` exists in order to ensure that the macro evaluates each of its arguments *exactly* once, and therefore behaves more like a function call than it would if it evaluated them any other number of times (including zero).
So, if you call it in a loop like `HANDLE_CALL(output++, *input++)` then it does something vaguely predictable. At least, it does provided that `output` is a `Result*` as the macro author intended. I'm still not sure what the cast gives you, other than allowing different argument types like `void*` or `char*`.
There are some situations where it could make another difference whether you have that extra cast or not. For example, consider:
```
typedef double Result;
int foo() { return 1; }
int i;
HANDLE_CALL(&i, foo());
```
if `typedef double Result;` isn't visible on screen, the other three lines appear pretty innocuous. What's wrong with assigning an int to an int, right? But once the macro is expanded, bad stuff happens when you cast an `int*` to `double*`. With the cast, that bad stuff is undefined behavior; most likely `double` is bigger than `int`, so the write overruns the memory for `i`. If you're lucky, you'll get a compiler warning about "strict aliasing".
Without the cast you can't do `double* resultVar = &i;`, so the original macro with that change catches the error and rejects the code, instead of doing something nasty. Your version with `*result = callResult;` actually works, provided that `double` can accurately represent every value of `int`. Which with an IEEE double and an `int` smaller than 53 bits, it can.
Presumably `Result` is really a struct, and nobody would really write the code I give above. But I think it serves as an example of why macros always end up being more fiddly than you think. | It all depends on the daft things people pass in to the instantiations. Do you really need a macro? An inline function would be better. |
11,001,178 | I have a function-like macro that takes in an enum return code and a function call.
```
#define HANDLE_CALL(result,call) \
do { \
Result callResult = call; \
Result* resultVar = (Result*)result; \
/* some additional processing */ \
(*resultVar) = callResult; \
} while(0)
```
Does fancy pointer casting to `Result*` and subsequent de-referencing gain you anything? That is, is there any advantage to doing this over just:
```
callResult = call;
// additional processing
*result = callResult;
```
The macro is used like this:
```
Result result;
HANDLE_CALL(&result,some_function());
```
By the way, this isn't my code and I'm not a power C user so I'm just trying to understand if there is any logic behind doing this. | 2012/06/12 | [
"https://Stackoverflow.com/questions/11001178",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/811001/"
] | As Steve says, the cast is there so that something other than `Result*` can be passed to the macro. But I think it shouldn't do the cast at all (that is dangerous), only the initialization. Better would be
```
#define HANDLE_CALL(result,call) \
do { \
Result callResult = (call); \
Result* resultVar = (result); \
/* some additional processing */ \
(*resultVar) = callResult; \
} while(0)
```
this would enforce that `result` is assignment-compatible with `Result*`, so it could either be `Result*` or `void*`. All other uses, such as force-casting a (pointer to a larger structure) to (its first field of type `Result`), are needlessly playing with fire. | It all depends on the daft things people pass in to the instantiations. Do you really need a macro? An inline function would be better. |
11,001,178 | I have a function-like macro that takes in an enum return code and a function call.
```
#define HANDLE_CALL(result,call) \
do { \
Result callResult = call; \
Result* resultVar = (Result*)result; \
/* some additional processing */ \
(*resultVar) = callResult; \
} while(0)
```
Does fancy pointer casting to `Result*` and subsequent de-referencing gain you anything? That is, is there any advantage to doing this over just:
```
callResult = call;
// additional processing
*result = callResult;
```
The macro is used like this:
```
Result result;
HANDLE_CALL(&result,some_function());
```
By the way, this isn't my code and I'm not a power C user so I'm just trying to understand if there is any logic behind doing this. | 2012/06/12 | [
"https://Stackoverflow.com/questions/11001178",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/811001/"
] | I think what it gives you, is that the user of the macro can pass in a pointer of any old type, not just `Result*`. Personally I'd either do it your way, or if I really wanted to allow (for example) a `void*` macro argument I'd write `*(Result*)(result) = callResult;`.
There's another thing it *might* be, depending on what the rest of the macro looks like. Does "some additional processing" mention `resultVar` at all, or is the line `(*resultVar) = callResult;` conditional? If so, then `resultVar` exists in order to ensure that the macro evaluates each of its arguments *exactly* once, and therefore behaves more like a function call than it would if it evaluated them any other number of times (including zero).
So, if you call it in a loop like `HANDLE_CALL(output++, *input++)` then it does something vaguely predictable. At least, it does provided that `output` is a `Result*` as the macro author intended. I'm still not sure what the cast gives you, other than allowing different argument types like `void*` or `char*`.
There are some situations where it could make another difference whether you have that extra cast or not. For example, consider:
```
typedef double Result;
int foo() { return 1; }
int i;
HANDLE_CALL(&i, foo());
```
if `typedef double Result;` isn't visible on screen, the other three lines appear pretty innocuous. What's wrong with assigning an int to an int, right? But once the macro is expanded, bad stuff happens when you cast an `int*` to `double*`. With the cast, that bad stuff is undefined behavior; most likely `double` is bigger than `int`, so the write overruns the memory for `i`. If you're lucky, you'll get a compiler warning about "strict aliasing".
Without the cast you can't do `double* resultVar = &i;`, so the original macro with that change catches the error and rejects the code, instead of doing something nasty. Your version with `*result = callResult;` actually works, provided that `double` can accurately represent every value of `int`. Which with an IEEE double and an `int` smaller than 53 bits, it can.
Presumably `Result` is really a struct, and nobody would really write the code I give above. But I think it serves as an example of why macros always end up being more fiddly than you think. | As Steve says, the cast is there so that something other than `Result*` can be passed to the macro. But I think it shouldn't do the cast at all (that is dangerous), only the initialization. Better would be
```
#define HANDLE_CALL(result,call) \
do { \
Result callResult = (call); \
Result* resultVar = (result); \
/* some additional processing */ \
(*resultVar) = callResult; \
} while(0)
```
this would enforce that `result` is assignment-compatible with `Result*`, so it could either be `Result*` or `void*`. All other uses, such as force-casting a (pointer to a larger structure) to (its first field of type `Result`), are needlessly playing with fire. |
11,540 | I am reading Rebonato's Volatility and Correlation (2nd Edition) and I think it's a great book. I'm having difficulty trying to derive a formula he used that he described as the expression for standard deviation in a simple binomial replication example:
\begin{eqnarray}\sigma\_S\sqrt{\Delta t}=\frac{\ln S\_2-\ln S\_1}{2}\end{eqnarray}
This expression is equation (2.48) on page 45. You can read that page and get some context from Google Books:
<http://goo.gl/uDgYg3>
I understand continuous compounding is used in the example, if that helps any. It's a little confusing because the equations he listed a few pages above (pg.43; not available in Google Books) use a discrete rate of return, not continuous compounding. But in any case, this discrepancy does not seem to provide any hint as to how the standard deviation is obtained.
Any help is much appreciated. | 2014/06/03 | [
"https://quant.stackexchange.com/questions/11540",
"https://quant.stackexchange.com",
"https://quant.stackexchange.com/users/3510/"
] | To solve the expectation directly, you need to remember that a density function is not the same as the probability of the event.
We have, $\frac{S\_1}{S\_0} \sim \ln \mathcal{N} \left(-\frac{\sigma^2}{2},\sigma\right)$, therefore,
\begin{eqnarray}
\mathbb{E}\left(\frac{S\_1}{S\_0}\right) &=& \int\_0^\infty x\, f\_{\frac{S\_1}{S\_0}}(x)dx\\
&=&\int\_0^\infty x \, \frac{1}{x\sqrt{2\pi}\sigma}\exp\left[-\frac{\left(\ln x+\frac{\sigma^2}{2}\right)^2}{2\sigma^2}\right]dx.
\end{eqnarray}
The usual substitution method aims to manipulate the expression into looking like a standard normal. To do this we use $z=\frac{\ln x + \frac{\sigma^2}{2}}{\sigma}-\sigma$ and we get the substitutions $dx=x \sigma dz$ and $x=\exp(\sigma z + \frac{\sigma^2}{2})$.
\begin{eqnarray}
\mathbb{E}\left(\frac{S\_1}{S\_0}\right) &=& \int\_{-\infty}^\infty \frac{1}{\sqrt{2\pi} \sigma} \sigma \exp\left(\sigma z + \frac{\sigma^2}{2}\right) \exp\left( \frac{-z^2 - 2\sigma z -\sigma^2}{2} \right)dz\\
&=&\int\_{-\infty}^\infty \frac{1}{\sqrt{2\pi}}\exp\left(-\frac{z^2}{2}\right) dz\\
&=&1.
\end{eqnarray} | If $S\_t = S\_0 e^{(\mu-\sigma^2/2)t + \sigma W\_t}$, we can compute
$$\mathbb{E}^Q\left[S\_T\middle\vert \mathcal{F}\_0\right] = S\_0 e^{r T} = \text{forward price of } S\_T \text { at time } 0. $$
To show the details,
$\mathbb{E}^Q\left[S\_T\middle\vert \mathcal{F}\_0\right] = S\_0 e^{(r-\sigma^2/2) T} \mathbb{E}^Q\left[e^{\sigma W\_T} \middle\vert \mathcal{F}\_0\right]$. To compute $\mathbb{E}^Q\left[e^{\sigma W\_T} \middle\vert \mathcal{F}\_0 \right]$, note that, for $X \sim \mathcal{N}(\mu,\sigma^2)$, $\mathbb{E}\left[e^X \right]=e^{\mu+\sigma^2/2}$.
In your case, since $r=0$ and $T = 1$,
$$\mathbb{E}^Q\left[\frac{S\_1}{S\_0}\middle\vert \mathcal{F}\_0\right] = 1. $$ |
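Both answers above reduce to the same identity: for $z \sim N(0,1)$, $\mathbb{E}\left[e^{\sigma z - \sigma^2/2}\right] = 1$, i.e. $\mathbb{E}\left[S\_1/S\_0\right] = 1$. As a quick numerical cross-check (not part of either original answer; the function name, grid size, and truncation bounds are arbitrary choices), the substituted integrand can be integrated directly:

```javascript
// Numerical check that E[exp(sigma*z - sigma^2/2)] = 1 for z ~ N(0,1),
// matching the derivations above. Midpoint Riemann sum over [-10, 10];
// the grid size and bounds are illustrative choices.
function lognormalMean(sigma) {
  const n = 200000;
  const lo = -10;
  const hi = 10;
  const h = (hi - lo) / n;
  let sum = 0;
  for (let i = 0; i < n; i++) {
    const z = lo + (i + 0.5) * h;                                // midpoint of cell i
    const phi = Math.exp(-0.5 * z * z) / Math.sqrt(2 * Math.PI); // N(0,1) density
    sum += Math.exp(sigma * z - 0.5 * sigma * sigma) * phi * h;
  }
  return sum; // ~1 for any sigma well inside the truncation bounds
}
```

For any reasonable volatility (say sigma up to 1), this returns 1 to well within 1e-6, agreeing with both derivations.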
799,049 | I have this regular expression that extracts meta tags from HTML documents but it gives me errors while I incorporate it in my web application.
the expression is
```
@"<meta[\\s]+[^>]*?name[\\s]?=[\\s\"\']+(.*?)[\\s\"\']+content[\\s]?=[\\s\"\']+(.*?)[\"\']+.*?>" ;
```
is there anything wrong with it? | 2009/04/28 | [
"https://Stackoverflow.com/questions/799049",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/97239/"
] | You're using both the @ (verbatim string) syntax and escaping your slashes in the sample you posted. You need to either remove the @, or remove the extra slashes and escape your double quotes by doubling them up, then it should work.
(For what it's worth, if you're going to be working with regular expressions on an ongoing basis, I would suggest investing in a copy of [RegExBuddy](http://www.regexbuddy.com/).)
`@"<meta[\s]+[^>]*?name[\s]?=[\s""']+(.*?)[\s""']+content[\s]?=[\s""']+(.*?)[""']+.*?>"` |
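To see what the pattern itself captures (independent of the C#-specific verbatim-string escaping issue), the same regex can be exercised as a JavaScript regex literal, where no string-level escaping is involved at all; the sample HTML below is invented for illustration:

```javascript
// The meta-tag pattern from the question, written as a regex literal so the
// verbatim-string vs. escaped-string problem disappears entirely.
// The input HTML is a made-up example.
const metaRe = /<meta[\s]+[^>]*?name[\s]?=[\s"']+(.*?)[\s"']+content[\s]?=[\s"']+(.*?)["']+.*?>/;

const html = '<meta name="description" content="hello world">';
const match = html.match(metaRe);
// match[1] -> "description" (the name), match[2] -> "hello world" (the content)
```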
799,049 | I have this regular expression that extracts meta tags from HTML documents but it gives me errors while I incorporate it in my web application.
the expression is
```
@"<meta[\\s]+[^>]*?name[\\s]?=[\\s\"\']+(.*?)[\\s\"\']+content[\\s]?=[\\s\"\']+(.*?)[\"\']+.*?>" ;
```
is there anything wrong with it? | 2009/04/28 | [
"https://Stackoverflow.com/questions/799049",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/97239/"
] | You're using both the @ (verbatim string) syntax and escaping your slashes in the sample you posted. You need to either remove the @, or remove the extra slashes and escape your double quotes by doubling them up, then it should work.
(For what it's worth, if you're going to be working with regular expressions on an ongoing basis, I would suggest investing in a copy of [RegExBuddy](http://www.regexbuddy.com/).) | Jeromy is right. You're using an escaped string and a string literal. The regex itself is fine... So I guess that's where the problem is. |
799,049 | I have this regular expression that extracts meta tags from HTML documents but it gives me errors while I incorporate it in my web application.
the expression is
```
@"<meta[\\s]+[^>]*?name[\\s]?=[\\s\"\']+(.*?)[\\s\"\']+content[\\s]?=[\\s\"\']+(.*?)[\"\']+.*?>" ;
```
is there anything wrong with it? | 2009/04/28 | [
"https://Stackoverflow.com/questions/799049",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/97239/"
] | When using a string literal (@"") you don't need to double the back-slashes -- everything in the string is accepted as it is -- except for double quotes, which need to be doubled:
`@"<meta[\s]+[^>]*?name[\s]?=[\s""']+(.*?)[\s""']+content[\s]?=[\s""']+(.*?)[""']+.*?>"` | Jeromy is right. You're using an escaped string and a string literal. The regex itself is fine... So I guess that's where the problem is. |
18,647 | One of the comments on [this question](https://mechanics.stackexchange.com/questions/18623/mileage-varies-by-driving-at-different-speeds) reminded me of something I've been wondering for a while. Given that engines get the best efficiency at peak torque, why do most hybrid cars still use a mechanical transmission (which requires the engine to change speed with roadspeed, as in a conventional car), rather than electrical, with the engine running at a constant rate as a generator?
Is it simply a case of giving people what they are used to? bearing in mind that pure-electric cars obviously have electric transmission...
Trains have been using diesel-electric transmission since at least the 1950s, so it's not exactly a new concept... | 2015/07/21 | [
"https://mechanics.stackexchange.com/questions/18647",
"https://mechanics.stackexchange.com",
"https://mechanics.stackexchange.com/users/373/"
] | It depends on the type of hybrid car you are talking about. In one type of hybrid, there will be a gasoline engine and at least one electric engine capable of driving the wheels. In this case, the gasoline engine must still use a transmission because it cannot be revved too high without causing major damage or shortened life. One possible solution to this transmission issue would be to use a Continuously Variable Transmission (CVT), which can be more expensive to manufacture but keeps the engine at peak efficiency.
The other type of hybrid, where the wheels are driven entirely by electric motors (and has a gasoline generator to charge the battery) does not require a transmission because electric motors have a very wide range of acceptable RPM. Additionally, electric motors have a relatively flat torque curve and the max torque is available instantly. I should note that this type of "hybrid" car is generally just called an electric vehicle (EV) since the gasoline motor is used to charge the battery only and does not drive the wheels.
I was going to comment this, but don't have the reputation:
The diesel-electric transmission you referred to is more closely related to today's EVs in that a diesel engine charges batteries, which then in turn power electric motors at the wheels. Transmissions in trains are impractical for several reasons (such as need to power up to four axles and the number of gears that would be required to keep it at peak efficiency) and this eliminates the need for a true transmission. | I don't think the accepted answer answers this question acceptably. The reason for hybrid vehicles having a mechanical power transfer pathway is that mechanical power transfer has a higher efficiency than electric power transfer.
I have read somewhere (but cannot find the source right now) that the electrical power transfer pathway is approximately 70% efficient in the Toyota Prius. To understand this low efficiency, consider that it has a motor-generator operating as a generator, power electronic components, cables, and a motor-generator operating as a motor. Quite a few components. This efficiency is considerably lower than the efficiency of the mechanical power transfer pathway.
Actually, Toyota Prius has both mechanical and electrical power transfer pathways. It has a gearbox with one speed and constant ratio but three axles, two of which have electric motors. Changing how much power will be transferred through the electrical pathway changes the relative speeds of the input and output axles, and thus it functions as an electrical continuously variable transmission (eCVT).
The reason for the mechanical pathway is the higher efficiency. The reason for the electrical pathway is that it allows CVT operation with very low cost of components and higher reliability than traditional CVTs. And, also to provide regenerative braking and power boost to the internal combustion engine from a battery.
Have you seen water cooling in a conventional manual transmission? Probably not. However, the inverters in the Prius are water-cooled due to the high amount of waste heat produced. This illustrates that inverters are less efficient than mechanical transmissions. |
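The eCVT behaviour described above follows from the kinematics of a planetary gear set, which obeys the Willis relation (S + R) * w_carrier = S * w_sun + R * w_ring. In the Prius-style layout the engine drives the carrier, MG1 sits on the sun gear, and the ring drives the wheels. The sketch below is illustrative only; the tooth counts and speeds are assumptions, not factory figures:

```javascript
// Willis equation for a planetary gear set:
//   (S + R) * w_carrier = S * w_sun + R * w_ring
// Prius-style layout: engine -> carrier, MG1 -> sun gear, ring -> wheels.
// Tooth counts below are illustrative assumptions, not official figures.
const S = 30; // sun gear teeth (assumed)
const R = 78; // ring gear teeth (assumed)

// Solve for the MG1 (sun) speed that lets the engine stay at a fixed rpm
// while the wheel (ring) speed varies -- this decoupling is the "eCVT" trick.
function mg1Rpm(engineRpm, ringRpm) {
  return ((S + R) * engineRpm - R * ringRpm) / S;
}

// Engine held at 2000 rpm while the car speeds up:
const atStandstill = mg1Rpm(2000, 0);    // MG1 spins fast; all slip is electrical
const atCruise = mg1Rpm(2000, 2000);     // ring matches engine; the set turns as one
```

Holding `engineRpm` constant while `ringRpm` changes shows how the electrical pathway absorbs the speed difference, which is what makes the fixed-ratio gearbox behave like a continuously variable transmission.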
62,180,045 | I have a github [repo](https://github.com/NEOdinok/lcthw_ex29) representing my exercise folder. Upon running `make all` the compiler throws error messages saying (Ubuntu):
```
cc -g -O2 -Wall -Wextra -Isrc -DNDEBUG -fPIC -c -o src/libex29.o src/libex29.c
src/libex29.c: In function ‘fail_on_purpose’:
src/libex29.c:36:33: warning: unused parameter ‘msg’ [-Wunused-parameter]
int fail_on_purpose(const char* msg)
^~~
ar rcs build/libex29.a src/libex29.o
ranlib build/libex29.a
cc -shared -o build/libex29.so src/libex29.o
cc -g -Wall -Wextra -Isrc test/ex29_test.c -o test/ex29_test
/tmp/cc7dbqDt.o: In function `main':
/home/givi/Desktop/lcthw_dir/29/test/ex29_test.c:21: undefined reference to `dlopen'
/home/givi/Desktop/lcthw_dir/29/test/ex29_test.c:22: undefined reference to `dlerror'
/home/givi/Desktop/lcthw_dir/29/test/ex29_test.c:25: undefined reference to `dlsym'
/home/givi/Desktop/lcthw_dir/29/test/ex29_test.c:26: undefined reference to `dlerror'
/home/givi/Desktop/lcthw_dir/29/test/ex29_test.c:33: undefined reference to `dlclose'
collect2: error: ld returned 1 exit status
<builtin>: recipe for target 'test/ex29_test' failed
make: *** [test/ex29_test] Error 1
```
I spent quite a lot of time trying to figure out how to `fix undefined references`. dlfcn.h is included and everything seems to be OK. Am I missing something? Please help | 2020/06/03 | [
"https://Stackoverflow.com/questions/62180045",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12868956/"
You must add the following option when linking code that uses [dlopen(3)](https://man7.org/linux/man-pages/man3/dlopen.3.html):
```
-ldl
```
Here is a demo for Ubuntu 18:
```
$ cat /etc/os-release
NAME="Ubuntu"
VERSION="18.04.4 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.4 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic
$ cat dl.c
#include <dlfcn.h>
#include <stdlib.h>
#include <stdio.h>
int main(int argc, char **argv)
{
void *r;
r = dlopen("", RTLD_LAZY);
if (r != NULL)
printf("r is not null \n");
return 0;
}
$ gcc -o dl -Wall -pedantic -std=c11 dl.c -ldl
$ echo $?
0
```
Here is a very simple Makefile (note the position of `-ldl`) and the related make commands:
```
$ cat Makefile
CFLAGS=-g -O2 -Wall -Wextra -Isrc -DNDEBUG $(OPTFLAGS) $(OPTLIBS)
dl.o :dl.c
$(CC) -c $< $(CFLAGS)
dl :dl.o
$(CC) -o $@ $^ $(CFLAGS) -ldl
clean :
rm -f dl dl.o
$ make clean
rm -f dl dl.o
$ make dl
cc -c dl.c -g -O2 -Wall -Wextra -Isrc -DNDEBUG
dl.c: In function ‘main’:
dl.c:5:14: warning: unused parameter ‘argc’ [-Wunused-parameter]
int main(int argc, char **argv)
^~~~
dl.c:5:27: warning: unused parameter ‘argv’ [-Wunused-parameter]
int main(int argc, char **argv)
^~~~
cc -o dl dl.o -g -O2 -Wall -Wextra -Isrc -DNDEBUG -ldl
$ ./dl
r is not null
Usually the references you're missing can be resolved by adding the linker flag `-ldl`.
You didn't mention which operating system you're using.
In case you're on Windows you'll need this library: <https://github.com/dlfcn-win32/dlfcn-win32> |
I have an attribute of type CustomSetting\_\_c (which is a List type custom setting), and I want to iterate over it to fetch each record and print certain fields of that record. I am unable to iterate over it, and the '.length' property is giving me this error:
>
> [Cannot read property 'length' of undefined]
>
>
>
Please find related code:
>
> Lightning component attribute:
>
>
>
```
<aura:attribute name="customSettingList" type="CustomSetting__c[]" />
```
On click of certain button, I want to go to helper of this component and execute this code:
>
> Helper.js:
>
>
>
```
//some code before this
component.set("v.customSettingList", response.getReturnValue());
//some code after this statement
var customSettingList=[];
customSettingList=component.get("v.customSettingList");
for(var i=0;i<customSettingList.length;i++){
console.log('customSettingList type: '+ customSettingList[i].Type__c);
}
```
where Type\_\_c is a custom field of this custom setting.
What is the reason for this? How can I access the field values of individual records from list?
Note: This is a List type custom setting, if that has anything to do with this. (Can we not access List custom settings in JS?) | 2016/12/12 | [
"https://salesforce.stackexchange.com/questions/152186",
"https://salesforce.stackexchange.com",
"https://salesforce.stackexchange.com/users/31234/"
] | The issue you are facing here is the fact that your customSettingList is undefined.
This means you have not assigned any values to it.
This being said I'm missing the part where you are actually fetching the custom settings list.
Simply declaring this on your component is not sufficient:
```
<aura:attribute name="customSettingList" type="CustomSetting__c[]" />
```
The above code is simply telling lightning/aura that your component has an attribute of type CustomSetting\_\_c[]. By default this attribute has no values assigned, nor is lightning capable of doing this for you.
You will need to access the server yourself in order to get the list of custom settings and assign it to your attribute. You can do this by adding an Apex controller to your component.
More details can be found here:
<https://developer.salesforce.com/docs/atlas.en-us.lightning.meta/lightning/apex_intro.htm> | I think I got it. It was because the part where I set the array using set() was in callback and the code where I was printing using component.get() was called before this callback, though it was after it in the flow. Since it was callback, the value was not set till the time I was trying to access it |
32,657,517 | When I run this:
```
<html>
<head>
<script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/2.1.3/jquery.js">
</script>
<script type="text/javascript">
$(document).ready(function(){
$("#btn").click(function(){
try {
$("#div1").load("demoddd.txt"); //there is no demoddd.txt
}
catch (err)
{
alert("Error: " + err); //this never runs
}
finally {
alert("Finally");
}
});
});
</script></head>
<body>
<button id="btn">Load file</button>
<div id="div1"></div>
</body>
</html>
```
I get "Finally" but no error. In the debug console, I see the 404. Can I trap 404 errors when using the load() function? | 2015/09/18 | [
"https://Stackoverflow.com/questions/32657517",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1649677/"
] | Use the `complete` function as shown in the [**documentation**](http://api.jquery.com/load/):
```
$( "#success" ).load( "/not-here.php", function( response, status, xhr ) {
if ( status == "error" ) {
var msg = "Sorry but there was an error: ";
$( "#error" ).html( msg + xhr.status + " " + xhr.statusText );
}
});
``` | You need to check the XMLHttpRequest status; you can't catch a 404 with that try/catch, because the request completes asynchronously, after the try block has already finished.
Use this:
```
$("#div1").load("/demoddd.txt", function(responseText, statusText, xhr){
if(statusText == "success")
alert("Successfully loaded!");
if(statusText == "error")
alert("An error occurred: " + xhr.status + " - " + xhr.statusText);
});
```
The first thing I would try is to set the full URL, and not a relative one. See if that works first. |
30,275,753 | I am trying to do something like this
```
var teller = 1;
if (teller % 2 === 0) {
"do this"
} else {
"do something else"
}
teller++;
```
The problem is that it always goes into the else branch.
Someone knows why? | 2015/05/16 | [
"https://Stackoverflow.com/questions/30275753",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4695733/"
] | because `1 % 2 === 0` returns `false`
you might want to put it inside a loop
```
var teller = 1;
while(teller < 100){ //100 is the range, just a sample
if (teller % 2 === 0) {
"do this"
} else {
"do something else"
}
teller++;
}
``` | Stepping through your code:
```
var teller = 1; //teller is now 1
if (teller % 2 === 0) { //1 % 2 === 1, so this statement is skipped
"do this"
} else { //since the if statement was skipped, this gets run
"do something else"
}
teller++; //this has no effect on the above
```
If you want to put this in a loop, see @DryrandzFamador's answer. |
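Putting the two answers together, here is a minimal runnable version of the corrected loop; the range 1..5 is an arbitrary choice for illustration:

```javascript
// The parity test from the question, run inside a loop as suggested above.
const results = [];
for (let teller = 1; teller <= 5; teller++) {
  if (teller % 2 === 0) {
    results.push("do this");           // even values of teller
  } else {
    results.push("do something else"); // odd values of teller
  }
}
// teller starts at 1 (odd), so the very first iteration takes the else
// branch, which is exactly the behaviour observed in the question.
```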
A commercial came on the radio last night while I was driving home that was a spoof on the old [news reels of the 30s and 40s](https://en.wikipedia.org/wiki/Newsreel). And, being radio, the spoof was entirely centered on the iconic 'voice' of those old news reels...quick talking with an inflection unique enough that when you hear it you immediately think "ah, news reels!"
That got me wondering what the history of that voice is. Did news reels typically sound like that? Or is that a stereotype that, over time, has become emblematic of those reels even if not entirely accurate? If news reels did sound like that, was that a general style adopted by narrators, or was it something that came from one prominent/prolific narrator in particular?
If you look through old newsreels on YouTube, they definitely do *not* all share that iconic narration voice, so I'm assuming this was a fabrication at some point in time.
(PS, I wasn't sure which site to post this on...perhaps this would be better asked on Movies/TV?)
UPDATE:
Trying to find some specific examples. The one I heard on the radio was actually a Geico commercial. But can't find that.
The Legend of Korra uses this stereotypical news reel voice as a recap for each episode. You can see an example here: <https://www.youtube.com/watch?v=bigqWlscHYc> It's not quite as over-the-top as you find in more satirical uses but seems to be fairly consistent with the style elsewhere.
If I had to describe the style, I'd say it's:
* fast talking
* un-natural inflection
* 'showmanship'
* a trace of an accent...maybe New-Englandish? | 2015/11/09 | [
"https://history.stackexchange.com/questions/26288",
"https://history.stackexchange.com",
"https://history.stackexchange.com/users/10250/"
] | It wasn't just the newsreels. The ultra-fast talking high-pitched (and almost rhythmic) voice was actually common in media of that era. For example, here's the [final scene from Casablanca](https://www.youtube.com/watch?v=5kiNJcDG4E0) in 1942. By modern standards, it sounds like a lot of bursts of rapid-fire dialog, interspersed by pauses for you to mentally catch your breath and process what you just heard. Dialog bits from [Dragnet](https://www.youtube.com/watch?v=sHmw2RgznSo&t=4m50s) are also famous for this, particularly when interrogating suspects.
Another thing one may notice is that women are typically portrayed speaking much slower (as are the aforementioned Dragnet interrogation subjects). This is a big clue. Dialog speed was used as a marker for dispassionate intelligence. People talking slower and lower were indicating that they were being more emotional and/or not thinking clearly.
A great illustration of this is this other [classic Casablanca scene](https://www.youtube.com/watch?v=1_a57ZNlU6o). Look away from the screen, and pay attention to how fast and low the actors (particularly Ingrid Bergman) speak depending on who they are speaking to and how emotional they are about what they are saying (or how emotional they are *trying* to be). For instance, whenever anyone is talking about the sentimental song, their voice gets lower and slower. In contrast, when her husband and the Chief show up, both Bergman and Bogart start talking much quicker and higher.
To the best of my knowledge, the only person who has looked at this effect from a historical perspective is Maria DiBattista in her book [Fast Talking Dames](http://rads.stackoverflow.com/amzn/click/0300099037). "Fast Talking" in this context is explained to be an indicator of intelligence and power.
As for the unusual accent you are hearing, CGCambell in the comments pointed out that this would have been the ["Transatlantic" accent](https://en.wikipedia.org/wiki/Mid-Atlantic_accent). It was common among the upper class in New York around the turn of the 20th century. Since those were the people who patronized the theater, it was common there as well. When the first media companies started up in New York, they tended to pick it up, as it was a much more "prestigious" accent than the other locally-available alternatives. This was the accent of the late George Plimpton and William F. Buckley Jr.
It started to wane after the movie companies moved out to California and much of television production followed. Today most of the accents you hear in American media are a kind of homogenized [American Midlands](https://en.wikipedia.org/wiki/Midland_American_English) (which many Americans consider "no accent"), with a smattering of weak [AAVE](https://en.wikipedia.org/wiki/African_American_Vernacular_English) used to indicate someone from a rougher background. | Please see the [following list, also on Wikipedia](https://en.wikipedia.org/wiki/Wikipedia:List_of_online_video_archives), along with [A Bijou Flashback: The History of Movie Newsreels](http://www.moviefanfare.com/the-history-of-movie-newsreels/).
These newsreels were produced by each of the major film corporations between the late 20's to the early 60's. Each Corporation used a different narrator. See my quoted example below, as Movietone and Universal/MGM are the most widely known. Most of them served as the current news for moviegoers, as these reels would run before the movie started, much like we get previews today. Most of them are known and archived because this was a major method of news that IMO reached it's prominence during WWII, as moviegoers would attend movies to check on their loved ones on the front. These reels also provided pictures from the front, which was quite a deal back then.
Notice the ad in the lower right of [this movie trailer](https://en.wikipedia.org/wiki/Movietone_News#/media/File:JazzSingerAndFox.jpg), indicating that the theater would show the newsreel before the movie for the price of your ticket.
See [Movietone News](https://en.wikipedia.org/wiki/Movietone_News):
>
> [snip]
>
>
> Fox's first use of recording a news event was on May 20, 1927: Charles
> Lindbergh's take-off from Roosevelt Field for his historic solo flight
> across the Atlantic Ocean was filmed with sound and shown in a New
> York theater that same night, inspiring Fox to create Movietone News.
> A regular narrator of the newsreels was broadcaster/journalist Lowell
> Thomas.
>
>
> After Fox Films merged with 20th Century Pictures in 1935 to form 20th
> Century-Fox, the name of Fox Movietone News was shortened to Movietone
> News.
>
>
> In Australia, Movietone and Cinesound were competitors for newsreel
> coverage, but have now combined under the Movietone News name.
>
>
>
---
[Lowell Thomas](https://en.wikipedia.org/wiki/Lowell_Thomas), the voice of Movietone News, made himself and fellow journalist [TE Lawrence](https://en.wikipedia.org/wiki/T._E._Lawrence) famous when he visited the [Western Front](https://en.wikipedia.org/wiki/Western_Front_(World_War_I)) during WWI. TE Lawrence is best known for the book and its later adaptation into the movie *Lawrence of Arabia*.
[Ed Herlihy](https://en.wikipedia.org/wiki/Ed_Herlihy), the voice of Universal/MGM. He narrated most of the reels dealing with WWII. These are some of my favorites as he perfectly puts emotion into the attack on Pearl Harbor, and later the death of President Roosevelt. Later in life he served as the voiceover in commercials for Kraft Foods.
---
Update as per Comments
----------------------
There is no iconic voice nowadays. The audio for a radio commercial/TV spot is made by standing in front of a microphone and recording a voice, and then adding the voice track to the rest of the audio. The inflections etc. that you hear come from someone's actual voice, and with the advent of television, the announcer you heard probably works for a major network, unless the commercial you heard was local only to your area, i.e. not a national campaign.
The example you gave points to [J.K. Simmons](https://en.wikipedia.org/wiki/J._K._Simmons), as Tenzin, who I know as the voice of [The University of Farmers - National Ad Campaign](http://www.farmers.com/pdf/UofF_brochure.pdf). His degree in music and a stint on Broadway are most likely where the roots of that voice inflection come from. As I said earlier, each example you give me will end up being attributed to a person, most likely with a background in stage acting, acting, or music. Note that most people who do this for a living have had years of practice. As such, the only way to answer your question is to tell you there are no origins, just highly skilled people.
On that same note, I present Johnny Bravo, a.k.a. [Jeff Bennett](http://disney.wikia.com/wiki/Jeff_Bennett):
 |
A commercial came on the radio last night while I was driving home that was a spoof on the old [news reels of the 30s and 40s](https://en.wikipedia.org/wiki/Newsreel). And, being radio, the spoof was entirely centered on the iconic 'voice' of those old news reels...quick talking with an inflection unique enough that when you hear it you immediately think "ah, news reels!"
That got me wondering what the history of that voice is. Did news reels typically sound like that? Or is that a stereotype that, over time, has become emblematic of those reels even if not entirely accurate? If news reels did sound like that, was that a general style adopted by narrators, or was it something that came from one prominent/prolific narrator in particular?
If you look through old newsreels on YouTube, they definitely do *not* all share that iconic narration voice, so I'm assuming this was a fabrication at some point in time.
(PS, I wasn't sure which site to post this on...perhaps this would be better asked on Movies/TV?)
UPDATE:
Trying to find some specific examples. The one I heard on the radio was actually a Geico commercial. But can't find that.
The Legend of Korra uses this stereotypical news reel voice as a recap for each episode. You can see an example here: <https://www.youtube.com/watch?v=bigqWlscHYc> It's not quite as over-the-top as you find in more satirical uses but seems to be fairly consistent with the style elsewhere.
If I had to describe the style, I'd say it's:
* fast talking
* un-natural inflection
* 'showmanship'
* a trace of an accent...maybe New-Englandish? | 2015/11/09 | [
"https://history.stackexchange.com/questions/26288",
"https://history.stackexchange.com",
"https://history.stackexchange.com/users/10250/"
] | Please see the [following list, also on Wikipedia](https://en.wikipedia.org/wiki/Wikipedia:List_of_online_video_archives), along with [A Bijou Flashback: The History of Movie Newsreels](http://www.moviefanfare.com/the-history-of-movie-newsreels/).
These newsreels were produced by each of the major film corporations from the late '20s to the early '60s. Each corporation used a different narrator. See my quoted example below, as Movietone and Universal/MGM are the most widely known. Most of them served as the current news for moviegoers, as these reels would run before the movie started, much like we get previews today. Most of them are known and archived because this was a major method of news that IMO reached its prominence during WWII, as moviegoers would attend movies to check on their loved ones on the front. These reels also provided pictures from the front, which was quite a deal back then.
Notice the ad in the lower right of [this movie trailer](https://en.wikipedia.org/wiki/Movietone_News#/media/File:JazzSingerAndFox.jpg), indicating that the theater would show the newsreel before the movie for the price of your ticket.
See [Movietone News](https://en.wikipedia.org/wiki/Movietone_News):
>
> [snip]
>
>
> Fox's first use of recording a news event was on May 20, 1927: Charles
> Lindbergh's take-off from Roosevelt Field for his historic solo flight
> across the Atlantic Ocean was filmed with sound and shown in a New
> York theater that same night, inspiring Fox to create Movietone News.
> A regular narrator of the newsreels was broadcaster/journalist Lowell
> Thomas.
>
>
> After Fox Films merged with 20th Century Pictures in 1935 to form 20th
> Century-Fox, the name of Fox Movietone News was shortened to Movietone
> News.
>
>
> In Australia, Movietone and Cinesound were competitors for newsreel
> coverage, but have now combined under the Movietone News name.
>
>
>
---
[Lowell Thomas](https://en.wikipedia.org/wiki/Lowell_Thomas), the voice of Movietone News, made himself and fellow journalist [TE Lawrence](https://en.wikipedia.org/wiki/T._E._Lawrence) famous when he visited the [Western Front](https://en.wikipedia.org/wiki/Western_Front_(World_War_I)) during WWI. TE Lawrence is best known for the book and its later adaptation into the movie *Lawrence of Arabia*.
[Ed Herlihy](https://en.wikipedia.org/wiki/Ed_Herlihy), the voice of Universal/MGM. He narrated most of the reels dealing with WWII. These are some of my favorites as he perfectly puts emotion into the attack on Pearl Harbor, and later the death of President Roosevelt. Later in life he served as the voiceover in commercials for Kraft Foods.
---
Update as per Comments
----------------------
There is no iconic voice nowadays. The audio for a radio commercial/TV spot is made by standing in front of a microphone and recording a voice, and then adding the voice track to the rest of the audio. The inflections etc. that you hear come from someone's actual voice, and with the advent of television, the announcer you heard probably works for a major network, unless the commercial you heard was local only to your area, i.e. not a national campaign.
The example you gave points to [J.K. Simmons](https://en.wikipedia.org/wiki/J._K._Simmons), as Tenzin, who I know as the voice of [The University of Farmers - National Ad Campaign](http://www.farmers.com/pdf/UofF_brochure.pdf). His degree in music and a stint on Broadway are most likely where the roots of that voice inflection come from. As I said earlier, each example you give me will end up being attributed to a person, most likely with a background in stage acting, acting, or music. Note that most people who do this for a living have had years of practice. As such, the only way to answer your question is to tell you there are no origins, just highly skilled people.
On that same note, I present Johnny Bravo, a.k.a. [Jeff Bennett](http://disney.wikia.com/wiki/Jeff_Bennett):
Cam Cornelius. <https://m.youtube.com/watch?v=ZfKRKI3daoM> He has that old-time news voice. |
26,288 | A commercial came on the radio last night while I was driving home that was a spoof on the old [news reels of the 30s and 40s](https://en.wikipedia.org/wiki/Newsreel). And, being radio, the spoof was entirely centered on the iconic 'voice' of those old news reels...quick talking with an inflection unique enough that when you hear it you immediately think "ah, news reels!"
That got me wondering what the history of that voice is. Did news reels typically sound like that? Or is that a stereotype that, over time, has become emblematic of those reels even if not entirely accurate? If news reels did sound like that, was that a general style adopted by narrators, or was it something that came from one prominent/prolific narrator in particular?
If you look through old newsreels on YouTube, they definitely do *not* all share that iconic narration voice, so I'm assuming this was a fabrication at some point in time.
(PS, I wasn't sure which site to post this on...perhaps this would be better asked on Movies/TV?)
UPDATE:
Trying to find some specific examples. The one I heard on the radio was actually a Geico commercial. But I can't find it.
The Legend of Korra uses this stereotypical news reel voice as a recap for each episode. You can see an example here: <https://www.youtube.com/watch?v=bigqWlscHYc> It's not quite as over-the-top as you find in more satirical uses but seems to be fairly consistent with the style elsewhere.
If I had to describe the style, I'd say it's:
* fast talking
* un-natural inflection
* 'showmanship'
* a trace of an accent...maybe New-Englandish? | 2015/11/09 | [
"https://history.stackexchange.com/questions/26288",
"https://history.stackexchange.com",
"https://history.stackexchange.com/users/10250/"
] | It wasn't just the newsreels. The ultra-fast talking high-pitched (and almost rhythmic) voice was actually common in media of that era. For example, here's the [final scene from Casablanca](https://www.youtube.com/watch?v=5kiNJcDG4E0) in 1942. By modern standards, it sounds like a lot of bursts of rapid-fire dialog, interspersed by pauses for you to mentally catch your breath and process what you just heard. Dialog bits from [Dragnet](https://www.youtube.com/watch?v=sHmw2RgznSo&t=4m50s) are also famous for this, particularly when interrogating suspects.
Another thing one may notice is that women are typically portrayed speaking much slower (as are the aforementioned Dragnet interrogation subjects). This is a big clue. Dialog speed was used as a marker for dispassionate intelligence. People talking slower and lower were indicating that they were being more emotional and/or not thinking clearly.
A great illustration of this is this other [classic Casablanca scene](https://www.youtube.com/watch?v=1_a57ZNlU6o). Look away from the screen, and pay attention to how fast and low the actors (particularly Ingrid Bergman) speak depending on who they are speaking to and how emotional they are about what they are saying (or how emotional they are *trying* to be). For instance, whenever anyone is talking about the sentimental song, their voice gets lower and slower. In contrast, when her husband and the Chief show up, both Bergman and Bogart start talking much quicker and higher.
To the best of my knowledge, the only person who has looked at this effect from a historical perspective is Maria DiBattista in her book [Fast Talking Dames](http://rads.stackoverflow.com/amzn/click/0300099037). "Fast Talking" in this context is explained to be an indicator of intelligence and power.
As for the unusual accent you are hearing, CGCambell in the comments pointed out that this would have been the ["Transatlantic" accent](https://en.wikipedia.org/wiki/Mid-Atlantic_accent). It was common among the upper class in New York around the turn of the 20th century. Since those were the people who patronized the theater, it was common there as well. When the first media companies started up in New York, they tended to pick it up, as it was a much more "prestigious" accent than the other locally-available alternatives. This was the accent of the late George Plimpton and William F. Buckley Jr.
It started to wane after the movie companies moved out to California and much of television production followed. Today most of the accents you hear in American media are a kind of homogenized [American Midlands](https://en.wikipedia.org/wiki/Midland_American_English) (which many Americans consider "no accent"), with a smattering of weak [AAVE](https://en.wikipedia.org/wiki/African_American_Vernacular_English) used to indicate someone from a rougher background. | Cam Cornelius. <https://m.youtube.com/watch?v=ZfKRKI3daoM> He has that old times news voice. |
26,288 | A commercial came on the radio last night while I was driving home that was a spoof on the old [news reels of the 30s and 40s](https://en.wikipedia.org/wiki/Newsreel). And, being radio, the spoof was entirely centered on the iconic 'voice' of those old news reels...quick talking with an inflection unique enough that when you hear it you immediately think "ah, news reels!"
That got me wondering what the history of that voice is. Did news reels typically sound like that? Or is that a stereotype that, over time, has become emblematic of those reels even if not entirely accurate? If news reels did sound like that, was that a general style adopted by narrators, or was it something that came from one prominent/prolific narrator in particular?
If you look through old newsreels on YouTube, they definitely do *not* all share that iconic narration voice, so I'm assuming this was a fabrication at some point in time.
(PS, I wasn't sure which site to post this on...perhaps this would be better asked on Movies/TV?)
UPDATE:
Trying to find some specific examples. The one I heard on the radio was actually a Geico commercial. But I can't find it.
The Legend of Korra uses this stereotypical news reel voice as a recap for each episode. You can see an example here: <https://www.youtube.com/watch?v=bigqWlscHYc> It's not quite as over-the-top as you find in more satirical uses but seems to be fairly consistent with the style elsewhere.
If I had to describe the style, I'd say it's:
* fast talking
* un-natural inflection
* 'showmanship'
* a trace of an accent...maybe New-Englandish? | 2015/11/09 | [
"https://history.stackexchange.com/questions/26288",
"https://history.stackexchange.com",
"https://history.stackexchange.com/users/10250/"
] | It wasn't just the newsreels. The ultra-fast talking high-pitched (and almost rhythmic) voice was actually common in media of that era. For example, here's the [final scene from Casablanca](https://www.youtube.com/watch?v=5kiNJcDG4E0) in 1942. By modern standards, it sounds like a lot of bursts of rapid-fire dialog, interspersed by pauses for you to mentally catch your breath and process what you just heard. Dialog bits from [Dragnet](https://www.youtube.com/watch?v=sHmw2RgznSo&t=4m50s) are also famous for this, particularly when interrogating suspects.
Another thing one may notice is that women are typically portrayed speaking much slower (as are the aforementioned Dragnet interrogation subjects). This is a big clue. Dialog speed was used as a marker for dispassionate intelligence. People talking slower and lower were indicating that they were being more emotional and/or not thinking clearly.
A great illustration of this is this other [classic Casablanca scene](https://www.youtube.com/watch?v=1_a57ZNlU6o). Look away from the screen, and pay attention to how fast and low the actors (particularly Ingrid Bergman) speak depending on who they are speaking to and how emotional they are about what they are saying (or how emotional they are *trying* to be). For instance, whenever anyone is talking about the sentimental song, their voice gets lower and slower. In contrast, when her husband and the Chief show up, both Bergman and Bogart start talking much quicker and higher.
To the best of my knowledge, the only person who has looked at this effect from a historical perspective is Maria DiBattista in her book [Fast Talking Dames](http://rads.stackoverflow.com/amzn/click/0300099037). "Fast Talking" in this context is explained to be an indicator of intelligence and power.
As for the unusual accent you are hearing, CGCambell in the comments pointed out that this would have been the ["Transatlantic" accent](https://en.wikipedia.org/wiki/Mid-Atlantic_accent). It was common among the upper class in New York around the turn of the 20th century. Since those were the people who patronized the theater, it was common there as well. When the first media companies started up in New York, they tended to pick it up, as it was a much more "prestigious" accent than the other locally-available alternatives. This was the accent of the late George Plimpton and William F. Buckley Jr.
It started to wane after the movie companies moved out to California and much of television production followed. Today most of the accents you hear in American media are a kind of homogenized [American Midlands](https://en.wikipedia.org/wiki/Midland_American_English) (which many Americans consider "no accent"), with a smattering of weak [AAVE](https://en.wikipedia.org/wiki/African_American_Vernacular_English) used to indicate someone from a rougher background. | I am Cam Cornelius and I can assure you that I did not originate this style, lol! The voice you are referring to is a style of the Mid-Atlantic accent...and yes, as a voice actor I do have opportunities to use this style for historical projects.
Here is a great article on the origins: [That Weirdo Announcer-Voice Accent: Where It Came From and Why It Went Away](https://www.theatlantic.com/national/archive/2015/06/that-weirdo-announcer-voice-accent-where-it-came-from-and-why-it-went-away/395141/) |
26,288 | A commercial came on the radio last night while I was driving home that was a spoof on the old [news reels of the 30s and 40s](https://en.wikipedia.org/wiki/Newsreel). And, being radio, the spoof was entirely centered on the iconic 'voice' of those old news reels...quick talking with an inflection unique enough that when you hear it you immediately think "ah, news reels!"
That got me wondering what the history of that voice is. Did news reels typically sound like that? Or is that a stereotype that, over time, has become emblematic of those reels even if not entirely accurate? If news reels did sound like that, was that a general style adopted by narrators, or was it something that came from one prominent/prolific narrator in particular?
If you look through old newsreels on YouTube, they definitely do *not* all share that iconic narration voice, so I'm assuming this was a fabrication at some point in time.
(PS, I wasn't sure which site to post this on...perhaps this would be better asked on Movies/TV?)
UPDATE:
Trying to find some specific examples. The one I heard on the radio was actually a Geico commercial. But I can't find it.
The Legend of Korra uses this stereotypical news reel voice as a recap for each episode. You can see an example here: <https://www.youtube.com/watch?v=bigqWlscHYc> It's not quite as over-the-top as you find in more satirical uses but seems to be fairly consistent with the style elsewhere.
If I had to describe the style, I'd say it's:
* fast talking
* un-natural inflection
* 'showmanship'
* a trace of an accent...maybe New-Englandish? | 2015/11/09 | [
"https://history.stackexchange.com/questions/26288",
"https://history.stackexchange.com",
"https://history.stackexchange.com/users/10250/"
] | I am Cam Cornelius and I can assure you that I did not originate this style, lol! The voice you are referring to is a style of the Mid-Atlantic accent...and yes, as a voice actor I do have opportunities to use this style for historical projects.
Here is a great article on the origins: [That Weirdo Announcer-Voice Accent: Where It Came From and Why It Went Away](https://www.theatlantic.com/national/archive/2015/06/that-weirdo-announcer-voice-accent-where-it-came-from-and-why-it-went-away/395141/) | Cam Cornelius. <https://m.youtube.com/watch?v=ZfKRKI3daoM> He has that old times news voice. |
7,414,303 | As the title says, the code itself is as follows:
```
internal static class ThumbnailPresentationLogic
{
public static string GetThumbnailUrl(List<Image> images)
{
if (images == null || images.FirstOrDefault() == null)
{
return ImageRetrievalConfiguration.MiniDefaultImageFullUrl;
}
Image latestImage = (from image in images
orderby image.CreatedDate descending
select image).First();
Uri fullUrl;
return
Uri.TryCreate(new Uri(ImageRetrievalConfiguration.GetConfig().ImageRepositoryName), latestImage.FileName,
out fullUrl)
? fullUrl.AbsoluteUri
: ImageRetrievalConfiguration.MiniDefaultImageFullUrl;
}
}
```
I don't want the unit test to go through any methods in the `ImageRetrievalConfiguration` class, so how can I mock `ImageRetrievalConfiguration` and pass it into the `ThumbnailPresentationLogic` class? | 2011/09/14 | [
"https://Stackoverflow.com/questions/7414303",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/877438/"
] | How about you split the method into two - one of which takes a "base URI" and "default Url" and one of which doesn't:
```
internal static class ThumbnailPresentationLogic
{
public static string GetThumbnailUrl(List<Image> images)
{
return GetThumbnailUrl(images,
new Uri(ImageRetrievalConfiguration.GetConfig().ImageRepositoryName),
ImageRetrievalConfiguration.MiniDefaultImageFullUrl);
}
public static string GetThumbnailUrl(List<Image> images, Uri baseUri,
string defaultImageFullUrl)
{
if (images == null || images.FirstOrDefault() == null)
{
return defaultImageFullUrl;
}
Image latestImage = (from image in images
orderby image.CreatedDate descending
select image).First();
Uri fullUrl;
return
Uri.TryCreate(baseUri, latestImage.FileName, out fullUrl)
? fullUrl.AbsoluteUri
: defaultImageFullUrl;
}
}
```
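For illustration, a unit test against the three-parameter overload could then look like the following hypothetical NUnit-style sketch (it assumes the test assembly can see the `internal` class, e.g. via `InternalsVisibleTo`, and the URLs are stand-ins):

```csharp
using System;
using System.Collections.Generic;
using NUnit.Framework;

[TestFixture]
public class ThumbnailPresentationLogicTests
{
    [Test]
    public void ReturnsDefaultUrl_WhenImageListIsEmpty()
    {
        // An empty list should fall through to the supplied default URL.
        string result = ThumbnailPresentationLogic.GetThumbnailUrl(
            new List<Image>(),
            new Uri("http://repository.example/"),
            "http://repository.example/default.png");

        Assert.AreEqual("http://repository.example/default.png", result);
    }
}
```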
Then you can test the logic in the "three-parameter" overload, but the public method doesn't really contain any logic. You won't get 100% coverage, but you'll be able to test the real *logic* involved. | You can't do this with Moq, because you would need to intercept the calls to the methods of this static class, and that is something all "normal" mocking frameworks can't achieve, because they work purely with type inheritance, automatic code generation, and the like.
Intercepting a call to a static method, however, needs other mechanisms.
Intercepting calls to .NET framework static classes can be done using [Moles](http://research.microsoft.com/en-us/projects/pex/). I am not sure if it works with your own static classes though.
[TypeMock Isolator](http://www.typemock.com/isolator-product-page-a) works with all static classes but it is not free.
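For contrast, the refactor that inheritance-based frameworks *can* handle is to put an instance seam in front of the static class and inject it; the `IImageRetrievalConfiguration` interface below is a hypothetical sketch mirroring the members used in the question:

```csharp
using Moq;

// Hypothetical seam: an instance interface in front of the static class.
public interface IImageRetrievalConfiguration
{
    string MiniDefaultImageFullUrl { get; }
    string ImageRepositoryName { get; }
}

public class Example
{
    public void Demo()
    {
        // Once GetThumbnailUrl accepts this dependency instead of calling
        // the static class directly, Moq can substitute it:
        var config = new Mock<IImageRetrievalConfiguration>();
        config.SetupGet(c => c.MiniDefaultImageFullUrl)
              .Returns("http://repository.example/default.png");

        // config.Object is then passed wherever the static class was used.
    }
}
```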
However, I really think you should reconsider your architecture instead. |
7,414,303 | As the title says, the code itself is as follows:
```
internal static class ThumbnailPresentationLogic
{
public static string GetThumbnailUrl(List<Image> images)
{
if (images == null || images.FirstOrDefault() == null)
{
return ImageRetrievalConfiguration.MiniDefaultImageFullUrl;
}
Image latestImage = (from image in images
orderby image.CreatedDate descending
select image).First();
Uri fullUrl;
return
Uri.TryCreate(new Uri(ImageRetrievalConfiguration.GetConfig().ImageRepositoryName), latestImage.FileName,
out fullUrl)
? fullUrl.AbsoluteUri
: ImageRetrievalConfiguration.MiniDefaultImageFullUrl;
}
}
```
I don't want the unit test to go through any methods in the `ImageRetrievalConfiguration` class, so how can I mock `ImageRetrievalConfiguration` and pass it into the `ThumbnailPresentationLogic` class? | 2011/09/14 | [
"https://Stackoverflow.com/questions/7414303",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/877438/"
] | How about you split the method into two - one of which takes a "base URI" and "default Url" and one of which doesn't:
```
internal static class ThumbnailPresentationLogic
{
public static string GetThumbnailUrl(List<Image> images)
{
return GetThumbnailUrl(images,
new Uri(ImageRetrievalConfiguration.GetConfig().ImageRepositoryName),
ImageRetrievalConfiguration.MiniDefaultImageFullUrl);
}
public static string GetThumbnailUrl(List<Image> images, Uri baseUri,
string defaultImageFullUrl)
{
if (images == null || images.FirstOrDefault() == null)
{
return defaultImageFullUrl;
}
Image latestImage = (from image in images
orderby image.CreatedDate descending
select image).First();
Uri fullUrl;
return
Uri.TryCreate(baseUri, latestImage.FileName, out fullUrl)
? fullUrl.AbsoluteUri
: defaultImageFullUrl;
}
}
```
Then you can test the logic in the "three-parameter" overload, but the public method doesn't really contain any logic. You won't get 100% coverage, but you'll be able to test the real *logic* involved. | I am not sure that is possible through Moq; I use Rhino Mocks. What I usually do in this situation is use Spring.NET and provide an alternative mock that I call in tests as opposed to the one in production. This works really well for me, especially with areas that use external web services, data sources, or the situation you have raised.
You then unit test **ImageRetrievalConfiguration** separately and ensure it works as expected. **MockImageRetrievalConfiguration** can return results based on how you wish it to react in your testing environment. This will maximize your test coverage with the flexibility of mocking.
```
internal static class SpringApplicationContext
{
private static IApplicationContext applicationContext = null;
static SpringApplicationContext()
{
applicationContext = ContextRegistry.GetContext();
}
public static IApplicationContext ApplicationContext
{
get { return applicationContext; }
}
}
public interface IImageRetrievalData
{
string ImageRepositoryName{get;set;}
}
public interface IImageRetrievalConfiguration
{
IImageRetrievalData GetConfig();
}
public class MockImageRetrievalConfiguration : IImageRetrievalConfiguration
{
public IImageRetrievalData GetConfig()
{
//mock implementation
}
}
public class ImageRetrievalConfiguration : IImageRetrievalConfiguration
{
public IImageRetrievalData GetConfig()
{
//Concrete implementation
}
}
//your code
internal static class ThumbnailPresentationLogic
{
public static string GetThumbnailUrl(List<Image> images)
{
if (images == null || images.FirstOrDefault() == null)
{
return ImageRetrievalConfiguration.MiniDefaultImageFullUrl;
}
Image latestImage = (from image in images orderby image.CreatedDate descending select image).First();
Uri fullUrl;
//Spring
IImageRetrievalConfiguration imageRetrievalConfig = (IImageRetrievalConfiguration) SpringApplicationContext.ApplicationContext["ImageRetrievalConfiguration"];
return Uri.TryCreate(new Uri(imageRetrievalConfig.GetConfig().ImageRepositoryName), latestImage.FileName, out fullUrl) ? fullUrl.AbsoluteUri : ImageRetrievalConfiguration.MiniDefaultImageFullUrl;
}
}
```
//This would be your testing configuration
```
<spring>
<context>
<resource uri="config://spring/objects" />
</context>
<objects xmlns="http://www.springframework.net">
<object name="ImageRetrievalConfiguration" type="Tests.MockImageRetrievalConfiguration, Tests" singleton="false" />
</objects>
</spring>
```
//This would be your production configuration
```
<spring>
<context>
<resource uri="config://spring/objects" />
</context>
<objects xmlns="http://www.springframework.net">
<object name="ImageRetrievalConfiguration" type="Web.ImageRetrievalConfiguration , Web" singleton="false" />
</objects>
</spring>
```
You can download the Spring.NET framework from <http://www.springframework.net/> |
7,414,303 | As the title says, the code itself is as follows:
```
internal static class ThumbnailPresentationLogic
{
public static string GetThumbnailUrl(List<Image> images)
{
if (images == null || images.FirstOrDefault() == null)
{
return ImageRetrievalConfiguration.MiniDefaultImageFullUrl;
}
Image latestImage = (from image in images
orderby image.CreatedDate descending
select image).First();
Uri fullUrl;
return
Uri.TryCreate(new Uri(ImageRetrievalConfiguration.GetConfig().ImageRepositoryName), latestImage.FileName,
out fullUrl)
? fullUrl.AbsoluteUri
: ImageRetrievalConfiguration.MiniDefaultImageFullUrl;
}
}
```
I don't want the unit test to go through any methods in the `ImageRetrievalConfiguration` class, so how can I mock `ImageRetrievalConfiguration` and pass it into the `ThumbnailPresentationLogic` class? | 2011/09/14 | [
"https://Stackoverflow.com/questions/7414303",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/877438/"
] | You can't do this with Moq, because you would need to intercept the calls to the methods of this static class, and that is something all "normal" mocking frameworks can't achieve, because they work purely with type inheritance, automatic code generation, and the like.
Intercepting a call to a static method, however, needs other mechanisms.
Intercepting calls to .NET framework static classes can be done using [Moles](http://research.microsoft.com/en-us/projects/pex/). I am not sure if it works with your own static classes though.
[TypeMock Isolator](http://www.typemock.com/isolator-product-page-a) works with all static classes but it is not free.
However, I really think you should reconsider your architecture instead. | I am not sure that is possible through Moq; I use Rhino Mocks. What I usually do in this situation is use Spring.NET and provide an alternative mock that I call in tests as opposed to the one in production. This works really well for me, especially with areas that use external web services, data sources, or the situation you have raised.
You then unit test **ImageRetrievalConfiguration** separately and ensure it works as expected. **MockImageRetrievalConfiguration** can return results based on how you wish it to react in your testing environment. This will maximize your test coverage with the flexibility of mocking.
```
internal static class SpringApplicationContext
{
private static IApplicationContext applicationContext = null;
static SpringApplicationContext()
{
applicationContext = ContextRegistry.GetContext();
}
public static IApplicationContext ApplicationContext
{
get { return applicationContext; }
}
}
public interface IImageRetrievalData
{
string ImageRepositoryName{get;set;}
}
public interface IImageRetrievalConfiguration
{
IImageRetrievalData GetConfig();
}
public class MockImageRetrievalConfiguration : IImageRetrievalConfiguration
{
public IImageRetrievalData GetConfig()
{
//mock implementation
}
}
public class ImageRetrievalConfiguration : IImageRetrievalConfiguration
{
public IImageRetrievalData GetConfig()
{
//Concrete implementation
}
}
//your code
internal static class ThumbnailPresentationLogic
{
public static string GetThumbnailUrl(List<Image> images)
{
if (images == null || images.FirstOrDefault() == null)
{
return ImageRetrievalConfiguration.MiniDefaultImageFullUrl;
}
Image latestImage = (from image in images orderby image.CreatedDate descending select image).First();
Uri fullUrl;
//Spring
IImageRetrievalConfiguration imageRetrievalConfig = (IImageRetrievalConfiguration) SpringApplicationContext.ApplicationContext["ImageRetrievalConfiguration"];
return Uri.TryCreate(new Uri(imageRetrievalConfig.GetConfig().ImageRepositoryName), latestImage.FileName, out fullUrl) ? fullUrl.AbsoluteUri : ImageRetrievalConfiguration.MiniDefaultImageFullUrl;
}
}
```
//This would be your testing configuration
```
<spring>
<context>
<resource uri="config://spring/objects" />
</context>
<objects xmlns="http://www.springframework.net">
<object name="ImageRetrievalConfiguration" type="Tests.MockImageRetrievalConfiguration, Tests" singleton="false" />
</objects>
</spring>
```
//This would be your production configuration
```
<spring>
<context>
<resource uri="config://spring/objects" />
</context>
<objects xmlns="http://www.springframework.net">
<object name="ImageRetrievalConfiguration" type="Web.ImageRetrievalConfiguration , Web" singleton="false" />
</objects>
</spring>
```
You can download the Spring.NET framework from <http://www.springframework.net/> |
61,567,981 | ```
MongoDB shell version v4.2.3
connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
2020-05-02T19:00:36.477-0500 E QUERY [js] Error: couldn't connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 :: caused by :: Connection refused :
connect@src/mongo/shell/mongo.js:341:17
@(connect):2:6
2020-05-02T19:00:36.483-0500 F - [main] exception: connect failed
2020-05-02T19:00:36.483-0500 E - [main] exiting with code 1
```
That is the error that I am getting after running `mongo`, and this is the response that I get after running `mongod`:
```
2020-05-02T19:00:34.303-0500 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
2020-05-02T19:00:34.309-0500 I CONTROL [initandlisten] MongoDB starting : pid=3964 port=27017 dbpath=/data/db 64-bit host=Mateos-MBP.attlocal.net
2020-05-02T19:00:34.309-0500 I CONTROL [initandlisten] db version v4.2.3
2020-05-02T19:00:34.309-0500 I CONTROL [initandlisten] git version: 6874650b362138df74be53d366bbefc321ea32d4
2020-05-02T19:00:34.309-0500 I CONTROL [initandlisten] allocator: system
2020-05-02T19:00:34.309-0500 I CONTROL [initandlisten] modules: none
2020-05-02T19:00:34.309-0500 I CONTROL [initandlisten] build environment:
2020-05-02T19:00:34.309-0500 I CONTROL [initandlisten] distarch: x86_64
2020-05-02T19:00:34.309-0500 I CONTROL [initandlisten] target_arch: x86_64
2020-05-02T19:00:34.309-0500 I CONTROL [initandlisten] options: {}
2020-05-02T19:00:34.310-0500 I STORAGE [initandlisten] exception in initAndListen: NonExistentPath: Data directory /data/db not found., terminating
2020-05-02T19:00:34.310-0500 I NETWORK [initandlisten] shutdown: going to close listening sockets...
2020-05-02T19:00:34.313-0500 I - [initandlisten] Stopping further Flow Control ticket acquisitions.
2020-05-02T19:00:34.313-0500 I CONTROL [initandlisten] now exiting
2020-05-02T19:00:34.313-0500 I CONTROL [initandlisten] shutting down with code:100
```
I have looked at multiple other questions and still can't seem to figure it out; any help would be appreciated. | 2020/05/03 | [
"https://Stackoverflow.com/questions/61567981",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12802757/"
] | Create a directory /data/db and give permission to the MongoDB user so that MongoDB can access it. To create the directory:
```
sudo mkdir -p /data/db
```
To change owner:
```
sudo chown -R $USER /data/db
``` | It's obvious that the MongoDB startup failed.
>
> 2020-05-02T19:00:34.310-0500 I STORAGE [initandlisten] exception in initAndListen: NonExistentPath: Data directory /data/db not found., terminating
>
>
>
That's why you can't connect to it: the server never started up. You can fix the MongoDB startup issue first, then try to log in through CMD to make sure it's OK. |
61,567,981 | ```
MongoDB shell version v4.2.3
connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
2020-05-02T19:00:36.477-0500 E QUERY [js] Error: couldn't connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 :: caused by :: Connection refused :
connect@src/mongo/shell/mongo.js:341:17
@(connect):2:6
2020-05-02T19:00:36.483-0500 F - [main] exception: connect failed
2020-05-02T19:00:36.483-0500 E - [main] exiting with code 1
```
That is the error that I am getting after running `mongo`, and this is the response that I get after running `mongod`:
```
2020-05-02T19:00:34.303-0500 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
2020-05-02T19:00:34.309-0500 I CONTROL [initandlisten] MongoDB starting : pid=3964 port=27017 dbpath=/data/db 64-bit host=Mateos-MBP.attlocal.net
2020-05-02T19:00:34.309-0500 I CONTROL [initandlisten] db version v4.2.3
2020-05-02T19:00:34.309-0500 I CONTROL [initandlisten] git version: 6874650b362138df74be53d366bbefc321ea32d4
2020-05-02T19:00:34.309-0500 I CONTROL [initandlisten] allocator: system
2020-05-02T19:00:34.309-0500 I CONTROL [initandlisten] modules: none
2020-05-02T19:00:34.309-0500 I CONTROL [initandlisten] build environment:
2020-05-02T19:00:34.309-0500 I CONTROL [initandlisten] distarch: x86_64
2020-05-02T19:00:34.309-0500 I CONTROL [initandlisten] target_arch: x86_64
2020-05-02T19:00:34.309-0500 I CONTROL [initandlisten] options: {}
2020-05-02T19:00:34.310-0500 I STORAGE [initandlisten] exception in initAndListen: NonExistentPath: Data directory /data/db not found., terminating
2020-05-02T19:00:34.310-0500 I NETWORK [initandlisten] shutdown: going to close listening sockets...
2020-05-02T19:00:34.313-0500 I - [initandlisten] Stopping further Flow Control ticket acquisitions.
2020-05-02T19:00:34.313-0500 I CONTROL [initandlisten] now exiting
2020-05-02T19:00:34.313-0500 I CONTROL [initandlisten] shutting down with code:100
```
I have looked at multiple other questions and still can't seem to figure it out; any help would be appreciated. | 2020/05/03 | [
"https://Stackoverflow.com/questions/61567981",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12802757/"
] | I followed these simple steps:
```
Open your Task Manager (if on Windows)
go to Services -> find MongoDB in the services list
left-click to highlight it, then down below click
-> Open Services
in the list provided, find MongoDB Server
left-click to highlight it
click the green play button to start the MongoDB server
wait for a while
run your command again and it will work
``` | It's obvious that the MongoDB startup failed.
>
> 2020-05-02T19:00:34.310-0500 I STORAGE [initandlisten] exception in initAndListen: NonExistentPath: Data directory /data/db not found., terminating
>
>
>
That's why you can't connect to it: the server never started up. You can fix the MongoDB startup issue first, then try to log in through CMD to make sure it's OK. |
61,567,981 | ```
MongoDB shell version v4.2.3
connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
2020-05-02T19:00:36.477-0500 E QUERY [js] Error: couldn't connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 :: caused by :: Connection refused :
connect@src/mongo/shell/mongo.js:341:17
@(connect):2:6
2020-05-02T19:00:36.483-0500 F - [main] exception: connect failed
2020-05-02T19:00:36.483-0500 E - [main] exiting with code 1
```
That is the error that I am getting after running `mongo`, and this is the response that I get after running `mongod`:
```
2020-05-02T19:00:34.303-0500 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
2020-05-02T19:00:34.309-0500 I CONTROL [initandlisten] MongoDB starting : pid=3964 port=27017 dbpath=/data/db 64-bit host=Mateos-MBP.attlocal.net
2020-05-02T19:00:34.309-0500 I CONTROL [initandlisten] db version v4.2.3
2020-05-02T19:00:34.309-0500 I CONTROL [initandlisten] git version: 6874650b362138df74be53d366bbefc321ea32d4
2020-05-02T19:00:34.309-0500 I CONTROL [initandlisten] allocator: system
2020-05-02T19:00:34.309-0500 I CONTROL [initandlisten] modules: none
2020-05-02T19:00:34.309-0500 I CONTROL [initandlisten] build environment:
2020-05-02T19:00:34.309-0500 I CONTROL [initandlisten] distarch: x86_64
2020-05-02T19:00:34.309-0500 I CONTROL [initandlisten] target_arch: x86_64
2020-05-02T19:00:34.309-0500 I CONTROL [initandlisten] options: {}
2020-05-02T19:00:34.310-0500 I STORAGE [initandlisten] exception in initAndListen: NonExistentPath: Data directory /data/db not found., terminating
2020-05-02T19:00:34.310-0500 I NETWORK [initandlisten] shutdown: going to close listening sockets...
2020-05-02T19:00:34.313-0500 I - [initandlisten] Stopping further Flow Control ticket acquisitions.
2020-05-02T19:00:34.313-0500 I CONTROL [initandlisten] now exiting
2020-05-02T19:00:34.313-0500 I CONTROL [initandlisten] shutting down with code:100
```
I have looked at multiple other questions and still can't seem to figure it out; any help would be appreciated. | 2020/05/03 | [
"https://Stackoverflow.com/questions/61567981",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12802757/"
] | Open Task Manager > Services > MongoDB, then right-click and start the server. | It's obvious that the MongoDB startup failed.
>
> 2020-05-02T19:00:34.310-0500 I STORAGE [initandlisten] exception in initAndListen: NonExistentPath: Data directory /data/db not found., terminating
>
>
>
That's why you can't connect to it: the server never started up. Try to fix the MongoDB startup issue first, then log in through the shell to make sure it's ok. |
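Before retrying the shell, it can help to confirm whether anything is listening on the port at all; a minimal Python sketch (standard library only; the default host/port here just mirror the log above):

```python
import socket

def mongod_is_listening(host="127.0.0.1", port=27017, timeout=1.0):
    """Return True if something accepts TCP connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # ConnectionRefusedError, timeout, unreachable host, ...
        return False

print(mongod_is_listening())  # False unless a local mongod is actually up
```

If this prints `False`, the shell's "Connection refused" is only a symptom; fixing the `mongod` startup error (here, the missing `/data/db` directory) comes first.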
61,567,981 | ```
MongoDB shell version v4.2.3
connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
2020-05-02T19:00:36.477-0500 E QUERY [js] Error: couldn't connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 :: caused by :: Connection refused :
connect@src/mongo/shell/mongo.js:341:17
@(connect):2:6
2020-05-02T19:00:36.483-0500 F - [main] exception: connect failed
2020-05-02T19:00:36.483-0500 E - [main] exiting with code 1
```
That is the error that I am getting after running `mongo`, and this is the response that I get after running `mongod`:
```
2020-05-02T19:00:34.303-0500 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
2020-05-02T19:00:34.309-0500 I CONTROL [initandlisten] MongoDB starting : pid=3964 port=27017 dbpath=/data/db 64-bit host=Mateos-MBP.attlocal.net
2020-05-02T19:00:34.309-0500 I CONTROL [initandlisten] db version v4.2.3
2020-05-02T19:00:34.309-0500 I CONTROL [initandlisten] git version: 6874650b362138df74be53d366bbefc321ea32d4
2020-05-02T19:00:34.309-0500 I CONTROL [initandlisten] allocator: system
2020-05-02T19:00:34.309-0500 I CONTROL [initandlisten] modules: none
2020-05-02T19:00:34.309-0500 I CONTROL [initandlisten] build environment:
2020-05-02T19:00:34.309-0500 I CONTROL [initandlisten] distarch: x86_64
2020-05-02T19:00:34.309-0500 I CONTROL [initandlisten] target_arch: x86_64
2020-05-02T19:00:34.309-0500 I CONTROL [initandlisten] options: {}
2020-05-02T19:00:34.310-0500 I STORAGE [initandlisten] exception in initAndListen: NonExistentPath: Data directory /data/db not found., terminating
2020-05-02T19:00:34.310-0500 I NETWORK [initandlisten] shutdown: going to close listening sockets...
2020-05-02T19:00:34.313-0500 I - [initandlisten] Stopping further Flow Control ticket acquisitions.
2020-05-02T19:00:34.313-0500 I CONTROL [initandlisten] now exiting
2020-05-02T19:00:34.313-0500 I CONTROL [initandlisten] shutting down with code:100
```
I have looked at multiple other questions and still can't seem to figure it out; any help would be appreciated. | 2020/05/03 | [
"https://Stackoverflow.com/questions/61567981",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12802757/"
] | Create a directory /data/db and give permission to the MongoDB user so that MongoDB can access it. To create the directory:
```
sudo mkdir -p /data/db
```
To change owner:
```
sudo chown -R $USER /data/db
``` | I followed these simple steps:
```
1. Open Task Manager (on Windows).
2. Go to Services and find MongoDB in the services list.
3. Left-click to highlight it, then click "Open Services" below.
4. In the list provided, find MongoDB Server.
5. Left-click to highlight it, then click the green play button to start the MongoDB server.
6. Wait a while, then run your command again; it will work.
``` |
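The `/data/db` fix from the first answer can also be scripted; a small Python sketch (the path default and the writability check are my additions; `mongod`'s default `dbpath` is `/data/db`, but `--dbpath` can point anywhere):

```python
import os

def ensure_dbpath(path="/data/db"):
    """Create mongod's data directory if missing and verify it is writable."""
    os.makedirs(path, exist_ok=True)   # equivalent of `sudo mkdir -p /data/db`
    if not os.access(path, os.W_OK):   # checks what `sudo chown -R $USER` fixes
        raise PermissionError(f"{path} exists but is not writable by this user")
    return path
```

Creating the real `/data/db` usually needs elevated rights (hence the `sudo` in the answer); alternatively, start `mongod --dbpath` with a directory you own.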
61,567,981 | ```
MongoDB shell version v4.2.3
connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
2020-05-02T19:00:36.477-0500 E QUERY [js] Error: couldn't connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 :: caused by :: Connection refused :
connect@src/mongo/shell/mongo.js:341:17
@(connect):2:6
2020-05-02T19:00:36.483-0500 F - [main] exception: connect failed
2020-05-02T19:00:36.483-0500 E - [main] exiting with code 1
```
That is the error that I am getting after running `mongo`, and this is the response that I get after running `mongod`:
```
2020-05-02T19:00:34.303-0500 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
2020-05-02T19:00:34.309-0500 I CONTROL [initandlisten] MongoDB starting : pid=3964 port=27017 dbpath=/data/db 64-bit host=Mateos-MBP.attlocal.net
2020-05-02T19:00:34.309-0500 I CONTROL [initandlisten] db version v4.2.3
2020-05-02T19:00:34.309-0500 I CONTROL [initandlisten] git version: 6874650b362138df74be53d366bbefc321ea32d4
2020-05-02T19:00:34.309-0500 I CONTROL [initandlisten] allocator: system
2020-05-02T19:00:34.309-0500 I CONTROL [initandlisten] modules: none
2020-05-02T19:00:34.309-0500 I CONTROL [initandlisten] build environment:
2020-05-02T19:00:34.309-0500 I CONTROL [initandlisten] distarch: x86_64
2020-05-02T19:00:34.309-0500 I CONTROL [initandlisten] target_arch: x86_64
2020-05-02T19:00:34.309-0500 I CONTROL [initandlisten] options: {}
2020-05-02T19:00:34.310-0500 I STORAGE [initandlisten] exception in initAndListen: NonExistentPath: Data directory /data/db not found., terminating
2020-05-02T19:00:34.310-0500 I NETWORK [initandlisten] shutdown: going to close listening sockets...
2020-05-02T19:00:34.313-0500 I - [initandlisten] Stopping further Flow Control ticket acquisitions.
2020-05-02T19:00:34.313-0500 I CONTROL [initandlisten] now exiting
2020-05-02T19:00:34.313-0500 I CONTROL [initandlisten] shutting down with code:100
```
I have looked at multiple other questions and still can't seem to figure it out; any help would be appreciated. | 2020/05/03 | [
"https://Stackoverflow.com/questions/61567981",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12802757/"
] | Create a directory /data/db and give permission to the MongoDB user so that MongoDB can access it. To create the directory:
```
sudo mkdir -p /data/db
```
To change owner:
```
sudo chown -R $USER /data/db
``` | Open Task Manager > Services > MongoDB, then right-click and start the server. |
61,567,981 | ```
MongoDB shell version v4.2.3
connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
2020-05-02T19:00:36.477-0500 E QUERY [js] Error: couldn't connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 :: caused by :: Connection refused :
connect@src/mongo/shell/mongo.js:341:17
@(connect):2:6
2020-05-02T19:00:36.483-0500 F - [main] exception: connect failed
2020-05-02T19:00:36.483-0500 E - [main] exiting with code 1
```
That is the error that I am getting after running `mongo`, and this is the response that I get after running `mongod`:
```
2020-05-02T19:00:34.303-0500 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
2020-05-02T19:00:34.309-0500 I CONTROL [initandlisten] MongoDB starting : pid=3964 port=27017 dbpath=/data/db 64-bit host=Mateos-MBP.attlocal.net
2020-05-02T19:00:34.309-0500 I CONTROL [initandlisten] db version v4.2.3
2020-05-02T19:00:34.309-0500 I CONTROL [initandlisten] git version: 6874650b362138df74be53d366bbefc321ea32d4
2020-05-02T19:00:34.309-0500 I CONTROL [initandlisten] allocator: system
2020-05-02T19:00:34.309-0500 I CONTROL [initandlisten] modules: none
2020-05-02T19:00:34.309-0500 I CONTROL [initandlisten] build environment:
2020-05-02T19:00:34.309-0500 I CONTROL [initandlisten] distarch: x86_64
2020-05-02T19:00:34.309-0500 I CONTROL [initandlisten] target_arch: x86_64
2020-05-02T19:00:34.309-0500 I CONTROL [initandlisten] options: {}
2020-05-02T19:00:34.310-0500 I STORAGE [initandlisten] exception in initAndListen: NonExistentPath: Data directory /data/db not found., terminating
2020-05-02T19:00:34.310-0500 I NETWORK [initandlisten] shutdown: going to close listening sockets...
2020-05-02T19:00:34.313-0500 I - [initandlisten] Stopping further Flow Control ticket acquisitions.
2020-05-02T19:00:34.313-0500 I CONTROL [initandlisten] now exiting
2020-05-02T19:00:34.313-0500 I CONTROL [initandlisten] shutting down with code:100
```
I have looked at multiple other questions and still can't seem to figure it out; any help would be appreciated. | 2020/05/03 | [
"https://Stackoverflow.com/questions/61567981",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12802757/"
] | I followed these simple steps:
```
1. Open Task Manager (on Windows).
2. Go to Services and find MongoDB in the services list.
3. Left-click to highlight it, then click "Open Services" below.
4. In the list provided, find MongoDB Server.
5. Left-click to highlight it, then click the green play button to start the MongoDB server.
6. Wait a while, then run your command again; it will work.
``` | Open Task Manager > Services > MongoDB, then right-click and start the server. |
4,429,853 | I have homework which is about CFGs, their simplification, and their normal forms. I have also seen some examples on the internet, but unfortunately, I could not solve the question below.
>
> All the binary numbers in which the $i$th character is equal to the
> character located in the $(i+2)$th position, and the length of
> these strings is at least $2$.
>
>
>
My problem is with the positions, and how to encode them in the grammar.
My idea was to have $S$ as the start symbol, and then have a production like this:
$$S \Rightarrow A \mid B$$
And then $A$ would generate all the strings that start with $0$, and $B$ all the strings that start with $1$.
But I will be really grateful for your help. | 2022/04/17 | [
"https://math.stackexchange.com/questions/4429853",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/1003105/"
] | The subsequence $\left((-1)^{2n}\frac{2n}{2n+1}\right)\_{n\in\Bbb N}$ of the sequence $$\left((-1)^n\frac n{n+1}\right)\_{n\in\Bbb N}\tag1$$ is the sequence $\left(\frac{2n}{2n+1}\right)\_{n\in\Bbb N}$ whose limit is $1$. Since the sequence $(1)$ has a subsequence which converges to a number different from $0$, it does not converge to $0$, and therefore your series diverges. | You can argue like this: If $(a\_n)\_{n}$ is not a null sequence, then $((-1)^n a\_n)\_{n}$ is not a null sequence either, and hence the series diverges. |
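Both answers apply the $n$th-term (divergence) test to the series $\sum\_{n\ge 1}(-1)^n\frac{n}{n+1}$; the even-indexed subsequence argument fits in one display:

```latex
a_{2n} \;=\; (-1)^{2n}\,\frac{2n}{2n+1} \;=\; \frac{2n}{2n+1} \;\longrightarrow\; 1 \;\neq\; 0
\quad (n\to\infty),
\qquad\text{hence } a_n \not\to 0 \text{ and } \sum_{n\ge 1} a_n \text{ diverges.}
```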
4,429,853 | I have homework which is about CFGs, their simplification, and their normal forms. I have also seen some examples on the internet, but unfortunately, I could not solve the question below.
>
> All the binary numbers in which the $i$th character is equal to the
> character located in the $(i+2)$th position, and the length of
> these strings is at least $2$.
>
>
>
My problem is with the positions, and how to encode them in the grammar.
My idea was to have $S$ as the start symbol, and then have a production like this:
$$S \Rightarrow A \mid B$$
And then $A$ would generate all the strings that start with $0$, and $B$ all the strings that start with $1$.
But I will be really grateful for your help. | 2022/04/17 | [
"https://math.stackexchange.com/questions/4429853",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/1003105/"
] | Remember that the precise statement of this test is "**If $\lim\_{n\to\infty} a\_n$ does not converge to $0$**, then $\sum\_{n=1}^\infty a\_n$ diverges". That's broader than "If $\lim\_{n\to\infty} a\_n$ converges to a nonzero value" because it includes the possibility that $\lim\_{n\to\infty} a\_n$ doesn't exist, as is the case here. | You can argue like this: If $(a\_n)\_{n}$ is not a null sequence, then $((-1)^n a\_n)\_{n}$ is not a null sequence either, and hence the series diverges. |
4,429,853 | I have homework which is about CFGs, their simplification, and their normal forms. I have also seen some examples on the internet, but unfortunately, I could not solve the question below.
>
> All the binary numbers in which the $i$th character is equal to the
> character located in the $(i+2)$th position, and the length of
> these strings is at least $2$.
>
>
>
My problem is with the positions, and how to encode them in the grammar.
My idea was to have $S$ as the start symbol, and then have a production like this:
$$S \Rightarrow A \mid B$$
And then $A$ would generate all the strings that start with $0$, and $B$ all the strings that start with $1$.
But I will be really grateful for your help. | 2022/04/17 | [
"https://math.stackexchange.com/questions/4429853",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/1003105/"
] | The subsequence $\left((-1)^{2n}\frac{2n}{2n+1}\right)\_{n\in\Bbb N}$ of the sequence $$\left((-1)^n\frac n{n+1}\right)\_{n\in\Bbb N}\tag1$$ is the sequence $\left(\frac{2n}{2n+1}\right)\_{n\in\Bbb N}$ whose limit is $1$. Since the sequence $(1)$ has a subsequence which converges to a number different from $0$, it does not converge to $0$, and therefore your series diverges. | Remember that the precise statement of this test is "**If $\lim\_{n\to\infty} a\_n$ does not converge to $0$**, then $\sum\_{n=1}^\infty a\_n$ diverges". That's broader than "If $\lim\_{n\to\infty} a\_n$ converges to a nonzero value" because it includes the possibility that $\lim\_{n\to\infty} a\_n$ doesn't exist, as is the case here. |
188,013 | By an algebraic number field, we mean a finite extension field of the field of rational numbers.
Let $k$ be an algebraic number field, we denote by $\mathcal{O}\_k$ the ring of algebraic integers in $k$.
Let $K$ be a finite extension field of an algebraic number field $k$.
Suppose for every ideal $I$ of $\mathcal{O}\_k$, $I\mathcal{O}\_K$ is principal.
Then $K$ is called a PIT (Principal Ideal Theorem) field over $k$.
Let $K$ be a PIT field over $k$.
We say $K$ is a minimal PIT field over $k$ if $L/k$ is not PIT
for every proper subextension $L/k$ of $K/k$.
(1) Let $k$ be an algebraic number field and $K/k$ be a finite extension.
Is $K/k$ a minimal PIT if and only if $K/k$ is the Hilbert class field?
(2) Let $K/k$ and $L/k$ be minimal PITs.
Are $K/k$ and $L/k$ isomorphic? | 2014/11/25 | [
"https://mathoverflow.net/questions/188013",
"https://mathoverflow.net",
"https://mathoverflow.net/users/37646/"
] | The answer to your first question is "no". In general, if $K/k$ is a cyclic unramified Galois extension of odd order, then the order of the capitulation kernel (the subgroup of the class group of $k$ that dies when base-changing to $K$), is $[K:k]\cdot [\mathcal{O}\_k^\times: N\_{K/k}\mathcal{O}\_K^\times]$. The second factor is the index in the integral units of $k$ of the subgroup generated by norms of units of $K$, and it can be non-trivial. See [Iwasawa, A Note on Capitulation Problem for Number Fields](http://projecteuclid.org/download/pdf_1/euclid.pja/1195513002) for a concrete example of a field $k$ whose class group capitulates in a proper subfield of the Hilbert class field, using the above observation. | The answer to the second question is also "no". Take $k= \mathbb{Q}(\sqrt{-5})$. Then the only non-trivial class capitulates in $H=k(i)$ and it also does in $K=k(\sqrt{-3})$, yet $H$ and $K$ are not isomorphic. |
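For reference, the capitulation-kernel order quoted in the first answer (valid for $K/k$ cyclic unramified of odd degree) as a display:

```latex
\#\,\ker\bigl(\operatorname{Cl}(k)\longrightarrow \operatorname{Cl}(K)\bigr)
\;=\; [K:k]\cdot\bigl[\mathcal{O}_k^{\times} : N_{K/k}\,\mathcal{O}_K^{\times}\bigr].
```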
4,241,980 | In Python there is `iter()` used like this:
```
>>> a=[1,2,4]
>>> b=iter(a)
>>> b.next()
1
>>> b.next()
2
>>> b.next()
4
>>> b.next()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
StopIteration
>>>
```
Does Ruby have the same feature?
I tried this but it seems there is an issue:
```
ruby-1.9.2-p0 > a=[1,2,3]
=> [1, 2, 3]
ruby-1.9.2-p0 > def b()
ruby-1.9.2-p0 ?> for i in a
ruby-1.9.2-p0 ?> yield i
ruby-1.9.2-p0 ?> end
ruby-1.9.2-p0 ?> end
=> nil
ruby-1.9.2-p0 > b
NameError: undefined local variable or method `a' for #<Object:0xb7878950>
```
Why didn't Ruby find the `a` variable? | 2010/11/22 | [
"https://Stackoverflow.com/questions/4241980",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/446929/"
] | Ruby has iterators also.
The basic use is:
```
>> iter = [0,1,2,3].each #=> #<Enumerator: [0, 1, 2, 3]:each>
>> iter.next #=> 0
>> iter.next #=> 1
>> iter.next #=> 2
>> iter.next #=> 3
>> iter.next
StopIteration: iteration reached an end
from (irb):6:in `next'
from (irb):6
from /Users/greg/.rvm/rubies/ruby-1.9.2-p0/bin/irb:16:in `<main>'
>>
```
You can use that in a method:
```
def iter(ary)
ary.each do |i|
yield i
end
end
iter([1,2,3]) { |i| puts i}
# >> 1
# >> 2
# >> 3
```
Your Ruby code is failing because `a` is not in scope; in other words, Ruby doesn't see `a` inside the `b` method. The typical way it would be defined is as I show it above. So, your code is close.
Also, note that we seldom write a for/loop in Ruby. There are reasons such as `for` loops leaving a local variable behind after running, and potential for running off the end of an array if the loop isn't defined correctly, such as if you are creating an index to access individual elements of an array. Instead we use the `.each` iterator to return each element in turn, making it impossible to go off the end, and not leaving a local variable behind. | Working with the code which you provided, and assuming that you want the values to be printed out:
```
a = [1, 2, 3]
def b(a)
a.each { |i| puts i }
end
b(a)
```
(There are much better ways, as [Mark Thomas](https://stackoverflow.com/users/182590/mark-thomas) pointed out) |
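A footnote on the Python side of the comparison: the `.next()` method in the question is Python 2; in Python 3 the same protocol goes through the `next()` built-in. A minimal sketch of draining an iterator that way:

```python
it = iter([1, 2, 4])
collected = []
try:
    while True:
        collected.append(next(it))  # raises StopIteration when exhausted
except StopIteration:
    pass
print(collected)  # [1, 2, 4]
```

`next(it, sentinel)` avoids the `try/except` when a default value is acceptable.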
4,241,980 | In Python there is `iter()` used like this:
```
>>> a=[1,2,4]
>>> b=iter(a)
>>> b.next()
1
>>> b.next()
2
>>> b.next()
4
>>> b.next()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
StopIteration
>>>
```
Does Ruby have the same feature?
I tried this but it seems there is an issue:
```
ruby-1.9.2-p0 > a=[1,2,3]
=> [1, 2, 3]
ruby-1.9.2-p0 > def b()
ruby-1.9.2-p0 ?> for i in a
ruby-1.9.2-p0 ?> yield i
ruby-1.9.2-p0 ?> end
ruby-1.9.2-p0 ?> end
=> nil
ruby-1.9.2-p0 > b
NameError: undefined local variable or method `a' for #<Object:0xb7878950>
```
Why didn't Ruby find the `a` variable? | 2010/11/22 | [
"https://Stackoverflow.com/questions/4241980",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/446929/"
] | Ruby has iterators also.
The basic use is:
```
>> iter = [0,1,2,3].each #=> #<Enumerator: [0, 1, 2, 3]:each>
>> iter.next #=> 0
>> iter.next #=> 1
>> iter.next #=> 2
>> iter.next #=> 3
>> iter.next
StopIteration: iteration reached an end
from (irb):6:in `next'
from (irb):6
from /Users/greg/.rvm/rubies/ruby-1.9.2-p0/bin/irb:16:in `<main>'
>>
```
You can use that in a method:
```
def iter(ary)
ary.each do |i|
yield i
end
end
iter([1,2,3]) { |i| puts i}
# >> 1
# >> 2
# >> 3
```
Your Ruby code is failing because `a` is not in scope; in other words, Ruby doesn't see `a` inside the `b` method. The typical way it would be defined is as I show it above. So, your code is close.
Also, note that we seldom write a for/loop in Ruby. There are reasons such as `for` loops leaving a local variable behind after running, and potential for running off the end of an array if the loop isn't defined correctly, such as if you are creating an index to access individual elements of an array. Instead we use the `.each` iterator to return each element in turn, making it impossible to go off the end, and not leaving a local variable behind. | ```
[1,2,4].each { |i| puts i }
``` |
22,661,767 | I am attempting to write C functions with these two prototypes:
```
int extract_little (char* str, int ofset, int n);
int extract_big(char* str, int ofset, int n);
```
Now the general idea is that I need to return an n-byte integer in both formats, starting from address str + ofset. P.S. `ofset` doesn't do anything yet; I plan on (trying) to shuffle the memory via an offset for the big-endian version once I figure out the little-endian one.
I've been trying to get it to output like this for little endian, based off of `i=0xf261a3bf;`:
0xbf 0xa3 0x61 0xf2
```
int main()
{
int i = 0xf261a3bf;
int ofset = 1; // This isn't actually doing anything yet
int z;
for (z = 0; z < sizeof(i); z++){
printf("%x\n",extract_little((char *)&i,ofset, sizeof(i)));
}
return 0;
}
int extract_little(char *str,int offs, int n) {
int x;
for (x = 0; x < n; x++){
return str[x];
}
}
```
I'm not sure what else to try. I figured out the hard way that even though I put it in a for loop, I still can't return more than one value from the return.
Thanks! | 2014/03/26 | [
"https://Stackoverflow.com/questions/22661767",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3464245/"
] | It is possible to recover the data:
```
In [41]: a = {'0': {'A': array([1,2,3]), 'B': array([4,5,6])}}
In [42]: np.savez('/tmp/model.npz', **a)
In [43]: a = np.load('/tmp/model.npz')
```
Notice that the dtype is 'object'.
```
In [44]: a['0']
Out[44]: array({'A': array([1, 2, 3]), 'B': array([4, 5, 6])}, dtype=object)
```
And there is only one item in the array. That item is a Python dict!
```
In [45]: a['0'].size
Out[45]: 1
```
You can retrieve the value using the `item()` method (NB: this is *not* the
`items()` method for dictionaries, nor anything intrinsic to the `NpzFile`
class, but is the [`numpy.ndarray.item()` method](http://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.ndarray.item.html)
that copies the value in the array to a standard [Python scalar](http://docs.scipy.org/doc/numpy-1.10.1/user/basics.types.html#array-scalars). In an array of `object` dtype, any value held in a cell of the array (even a dictionary) is a Python scalar:
```
In [46]: a['0'].item()
Out[46]: {'A': array([1, 2, 3]), 'B': array([4, 5, 6])}
In [47]: a['0'].item()['A']
Out[47]: array([1, 2, 3])
In [48]: a['0'].item()['B']
Out[48]: array([4, 5, 6])
```
---
To restore `a` as a dict of dicts:
```
In [84]: a = np.load('/tmp/model.npz')
In [85]: a = {key:a[key].item() for key in a}
In [86]: a['0']['A']
Out[86]: array([1, 2, 3])
``` | Based on this answer: [recover dict from 0-d numpy array](https://stackoverflow.com/questions/8361561/recover-dict-from-0-d-numpy-array)
After
```
a = {'key': 'val'}
scipy.savez('file.npz', a=a) # note the use of a keyword for ease later
```
you can use
```
get = scipy.load('file.npz')
a = get['a'][()] # this is crazy maybe, but true
print a['key']
```
It would also work without the use of a keyword argument, but I thought this was worth sharing too. |
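Both answers condense to one round-trip script. Two caveats worth hedging: on NumPy >= 1.16.3, loading `object` arrays additionally requires `allow_pickle=True` (the sessions above predate that default change), and `arr[()]` / `arr.item()` are two spellings of the same unwrap:

```python
import os
import tempfile
import numpy as np

nested = {'0': {'A': np.array([1, 2, 3]), 'B': np.array([4, 5, 6])}}
path = os.path.join(tempfile.mkdtemp(), 'model.npz')
np.savez(path, **nested)

with np.load(path, allow_pickle=True) as data:
    restored = {key: data[key].item() for key in data.files}

print(restored['0']['A'])  # [1 2 3]
```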
70,443,180 | I am getting an error in my Dart code; I tried using "?" but it still didn't work.
I am seeing this error message: "Non-nullable instance field '\_bmi' must be initialized".
```
import 'dart:math';
class CalculatorBrain {
final height;
final weight;
double _bmi;
CalculatorBrain({
this.height,
this.weight,
});
String calculateBMI() {
_bmi = weight / pow(height / 100, 2);
return _bmi.toStringAsFixed(1);
}
String getResult() {
if (_bmi >= 25) {
return 'overweight';
} else if (_bmi > 18.5) {
return 'Normal';
} else {
return 'underweight';
}
}
String interpretation() {
if (_bmi >= 25) {
return 'you have a higher than normal body weight. try to exercise more';
} else if (_bmi > 18.5) {
return 'you have a normal body weight';
} else {
return 'you have a normal body weight you can eat a little bit more';
}
}
}
```
How do I fix this? | 2021/12/22 | [
"https://Stackoverflow.com/questions/70443180",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/17712351/"
] | Oh, I found @Peter Cordes's comment and combined it with my initial answer:
<https://gcc.godbolt.org/z/bxzsfxPGx>
and `-fopt-info-vec-missed` doesn't tell me anything useful
```
void f(const unsigned char *input, const unsigned size, unsigned char *output) {
constexpr unsigned MAX_SIZE = 2000;
unsigned char odd[MAX_SIZE / 2];
unsigned char even[MAX_SIZE / 2];
for (unsigned i = 0, j = 0; size > i; i += 2, ++j) {
even[j] = input[i];
odd[j] = input[i + 1];
}
for (unsigned i = 0; size / 2 > i; ++i) {
output[i] = (even[i] << 4) | odd[i];
}
}
``` | It seems GCC doesn't like stuff like `i<size ; i += 2`. Instead, it liked `i<size/2 ; i++`. GCC and clang can't auto-vectorize loops whose trip-count can't be determined ahead of time. Perhaps GCC has a problem with this because you used `unsigned`, so `i+=2` could wrap back to `0` without ever hitting `size`, so `i<size` could be permanently false, i.e. **the compiler can't prove your loop isn't infinite because `size = UINT_MAX` is possible.** (Which disables some optimizations compilers like to do, although at least it's unsigned so we don't have to redo sign extension.)
Clang managed to vectorize anyway (poorly: <https://godbolt.org/z/b4G4jojn1>); possibly it realized that `evens[i]` would be UB if greater than the constant MAX\_SIZE, or else it just didn't care.
---
The temporary arrays seem unnecessary; I think you were only using them to try to give GCC multiple simpler problems to vectorize?
```
// __restrict is optional; it promises the compiler input and output won't overlap
// it still vectorizes without it, but does a check for overlap
void g(const unsigned char *__restrict input, const unsigned size, unsigned char *__restrict output)
{
for (unsigned i = 0 ; size/2 > i; i++) {
output[i] = (input[2*i] << 4) | input[2*i+1];
}
}
```
Without `__restrict`, on overlap it falls back to a scalar loop. In the case of `input = output` exactly, the vector version is still safe. I didn't test or reverse-engineer the overlap check to see whether it uses the vectorized version or not in that case. (It would be C++ UB to use it with `input=output` with `__restrict`, though.)
GCC11.2 `-O3 -march=haswell` auto-vectorizes this fairly reasonably ([Godbolt](https://gcc.godbolt.org/#z:OYLghAFBqd5QCxAYwPYBMCmBRdBLAF1QCcAaPECAMzwBtMA7AQwFtMQByARg9KtQYEAysib0QXACx8BBAKoBnTAAUAHpwAMvAFYTStJg1DIApACYAQuYukl9ZATwDKjdAGFUtAK4sGe1wAyeAyYAHI%2BAEaYxCBmAGykAA6oCoRODB7evnrJqY4CQSHhLFEx8baY9vkMQgRMxASZPn5cFVXptfUEhWGR0bEJCnUNTdmtQ109xaUDAJS2qF7EyOwc5gDMeFQA1BomGgCCAG6oeOjbVBBoDEPbXjd4wCHnyAj12wBUwYleBKTb11u91ST0w51SAC9MP9gY9ngC3sRPosCD8CLNtiYAOxWQ7bfEAgRDTCqRJI2Gg84AWQOAA0APpCACSAC1sJj1gARbZmDR8kzrXEHAl3B6UhHvDDoEwAVgsNIZzLZ2wA9DzZZyBUKRRT4a93pgjoxZfK6YzWey1WYNVr9sKCfwkRBdWDtngOdyNP9tB7dlrtpDMBz2Xh/e7rALuWZ/hHLNoMdjtSKCYbjXLtBrfd9fibQzLNYK7SLsZq8Q6SNtnWL4eGudtWtsfZG/YKA3gocG3WHMZZm9Ge1Y4wmcUXk/ipSaM/msww0bmbYWy/iS3bR/jHZWXeda57/YHVTzO6HW7GLHhh0mxyi53K89zmxBUwx59OBW439tJMO3NsJ7eF5emJYqWBwrocGyuFsq6HCqKobFgNAhNs9L0sQmBDMQeAOKqHwfGqBKwds6ACGAHAENsZKoCweBKNsBAIEGaAsIkdDRNsDACAAtKgRrEAYiQsUYdonGc2zAFcRLkVuEpIh8KFoRhWHkdmfyEjcUnVq6gYwppLyIp88noQQmHYdevyzKuI5LhcFZViCNa%2BhoHIWG2UIqmYR5hqeF5rsmZkEC%2B951hAKkmmYXyZm%2BH5fkBP6hXK4WhpYXAAb5YGgcBdocPMtCcDKvB%2BBwWikKgnDvr2lgBosyxBhsPCkAQmjZfMDFMFgMQQPMADWIDrJIAB0cRxGYACcI0ABzDZIkjjZIXDjfonCSAVTUlZwvAKCAXqNUV2WkHAsAwIgKBUSx9BkBQEnMaxMTIAYRj0sZ9xdXwdAENEm0QBEq0RME9QAJ6cPVTFsIIADyDC0IDu2kFgLCGMA4gw/gaEOHgRqbTDJKYMgvyrPVwTvblMO0HgETEADHhYEDvDGXgLA0/MVAGMACgAGp4JgADuYOJIwNMyIIIhiOwUiC/IShqKtujRvdxinvoZObZA8yoIk1SY5xQzSlypgVRYvLbJxYNUEwtycfDywIJGbwKFzlS0BtlQ49ULgMO4njNP47tTH0MStLkaQCKMLRJCkQcML7JT9OMztowInQjJ7YxtC7HTDN0wS9NH/u2BnId6BMDRRzMXDzAo1UrHoxmYPje3E/lpCFcVpUcKok2cXEkgAnLlZPQwXUYhA5WDjY2y4IQFZ1f8HjXedPbrGYsy001swtZgbX9J1pA9dN/XTYv6xdxoXBYuN6xcLLS28Az6wjf1MpxDKZin3fXDrBokh9dIze8K3G1bQaqvfaR0IBICYmdaI5BKAQJuiAYAXBL6vVoO9Ygn1vow1%2BswYg0NgZUVBgQCGUNVpwwRkjYqKM04Y1WtjXG70BaE0qKtUm5NKYYDrg1TCDNuB7WZkwVmHNua835jw8WwtxBi34IIRQKh1Aw10OsfQCMUAKxYcrbeasNacC1gQHWnI9aj0NsbdYRtLavBtmbe2tBaBG34OrTiwR%2BCcSNMgC2NElDoCdu0Zwj53YF1aIELO0x%2BjrCxGHPI6R/HhIjiXEJYS7BpwTvnZOocEnxxqBnWJ/t4nJKyKkzJQS/YSCxOXSuotOG10ZotDgjdf5rTbh3LuYlkDIHrF
wfqHlh4K3HvgIgSJp7bFnpA/pZh1jrGXkA3aa9SCtXapQbqEg%2BTVOWqQBmcQND7z6mfeIGhJojUfjKJuq1/62EATtLQ0yepP36mMrEfJ35YhlDKLgMpxqtGJusFaMMTnnOatUswXyW7rUmRc%2BYvFUjOEkEAA%3D%3D)); some missed optimizations but not as bad as with separate loops, and of course avoids touching new stack memory. The main inner loop looks like this:
```
# GCC11 -O3 -march=haswell
# before loop, YMM3 = _mm256_set1_epi16(0x00FF)
.L4: # do{
vpand ymm1, ymm3, YMMWORD PTR [rcx+32+rax*2] # why not reuse the load results for both odd/even? fortunately modern CPUs have good L1d bandwidth
vpand ymm0, ymm3, YMMWORD PTR [rcx+rax*2] # evens: load input[2*(i+0..31)] and AND away the high bytes for pack
vmovdqu ymm4, YMMWORD PTR [rcx+rax*2] # load 2 vectors of input data
vmovdqu ymm5, YMMWORD PTR [rcx+32+rax*2]
vpackuswb ymm0, ymm0, ymm1 # evens: pack evens down to single bytes.
vpsrlw ymm2, ymm5, 8 # odds: shift down to line up with evens
vpsrlw ymm1, ymm4, 8
vpermq ymm0, ymm0, 216 # evens: lane-crossing fixup
vpaddb ymm0, ymm0, ymm0 # evens <<= 1 byte shift (x86 SIMD lacks a vpsllb, even with AVX-512)
vpackuswb ymm1, ymm1, ymm2 # odds: pack
vpaddb ymm0, ymm0, ymm0 # evens <<= 1
vpermq ymm1, ymm1, 216 # odds: lane-crossing fixup
vpaddb ymm0, ymm0, ymm0 # evens <<= 1
vpaddb ymm0, ymm0, ymm0 # evens <<= 1
vpor ymm0, ymm0, ymm1 # (evens<<4) | odds
vmovdqu YMMWORD PTR [rdi+rax], ymm0 # store to output
add rax, 32 # advance output position by 32 bytes. (Input positions scale by 2)
cmp rdx, rax
jne .L4 # } while(i != size/2)
```
---
It would have been faster if GCC had chosen to mask with `0x000F` instead of `0x00FF` before packing, so the packed evens could be left-shifted with `vpsllw` instead of 4x `vpaddb` without spilling any non-zero bits into the next byte. Or just shift and AND again; that's the standard way to emulate the non-existent `vpsllb`.
Or even better, OR together high and low within each word *before* packing down to bytes.
```
# manually vectorized; what GCC could have done in theory
# if using intrinsics, this strategy is probably good.
vmovdqu ymm0, [mem]
vmovdqu ymm1, [mem+32]
vpsllw ymm2, ymm0, 12 # evens: line up with odds, and do the <<4
vpsllw ymm3, ymm1, 12
vpor ymm0, ymm0, ymm2 # odds |= (evens<<4) in the high byte of each word
vpor ymm1, ymm1, ymm3
vpsrlw ymm0, ymm0, 8 # shift merged to bottom of word
vpsrlw ymm1, ymm1, 8
vpackuswb ymm0, ymm0, ymm1 # and pack
vpermq ymm0, ymm0, 0xDB # same 216
vmovdqu [mem], ymm0
.. pointer increment / loop condition
```
Notice that we avoided an AND constant; both halves needed shifting anyway (odd to be in the right place for pack, even because of `<<4`). Shifting after packing would mean half as much data to shift, but would have needed masking after the shift so it's break except for back-end port pressure on ALU ports with shift units. (<https://agner.org/optimize/> ; <https://uops.info/>). But merging before packing saves shuffles, and that's a bigger throughput bottleneck on Intel CPUs.
---
If we can add instead of OR (because we know there aren't overlapping bits so it's equivalent), we could 2x `vpmaddubsw` (`_mm256_maddubs_epi16`) using the signed (second) operand as a `_mm256_set1_epi16(0x0110)` and the unsigned (first) input holding data from the array to do `input[2*i+1] + (input[2*i] * 16)` within each byte pair. Then AND and VPACKUSWB / VPERMQ from words down to byte elements and store. |
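As a sanity check for any of the vectorised variants above, here is a scalar Python reference for the transform (the function name is mine; it mirrors `output[i] = (input[2*i] << 4) | input[2*i+1]`, truncated to a byte like the C `unsigned char` store):

```python
def pack_pairs(data: bytes) -> bytes:
    """out[i] = ((data[2*i] << 4) | data[2*i + 1]) & 0xFF for each input pair."""
    out = bytearray(len(data) // 2)
    for i in range(len(out)):
        out[i] = ((data[2 * i] << 4) | data[2 * i + 1]) & 0xFF
    return bytes(out)

print(pack_pairs(bytes([0x1, 0x2, 0x3, 0x4])).hex())  # 1234
```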
36,815,928 | I was wondering why the result of the following code is 0 and not 3.
```
var fn = function(){
for (i = 0; i < 3; i++){
return function(){
console.log(i);
};
}
}();
fn();
``` | 2016/04/23 | [
"https://Stackoverflow.com/questions/36815928",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3654400/"
] | If you put `command = connection.CreateCommand();` inside your for loop, it will work. The problem is you are looping over the command parameters only, so it's trying to add more parameters to your existing command, but they're already in there. So you need to make a new command every loop instead. | You need to add Command parameters outside the loop or declare Command inside the loop.
In the first case you will need to update each parameter's value like this:
```
oleDbCommand1.Parameters["@UserID"].Value = Details.ID;
```
And execute the command once new values are set. |
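"Define the parameters once, set the values per iteration" is the same pattern in any parameterised DB API; a Python `sqlite3` sketch of it (table and values invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE features (user_id INTEGER, angle REAL)")

insert = "INSERT INTO features (user_id, angle) VALUES (?, ?)"
rows = [(1, 0.5), (1, 1.25), (2, 3.75)]
for row in rows:  # one statement shape, fresh values each iteration
    conn.execute(insert, row)
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM features").fetchone()[0]
print(count)  # 3
```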
36,815,928 | I was wondering why the result of the following code is 0 and not 3.
```
var fn = function(){
for (i = 0; i < 3; i++){
return function(){
console.log(i);
};
}
}();
fn();
``` | 2016/04/23 | [
"https://Stackoverflow.com/questions/36815928",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3654400/"
] | You should do this **properly**:
* define your parameters **once** outside the loop
* define the **values** of your parameters inside the loop for each iteration
* use `using(...) { ... }` blocks to get rid of the `try ... catch ... finally` (the `using` block will ensure proper and speedy disposal of your classes, when no longer needed)
* stop using a `try...catch` if you're not actually *handling* the exceptions - just rethrowing them (makes no sense)
Try this code:
```
public static void featuresentry()
{
string connectionString = HandVeinPattern.Properties.Settings.Default.HandVeinPatternConnectionString;
string insertQuery = "INSERT INTO FEATURES(UserID, Angle, ClassID, Octave, PointX, PointY, Response, Size) VALUES(@UserID, @Angle, @ClassID, @Octave, @PointX, @PointY, @Response, @Size)";
using (SqlConnection connection = new SqlConnection(connectionString))
using (SqlCommand command = new SqlCommand(insertQuery, connection))
{
// define your parameters ONCE outside the loop, and use EXPLICIT typing
command.Parameters.Add("@UserID", SqlDbType.Int);
command.Parameters.Add("@Angle", SqlDbType.Double);
command.Parameters.Add("@ClassID", SqlDbType.Double);
command.Parameters.Add("@Octave", SqlDbType.Double);
command.Parameters.Add("@PointX", SqlDbType.Double);
command.Parameters.Add("@PointY", SqlDbType.Double);
command.Parameters.Add("@Response", SqlDbType.Double);
command.Parameters.Add("@Size", SqlDbType.Double);
connection.Open();
for (int i = 0; i < Details.modelKeyPoints.Size; i++)
{
// now just SET the values
command.Parameters["@UserID"].Value = Details.ID;
command.Parameters["@Angle"].Value = Convert.ToDouble(Details.modelKeyPoints[i].Angle);
command.Parameters["@ClassID"].Value = Convert.ToDouble(Details.modelKeyPoints[i].ClassId);
command.Parameters["@Octave"].Value = Convert.ToDouble(Details.modelKeyPoints[i].Octave);
command.Parameters["@PointX"].Value = Convert.ToDouble(Details.modelKeyPoints[i].Point.X);
command.Parameters["@PointY"].Value = Convert.ToDouble(Details.modelKeyPoints[i].Point.Y);
command.Parameters["@Response"].Value = Convert.ToDouble(Details.modelKeyPoints[i].Response);
command.Parameters["@Size"].Value = Convert.ToDouble(Details.modelKeyPoints[i].Size);
command.ExecuteNonQuery();
}
}
}
``` | You need to add Command parameters outside the loop or declare Command inside the loop.
In the first case you will need to update each parameter's value like this:
```
oleDbCommand1.Parameters["@UserID"].Value = Details.ID;
```
And execute the command once new values are set. |
36,815,928 | I was wondering why the result of the following code is 0 and not 3.
```
var fn = function(){
for (i = 0; i < 3; i++){
return function(){
console.log(i);
};
}
}();
fn();
``` | 2016/04/23 | [
"https://Stackoverflow.com/questions/36815928",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3654400/"
] | In order to obtain the maximum performance, you may consider a [BulkInsert](https://blogs.msdn.microsoft.com/nikhilsi/2008/06/11/bulk-insert-into-sql-from-c-app/). This ensures that your inserts are done as fast as possible, as any issued query has some overhead (a large query will generally execute faster than many small ones). It should look something like the following:
1) define AsDataTable extension method from [here](https://blogs.msdn.microsoft.com/nikhilsi/2008/06/11/bulk-insert-into-sql-from-c-app/):
```
public static DataTable AsDataTable<T>(this IEnumerable<T> data)
{
PropertyDescriptorCollection properties = TypeDescriptor.GetProperties(typeof(T));
var table = new DataTable();
foreach (PropertyDescriptor prop in properties)
table.Columns.Add(prop.Name, Nullable.GetUnderlyingType(prop.PropertyType) ?? prop.PropertyType);
foreach (T item in data)
{
DataRow row = table.NewRow();
foreach (PropertyDescriptor prop in properties)
row[prop.Name] = prop.GetValue(item) ?? DBNull.Value;
table.Rows.Add(row);
}
return table;
}
```
2) execute the actual BulkInsert like this (*not tested*):
```
using (SqlConnection connection = new SqlConnection(connectionString))
{
connection.Open();
SqlTransaction transaction = connection.BeginTransaction();
using (var bulkCopy = new SqlBulkCopy(connection, SqlBulkCopyOptions.Default, transaction))
{
bulkCopy.BatchSize = 100;
bulkCopy.DestinationTableName = "dbo.FEATURES";
try
{
// define mappings for columns, as property names / generated data table column names
// is different from destination table column name
bulkCopy.ColumnMappings.Add("ID","UserID");
bulkCopy.ColumnMappings.Add("Angle","Angle");
// the other mappings come here
bulkCopy.WriteToServer(Details.modelKeyPoints.AsDataTable());
}
catch (Exception)
{
transaction.Rollback();
connection.Close();
}
}
transaction.Commit();
}
```
Of course, if [convention over configuration](https://en.wikipedia.org/wiki/Convention_over_configuration) would be used (object properties names would match exactly destination table column names), no mapping would be required. | You need to add Command parameters outside the loop or declare Command inside the loop.
In the first case you will need to update each parameter's value like this:
```
oleDbCommand1.Parameters["@UserID"].Value = Details.ID;
```
And execute the command once new values are set. |
36,815,928 | I was wondering why the result of the following code is 0 and not 3.
```
var fn = function(){
for (i = 0; i < 3; i++){
return function(){
console.log(i);
};
}
}();
fn();
``` | 2016/04/23 | [
"https://Stackoverflow.com/questions/36815928",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3654400/"
] | You should do this **properly**:
* define your parameters **once** outside the loop
* define the **values** of your parameters inside the loop for each iteration
* use `using(...) { ... }` blocks to get rid of the `try ... catch ... finally` (the `using` block will ensure proper and speedy disposal of your classes, when no longer needed)
* stop using a `try...catch` if you're not actually *handling* the exceptions - just rethrowing them (makes no sense)
Try this code:
```
public static void featuresentry()
{
string connectionString = HandVeinPattern.Properties.Settings.Default.HandVeinPatternConnectionString;
string insertQuery = "INSERT INTO FEATURES(UserID, Angle, ClassID, Octave, PointX, PointY, Response, Size) VALUES(@UserID, @Angle, @ClassID, @Octave, @PointX, @PointY, @Response, @Size)";
using (SqlConnection connection = new SqlConnection(connectionString))
using (SqlCommand command = new SqlCommand(insertQuery, connection))
{
// define your parameters ONCE outside the loop, and use EXPLICIT typing
command.Parameters.Add("@UserID", SqlDbType.Int);
command.Parameters.Add("@Angle", SqlDbType.Double);
command.Parameters.Add("@ClassID", SqlDbType.Double);
command.Parameters.Add("@Octave", SqlDbType.Double);
command.Parameters.Add("@PointX", SqlDbType.Double);
command.Parameters.Add("@PointY", SqlDbType.Double);
command.Parameters.Add("@Response", SqlDbType.Double);
command.Parameters.Add("@Size", SqlDbType.Double);
connection.Open();
for (int i = 0; i < Details.modelKeyPoints.Size; i++)
{
// now just SET the values
command.Parameters["@UserID"].Value = Details.ID;
command.Parameters["@Angle"].Value = Convert.ToDouble(Details.modelKeyPoints[i].Angle);
command.Parameters["@ClassID"].Value = Convert.ToDouble(Details.modelKeyPoints[i].ClassId);
command.Parameters["@Octave"].Value = Convert.ToDouble(Details.modelKeyPoints[i].Octave);
command.Parameters["@PointX"].Value = Convert.ToDouble(Details.modelKeyPoints[i].Point.X);
command.Parameters["@PointY"].Value = Convert.ToDouble(Details.modelKeyPoints[i].Point.Y);
command.Parameters["@Response"].Value = Convert.ToDouble(Details.modelKeyPoints[i].Response);
command.Parameters["@Size"].Value = Convert.ToDouble(Details.modelKeyPoints[i].Size);
command.ExecuteNonQuery();
}
}
}
``` | If you put `command = connection.CreateCommand();` inside your for loop, it will work. The problem is you are looping over the command parameters only, so it's trying to add more parameters to your existing command, but they're already in there. So you need to make a new command every loop instead. |
36,815,928 | I was wondering why the result of the following code is 0 and not 3.
```
var fn = function(){
for (i = 0; i < 3; i++){
return function(){
console.log(i);
};
}
}();
fn();
``` | 2016/04/23 | [
"https://Stackoverflow.com/questions/36815928",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3654400/"
] | If you put `command = connection.CreateCommand();` inside your for loop, it will work. The problem is you are looping over the command parameters only, so it's trying to add more parameters to your existing command, but they're already in there. So you need to make a new command every loop instead. | In order to obtain the maximum performance, you may consider a [BulkInsert](https://blogs.msdn.microsoft.com/nikhilsi/2008/06/11/bulk-insert-into-sql-from-c-app/). This ensures that your inserts are done as fast as possible, as any issued query has some overhead (a large query will generally execute faster than many small ones). It should look something like the following:
1) define AsDataTable extension method from [here](https://blogs.msdn.microsoft.com/nikhilsi/2008/06/11/bulk-insert-into-sql-from-c-app/):
```
public static DataTable AsDataTable<T>(this IEnumerable<T> data)
{
PropertyDescriptorCollection properties = TypeDescriptor.GetProperties(typeof(T));
var table = new DataTable();
foreach (PropertyDescriptor prop in properties)
table.Columns.Add(prop.Name, Nullable.GetUnderlyingType(prop.PropertyType) ?? prop.PropertyType);
foreach (T item in data)
{
DataRow row = table.NewRow();
foreach (PropertyDescriptor prop in properties)
row[prop.Name] = prop.GetValue(item) ?? DBNull.Value;
table.Rows.Add(row);
}
return table;
}
```
2) execute the actual BulkInsert like this (*not tested*):
```
using (SqlConnection connection = new SqlConnection(connectionString))
{
connection.Open();
SqlTransaction transaction = connection.BeginTransaction();
using (var bulkCopy = new SqlBulkCopy(connection, SqlBulkCopyOptions.Default, transaction))
{
bulkCopy.BatchSize = 100;
bulkCopy.DestinationTableName = "dbo.FEATURES";
try
{
// define mappings for columns, as property names / generated data table column names
// is different from destination table column name
bulkCopy.ColumnMappings.Add("ID","UserID");
bulkCopy.ColumnMappings.Add("Angle","Angle");
// the other mappings come here
bulkCopy.WriteToServer(Details.modelKeyPoints.AsDataTable());
}
catch (Exception)
{
transaction.Rollback();
connection.Close();
}
}
transaction.Commit();
}
```
Of course, if [convention over configuration](https://en.wikipedia.org/wiki/Convention_over_configuration) would be used (object properties names would match exactly destination table column names), no mapping would be required. |
36,815,928 | I was wondering why the result of the following code is 0 and not 3.
```
var fn = function(){
for (i = 0; i < 3; i++){
return function(){
console.log(i);
};
}
}();
fn();
``` | 2016/04/23 | [
"https://Stackoverflow.com/questions/36815928",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3654400/"
] | If you put `command = connection.CreateCommand();` inside your for loop, it will work. The problem is you are looping over the command parameters only, so it's trying to add more parameters to your existing command, but they're already in there. So you need to make a new command every loop instead. | You can do this by sending your data as an xml string and converting it into a table in a stored procedure in sql. For example:
suppose I am sending multiple rows to add/update in an sql table then here are the steps:
1. Convert your class or list of class into an xml string using following method:
```
public static string SerializeObjectToXmlString(object value)
{
var emptyNamepsaces = new XmlSerializerNamespaces(new[] {
XmlQualifiedName.Empty });
var serializer = new XmlSerializer(value.GetType());
var settings = new XmlWriterSettings();
settings.Indent = true;
settings.OmitXmlDeclaration = true;
using (var stream = new StringWriter())
using (var writer = XmlWriter.Create(stream, settings))
{
serializer.Serialize(writer, value, emptyNamepsaces);
return stream.ToString();
}
}
```
2. Now while sending data to the database convert your class object into xml
string (Here I am using entity framework in my code, you can do this without using it as well):
```
bool AddUpdateData(List<MyClass> data)
{
bool returnResult = false;
string datatXml = Helper.SerializeObjectToXmlString(data);
var sqlparam = new List<SqlParameter>()
{
new SqlParameter() { ParameterName = "dataXml", Value = datatXml}
};
var result = this.myEntity.Repository<SQL_StoredProc_ComplexType>().ExecuteStoredProc("SQL_StoredProc", sqlparam);
if (result != null && result.Count() > 0)
{
returnResult = result[0].Status == 1 ? true : false;
}
return returnResult;
}
```
3. Now your SQL Code:
3.1 Declare a table variable:
```sql
DECLARE @tableVariableName TABLE
(
ID INT, Name VARCHAR(20)
)
```
3.2 Insert Your xml string into Table variable
```sql
INSERT INTO @tableVariableName
SELECT
Finaldata.R.value ('(ID/text())[1]', 'INT') AS ID,
Finaldata.R.value ('(Name/text())[1]', 'VARCHAR(20)') AS Name
FROM @MyInputXmlString.nodes ('//ArrayMyClass/MyClass') AS Finaldata (R)
```
3.3 Finally insert this table value into your sql table
```sql
INSERT INTO MyTable (ID, Name)
SELECT ID, Name
FROM @tableVariableName
```
This will save your effort of hitting database again and again using a for loop.
Hope it will help you |
36,815,928 | I was wondering why the result of the following code is 0 and not 3.
```
var fn = function(){
for (i = 0; i < 3; i++){
return function(){
console.log(i);
};
}
}();
fn();
``` | 2016/04/23 | [
"https://Stackoverflow.com/questions/36815928",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3654400/"
] | You should do this **properly**:
* define your parameters **once** outside the loop
* define the **values** of your parameters inside the loop for each iteration
* use `using(...) { ... }` blocks to get rid of the `try ... catch ... finally` (the `using` block will ensure proper and speedy disposal of your classes, when no longer needed)
* stop using a `try...catch` if you're not actually *handling* the exceptions - just rethrowing them (makes no sense)
Try this code:
```
public static void featuresentry()
{
string connectionString = HandVeinPattern.Properties.Settings.Default.HandVeinPatternConnectionString;
string insertQuery = "INSERT INTO FEATURES(UserID, Angle, ClassID, Octave, PointX, PointY, Response, Size) VALUES(@UserID, @Angle, @ClassID, @Octave, @PointX, @PointY, @Response, @Size)";
using (SqlConnection connection = new SqlConnection(connectionString))
using (SqlCommand command = new SqlCommand(insertQuery, connection))
{
// define your parameters ONCE outside the loop, and use EXPLICIT typing
command.Parameters.Add("@UserID", SqlDbType.Int);
command.Parameters.Add("@Angle", SqlDbType.Double);
command.Parameters.Add("@ClassID", SqlDbType.Double);
command.Parameters.Add("@Octave", SqlDbType.Double);
command.Parameters.Add("@PointX", SqlDbType.Double);
command.Parameters.Add("@PointY", SqlDbType.Double);
command.Parameters.Add("@Response", SqlDbType.Double);
command.Parameters.Add("@Size", SqlDbType.Double);
connection.Open();
for (int i = 0; i < Details.modelKeyPoints.Size; i++)
{
// now just SET the values
command.Parameters["@UserID"].Value = Details.ID;
command.Parameters["@Angle"].Value = Convert.ToDouble(Details.modelKeyPoints[i].Angle);
command.Parameters["@ClassID"].Value = Convert.ToDouble(Details.modelKeyPoints[i].ClassId);
command.Parameters["@Octave"].Value = Convert.ToDouble(Details.modelKeyPoints[i].Octave);
command.Parameters["@PointX"].Value = Convert.ToDouble(Details.modelKeyPoints[i].Point.X);
command.Parameters["@PointY"].Value = Convert.ToDouble(Details.modelKeyPoints[i].Point.Y);
command.Parameters["@Response"].Value = Convert.ToDouble(Details.modelKeyPoints[i].Response);
command.Parameters["@Size"].Value = Convert.ToDouble(Details.modelKeyPoints[i].Size);
command.ExecuteNonQuery();
}
}
}
``` | In order to obtain the maximum performance, you may consider a [BulkInsert](https://blogs.msdn.microsoft.com/nikhilsi/2008/06/11/bulk-insert-into-sql-from-c-app/). This ensures that your insert are done as fast as possible, as any issued query has some overhead (a large query will generally execute faster than many small ones). It should look something like the following:
1) define AsDataTable extension method from [here](https://blogs.msdn.microsoft.com/nikhilsi/2008/06/11/bulk-insert-into-sql-from-c-app/):
```
public static DataTable AsDataTable<T>(this IEnumerable<T> data)
{
PropertyDescriptorCollection properties = TypeDescriptor.GetProperties(typeof(T));
var table = new DataTable();
foreach (PropertyDescriptor prop in properties)
table.Columns.Add(prop.Name, Nullable.GetUnderlyingType(prop.PropertyType) ?? prop.PropertyType);
foreach (T item in data)
{
DataRow row = table.NewRow();
foreach (PropertyDescriptor prop in properties)
row[prop.Name] = prop.GetValue(item) ?? DBNull.Value;
table.Rows.Add(row);
}
return table;
}
```
2) execute the actual BulkInsert like this (*not tested*):
```
using (SqlConnection connection = new SqlConnection(connectionString))
{
connection.Open();
SqlTransaction transaction = connection.BeginTransaction();
using (var bulkCopy = new SqlBulkCopy(connection, SqlBulkCopyOptions.Default, transaction))
{
bulkCopy.BatchSize = 100;
bulkCopy.DestinationTableName = "dbo.FEATURES";
try
{
// define mappings for columns, as property names / generated data table column names
// is different from destination table column name
bulkCopy.ColumnMappings.Add("ID","UserID");
bulkCopy.ColumnMappings.Add("Angle","Angle");
// the other mappings come here
bulkCopy.WriteToServer(Details.modelKeyPoints.AsDataTable());
}
catch (Exception)
{
transaction.Rollback();
connection.Close();
}
}
transaction.Commit();
}
```
Of course, if [convention over configuration](https://en.wikipedia.org/wiki/Convention_over_configuration) would be used (object properties names would match exactly destination table column names), no mapping would be required. |
36,815,928 | I was wondering why the result of the following code is 0 and not 3.
```
var fn = function(){
for (i = 0; i < 3; i++){
return function(){
console.log(i);
};
}
}();
fn();
``` | 2016/04/23 | [
"https://Stackoverflow.com/questions/36815928",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3654400/"
] | You should do this **properly**:
* define your parameters **once** outside the loop
* define the **values** of your parameters inside the loop for each iteration
* use `using(...) { ... }` blocks to get rid of the `try ... catch ... finally` (the `using` block will ensure proper and speedy disposal of your classes, when no longer needed)
* stop using a `try...catch` if you're not actually *handling* the exceptions - just rethrowing them (makes no sense)
Try this code:
```
public static void featuresentry()
{
string connectionString = HandVeinPattern.Properties.Settings.Default.HandVeinPatternConnectionString;
string insertQuery = "INSERT INTO FEATURES(UserID, Angle, ClassID, Octave, PointX, PointY, Response, Size) VALUES(@UserID, @Angle, @ClassID, @Octave, @PointX, @PointY, @Response, @Size)";
using (SqlConnection connection = new SqlConnection(connectionString))
using (SqlCommand command = new SqlCommand(insertQuery, connection))
{
// define your parameters ONCE outside the loop, and use EXPLICIT typing
command.Parameters.Add("@UserID", SqlDbType.Int);
command.Parameters.Add("@Angle", SqlDbType.Double);
command.Parameters.Add("@ClassID", SqlDbType.Double);
command.Parameters.Add("@Octave", SqlDbType.Double);
command.Parameters.Add("@PointX", SqlDbType.Double);
command.Parameters.Add("@PointY", SqlDbType.Double);
command.Parameters.Add("@Response", SqlDbType.Double);
command.Parameters.Add("@Size", SqlDbType.Double);
connection.Open();
for (int i = 0; i < Details.modelKeyPoints.Size; i++)
{
// now just SET the values
command.Parameters["@UserID"].Value = Details.ID;
command.Parameters["@Angle"].Value = Convert.ToDouble(Details.modelKeyPoints[i].Angle);
command.Parameters["@ClassID"].Value = Convert.ToDouble(Details.modelKeyPoints[i].ClassId);
command.Parameters["@Octave"].Value = Convert.ToDouble(Details.modelKeyPoints[i].Octave);
command.Parameters["@PointX"].Value = Convert.ToDouble(Details.modelKeyPoints[i].Point.X);
command.Parameters["@PointY"].Value = Convert.ToDouble(Details.modelKeyPoints[i].Point.Y);
command.Parameters["@Response"].Value = Convert.ToDouble(Details.modelKeyPoints[i].Response);
command.Parameters["@Size"].Value = Convert.ToDouble(Details.modelKeyPoints[i].Size);
command.ExecuteNonQuery();
}
}
}
``` | You can do this by sending your data as an xml string and converting it into a table in a stored procedure in sql. For example:
suppose I am sending multiple rows to add/update in an sql table then here are the steps:
1. Convert your class or list of class into an xml string using following method:
```
public static string SerializeObjectToXmlString(object value)
{
var emptyNamepsaces = new XmlSerializerNamespaces(new[] {
XmlQualifiedName.Empty });
var serializer = new XmlSerializer(value.GetType());
var settings = new XmlWriterSettings();
settings.Indent = true;
settings.OmitXmlDeclaration = true;
using (var stream = new StringWriter())
using (var writer = XmlWriter.Create(stream, settings))
{
serializer.Serialize(writer, value, emptyNamepsaces);
return stream.ToString();
}
}
```
2. Now while sending data to the database convert your class object into xml
string (Here I am using entity framework in my code, you can do this without using it as well):
```
bool AddUpdateData(List<MyClass> data)
{
bool returnResult = false;
string datatXml = Helper.SerializeObjectToXmlString(data);
var sqlparam = new List<SqlParameter>()
{
new SqlParameter() { ParameterName = "dataXml", Value = datatXml}
};
var result = this.myEntity.Repository<SQL_StoredProc_ComplexType>().ExecuteStoredProc("SQL_StoredProc", sqlparam);
if (result != null && result.Count() > 0)
{
returnResult = result[0].Status == 1 ? true : false;
}
return returnResult;
}
```
3. Now your SQL Code:
3.1 Declare a table variable:
```sql
DECLARE @tableVariableName TABLE
(
ID INT, Name VARCHAR(20)
)
```
3.2 Insert Your xml string into Table variable
```sql
INSERT INTO @tableVariableName
SELECT
Finaldata.R.value ('(ID/text())[1]', 'INT') AS ID,
Finaldata.R.value ('(Name/text())[1]', 'VARCHAR(20)') AS Name
FROM @MyInputXmlString.nodes ('//ArrayMyClass/MyClass') AS Finaldata (R)
```
3.3 Finally insert this table value into your sql table
```sql
INSERT INTO MyTable (ID, Name)
SELECT ID, Name
FROM @tableVariableName
```
This will save your effort of hitting database again and again using a for loop.
Hope it will help you |
36,815,928 | I was wondering why the result of the following code is 0 and not 3.
```
var fn = function(){
for (i = 0; i < 3; i++){
return function(){
console.log(i);
};
}
}();
fn();
``` | 2016/04/23 | [
"https://Stackoverflow.com/questions/36815928",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3654400/"
] | In order to obtain the maximum performance, you may consider a [BulkInsert](https://blogs.msdn.microsoft.com/nikhilsi/2008/06/11/bulk-insert-into-sql-from-c-app/). This ensures that your inserts are done as fast as possible, as any issued query has some overhead (a large query will generally execute faster than many small ones). It should look something like the following:
1) define AsDataTable extension method from [here](https://blogs.msdn.microsoft.com/nikhilsi/2008/06/11/bulk-insert-into-sql-from-c-app/):
```
public static DataTable AsDataTable<T>(this IEnumerable<T> data)
{
PropertyDescriptorCollection properties = TypeDescriptor.GetProperties(typeof(T));
var table = new DataTable();
foreach (PropertyDescriptor prop in properties)
table.Columns.Add(prop.Name, Nullable.GetUnderlyingType(prop.PropertyType) ?? prop.PropertyType);
foreach (T item in data)
{
DataRow row = table.NewRow();
foreach (PropertyDescriptor prop in properties)
row[prop.Name] = prop.GetValue(item) ?? DBNull.Value;
table.Rows.Add(row);
}
return table;
}
```
2) execute the actual BulkInsert like this (*not tested*):
```
using (SqlConnection connection = new SqlConnection(connectionString))
{
connection.Open();
SqlTransaction transaction = connection.BeginTransaction();
using (var bulkCopy = new SqlBulkCopy(connection, SqlBulkCopyOptions.Default, transaction))
{
bulkCopy.BatchSize = 100;
bulkCopy.DestinationTableName = "dbo.FEATURES";
try
{
// define mappings for columns, as property names / generated data table column names
// is different from destination table column name
bulkCopy.ColumnMappings.Add("ID","UserID");
bulkCopy.ColumnMappings.Add("Angle","Angle");
// the other mappings come here
bulkCopy.WriteToServer(Details.modelKeyPoints.AsDataTable());
}
catch (Exception)
{
transaction.Rollback();
connection.Close();
}
}
transaction.Commit();
}
```
Of course, if [convention over configuration](https://en.wikipedia.org/wiki/Convention_over_configuration) were used (object property names matching the destination table column names exactly), no mapping would be required. | You can do this by sending your data as an xml string and converting it into a table in a stored procedure in sql. For example:
suppose I am sending multiple rows to add/update in an sql table then here are the steps:
1. Convert your class or list of class into an xml string using following method:
```
public static string SerializeObjectToXmlString(object value)
{
var emptyNamepsaces = new XmlSerializerNamespaces(new[] {
XmlQualifiedName.Empty });
var serializer = new XmlSerializer(value.GetType());
var settings = new XmlWriterSettings();
settings.Indent = true;
settings.OmitXmlDeclaration = true;
using (var stream = new StringWriter())
using (var writer = XmlWriter.Create(stream, settings))
{
serializer.Serialize(writer, value, emptyNamepsaces);
return stream.ToString();
}
}
```
2. Now while sending data to the database convert your class object into xml
string (Here I am using entity framework in my code, you can do this without using it as well):
```
bool AddUpdateData(List<MyClass> data)
{
bool returnResult = false;
string datatXml = Helper.SerializeObjectToXmlString(data);
var sqlparam = new List<SqlParameter>()
{
new SqlParameter() { ParameterName = "dataXml", Value = datatXml}
};
var result = this.myEntity.Repository<SQL_StoredProc_ComplexType>().ExecuteStoredProc("SQL_StoredProc", sqlparam);
if (result != null && result.Count() > 0)
{
returnResult = result[0].Status == 1 ? true : false;
}
return returnResult;
}
```
3. Now your SQL Code:
3.1 Declare a table variable:
```sql
DECLARE @tableVariableName TABLE
(
ID INT, Name VARCHAR(20)
)
```
3.2 Insert Your xml string into Table variable
```sql
INSERT INTO @tableVariableName
SELECT
Finaldata.R.value ('(ID/text())[1]', 'INT') AS ID,
Finaldata.R.value ('(Name/text())[1]', 'VARCHAR(20)') AS Name
FROM @MyInputXmlString.nodes ('//ArrayMyClass/MyClass') AS Finaldata (R)
```
3.3 Finally insert this table value into your sql table
```sql
INSERT INTO MyTable (ID, Name)
SELECT ID, Name
FROM @tableVariableName
```
This will save your effort of hitting database again and again using a for loop.
Hope it will help you |
44,447,194 | I'm very new to all of this, so please tell me anything I'm doing wrong!
I wrote a little bot for Discord using Node.js.
I also signed up for the free trial of Google Cloud Platform with $300 of credit and all that. After creating a project, I started the cloud shell and ran my Node.js Discord bot using:
```
node my_app_here.js
```
The cloud shell is running and the bot is working correctly. Will the cloud shell run indefinitely and keep my bot running? It seems hard to believe that Google would host my bot for free. I have billing disabled on the project, but I'm afraid of getting hit with a huge bill.
Thanks for any help or recommendations! | 2017/06/08 | [
"https://Stackoverflow.com/questions/44447194",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8134123/"
] | It is free but intended for interactive usage. I guess you could get away with using it for other purposes but it would probably be a hassle. If you want to go free you could consider if what you try to do would fit into the free tier on [google app engine standard environment](https://cloud.google.com/products/calculator/#).
>
> Cloud Shell is intended for interactive use only. Non-interactive
> sessions will be ended automatically after a warning. Prolonged usage
> or computational or network intensive processes are not supported and
> may result in session termination without a warning.
>
>
>
[Documentation](https://cloud.google.com/shell/docs/limitations#usage_limits) | You could use this code at the end of your script to keep it alive:
```
function intervalFunc() {
  const { exec } = require("child_process");
  exec("ls -la", (error, stdout, stderr) => {
    if (error) {
      console.log(`error: ${error.message}`);
      return;
    }
    if (stderr) {
      console.log(`stderr: ${stderr}`);
      return;
    }
  });
}

setInterval(intervalFunc, 3600000); // run a trivial command once an hour
```
6,195,404 | I'm trying to create a hangman program that uses file I/O (file input). I want the user to choose a category (file) which contains 4 lines; each has one word. The program will then read the line and convert it into `_`s, which is what the user will see.
Where would I insert this -->

```
{
    lineCount++;
    output.println (lineCount + " " + line);
    line = input.readLine ();
}
```
/*
* Hangman.java
* The program asks the user to choose a file that is provided.
* The program will read one line from the file and the player will guess the word,
* and then outputs the line; the word will appear using "_".
* The player will guess letters within the word or guess the entire word.
* If the player guesses correctly, the "_" will be replaced with the letter guessed.
* But if the player guesses incorrectly, a part of the stickman's body will be added,
* then the user will be asked to guess again. The user can also enter "!" to guess the entire word;
* if they guess correctly they win, but if they guess incorrectly they will be asked to guess again.
* Once it has finished reading the file, the program outputs the number of guesses.
*/
import java.awt.*;
import hsa.Console;
//class name
public class Hangman
{
static Console c;
public static void main (String [] args)
{
c = new Console ();
```
PrintWriter output;
String fileName;
//ask user to choose file; file contains words for user to guess
c.println ("The categories are: cartoons.txt, animals.txt, and food.txt. Which category would you like to choose?");

fileName = c.readLine ();
// E:\\ICS\\ICS 3U1\\Assignments\\JavaFiles\\+fileName

try {
/* Sets up a file reader to read the file passed on the command
line one character at a time */
FileReader input = new FileReader(args[0]);
/* Filter FileReader through a Buffered read to read a line at a
time */
BufferedReader bufRead = new BufferedReader(input);
String line; // String that holds current file line
int count = 0; // Line number of count
// Read first line
line = bufRead.readLine();
count++;
// Read through file one line at time. Print line # and line
while (line != null){
c.println(count+": "+line);
line = bufRead.readLine ();
count++;
}
bufRead.close();
}
catch (FileNotFoundException e)
{
c.println("File does not exist or could not be found.");
c.println("FileNotFoundException: " + e.getMessage());
}
catch (IOException e)
{
c.println("Problem reading file.");
c.println("IOException: " + e.getMessage());
}
``` | 2011/06/01 | [
"https://Stackoverflow.com/questions/6195404",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/776107/"
] | Take a step back and look at your code from a high level:
```
print "which file?"
filename = readline()
open(filename)
try {
print "which file?"
filename = readline()
open(filename)
create reader (not using the File object?)
create writer (not using the File object, but why a writer??)
while ...
```
It sure feels like you've sat down, coded 53-odd lines without testing *anything*, copy-and-pasted code around without understanding why you had it, and didn't understand what you were aiming for in the first place. (Sorry to be this blunt, but you did ask for advice and I'm not good at sugar-coating.)
I suggest writing your program entirely by *hand* on a sheet of paper with a pencil first. You'll want it to look more like this:
```
while user still wants to play
ask for category
open file
read file contents into an array
close file
select array element at random
while user still has guesses left
print a _ for each character
ask user to guess a letter
if letter is in the word
replace the _ with the letter in the output
if the word is complete, success!
else
guesses left --
user ran out of guesses, give condolences
```
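If it helps to see what one of those pseudocode steps looks like in real code, here is a minimal Java sketch of the "replace the _ with the letter in the output" step; the class and method names are invented for illustration and are not part of the assignment:

```
public class RevealStep {
    // Replace every '_' in the shown string whose position matches the guessed letter.
    static String reveal(String secret, String shown, char guess) {
        StringBuilder out = new StringBuilder(shown);
        for (int i = 0; i < secret.length(); i++) {
            if (secret.charAt(i) == guess) {
                out.setCharAt(i, guess);
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        String shown = "_____";
        shown = reveal("APPLE", shown, 'P');
        System.out.println(shown); // prints _PP__
    }
}
```

`StringBuilder.setCharAt` works here because the displayed string has exactly one character per letter of the secret word.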
Once you've thought through all the cases, all the wins and losses, and so forth, then start coding. **Start small**. Get something to compile and run immediately, even if it is just the usual Java noise. Add a few lines, re-run, re-test. Never add more than four or five lines at a time. And don't hesitate to add plenty of `System.out.println(...)` calls to show you what your program's internal state looks like.
As you get more experienced, you'll get better at recognizing the error messages, and maybe feel confident enough to add ten to twenty lines at a time. Don't rush to get there, though, it takes time. | Would this work as an array to check the letters entered by the user of the program?
```
final int LOW = 'A';  //smallest possible value
final int HIGH = 'Z'; //highest possible value
int[] letterCounts = new int[HIGH - LOW + 1];
String guessletter;
char[] guessletter;
int offset; //array index

// set constants for the secret word and also "!" to guess the full word
final String GUESS_FULL_WORD = "!";
final String SECRET_WORD = "APPLE";

// set integer value for number of letters for the length of the secret word
// set integer value for the number of guesses the user has made, starting at zero
int numberofletters, numberofguesses;
numberofguesses = 0;

// guessletter indicates the letter that the user is guessing
// guessword indicates the word that the user is guessing after typing "!"
// newscreen indicates the change made to the screen
// screen is the game screen that contains all the "_"'s
String guessletter, guessword, newscreen;
String screen = "";
numberofletters = SECRET_WORD.length ();

/* prompt user for a word */
c.print("Enter a letter: ");
guessletter = c.readLine();
```
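For reference, the usual way an array like `letterCounts` is indexed is by subtracting the low bound from the character code. A small, self-contained Java sketch of that idea (the class and method names are illustrative, not part of the assignment):

```
public class LetterCounts {
    static final int LOW = 'A';   // smallest tracked letter
    static final int HIGH = 'Z';  // largest tracked letter

    // Count occurrences of each uppercase letter in a word.
    static int[] count(String word) {
        int[] counts = new int[HIGH - LOW + 1];
        for (char ch : word.toCharArray()) {
            if (ch >= LOW && ch <= HIGH) {
                counts[ch - LOW]++;  // offset = character value minus 'A'
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        int[] counts = count("APPLE");
        System.out.println("P occurs " + counts['P' - LOW] + " times");
    }
}
```

The same `ch - LOW` offset can then be used both to record a guess and to look a letter up later.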
29,330,708 | Can anyone please tell me the reason why we can't use **Console.WriteLine** in a class without a method? I am trying to do this but the compiler is giving an error. I know it's wrong but I just want to know the valid reason for that.
```
public class AssemblyOneClass2
{
Console.WriteLine("");
}
``` | 2015/03/29 | [
"https://Stackoverflow.com/questions/29330708",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2750155/"
] | A method is a code block that contains a series of statements. A program causes the statements to be executed by calling the method and specifying any required method arguments. In C#, every executed instruction is performed in the context of a method.
Therefore it must be written inside a method; otherwise the CLR doesn't know when it is supposed to execute that code. This is basic to C#. | A class encapsulates constructors, functions, fields and properties. Accordingly, you can only write code statements inside a function or constructor. |
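The rule these answers describe can be demonstrated in a runnable way; Java enforces the same restriction, so here is a hedged Java sketch (the class and field names are made up for the example). A bare statement at class level does not compile, while an expression is allowed as part of a field initializer, because the compiler then knows exactly when it runs:

```
public class ClassBodyDemo {
    // A bare statement here would be a compile error -- when would it run?
    // System.out.println("hello");

    // Allowed: this expression runs as part of initializing the field,
    // so its execution time (class initialization) is well defined.
    static int length = "hello".length();

    public static void main(String[] args) {
        System.out.println(length); // prints 5
    }
}
```

Uncommenting the bare statement makes the class fail to compile, which is the compile-time error the question is about.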
29,330,708 | Can anyone please tell me the reason why we can't use **Console.WriteLine** in a class without a method? I am trying to do this but the compiler is giving an error. I know it's wrong but I just want to know the valid reason for that.
```
public class AssemblyOneClass2
{
Console.WriteLine("");
}
``` | 2015/03/29 | [
"https://Stackoverflow.com/questions/29330708",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2750155/"
] | A method is a code block that contains a series of statements. A program causes the statements to be executed by calling the method and specifying any required method arguments. In C#, every executed instruction is performed in the context of a method.
Therefore it must be written inside a method; otherwise the CLR doesn't know when it is supposed to execute that code. This is basic to C#. | Because `Console.WriteLine("");` is not (and cannot be) a member declaration of the said class `AssemblyOneClass2`.
To say a little more: you define member fields (data members and methods) while defining a class in order to give some behavior to that class and to state what actions it can perform.
For example, a `Student` class having `Name` and `City` properties:
```
class Student
{
    public string Name { get; set; }
    public string City { get; set; }
}
```
Just by having a call to the `WriteLine()` method in your class, what behavior do you think you are defining?
Even if you keep the class declaration empty as below, it still makes a little sense, in that no behavior has been defined for the class yet and it's just a kind of stub right now.
```
public class AssemblyOneClass2
{
}
``` |
29,330,708 | Can anyone please tell me the reason why we can't use **Console.WriteLine** in a class without a method? I am trying to do this but the compiler is giving an error. I know it's wrong but I just want to know the valid reason for that.
```
public class AssemblyOneClass2
{
Console.WriteLine("");
}
``` | 2015/03/29 | [
"https://Stackoverflow.com/questions/29330708",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2750155/"
] | A method is a code block that contains a series of statements. A program causes the statements to be executed by calling the method and specifying any required method arguments. In C#, every executed instruction is performed in the context of a method.
Therefore it must be written inside a method; otherwise the CLR doesn't know when it is supposed to execute that code. This is basic to C#. | ```
public class AssemblyOneClass2
{
static void Main(string[] args)
{
Console.WriteLine("");
}
}
```
You need an actual method for anything to happen. The above snippet shows the correct way to run `Console.WriteLine("");`. |
29,330,708 | Can anyone please tell me the reason why we can't use **Console.WriteLine** in a class without a method? I am trying to do this but the compiler is giving an error. I know it's wrong but I just want to know the valid reason for that.
```
public class AssemblyOneClass2
{
Console.WriteLine("");
}
``` | 2015/03/29 | [
"https://Stackoverflow.com/questions/29330708",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2750155/"
] | The issue is not specific to `WriteLine`. You can't put **any** code there other than declarations of member variables (and properties, events, etc.) or functions. Code like calls to `WriteLine` belongs in the body of those functions. As for the reason: when would you expect that code to run? I expect you can't answer that, and neither can the compiler. | A class encapsulates constructors, functions, fields and properties. Accordingly, you can only write code statements inside a function or constructor. |
29,330,708 | Can anyone please tell me the reason why we can't use **Console.WriteLine** in a class without a method? I am trying to do this but the compiler is giving an error. I know it's wrong but I just want to know the valid reason for that.
```
public class AssemblyOneClass2
{
Console.WriteLine("");
}
``` | 2015/03/29 | [
"https://Stackoverflow.com/questions/29330708",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2750155/"
] | A class encapsulates constructors, functions, fields and properties. Accordingly, you can only write code statements inside a function or constructor. | Because `Console.WriteLine("");` is not (and cannot be) a member declaration of the said class `AssemblyOneClass2`.
To say a little more: you define member fields (data members and methods) while defining a class in order to give some behavior to that class and to state what actions it can perform.
For example, a `Student` class having `Name` and `City` properties:
```
class Student
{
    public string Name { get; set; }
    public string City { get; set; }
}
```
Just by having a call to the `WriteLine()` method in your class, what behavior do you think you are defining?
Even if you keep the class declaration empty as below, it still makes a little sense, in that no behavior has been defined for the class yet and it's just a kind of stub right now.
```
public class AssemblyOneClass2
{
}
``` |
29,330,708 | Can anyone please tell me the reason why we can't use **Console.WriteLine** in a class without a method? I am trying to do this but the compiler is giving an error. I know it's wrong but I just want to know the valid reason for that.
```
public class AssemblyOneClass2
{
Console.WriteLine("");
}
``` | 2015/03/29 | [
"https://Stackoverflow.com/questions/29330708",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2750155/"
] | The issue is not specific to `WriteLine`. You can't put **any** code there other than declarations of member variables (and properties, events, etc.) or functions. Code like calls to `WriteLine` belongs in the body of those functions. As for the reason: when would you expect that code to run? I expect you can't answer that, and neither can the compiler. | Because `Console.WriteLine("");` is not (and cannot be) a member declaration of the said class `AssemblyOneClass2`.
To say a little more: you define member fields (data members and methods) while defining a class in order to give some behavior to that class and to state what actions it can perform.
For example, a `Student` class having `Name` and `City` properties:
```
class Student
{
    public string Name { get; set; }
    public string City { get; set; }
}
```
Just by having a call to the `WriteLine()` method in your class, what behavior do you think you are defining?
Even if you keep the class declaration empty as below, it still makes a little sense, in that no behavior has been defined for the class yet and it's just a kind of stub right now.
```
public class AssemblyOneClass2
{
}
``` |
29,330,708 | Can anyone please tell me the reason why we can't use **Console.WriteLine** in a class without a method? I am trying to do this but the compiler is giving an error. I know it's wrong but I just want to know the valid reason for that.
```
public class AssemblyOneClass2
{
Console.WriteLine("");
}
``` | 2015/03/29 | [
"https://Stackoverflow.com/questions/29330708",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2750155/"
] | The issue is not specific to `WriteLine`. You can't put **any** code there other than declarations of member variables (and properties, events, etc.) or functions. Code like calls to `WriteLine` belongs in the body of those functions. As for the reason: when would you expect that code to run? I expect you can't answer that, and neither can the compiler. | ```
public class AssemblyOneClass2
{
static void Main(string[] args)
{
Console.WriteLine("");
}
}
```
You need an actual method for anything to happen. The above snippet shows the correct way to run `Console.WriteLine("");`. |
29,330,708 | Can anyone please tell me the reason why we can't use **Console.WriteLine** in a class without a method? I am trying to do this but the compiler is giving an error. I know it's wrong but I just want to know the valid reason for that.
```
public class AssemblyOneClass2
{
Console.WriteLine("");
}
``` | 2015/03/29 | [
"https://Stackoverflow.com/questions/29330708",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2750155/"
] | ```
public class AssemblyOneClass2
{
static void Main(string[] args)
{
Console.WriteLine("");
}
}
```
You need an actual method for anything to happen. The above snippet shows the correct way to run `Console.WriteLine("");`. | Because `Console.WriteLine("");` is not (and cannot be) a member declaration of the said class `AssemblyOneClass2`.
To say a little more: you define member fields (data members and methods) while defining a class in order to give some behavior to that class and to state what actions it can perform.
For example, a `Student` class having `Name` and `City` properties:
```
class Student
{
    public string Name { get; set; }
    public string City { get; set; }
}
```
Just by having a call to the `WriteLine()` method in your class, what behavior do you think you are defining?
Even if you keep the class declaration empty as below, it still makes a little sense, in that no behavior has been defined for the class yet and it's just a kind of stub right now.
```
public class AssemblyOneClass2
{
}
``` |
104,648 | Most people say that it is because the 3p orbital of chlorine, when overlapping with the 1s orbital of hydrogen, covers more area than the 4p orbital of bromine does, which makes the bond between H and Cl stronger than the bond between H and Br. But how can we say that the overlap is greater for the 3p–1s pair than for the 4p–1s pair? | 2018/11/21 | [
"https://chemistry.stackexchange.com/questions/104648",
"https://chemistry.stackexchange.com",
"https://chemistry.stackexchange.com/users/70394/"
You can think of the overlapping volume between the electron clouds, but a simpler way to show why this happens is to consider the electrical potential between the atoms. Even though this isn't an ionic compound, it has many similar characteristics.
If I remember correctly, this is the expression for the potential energy between charged particles: $E_p = k\frac{Q_1 \cdot Q_2}{d}$
Since the radius of $\ce{Br}$ is greater than the radius of $\ce{Cl}$, the magnitude of the potential energy involved in the $\ce{H-Br}$ bond will be smaller than in the $\ce{H-Cl}$ bond.
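To put rough numbers on this inverse-distance picture (the bond lengths below are standard textbook values, used here only as an illustrative, order-of-magnitude check):

```latex
E_p \propto \frac{1}{d}, \qquad
d_{\mathrm{H-Cl}} \approx 127\ \mathrm{pm}, \qquad
d_{\mathrm{H-Br}} \approx 141\ \mathrm{pm}
\quad\Rightarrow\quad
\frac{E_{\mathrm{H-Cl}}}{E_{\mathrm{H-Br}}} \approx \frac{141}{127} \approx 1.11
```

This crude Coulomb estimate at least points in the same direction as the measured bond dissociation energies (roughly $431~\mathrm{kJ\,mol^{-1}}$ for $\ce{H-Cl}$ versus $366~\mathrm{kJ\,mol^{-1}}$ for $\ce{H-Br}$), although it understates the size of the difference, since orbital overlap and polarizability also contribute.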
Another thing that came to my mind, and I'm not sure about this, is that as the 4p cloud of $\ce{Br}$ is bigger than that of $\ce{Cl}$, the relative amount of the volume that overlaps with the 1s cloud of $\ce{H}$ gets smaller with $\ce{Br}$ than with $\ce{Cl}$, which means the bond is weaker. | If you aren't convinced by the overlap area argument, then think of it in terms of polarization. The $\ce{H+}$ cation will polarize $\ce{Br-}$ more than $\ce{Cl-}$, so $\ce{HBr}$ will possess more covalent character than $\ce{HCl}$; conversely, the $\ce{H-Cl}$ bond will be more ionic than the $\ce{H-Br}$ bond. And as we know, ionic bonds are generally stronger than covalent bonds.
13,351,653 | I get the following error
```
No route matches [POST] "/events"
```
with this setup:
### config/routes.rb
```
namespace :admin do
#...
resources :events
#...
end
```
---
### (...)admin/events\_controller.rb
```
class Admin::EventsController < Admin::AdminController
def index
@events = Event.all
end
def new
@event = Event.new
end
def create
@event = Event.new(params[:event])
if @event.save
redirect_to [:admin, admin_events_url]
else
render :action => "new"
end
end
def edit
@event = Event.find(params[:id])
end
end
```
---
### (...)admin/events/\_form.html.erb
```
<%= form_for([:admin, @event]) do |f| %>
```
---
I can't figure out what I am doing wrong!
Update
------
I get this error when I try to POST the form data while creating a new event entry
---
Update 2
--------
The opening form tag inside `events/new`:
```
<form accept-charset="UTF-8" action="/admin/events" enctype="multipart/form-data" id="new_event" method="post">
```
the result of `rake routes`:
```
admin_events GET /admin/events(.:format) admin/events#index
POST /admin/events(.:format) admin/events#create
```
Navigating to `/admin/events/` using `GET` works just fine.
---
Update 3
--------
It works fine on 64-bit Windows 8 with Ruby 1.9.3, Rails 3.2 and Mongrel. It *doesn't* work with Ruby 1.8.7, Rails 3.2 and Phusion Passenger on a Linux server (the host).
Update 4
========
Oh. It appears Rails isn't very happy if you send it a form with `multipart/form-data` encoding! Removing the file-upload fixed this issue. | 2012/11/12 | [
"https://Stackoverflow.com/questions/13351653",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2133758/"
] | For what it's worth, the only thing that looks fishy to me about your controller is your redirect. You should be able to just do:
```
redirect_to admin_events_path
``` | Please try setting up your form this way:
```
form_for(@event, { url: admin_events_path, method: "POST" }) do
``` |