40,298,854
Hi there, I am building an Android quiz application. I have all my necessary activities (result activity, question activity, etc.) and I even have my SQLite database with about 50 questions. For each game or session the user is given 10 questions, which are answered, and the result is given to the user. **What I want to achieve is that each time the user opens the app, a different set of 10 questions should be shown to the user, since I have up to 50 questions.** E.g. on one launch of the app questions 1-10 could be displayed, on the next launch 11-20, and so on in that order. If the database questions have been exhausted it can repeat the cycle again. I want a group of, say, 10 questions shuffled to the user on each launch. It's an offline app. Any help (detailed will be appreciated)!
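The rotation described in the question (serve the next block of 10 questions per launch, wrapping around once all 50 are used) is language-agnostic. Below is a minimal Python sketch of just that selection logic; `next_batch` is a hypothetical helper name, and in the actual Android app the persisted `offset` would live in `SharedPreferences` or a table in the SQLite database rather than a local variable:

```python
def next_batch(question_ids, offset, batch_size=10):
    """Return the next batch of question IDs plus the new offset, wrapping around."""
    n = len(question_ids)
    batch = [question_ids[(offset + i) % n] for i in range(batch_size)]
    return batch, (offset + batch_size) % n

# Simulate app launches over 50 questions; persist `offset` between launches.
ids = list(range(1, 51))
batch1, offset = next_batch(ids, 0)       # questions 1-10
batch2, offset = next_batch(ids, offset)  # questions 11-20
```

To also shuffle the order within a batch before display, `random.shuffle(batch)` would do it.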
2016/10/28
[ "https://Stackoverflow.com/questions/40298854", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5839345/" ]
One way to handle your case is to subclass `NSRegularExpression` and override the `replacementString(for:in:offset:template:)` method. ``` class ToUpperRegex: NSRegularExpression { override func replacementString(for result: NSTextCheckingResult, in string: String, offset: Int, template templ: String) -> String { guard result.numberOfRanges > 2 else { return "" } let matchingString = (string as NSString).substring(with: result.rangeAt(2)) as String return matchingString.uppercased() } } let oldLine = "hello_world" let fullRange = NSRange(0..<oldLine.utf16.count) //<- let tuRegex = try! ToUpperRegex(pattern: "(_)(\\w)") let newLine = tuRegex.stringByReplacingMatches(in: oldLine, range: fullRange, withTemplate: "") print(newLine) //->helloWorld ```
This doesn't answer the question pertaining to regex, but it might be of interest for readers who don't necessarily need regex to perform this task (using native Swift instead) ``` extension String { func camelCased(givenSeparators separators: [Character]) -> String { let charChunks = characters.split { separators.contains($0) } guard let firstChunk = charChunks.first else { return self } return String(firstChunk).lowercased() + charChunks.dropFirst() .map { String($0).onlyFirstCharacterUppercased }.joined() } // helper (uppercase first char, lowercase rest) var onlyFirstCharacterUppercased: String { let chars = characters guard let firstChar = chars.first else { return self } return String(firstChar).uppercased() + String(chars.dropFirst()).lowercased() } } /* Example usage */ let oldLine1 = "hello_world" let oldLine2 = "fOo_baR BAX BaZ_fOX" print(oldLine1.camelCased(givenSeparators: ["_"])) // helloWorld print(oldLine2.camelCased(givenSeparators: ["_", " "])) // fooBarBaxBazFox ```
4,664,864
Is it necessary to use the second line here? ``` $("message",xml).each(function(id) { message = $("message",xml).get(id); msgID = $("msgID",message).text(); ``` Isn't there some kind of 'this' keyword to eliminate the second line? Thanks!
2011/01/12
[ "https://Stackoverflow.com/questions/4664864", "https://Stackoverflow.com", "https://Stackoverflow.com/users/557855/" ]
``` $('message', xml).each(function() { var msgID = $(this).find('msgID').text(); }); ``` Assuming this structure: ``` <root> <message> <msgID>123</msgID> </message> <message> <msgID>234</msgID> </message> <message> <msgID>345</msgID> </message> </root> ```
When you're in [an `.each()`](http://api.jquery.com/each/), `this` will represent the current item in the iteration. The `.each()` also gives you 2 parameters. The first is the current index of the iteration, and the second is the item in the iteration, the same as `this`. ``` $("message",xml).each(function( idx, val ) { // in here, "this" is the current "message" node in the iteration // "idx" is the current index in the iteration // "val" is the same as "this" }); ``` When you do `$("message",xml)`, you are looking for `"message"` nodes that are *nested* under the nodes at the top of `xml`. If any were found, the `.each()` will iterate over them.
38,372,137
I'd like to move posts under a parent page; by default WordPress makes them appear directly off the root homepage, e.g. <http://www.example.com/new-article>. I would like to change the article URLs so that they appear under a parent, e.g. <http://www.example.com/blog/new-article>. Does anyone know how to make WordPress place posts under a specific area/parent page so that the URL appears as above? Any input would be great!
2016/07/14
[ "https://Stackoverflow.com/questions/38372137", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3955028/" ]
You have to go to your dashboard >> Settings >> Permalinks. Select "Custom Structure" and make your own permalinks by putting /blog/%postname%/ in the field there. That should give you what you want. You can read up here: <http://codex.wordpress.org/Using_Permalinks> and here: <https://codex.wordpress.org/Settings_Permalinks_Screen>
You should 1. create the page "blog" 2. edit the child page 3. set the parent page via the widget on the right side of the edit screen
38,372,137
I'd like to move posts under a parent page; by default WordPress makes them appear directly off the root homepage, e.g. <http://www.example.com/new-article>. I would like to change the article URLs so that they appear under a parent, e.g. <http://www.example.com/blog/new-article>. Does anyone know how to make WordPress place posts under a specific area/parent page so that the URL appears as above? Any input would be great!
2016/07/14
[ "https://Stackoverflow.com/questions/38372137", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3955028/" ]
You have to go to your dashboard >> Settings >> Permalinks. Select "Custom Structure" and make your own permalinks by putting /blog/%postname%/ in the field there. That should give you what you want. You can read up here: <http://codex.wordpress.org/Using_Permalinks> and here: <https://codex.wordpress.org/Settings_Permalinks_Screen>
Actually, I have just realized that this is pretty easy. In **WordPress >> Settings >> Permalinks** I changed the permalink structure to *example.com/%category%/%postname%* and got the desired result. Now I want to edit my category page, e.g. *example.com/cat1*. It seems WordPress doesn't allow such a thing, am I right?
3,259,910
**Sorry for the long post, but most of it is code spelling out my scenario:** I'm trying to execute a dynamic query (hopefully through a stored procedure) to retrieve results based on a variable number of inputs. If I had a table: ``` (dbo).(People) ID Name Age 1 Joe 28 2 Bob 32 3 Alan 26 4 Joe 27 ``` I want to allow the user to search by any of the three columns, no problem: ``` DECLARE @ID int, @Name nvarchar(25), @Age int SET @ID = 1 SET @Name = 'Joe' SET @Age = null SELECT * FROM dbo.People WHERE (ID = @ID or @ID is null) AND (Name like @Name or @Name is null) AND (Age = @Age or @Age is null) ``` And I retrieve the result that I want. Now, if I want to search for multiple values in a column, I can do that no problem: ``` DECLARE @text nvarchar(100) SET @text = '1, 3' DECLARE @ids AS TABLE (n int NOT NULL PRIMARY KEY) --//parse the string into a table DECLARE @TempString nvarchar(300), @Pos int SET @text = LTRIM(RTRIM(@text))+ ',' SET @Pos = CHARINDEX(',', @text, 1) IF REPLACE(@text, ',', '') <> '' BEGIN WHILE @Pos > 0 BEGIN SET @TempString = LTRIM(RTRIM(LEFT(@text, @Pos - 1))) IF @TempString <> '' --just: IF @TempString != '' BEGIN INSERT INTO @ids VALUES (@TempString) END SET @text = RIGHT(@text, LEN(@text) - @Pos) SET @Pos = CHARINDEX(',', @text, 1) END END SELECT * FROM dbo.People WHERE ID IN (SELECT n FROM @ids) ``` Now, my issue is that I can't seem to figure out how to combine the two, since I can't put: ``` WHERE (Name like @Name or @Name is null) AND (Id IN (SELECT n FROM @ids) or @ids is null) ``` because @ids will never be null (since it's a table). Any help would be greatly appreciated! Thanks in advance... and let me know if I can clarify anything
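The T-SQL WHILE loop above just splits a comma-separated string into trimmed, non-empty values before inserting them into `@ids`. As a sanity check on the intended behavior, here is the same parsing logic as a short Python sketch (illustration only, not part of the stored procedure):

```python
def parse_id_list(text):
    """Split a comma-separated string into ints, skipping blank entries
    (mirrors the LTRIM/RTRIM + CHARINDEX loop in the T-SQL above)."""
    return [int(part) for part in text.split(",") if part.strip()]

parse_id_list("1, 3")  # -> [1, 3]
```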
2010/07/15
[ "https://Stackoverflow.com/questions/3259910", "https://Stackoverflow.com", "https://Stackoverflow.com/users/351207/" ]
You could use an IF statement (note that `LEN()` doesn't work on a table variable, so test for rows with `EXISTS` instead): ``` IF EXISTS (SELECT 1 FROM @ids) BEGIN SELECT * FROM dbo.People WHERE ID IN (SELECT n FROM @ids) END ELSE BEGIN SELECT * FROM dbo.People END ``` Otherwise, [consider making the query truly dynamic SQL (minding the pitfalls, of course)](http://www.sommarskog.se/dynamic_sql.html).
You can try: ``` NOT EXISTS (SELECT 1 FROM @ids) OR EXISTS (SELECT 1 FROM @ids where n = Id) ``` But these better be small tables - this query will probably not play very well with any indexes on your tables.
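The `NOT EXISTS ... OR EXISTS ...` predicate above encodes "an empty id list means no filter at all". The same semantics in a small Python sketch (the `People` rows are made up for illustration):

```python
def filter_people(rows, ids):
    """Empty ids = no filter (the NOT EXISTS branch); otherwise keep matching rows."""
    if not ids:
        return list(rows)
    return [row for row in rows if row["id"] in ids]

people = [{"id": 1, "name": "Joe"}, {"id": 2, "name": "Bob"}, {"id": 3, "name": "Alan"}]
filter_people(people, [])      # all three rows come back
filter_people(people, [1, 3])  # only Joe and Alan
```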
3,259,910
**Sorry for the long post, but most of it is code spelling out my scenario:** I'm trying to execute a dynamic query (hopefully through a stored procedure) to retrieve results based on a variable number of inputs. If I had a table: ``` (dbo).(People) ID Name Age 1 Joe 28 2 Bob 32 3 Alan 26 4 Joe 27 ``` I want to allow the user to search by any of the three columns, no problem: ``` DECLARE @ID int, @Name nvarchar(25), @Age int SET @ID = 1 SET @Name = 'Joe' SET @Age = null SELECT * FROM dbo.People WHERE (ID = @ID or @ID is null) AND (Name like @Name or @Name is null) AND (Age = @Age or @Age is null) ``` And I retrieve the result that I want. Now, if I want to search for multiple values in a column, I can do that no problem: ``` DECLARE @text nvarchar(100) SET @text = '1, 3' DECLARE @ids AS TABLE (n int NOT NULL PRIMARY KEY) --//parse the string into a table DECLARE @TempString nvarchar(300), @Pos int SET @text = LTRIM(RTRIM(@text))+ ',' SET @Pos = CHARINDEX(',', @text, 1) IF REPLACE(@text, ',', '') <> '' BEGIN WHILE @Pos > 0 BEGIN SET @TempString = LTRIM(RTRIM(LEFT(@text, @Pos - 1))) IF @TempString <> '' --just: IF @TempString != '' BEGIN INSERT INTO @ids VALUES (@TempString) END SET @text = RIGHT(@text, LEN(@text) - @Pos) SET @Pos = CHARINDEX(',', @text, 1) END END SELECT * FROM dbo.People WHERE ID IN (SELECT n FROM @ids) ``` Now, my issue is that I can't seem to figure out how to combine the two, since I can't put: ``` WHERE (Name like @Name or @Name is null) AND (Id IN (SELECT n FROM @ids) or @ids is null) ``` because @ids will never be null (since it's a table). Any help would be greatly appreciated! Thanks in advance... and let me know if I can clarify anything
2010/07/15
[ "https://Stackoverflow.com/questions/3259910", "https://Stackoverflow.com", "https://Stackoverflow.com/users/351207/" ]
Try: ``` (Id IN (SELECT n FROM @ids) OR NOT EXISTS (SELECT * FROM @ids)) ```
You can try: ``` NOT EXISTS (SELECT 1 FROM @ids) OR EXISTS (SELECT 1 FROM @ids where n = Id) ``` But these better be small tables - this query will probably not play very well with any indexes on your tables.
3,259,910
**Sorry for the long post, but most of it is code spelling out my scenario:** I'm trying to execute a dynamic query (hopefully through a stored procedure) to retrieve results based on a variable number of inputs. If I had a table: ``` (dbo).(People) ID Name Age 1 Joe 28 2 Bob 32 3 Alan 26 4 Joe 27 ``` I want to allow the user to search by any of the three columns, no problem: ``` DECLARE @ID int, @Name nvarchar(25), @Age int SET @ID = 1 SET @Name = 'Joe' SET @Age = null SELECT * FROM dbo.People WHERE (ID = @ID or @ID is null) AND (Name like @Name or @Name is null) AND (Age = @Age or @Age is null) ``` And I retrieve the result that I want. Now, if I want to search for multiple values in a column, I can do that no problem: ``` DECLARE @text nvarchar(100) SET @text = '1, 3' DECLARE @ids AS TABLE (n int NOT NULL PRIMARY KEY) --//parse the string into a table DECLARE @TempString nvarchar(300), @Pos int SET @text = LTRIM(RTRIM(@text))+ ',' SET @Pos = CHARINDEX(',', @text, 1) IF REPLACE(@text, ',', '') <> '' BEGIN WHILE @Pos > 0 BEGIN SET @TempString = LTRIM(RTRIM(LEFT(@text, @Pos - 1))) IF @TempString <> '' --just: IF @TempString != '' BEGIN INSERT INTO @ids VALUES (@TempString) END SET @text = RIGHT(@text, LEN(@text) - @Pos) SET @Pos = CHARINDEX(',', @text, 1) END END SELECT * FROM dbo.People WHERE ID IN (SELECT n FROM @ids) ``` Now, my issue is that I can't seem to figure out how to combine the two, since I can't put: ``` WHERE (Name like @Name or @Name is null) AND (Id IN (SELECT n FROM @ids) or @ids is null) ``` because @ids will never be null (since it's a table). Any help would be greatly appreciated! Thanks in advance... and let me know if I can clarify anything
2010/07/15
[ "https://Stackoverflow.com/questions/3259910", "https://Stackoverflow.com", "https://Stackoverflow.com/users/351207/" ]
You could use an IF statement (note that `LEN()` doesn't work on a table variable, so test for rows with `EXISTS` instead): ``` IF EXISTS (SELECT 1 FROM @ids) BEGIN SELECT * FROM dbo.People WHERE ID IN (SELECT n FROM @ids) END ELSE BEGIN SELECT * FROM dbo.People END ``` Otherwise, [consider making the query truly dynamic SQL (minding the pitfalls, of course)](http://www.sommarskog.se/dynamic_sql.html).
A quick fix (the comma-wrapped list goes on the pattern side of `LIKE`): ``` (',' + @ids + ',' like '%,' + cast(Id as varchar(10)) + ',%' or @ids is null) and (',' + @names + ',' like '%,' + Name + ',%' or @names is null) ``` So if the user passes `@ids = '1,2'`, the first row gives: ``` ',1,2,' like '%,1,%' ``` It's a good idea to filter out spaces before and after the commas. :)
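The trick above wraps both the list and each value in commas so a substring test can't match a partial id (e.g. `1` inside `11`). The string logic, sketched in Python purely for illustration (not T-SQL):

```python
def in_comma_list(value, csv):
    """True if `value` appears as a whole element of the comma-separated list."""
    padded = "," + csv.replace(" ", "") + ","   # strip spaces, pad with delimiters
    return f",{value}," in padded

in_comma_list("1", "1,2")   # True
in_comma_list("1", "11,2")  # False: without the padding, "1" would match inside "11"
```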
3,259,910
**Sorry for the long post, but most of it is code spelling out my scenario:** I'm trying to execute a dynamic query (hopefully through a stored procedure) to retrieve results based on a variable number of inputs. If I had a table: ``` (dbo).(People) ID Name Age 1 Joe 28 2 Bob 32 3 Alan 26 4 Joe 27 ``` I want to allow the user to search by any of the three columns, no problem: ``` DECLARE @ID int, @Name nvarchar(25), @Age int SET @ID = 1 SET @Name = 'Joe' SET @Age = null SELECT * FROM dbo.People WHERE (ID = @ID or @ID is null) AND (Name like @Name or @Name is null) AND (Age = @Age or @Age is null) ``` And I retrieve the result that I want. Now, if I want to search for multiple values in a column, I can do that no problem: ``` DECLARE @text nvarchar(100) SET @text = '1, 3' DECLARE @ids AS TABLE (n int NOT NULL PRIMARY KEY) --//parse the string into a table DECLARE @TempString nvarchar(300), @Pos int SET @text = LTRIM(RTRIM(@text))+ ',' SET @Pos = CHARINDEX(',', @text, 1) IF REPLACE(@text, ',', '') <> '' BEGIN WHILE @Pos > 0 BEGIN SET @TempString = LTRIM(RTRIM(LEFT(@text, @Pos - 1))) IF @TempString <> '' --just: IF @TempString != '' BEGIN INSERT INTO @ids VALUES (@TempString) END SET @text = RIGHT(@text, LEN(@text) - @Pos) SET @Pos = CHARINDEX(',', @text, 1) END END SELECT * FROM dbo.People WHERE ID IN (SELECT n FROM @ids) ``` Now, my issue is that I can't seem to figure out how to combine the two, since I can't put: ``` WHERE (Name like @Name or @Name is null) AND (Id IN (SELECT n FROM @ids) or @ids is null) ``` because @ids will never be null (since it's a table). Any help would be greatly appreciated! Thanks in advance... and let me know if I can clarify anything
2010/07/15
[ "https://Stackoverflow.com/questions/3259910", "https://Stackoverflow.com", "https://Stackoverflow.com/users/351207/" ]
Try: ``` (Id IN (SELECT n FROM @ids) OR NOT EXISTS (SELECT * FROM @ids)) ```
A quick fix (the comma-wrapped list goes on the pattern side of `LIKE`): ``` (',' + @ids + ',' like '%,' + cast(Id as varchar(10)) + ',%' or @ids is null) and (',' + @names + ',' like '%,' + Name + ',%' or @names is null) ``` So if the user passes `@ids = '1,2'`, the first row gives: ``` ',1,2,' like '%,1,%' ``` It's a good idea to filter out spaces before and after the commas. :)
1,042,915
I am currently using Ubuntu 17.10 and am trying to upgrade to the new 18.04 LTS version. After clicking on the "Upgrade" option in the Software Updater I am presented with a release notes window which has another "Upgrade" option. After choosing it I am presented with a `do-release-upgrade` screen which disappears as soon as it finishes downloading/loading some files. I have tried restarting my machine and running the Software Updater through different means, but to no avail. Is there another way to upgrade to the newest distro version other than simply installing a new image file on my drive? EDIT: getting the following output when trying to upgrade using `do-release-upgrade -d` OUTPUT: ``` Checking for a new Ubuntu release ERROR:root:gedefaultlocale() failed Traceback (most recent call last): File "/usr/lib/python3/dist-packages/UpdateManager/Core/utils.py", line 388, in get_lang (locale_s, encoding) = locale.getdefaultlocale() File "/usr/lib/python3.6/locale.py", line 562, in getdefaultlocale return _parse_localename(localename) File "/usr/lib/python3.6/locale.py", line 490, in _parse_localename raise ValueError('unknown locale: %s' % localename) ValueError: unknown locale: en_IL ERROR:root:gedefaultlocale() failed Traceback (most recent call last): File "/usr/lib/python3/dist-packages/UpdateManager/Core/utils.py", line 388, in get_lang (locale_s, encoding) = locale.getdefaultlocale() File "/usr/lib/python3.6/locale.py", line 562, in getdefaultlocale return _parse_localename(localename) File "/usr/lib/python3.6/locale.py", line 490, in _parse_localename raise ValueError('unknown locale: %s' % localename) ValueError: unknown locale: en_IL ERROR:root:gedefaultlocale() failed Traceback (most recent call last): File "/usr/lib/python3/dist-packages/UpdateManager/Core/utils.py", line 388, in get_lang (locale_s, encoding) = locale.getdefaultlocale() File "/usr/lib/python3.6/locale.py", line 562, in getdefaultlocale return _parse_localename(localename) File "/usr/lib/python3.6/locale.py", line 490, in _parse_localename raise ValueError('unknown locale: %s' % localename) ValueError: unknown locale: en_IL ERROR:root:gedefaultlocale() failed Traceback (most recent call last): File "/usr/lib/python3/dist-packages/UpdateManager/Core/utils.py", line 388, in get_lang (locale_s, encoding) = locale.getdefaultlocale() File "/usr/lib/python3.6/locale.py", line 562, in getdefaultlocale return _parse_localename(localename) File "/usr/lib/python3.6/locale.py", line 490, in _parse_localename raise ValueError('unknown locale: %s' % localename) ValueError: unknown locale: en_IL Upgrades to the development release are only available from the latest supported release. ```
2018/06/02
[ "https://askubuntu.com/questions/1042915", "https://askubuntu.com", "https://askubuntu.com/users/836619/" ]
This is a [known issue](https://launchpad.net/bugs/1646260) with the `en_IL` locale and Python. Probably your `/etc/default/locale` file includes this line: ``` LANG=en_IL ``` Edit that file and change the line to: ``` LANG=en_IL.UTF-8 ``` At next login you'll hopefully be able to upgrade successfully.
Run these two commands to solve the problem: ``` export LC_ALL=en_US.UTF-8 export LANG=en_US.UTF-8 ```
122,906
I'm using IIS6 Manager to set up the SMTP service on Windows Server 2008 Web Edition. There seems to be a conflict (port 25?) which means that I cannot start and stop the Default SMTP server within IIS6. I can start and stop it with the services.msc snap-in, and this is reflected in the state of the SMTP server in IIS6 Manager. I'm worried that none of the settings I want to get at within IIS6 (logging, authentication etc.) are having any effect. None of these settings are available within IIS7 in Web Edition.
2010/03/16
[ "https://serverfault.com/questions/122906", "https://serverfault.com", "https://serverfault.com/users/-1/" ]
To answer the port conflict issue, run `netstat -ano` and check which PID is using port 25. You can then identify the process in Task Manager by matching it with the PID seen in `netstat -ano`. By default inetinfo.exe has control over port 25.
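`netstat -ano` is the right tool on Windows; for a quick programmatic sanity check you can also just try to bind the port. A small cross-platform Python sketch (note that binding port 25 itself usually requires elevated privileges, so the helper takes the port as a parameter):

```python
import socket

def port_in_use(port, host="127.0.0.1"):
    """Try to bind the port; a failed bind means some process already holds it."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        try:
            s.bind((host, port))
            return False
        except OSError:
            return True

port_in_use(25)  # check whether anything owns the SMTP port (may need admin rights)
```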
Hey Vivek, yes, your advice showed up a conflict with MESMTPC, which is obviously the other SMTP server. How many SMTP servers are there?! Which is the best one to use, or are they the same one with different management tools?
5,920,051
I want to install my Tomcat v7.0.12 as a service on my Windows 2008 Server. On the Tomcat page I found [this tutorial](http://tomcat.apache.org/tomcat-7.0-doc/windows-service-howto.html#Installing_services). But there isn't a `service.bat` file in my installation dir. In the services overview of WS2008 it isn't possible to simply create a new service like `new->service ...`
2011/05/07
[ "https://Stackoverflow.com/questions/5920051", "https://Stackoverflow.com", "https://Stackoverflow.com/users/705763/" ]
To start the Tomcat7 service: * Open cmd and go to the bin directory within the "Apache Tomcat 7" folder. You will see something like `C:\..\bin>` * Enter the command below to install and start the service: `C:\..\bin>service.bat install` * If you have an issue starting the tomcat7 service, remove the service with the command: `C:\..\bin>tomcat7 //DS//Tomcat7` * Now the service no longer exists. Try the install command again; this time the service will get installed and started. * Enter the command below to start the tomcat7w monitor application: `C:\..\bin>tomcat7w //MS//Tomcat7` * You will see the Tomcat 7 icon in the system tray. Now the tomcat7 service and tomcat7w will start automatically when Windows starts.
I have spent a couple of hours looking for the magic configuration to get Tomcat 7 running as a service on Windows Server 2008... no luck. I do have a solution though. My install of Tomcat 7 works just fine if I just jump into a console window and run... ``` C:\apache-tomcat-7.0.26\bin\startup.bat ``` At this point another console window pops up and tails the logs *(tail meaning it shows the server logs as they happen)*. **SOLUTION** Run the startup.bat file as a Scheduled Task. 1. Start Menu > Accessories > System Tools > **Task Scheduler** 2. In the Actions Window: **Create Basic Task...** 3. Name the task something like "**Start Tomcat 7**" or something that makes sense a year from now. 4. Click **Next >** 5. Trigger should be set to "**When the computer starts**" 6. Click **Next >** 7. Action should be set to "**Start a program**" 8. Click **Next >** 9. Program/script: should be set to the location of the **startup.bat** file. 10. Click **Next >** 11. Click **Finish** 12. **IF YOUR SERVER IS NOT BEING USED**: Reboot your server to test this functionality
5,920,051
I want to install my Tomcat v7.0.12 as a service on my Windows 2008 Server. On the Tomcat page I found [this tutorial](http://tomcat.apache.org/tomcat-7.0-doc/windows-service-howto.html#Installing_services). But there isn't a `service.bat` file in my installation dir. In the services overview of WS2008 it isn't possible to simply create a new service like `new->service ...`
2011/05/07
[ "https://Stackoverflow.com/questions/5920051", "https://Stackoverflow.com", "https://Stackoverflow.com/users/705763/" ]
I have spent a couple of hours looking for the magic configuration to get Tomcat 7 running as a service on Windows Server 2008... no luck. I do have a solution though. My install of Tomcat 7 works just fine if I just jump into a console window and run... ``` C:\apache-tomcat-7.0.26\bin\startup.bat ``` At this point another console window pops up and tails the logs *(tail meaning it shows the server logs as they happen)*. **SOLUTION** Run the startup.bat file as a Scheduled Task. 1. Start Menu > Accessories > System Tools > **Task Scheduler** 2. In the Actions Window: **Create Basic Task...** 3. Name the task something like "**Start Tomcat 7**" or something that makes sense a year from now. 4. Click **Next >** 5. Trigger should be set to "**When the computer starts**" 6. Click **Next >** 7. Action should be set to "**Start a program**" 8. Click **Next >** 9. Program/script: should be set to the location of the **startup.bat** file. 10. Click **Next >** 11. Click **Finish** 12. **IF YOUR SERVER IS NOT BEING USED**: Reboot your server to test this functionality
There are a lot of answers here, but many overlook a few points. I ran into the same issue, likely because I am a complete neophyte when it comes to Tomcat; even more, I am rather new to web servers in general. I consider myself a somewhat proficient user of Windows, but I guess not proficient enough; in particular I don't work with services much. I did not have a startup.bat or any bat files, because I had only downloaded the **32-bit/64-bit Windows Service Installer**. The bin folder created by that download is small - only 4 files. My colleagues were surprised that I did not have a catalina.bat etc... and I was too. Only the four files below in the bin, and no %CATALINA\_HOME% or %TOMCAT\_HOME% etc... ``` bootstrap.jar tomcat-juli.jar Tomcat7.exe Tomcat7w.exe ``` With this setup I had some frustrations, as setting parameters is done via the GUI widget - very helpful, I might add. So nearly all the answers I perused were not immediately applicable, as many said "go to bin and issue the startup.bat file". I am a neophyte, but not so much that I couldn't look into the bin and start such a file if it existed! For my simple purposes (again, remember that I am a neophyte at Tomcat and even web servers) all I wanted was to be able to start up and shut down the Tomcat server from a cmd prompt window. Nothing too heavy duty. I am embarrassed to say how simple it is; it is probably evident to anyone with a shred of experience with services and such. ``` To Start server: <Tomcat Root>/bin>Tomcat7.exe start To Stop server: <Tomcat Root>/bin>Tomcat7.exe stop ``` Found here - <http://crunchify.com/how-to-start-stop-apache-tomcat-server-via-command-line-setup-as-windows-service/> I did not realize there was a separate download, the 64-bit Windows zip file, which has a Tomcat server and all the standard array of cmd-line Tomcat management tools. This zip file has all the common startup/shutdown scripts and batch files for Windows, including catalina.bat/.sh etc... With those, all the above answers make sense and are rather trivial. Remember, I am a neophyte when it comes to Tomcat and web servers. It appears these two downloads are somewhat mutually exclusive, in the sense that if I download and install both the 32-bit/64-bit Windows Service Installer version and the 64-bit Windows zip file, the startup.bat file in the 64-bit Windows zip version will not run or interact with the 32-bit/64-bit Windows Service Installer Tomcat instance. But I am not sure about this point.
5,920,051
I want to install my Tomcat v7.0.12 as a service on my Windows 2008 Server. On the Tomcat page I found [this tutorial](http://tomcat.apache.org/tomcat-7.0-doc/windows-service-howto.html#Installing_services). But there isn't a `service.bat` file in my installation dir. In the services overview of WS2008 it isn't possible to simply create a new service like `new->service ...`
2011/05/07
[ "https://Stackoverflow.com/questions/5920051", "https://Stackoverflow.com", "https://Stackoverflow.com/users/705763/" ]
To start the Tomcat7 service: * Open cmd and go to the bin directory within the "Apache Tomcat 7" folder. You will see something like `C:\..\bin>` * Enter the command below to install and start the service: `C:\..\bin>service.bat install` * If you have an issue starting the tomcat7 service, remove the service with the command: `C:\..\bin>tomcat7 //DS//Tomcat7` * Now the service no longer exists. Try the install command again; this time the service will get installed and started. * Enter the command below to start the tomcat7w monitor application: `C:\..\bin>tomcat7w //MS//Tomcat7` * You will see the Tomcat 7 icon in the system tray. Now the tomcat7 service and tomcat7w will start automatically when Windows starts.
1. Edit service.bat - swap two lines so that they appear in the following order: `if not "%JAVA_HOME%" == "" goto gotJdkHome` and then `if not "%JRE_HOME%" == "" goto gotJreHome` 2. Open cmd and run the command `service.bat install` 3. Open Services and find "Apache Tomcat 7.0 Tomcat7". Right-click it, choose Properties, and change its startup type to Automatic (Delayed Start). 4. Reboot the machine to verify that the service starts automatically
5,920,051
I want to install my Tomcat v7.0.12 as a service on my Windows 2008 Server. On the Tomcat page I found [this tutorial](http://tomcat.apache.org/tomcat-7.0-doc/windows-service-howto.html#Installing_services). But there isn't a `service.bat` file in my installation dir. In the services overview of WS2008 it isn't possible to simply create a new service like `new->service ...`
2011/05/07
[ "https://Stackoverflow.com/questions/5920051", "https://Stackoverflow.com", "https://Stackoverflow.com/users/705763/" ]
1. Edit service.bat - swap two lines so that they appear in the following order: `if not "%JAVA_HOME%" == "" goto gotJdkHome` and then `if not "%JRE_HOME%" == "" goto gotJreHome` 2. Open cmd and run the command `service.bat install` 3. Open Services and find "Apache Tomcat 7.0 Tomcat7". Right-click it, choose Properties, and change its startup type to Automatic (Delayed Start). 4. Reboot the machine to verify that the service starts automatically
I had a similar problem: there isn't a **service.bat** in the zip version of Tomcat that I downloaded ages ago. I simply downloaded a fresh [64-bit Windows zip](http://mirror.nus.edu.sg/apache/tomcat/tomcat-7/v7.0.37/bin/apache-tomcat-7.0.37-windows-x64.zip) version of Tomcat from <http://tomcat.apache.org/download-70.cgi> and replaced my existing **tomcat\bin** folder with the one I just downloaded (remember to keep a backup first!). Start a command prompt > navigate to the tomcat\bin directory > issue the command: `service.bat install` Hope that helps!
5,920,051
I want to install my Tomcat v7.0.12 as a service on my Windows 2008 Server. On the Tomcat page I found [this tutorial](http://tomcat.apache.org/tomcat-7.0-doc/windows-service-howto.html#Installing_services). But there isn't a `service.bat` file in my installation dir. In the services overview of WS2008 it isn't possible to simply create a new service like `new->service ...`
2011/05/07
[ "https://Stackoverflow.com/questions/5920051", "https://Stackoverflow.com", "https://Stackoverflow.com/users/705763/" ]
I just had the same issue and could only install Tomcat 7 as a service using the "32-bit/64-bit Windows Service Installer" version of Tomcat: <http://tomcat.apache.org/download-70.cgi>
There are a lot of answers here, but many overlook a few points. I ran into the same issue, likely because I am a complete neophyte when it comes to Tomcat; even more, I am rather new to web servers in general. I consider myself a somewhat proficient user of Windows, but I guess not proficient enough; in particular I don't work with services much. I did not have a startup.bat or any bat files, because I had only downloaded the **32-bit/64-bit Windows Service Installer**. The bin folder created by that download is small - only 4 files. My colleagues were surprised that I did not have a catalina.bat etc... and I was too. Only the four files below in the bin, and no %CATALINA\_HOME% or %TOMCAT\_HOME% etc... ``` bootstrap.jar tomcat-juli.jar Tomcat7.exe Tomcat7w.exe ``` With this setup I had some frustrations, as setting parameters is done via the GUI widget - very helpful, I might add. So nearly all the answers I perused were not immediately applicable, as many said "go to bin and issue the startup.bat file". I am a neophyte, but not so much that I couldn't look into the bin and start such a file if it existed! For my simple purposes (again, remember that I am a neophyte at Tomcat and even web servers) all I wanted was to be able to start up and shut down the Tomcat server from a cmd prompt window. Nothing too heavy duty. I am embarrassed to say how simple it is; it is probably evident to anyone with a shred of experience with services and such. ``` To Start server: <Tomcat Root>/bin>Tomcat7.exe start To Stop server: <Tomcat Root>/bin>Tomcat7.exe stop ``` Found here - <http://crunchify.com/how-to-start-stop-apache-tomcat-server-via-command-line-setup-as-windows-service/> I did not realize there was a separate download, the 64-bit Windows zip file, which has a Tomcat server and all the standard array of cmd-line Tomcat management tools. This zip file has all the common startup/shutdown scripts and batch files for Windows, including catalina.bat/.sh etc... With those, all the above answers make sense and are rather trivial. Remember, I am a neophyte when it comes to Tomcat and web servers. It appears these two downloads are somewhat mutually exclusive, in the sense that if I download and install both the 32-bit/64-bit Windows Service Installer version and the 64-bit Windows zip file, the startup.bat file in the 64-bit Windows zip version will not run or interact with the 32-bit/64-bit Windows Service Installer Tomcat instance. But I am not sure about this point.
5,920,051
I want to install my tomcat v7.0.12 as a service on my Windows 2008 Server. On the tomcat page I found [this tutorial](http://tomcat.apache.org/tomcat-7.0-doc/windows-service-howto.html#Installing_services). But there isn't a `service.bat` file in my installation dir. In the service overview of WS2008 it isn't possible to easily create a new service like `new->service ...`
2011/05/07
[ "https://Stackoverflow.com/questions/5920051", "https://Stackoverflow.com", "https://Stackoverflow.com/users/705763/" ]
You can find the solution [here](http://tomcat.apache.org/tomcat-7.0-doc/windows-service-howto.html)! Install the service named 'Tomcat7' ``` C:\>Tomcat\bin\service.bat install ``` There is a 2nd optional parameter that lets you specify the name of the service, as displayed in Windows services. Install the service named 'MyTomcatService' ``` C:\>Tomcat\bin\service.bat install MyTomcatService ```
There are a lot of answers here, but many overlook a few points. I ran into the same issue and it was likely due to a combination of being a complete neophyte when it comes to tomcat. Even more, I am rather new to web servers in general. I consider myself a somewhat proficient user of windows, but I guess not proficient enough. In particular I don't work with services too much. I did not have a startup.bat or any bat files. I only downloaded the **32-bit/64-bit Windows Service Installer**. The bin that is created for that download is small - only 4 files. My colleagues were surprised that I did not have a catalina.bat etc... and I was too. Only the below four files in the bin. And no %CATALINA\_HOME% or %TOMCAT\_HOME% etc... ``` bootstrap.jar tomcat-juli.jar Tomcat7.exe Tomcat7w.exe ``` With this setup I had some frustrations as setting parameters is done via the gui widget - very helpful I might add. So nearly all the answers I have perused were not immediately applicable as many said, "go to bin and issue the startup.bat file". I am a neophyte, but not so much that I couldn't look into the bin and start such a file if it existed! For my simple purposes (again remember that I am a neophyte at tomcat and even web servers) all I wanted to do was to be able to start up and shut down the tomcat server from a cmd prompt window. Nothing too heavy duty. I am embarrassed to say how simple it is. It is probably evident to anyone with a shred of experience with services and such. ``` To Start server: <Tomcat Root>/bin>Tomcat7.exe start To Stop server: <Tomcat Root>/bin>Tomcat7.exe stop ``` Found here - <http://crunchify.com/how-to-start-stop-apache-tomcat-server-via-command-line-setup-as-windows-service/> I did not realize there was a separate download, the 64-bit Windows zip file, that has a tomcat server and all the standard array of cmd line tomcat management tools. 
This zip file has all the common startup/shutdown scripts, batch files for windows, including catalina.bat/.sh etc... Then all the above answers make sense and are rather trivial. Remember I am a neophyte when it comes to tomcat and web servers. It appears these two downloads are somewhat mutually exclusive in the sense that if I download and install the 32-bit/64-bit Windows Service Installer version and the 64-bit Windows zip file the startup.bat file in the 64-bit Windows zip file version will not run or interact with the 32-bit/64-bit Windows Service Installer tomcat instance. But I am not sure about this point.
5,920,051
I want to install my tomcat v7.0.12 as a service on my Windows 2008 Server. On the tomcat page I found [this tutorial](http://tomcat.apache.org/tomcat-7.0-doc/windows-service-howto.html#Installing_services). But there isn't a `service.bat` file in my installation dir. In the service overview of WS2008 it isn't possible to easily create a new service like `new->service ...`
2011/05/07
[ "https://Stackoverflow.com/questions/5920051", "https://Stackoverflow.com", "https://Stackoverflow.com/users/705763/" ]
I have spent a couple of hours looking for the magic configuration to get Tomcat 7 running as a service on Windows Server 2008... no luck. I do have a solution though. My install of Tomcat 7 works just fine if I just jump into a console window and run... ``` C:\apache-tomcat-7.0.26\bin\startup.bat ``` At this point another console window pops up and tails the logs *(tail meaning show the server logs as they happen)*. **SOLUTION** Run the startup.bat file as a Scheduled Task. 1. Start Menu > Accessories > System Tools > **Task Scheduler** 2. In the Actions Window: **Create Basic Task...** 3. Name the task something like "**Start Tomcat 7**" or something that makes sense a year from now. 4. Click **Next >** 5. Trigger should be set to "**When the computer starts**" 6. Click **Next >** 7. Action should be set to "**Start a program**" 8. Click **Next >** 9. Program/script: should be set to the location of the **startup.bat** file. 10. Click **Next >** 11. Click **Finish** 12. **IF YOUR SERVER IS NOT BEING USED**: Reboot your server to test this functionality
I had a similar problem, there isn't a **service.bat** in the zip version of tomcat that I downloaded ages ago. I simply downloaded a new [64-bit Windows zip](http://mirror.nus.edu.sg/apache/tomcat/tomcat-7/v7.0.37/bin/apache-tomcat-7.0.37-windows-x64.zip) version of tomcat from <http://tomcat.apache.org/download-70.cgi> and replaced my existing **tomcat\bin** folder with the one I just downloaded (Remember to keep a backup first!). Start command prompt > navigate to the tomcat\bin directory > issue the command: `service.bat install` Hope that helps!
5,920,051
I want to install my tomcat v7.0.12 as a service on my Windows 2008 Server. On the tomcat page I found [this tutorial](http://tomcat.apache.org/tomcat-7.0-doc/windows-service-howto.html#Installing_services). But there isn't a `service.bat` file in my installation dir. In the service overview of WS2008 it isn't possible to easily create a new service like `new->service ...`
2011/05/07
[ "https://Stackoverflow.com/questions/5920051", "https://Stackoverflow.com", "https://Stackoverflow.com/users/705763/" ]
I just had the same issue and could only install tomcat7 as a service using the "32-bit/64-bit Windows Service Installer" version of tomcat: <http://tomcat.apache.org/download-70.cgi>
I had a similar problem, there isn't a **service.bat** in the zip version of tomcat that I downloaded ages ago. I simply downloaded a new [64-bit Windows zip](http://mirror.nus.edu.sg/apache/tomcat/tomcat-7/v7.0.37/bin/apache-tomcat-7.0.37-windows-x64.zip) version of tomcat from <http://tomcat.apache.org/download-70.cgi> and replaced my existing **tomcat\bin** folder with the one I just downloaded (Remember to keep a backup first!). Start command prompt > navigate to the tomcat\bin directory > issue the command: `service.bat install` Hope that helps!
5,920,051
I want to install my tomcat v7.0.12 as a service on my Windows 2008 Server. On the tomcat page I found [this tutorial](http://tomcat.apache.org/tomcat-7.0-doc/windows-service-howto.html#Installing_services). But there isn't a `service.bat` file in my installation dir. In the service overview of WS2008 it isn't possible to easily create a new service like `new->service ...`
2011/05/07
[ "https://Stackoverflow.com/questions/5920051", "https://Stackoverflow.com", "https://Stackoverflow.com/users/705763/" ]
You can find the solution [here](http://tomcat.apache.org/tomcat-7.0-doc/windows-service-howto.html)! Install the service named 'Tomcat7' ``` C:\>Tomcat\bin\service.bat install ``` There is a 2nd optional parameter that lets you specify the name of the service, as displayed in Windows services. Install the service named 'MyTomcatService' ``` C:\>Tomcat\bin\service.bat install MyTomcatService ```
Looks like now they have the bat in the zip as well. Note that you can use the Windows sc command to do more, e.g. ``` sc config tomcat7 start= auto ``` Yes, the space before auto is NEEDED
5,920,051
I want to install my tomcat v7.0.12 as a service on my Windows 2008 Server. On the tomcat page I found [this tutorial](http://tomcat.apache.org/tomcat-7.0-doc/windows-service-howto.html#Installing_services). But there isn't a `service.bat` file in my installation dir. In the service overview of WS2008 it isn't possible to easily create a new service like `new->service ...`
2011/05/07
[ "https://Stackoverflow.com/questions/5920051", "https://Stackoverflow.com", "https://Stackoverflow.com/users/705763/" ]
To start the Tomcat7 service: * Open cmd and go to the bin directory within the "Apache Tomcat 7" folder. You will see something like `C:\..\bin>` * Enter the command below to install and start the service: `C:\..\bin>service.bat install`. The service will be started now. * Enter the command below to start the tomcat7w monitor service. If you have an issue starting the tomcat7 service, first remove the service with the command: `C:\..\bin>tomcat7 //DS//Tomcat7` * Now the service will no longer exist. Try the install command again; the service will get installed and started: `C:\..\bin>tomcat7w //MS//tomcat7w` * You will see the Tomcat 7 icon in the system tray. Now the tomcat7 service and tomcat7w will start automatically when Windows starts.
I just had the same issue and could only install tomcat7 as a service using the "32-bit/64-bit Windows Service Installer" version of tomcat: <http://tomcat.apache.org/download-70.cgi>
1,125,085
In Ubuntu 16.04 I installed a Compiz plugin with an alternative `alt`-`tab` switcher that had the nifty feature that until I let go of the `Alt` key, it *hid* all other windows and showed only the one I was about to switch to. This is very useful when one has a bunch of open terminals that don't look all that different as thumbnailed previews. After upgrading to Ubuntu 18.04 this can't be used anymore (save for explicitly switching back to Unity, which I'd rather avoid for unrelated reasons). Does anyone know of a similar switcher I could install for the GNOME desktop?
2019/03/12
[ "https://askubuntu.com/questions/1125085", "https://askubuntu.com", "https://askubuntu.com/users/410876/" ]
You can use the **[Coverflow Alt-Tab](https://extensions.gnome.org/extension/97/coverflow-alt-tab/)** extension for GNOME shell. It's a > > Replacement of `Alt`-`Tab`, iterates through windows in a [cover-flow](https://en.wikipedia.org/wiki/Cover_Flow) manner. > > > [![enter image description here](https://i.stack.imgur.com/9w5sa.png)](https://i.stack.imgur.com/9w5sa.png) Refer to this for installing and managing GNOME extensions: [How do I install and manage GNOME Shell extensions?](https://askubuntu.com/q/75530/480481) --- Another alternative would be using the `Alt`+`Esc` combination. It doesn't show any overlay or provide anything fancy, it just switches to the *next* window and so on.
If you like the old panel/taskbar style interface, where you can always see the titles of your windows and identify windows by their fixed locations on the taskbar, you can try the `gnome dash to panel` extension. If you use multiple workspaces, you can customize it in the software center after installing `gnome-tweak-tool`. I had to tweak it A LOT to adjust the padding, hide unnecessary buttons, ungroup applications, isolate workspaces etc, but I found that in the end it was far more productive than trying to hunt for the right terminal among a sea of identical terminal previews that rearrange themselves continuously.
9,131,083
I have a select stored procedure and I am trying to make it so that for the results it brings down it also updates a column called `Downloaded`, marking those rows as downloaded. For example, if I pull down 10 rows, I also want to update the `Downloaded` column to true for those 10 rows, all in the same stored procedure. Is this possible? This is my sp so far; it pulls down the data. ``` ALTER PROCEDURE [dbo].[GetLeads] @DateTo datetime = null, @DateFrom datetime = null AS SELECT name , lastname , title , company , address , address2 , city , [state] , zip , country , stamptime FROM lead where ((@DateTo is null AND @DateFrom IS null) or (stamptime BETWEEN @DateTo AND @DateFrom)) ``` Thanks!
2012/02/03
[ "https://Stackoverflow.com/questions/9131083", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
You can simply `OUTPUT` the updated rows; ``` UPDATE lead SET Downloaded = 1 OUTPUT INSERTED.* WHERE ((@DateTo is null AND @DateFrom IS null) or (stamptime BETWEEN @DateTo AND @DateFrom)) ``` This updates, then returns the updated rows in a single statement.
Continuing on vulkanino's comment answer, something like this: ``` ALTER PROCEDURE [dbo].[GetLeads] @DateTo datetime = null, @DateFrom datetime = null AS UPDATE lead SET Downloaded = 1 WHERE ((@DateTo is null AND @DateFrom IS null) or (stamptime BETWEEN @DateTo AND @DateFrom)) SELECT name , lastname , title , company , address , address2 , city , [state] , zip , country , stamptime FROM lead where ((@DateTo is null AND @DateFrom IS null) or (stamptime BETWEEN @DateTo AND @DateFrom)) ```
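The update-then-select pattern in the answer above is portable beyond T-SQL. Below is a minimal sketch of the same idea in Python with sqlite3; the `lead` table and `Downloaded` column mirror the question, but the sample rows and the simplified column list are invented for illustration:

```python
import sqlite3

# Sketch of the update-then-select pattern, using SQLite instead of SQL Server.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE lead (name TEXT, stamptime TEXT, Downloaded INTEGER DEFAULT 0)")
conn.executemany("INSERT INTO lead (name, stamptime) VALUES (?, ?)",
                 [("alice", "2012-01-01"), ("bob", "2012-02-01")])

def get_leads(conn, date_to=None, date_from=None):
    # Same predicate and parameters for both statements, as in the stored procedure.
    where = "(? IS NULL AND ? IS NULL) OR (stamptime BETWEEN ? AND ?)"
    params = (date_to, date_from, date_to, date_from)
    # First mark the matching rows as downloaded...
    conn.execute(f"UPDATE lead SET Downloaded = 1 WHERE {where}", params)
    # ...then return those same rows to the caller.
    return conn.execute(f"SELECT name, stamptime FROM lead WHERE {where}", params).fetchall()

rows = get_leads(conn)
```

Because both statements share the same WHERE clause and parameters, the rows returned are exactly the rows that were just flagged.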
9,131,083
I have a select stored procedure and I am trying to make it so that for the results it brings down it also updates a column called `Downloaded`, marking those rows as downloaded. For example, if I pull down 10 rows, I also want to update the `Downloaded` column to true for those 10 rows, all in the same stored procedure. Is this possible? This is my sp so far; it pulls down the data. ``` ALTER PROCEDURE [dbo].[GetLeads] @DateTo datetime = null, @DateFrom datetime = null AS SELECT name , lastname , title , company , address , address2 , city , [state] , zip , country , stamptime FROM lead where ((@DateTo is null AND @DateFrom IS null) or (stamptime BETWEEN @DateTo AND @DateFrom)) ``` Thanks!
2012/02/03
[ "https://Stackoverflow.com/questions/9131083", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
Continuing on vulkanino's comment answer, something like this: ``` ALTER PROCEDURE [dbo].[GetLeads] @DateTo datetime = null, @DateFrom datetime = null AS UPDATE lead SET Downloaded = 1 WHERE ((@DateTo is null AND @DateFrom IS null) or (stamptime BETWEEN @DateTo AND @DateFrom)) SELECT name , lastname , title , company , address , address2 , city , [state] , zip , country , stamptime FROM lead where ((@DateTo is null AND @DateFrom IS null) or (stamptime BETWEEN @DateTo AND @DateFrom)) ```
Your best bet might be to use an OUTPUT statement with the UPDATE. <http://blog.sqlauthority.com/2007/10/01/sql-server-2005-output-clause-example-and-explanation-with-insert-update-delete/> ``` DECLARE @TEMPTABLE TABLE ( name <type> , lastname <type> , title <type> , company <type> , address <type> , address2 <type> , city <type> , state <type> , zip <type> , country <type> , stamptime <type> ) UPDATE a SET a.Downloaded = 1 OUTPUT Inserted.name, Inserted.lastname, Inserted.title, etc. INTO @TEMPTABLE FROM lead a WHERE ((@DateTo IS NULL AND @DateFrom IS NULL) OR (a.stamptime BETWEEN @DateTo AND @DateFrom)) SELECT * FROM @TEMPTABLE ```
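What `OUTPUT ... INTO` buys you can be pictured outside SQL as well: update rows in place while capturing each modified row into a separate collection. A language-agnostic Python sketch, with invented data, where a plain list plays the role of the `@TEMPTABLE`:

```python
# Sample rows standing in for the lead table; values are made up.
leads = [
    {"name": "alice", "stamptime": "2012-01-01", "downloaded": False},
    {"name": "bob",   "stamptime": "2012-02-01", "downloaded": False},
]

def update_and_capture(rows, predicate):
    captured = []  # plays the role of @TEMPTABLE
    for row in rows:
        if predicate(row):
            row["downloaded"] = True    # the UPDATE
            captured.append(dict(row))  # the OUTPUT ... INTO clause
    return captured

# Update (and capture) only the rows in the requested date range.
changed = update_and_capture(leads, lambda r: r["stamptime"] >= "2012-02-01")
```

The caller then reads `changed` just as the stored procedure reads `SELECT * FROM @TEMPTABLE`.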
9,131,083
I have a select stored procedure and I am trying to make it so that for the results it brings down it also updates a column called `Downloaded`, marking those rows as downloaded. For example, if I pull down 10 rows, I also want to update the `Downloaded` column to true for those 10 rows, all in the same stored procedure. Is this possible? This is my sp so far; it pulls down the data. ``` ALTER PROCEDURE [dbo].[GetLeads] @DateTo datetime = null, @DateFrom datetime = null AS SELECT name , lastname , title , company , address , address2 , city , [state] , zip , country , stamptime FROM lead where ((@DateTo is null AND @DateFrom IS null) or (stamptime BETWEEN @DateTo AND @DateFrom)) ``` Thanks!
2012/02/03
[ "https://Stackoverflow.com/questions/9131083", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
You can simply `OUTPUT` the updated rows; ``` UPDATE lead SET Downloaded = 1 OUTPUT INSERTED.* WHERE ((@DateTo is null AND @DateFrom IS null) or (stamptime BETWEEN @DateTo AND @DateFrom)) ``` This updates, then returns the updated rows in a single statement.
Your best bet might be to use an OUTPUT statement with the UPDATE. <http://blog.sqlauthority.com/2007/10/01/sql-server-2005-output-clause-example-and-explanation-with-insert-update-delete/> ``` DECLARE @TEMPTABLE TABLE ( name <type> , lastname <type> , title <type> , company <type> , address <type> , address2 <type> , city <type> , state <type> , zip <type> , country <type> , stamptime <type> ) UPDATE a SET a.Downloaded = 1 OUTPUT Inserted.name, Inserted.lastname, Inserted.title, etc. INTO @TEMPTABLE FROM lead a WHERE ((@DateTo IS NULL AND @DateFrom IS NULL) OR (a.stamptime BETWEEN @DateTo AND @DateFrom)) SELECT * FROM @TEMPTABLE ```
45,184,169
I have a create form where, if the specific `Medicine` exists, its number of supply will be updated (the new entry's quantity is added to it); however, if the specific `Medicine` doesn't exist, it will create a new batch of data. I'm having trouble understanding how update works in MVC. Here is the error: Store update, insert, or delete statement affected an unexpected number of rows (0). Entities may have been modified or deleted since entities were loaded. Here is my controller: ``` public ActionResult Create([Bind(Include = "SupplyID,MedicineID,Expiration,NumberOfSupply")] Supply supply) { if (ModelState.IsValid) { bool supplyExsist = db.Supplies.Any(x => x.Expiration == supply.Expiration && x.MedicineID == supply.MedicineID); if (supplyExsist) { var currentSupply = (from x in db.Supplies //get current supply where x.MedicineID == supply.MedicineID && x.Expiration == supply.Expiration select x.NumberOfSupply).First(); db.Entry(supply).State = EntityState.Modified; supply.NumberOfSupply = currentSupply + supply.NumberOfSupply; db.SaveChanges(); return RedirectToAction("Index"); } else { db.Supplies.Add(supply); db.SaveChanges(); return RedirectToAction("Index"); } } ViewBag.MedicineID = new SelectList(db.Medicines, "MedicineID", "MedicineName", supply.MedicineID); return View(supply); } ``` Model: ``` public class Supply { [Key] public int SupplyID { get; set; } [ForeignKey("Medicine")] public int MedicineID { get; set; } public Medicine Medicine { get; set; } [DataType(DataType.Date)] public DateTime Expiration { get; set; } [Display(Name = "Quantity")] [Range(1, int.MaxValue, ErrorMessage = "The value must be greater than 0")] public int NumberOfSupply { get; set; } } ```
2017/07/19
[ "https://Stackoverflow.com/questions/45184169", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
Here you go with the solution <https://jsfiddle.net/1aL33df4/> ```js $('li').hover(function(){ if( typeof $(this).prev().attr('id') != 'undefined') console.log("Previous Element ID: " + $(this).prev().attr('id')); if( typeof $(this).next().attr('id') != 'undefined') console.log("Next Element ID: " + $(this).next().attr('id')); }); ``` ```html <script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script> <ul> <li id="1">First <span class="testId">14</span></li> <li id="2" class="active">Second <span class="testId">15</span></li> <li id="3">Third <span class="testId">16</span></li> </ul> ```
1. Use .next() or .prev() to get respective li 2. use .attr() to get the id 3. use hasClass() to test if li has active class ```js var pliid = $("ul li.active").prev('li').attr('id'); var nliid = $("ul li.active").next('li').attr('id'); console.log("prev li id is " + pliid) console.log("prev li is active " + $("ul li.active").prev('li').hasClass('active')) console.log("prev li textid text " + $("ul li.active").prev('li').find('.testId').text()) console.log("next li id is " + nliid) console.log("next li is active " + $("ul li.active").next('li').hasClass('active')) console.log("next li textid text " + $("ul li.active").next('li').find('.testId').text()) ``` ```html <script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script> <ul> <li id="1">First <span class="testId">14</span></li> <li id="2" class="active">Second <span class="testId">15</span></li> <li id="3">Third <span class="testId">16</span></li> </ul> ```
67,233,521
I just updated from Terraform v0.11.11 to v0.12.1 and I am now seeing these issues. I get the whole list-in-a-list thing, but I have tried every which way to fix this with no luck. I have removed the LBracket after ***cidr\_blocks = [*** with no luck. I have tried wrapping in ***${}***; no go. I have tried removing the lbrackets after the var.cidr\_groups.. like so with no luck ***var.cidr\_groups"mainoffice"***. Where am I going wrong here? ``` resource "aws_security_group" "common_access" { name = "common_access" description = "common_access" vpc_id = "${aws_vpc.myvcp.id}" ingress { from_port = 0 to_port = 0 protocol = "-1" cidr_blocks = [ var.cidr_groups["mainoffice"], var.cidr_groups["manchester"], var.cidr_groups["singapore"], var.cidr_groups["jena"], var.cidr_groups["fremont"], var.cidr_groups["indianapolis"], ] } egress { from_port = 0 to_port = 0 protocol = "-1" # TF-UPGRADE-TODO: In Terraform v0.10 and earlier, it was sometimes necessary to # force an interpolation expression to be interpreted as a list by wrapping it # in an extra set of list brackets. That form was supported for compatibility in # v0.11, but is no longer supported in Terraform v0.12. # # If the expression in the following list itself returns a list, remove the # brackets to avoid interpretation as a list of lists. If the expression # returns a single list item then leave it as-is and remove this TODO comment. cidr_blocks = var.cidr_groups["PublicAll"] } tags = { Name = "common_access" owner = var.contact terraform = true } } ``` Variables file for the ***var.cidr\_groups*** looks like this ``` variable "cidr_groups" { default = { mainoffice = ["10.200.0.0/15"] manchester = ["10.201.0.0/16"] singapore = ["10.202.0.0/16"] jena = ["10.203.0.0/16"] fremont = ["10.204.0.0/16"] indianapolis = ["10.205.0.0/16"] } } ```
2021/04/23
[ "https://Stackoverflow.com/questions/67233521", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7959591/" ]
You can check out the Document Viewer from the LEADTOOLS Nuget here: WPF Viewer: <https://www.nuget.org/packages/Leadtools.Document.Viewer.Wpf/> WinForms Viewer: <https://www.nuget.org/packages/Leadtools.Document.Viewer.WinForms/> Just as a disclaimer, I am an employee of this vendor. It supports viewing PDFs and other document and image formats as document or image. This works without any installations or running ActiveX components.
You can evaluate PDF Viewer controls for WinForms/WPF by DevExpress. You can easily customize these controls as your needs dictate. * [PDF Viewer for WinForms](https://www.devexpress.com/products/net/controls/winforms/pdf-viewer/) * [PDF Viewer for WPF](https://www.devexpress.com/products/net/controls/wpf/pdf_viewer/)
9,116,914
There is a table view with three sections. The last section may contain many items. I need to show a button on the navigation bar as soon as the table view is showing only the last section (e.g. the user scrolled the cells up so that the first and second sections became invisible). So basically, how do I detect that the table view is now showing only the last section and cells from the first two sections are no longer visible?
2012/02/02
[ "https://Stackoverflow.com/questions/9116914", "https://Stackoverflow.com", "https://Stackoverflow.com/users/745250/" ]
UITableView class has two methods: > > > ``` > - (NSArray *)indexPathsForVisibleRows; > - (NSArray *)indexPathsForRowsInRect:(CGRect)rect; > > ``` > > Have you tried them? Do they help?
You could try to iterate over UITableView's indexPathsForVisibleRows to see if the cells in the relevant section are contained in the array.
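The check both answers describe reduces to set logic on the visible index paths. A small sketch of that logic in Python, with each index path modelled as a (section, row) tuple as returned conceptually by `indexPathsForVisibleRows` (the section numbers below are illustrative):

```python
def only_last_section_visible(visible_index_paths, last_section):
    # Collect the distinct sections that currently have visible rows.
    sections = {section for section, _row in visible_index_paths}
    # True only when something is visible and it all belongs to last_section.
    return bool(sections) and sections == {last_section}

# Rows from sections 1 and 2 are still on screen: keep the button hidden.
hidden = only_last_section_visible([(1, 3), (2, 0), (2, 1)], last_section=2)
# Only rows from the last section remain: show the navigation bar button.
shown = only_last_section_visible([(2, 4), (2, 5)], last_section=2)
```

In the actual view controller this check would run as the user scrolls (e.g. from a scroll-view delegate callback), toggling the button whenever the result flips.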
9,116,914
There is a table view with three sections. The last section may contain many items. I need to show a button on the navigation bar as soon as the table view is showing only the last section (e.g. the user scrolled the cells up so that the first and second sections became invisible). So basically, how do I detect that the table view is now showing only the last section and cells from the first two sections are no longer visible?
2012/02/02
[ "https://Stackoverflow.com/questions/9116914", "https://Stackoverflow.com", "https://Stackoverflow.com/users/745250/" ]
UITableView class has two methods: > > > ``` > - (NSArray *)indexPathsForVisibleRows; > - (NSArray *)indexPathsForRowsInRect:(CGRect)rect; > > ``` > > Have you tried them? Do they help?
You can use the tableview delegate to check which section's cells are being created by checking the cell's indexPath: * (UITableViewCell \*)tableView:(UITableView \*)tableView cellForRowAtIndexPath:(NSIndexPath \*)indexPath; Or you can use the below method to get the cells that are visible and then check which section each belongs to. * (NSArray \*)indexPathsForVisibleRows;
10,908,376
I can compile this program, which was provided to me but which I must develop further. I have some questions about it: ``` #include <sys/types.h> #include <signal.h> #include <stdio.h> #include <stdlib.h> #include <unistd.h> #include <errno.h> #define TIMEOUT (20) int main(int argc, char *argv[]) { pid_t pid; if(argc > 1 && strncmp(argv[1], "-help", strlen(argv[1])) == 0) { fprintf(stderr, "Usage: RunSafe Prog [CommandLineArgs]\n\nRunSafe takes as arguments:\nthe program to be run (Prog) and its command line arguments (CommandLineArgs) (if any)\n\nRunSafe will execute Prog with its command line arguments and\nterminate it and any remaining childprocesses after %d seconds\n", TIMEOUT); exit(0); } if((pid = fork()) == 0) /* Fork off child */ { execvp(argv[1], argv+1); fprintf(stderr,"RunSafe failed to execute: %s\n",argv[1]); perror("Reason"); kill(getppid(),SIGKILL); /* kill waiting parent */ exit(errno); /* execvp failed, no child - exit immediately */ } else if(pid != -1) { sleep(TIMEOUT); if(kill(0,0) == 0) /* are there processes left? */ { fprintf(stderr,"\nRunSafe: Attempting to kill remaining (child) processes\n"); kill(0, SIGKILL); /* send SIGKILL to all child processes */ } } else { fprintf(stderr,"RunSafe failed to fork off child process\n"); perror("Reason"); } } ``` What does my warning mean when I compile it? ``` $ gcc -o RunSafe RunSafe.c -lm RunSafe.c: In function ‘main’: RunSafe.c:30:44: warning: incompatible implicit declaration of built-in function ‘strlen’ [enabled by default] ``` Why can't I execute the file? ``` $ file RunSafe RunSafe: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.24, BuildID[sha1]=0x0a128c8d71e16bfde4dbc316bdc329e4860a195f, not stripped ubuntu@ubuntu:/media/Lexar$ sudo chmod 777 RunSafe ubuntu@ubuntu:/media/Lexar$ ./RunSafe bash: ./RunSafe: Permission denied ubuntu@ubuntu:/media/Lexar$ sudo ./RunSafe sudo: ./RunSafe: command not found ```
2012/06/06
[ "https://Stackoverflow.com/questions/10908376", "https://Stackoverflow.com", "https://Stackoverflow.com/users/108207/" ]
First, you need to `#include <string.h>` to get rid of that warning. Second, the OS is probably preventing you from executing programs on the `/media/Lexar` filesystem, no matter what their permission bits are. If you type `mount` you'll probably see the `noexec` option for `/media/Lexar`.
> > warning: incompatible implicit declaration of built-in function ‘strlen’ [enabled by default] > > > You need to include `#include<string.h>` because `strlen()` is declared in it. Try running the exe from some other location in your filesystem rather than the mounted partition; as the error indicates, for some reason you don't have permissions on that mounted partition.
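The `noexec` diagnosis from the first answer can be confirmed by reading the mount table. Below is a small Python sketch that scans `/proc/mounts`-style output for the options of a given mount point; the sample line is invented for illustration, not taken from the question:

```python
def mount_options(mounts_text, mount_point):
    # Each /proc/mounts line is: device mountpoint fstype options dump pass
    for line in mounts_text.splitlines():
        fields = line.split()
        if len(fields) >= 4 and fields[1] == mount_point:
            return fields[3].split(",")
    return []

# Hypothetical mount-table line for a USB stick mounted at /media/Lexar.
sample = "/dev/sdb1 /media/Lexar vfat rw,nosuid,nodev,noexec,relatime 0 0"
opts = mount_options(sample, "/media/Lexar")
```

If `noexec` appears in the options, executing binaries from that mount will fail with "Permission denied" regardless of file permission bits; the usual fix is remounting with the exec option or copying the binary to a normal filesystem.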
9,981,968
We are in the process of setting up our IT infrastructure on Amazon EC2. Assume a setup along the lines of: X production servers Y staging servers Log collation and Monitoring Server Build Server Obviously we have a need to have various servers talk to each other. A new build needs to be scp'd over to a staging server. The Log collator needs to pull logs from production servers. We are quickly realizing we are running into trouble managing access keys. Each server has its own key pair and possibly its own security group. We are ending up copying \*.pem files over from server to server, kind of making a mockery of security. The build server has the access keys of the staging servers in order to connect via ssh and push a new build. The staging servers similarly have the access keys of the production instances (gulp!) I did some extensive searching on the net but couldn't really find anyone talking about a sensible way to manage this issue. How are people with a setup similar to ours handling this issue? We know our current way of working is wrong. The question is - what is the right way ? Appreciate your help! Thanks [Update] Our situation is complicated by the fact that at least the build server needs to be accessible from an external server (specifically, github). We are using Jenkins and the post commit hook needs a publicly accessible URL. The bastion approach suggested by @rook fails in this situation.
2012/04/02
[ "https://Stackoverflow.com/questions/9981968", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1187552/" ]
A very good method of handling access to a collection of EC2 instances is using a [Bastion Host](http://en.wikipedia.org/wiki/Bastion_host). All machines you use on EC2 should disallow SSH access from the open internet, except for the Bastion Host. Create a new security policy called "Bastion Host", and only allow port 22 incoming from the bastion to all other EC2 instances. All keys used by your EC2 collection are housed on the bastion host. Each user has their own account on the bastion host. These users should authenticate to the bastion using a password-protected key file. Once they log in they should have access to whatever keys they need to do their job. When someone is fired you remove their user account from the bastion. If a user copies keys from the bastion, it won't matter because they can't log in unless they are first logged into the bastion.
Create two sets of keypairs, one for your staging servers and one for your production servers. You can give your developers the staging keys and keep the production keys private. I would put the new builds onto S3 and have a perl script running on the boxes to pull the latest code from your S3 buckets and install it on the respective servers. This way, you don't have to manually scp all the builds in every time. You can also automate this process using some sort of continuous build automation tool that would build and dump the builds onto your respective S3 buckets. Hope this helps..
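The "pull the latest build from S3" idea above boils down to picking the newest object out of a bucket listing. A Python sketch of that selection step, with a hand-written listing standing in for the response of a real S3 list call (a real client such as boto returns richer objects; the key names here are hypothetical):

```python
from datetime import datetime

def latest_build(objects):
    # Pick the most recently modified build artifact from a bucket listing.
    return max(objects, key=lambda o: o["last_modified"])["key"]

# Stand-in for an S3 list-objects response, newest build last-modified wins.
listing = [
    {"key": "builds/app-1.0.tar.gz", "last_modified": datetime(2012, 3, 30)},
    {"key": "builds/app-1.1.tar.gz", "last_modified": datetime(2012, 4, 2)},
]
key = latest_build(listing)
```

The deployment script would then download that one key and install it, so every box converges on the same newest build without any server-to-server scp.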
71,167
Does a generic rulebook exist for the system behind Apocalypse World (Powered by the Apocalypse)?
2015/11/17
[ "https://rpg.stackexchange.com/questions/71167", "https://rpg.stackexchange.com", "https://rpg.stackexchange.com/users/25864/" ]
There is no generic edition of Apocalypse World — that is, an edition with the setting stripped out and “just” the rules. This is for the simple reason that the rules effectively *are* the setting, so there's no way to have “just” the rules. All the other Powered by the Apocalypse games were created by playing and studying Apocalypse World (and other PbtA games) to understand the design patterns involved, and then adjusting the rules to create rules that embodied a new setting. However, one of the most well-respected designers of PbtA games — Avery Alder of [*Monsterhearts*](http://rpggeek.com/rpgitem/118988/monsterhearts) fame — has created a free PDF meta-game called [*Simple World*](http://buriedwithoutceremony.com/simple-world/) ([alternative link](https://web.archive.org/web/20150919200536/http://buriedwithoutceremony.com/simple-world/) due to today's server issues) that is kind of like a playbook for a PbtA game. You follow the instructions in Simple World, and the result is a GM's Agenda, Principles, and Moves that are customised for the setting and themes you want to play with. It meanwhile instructs the players in how to create characters that suit this custom set of rules. This is as close to a generic set of PbtA rules that exist. Creating a game is fast (you do it together as a group while the players make their characters), so you can create a custom PbtA game and start playing in one session. And once you've created a PbtA game with Simple World, you can always continue to adjust, tweak, and refine it, along with formalising some iconic character playbooks, ending up with a true stand-alone PbtA game of your very own design.
The closest I've seen to generic rules from a full game of Powered by the Apocalypse is The Bureau, which is a free game.
867,734
I am new to setting up SSL certificates and working with servers in general, so please bear with me as I try to explain the situation I have put myself in. I recently acquired a Comodo EssentialSSL Wildcard license that is going to be used for securing my server. The server I am configuring is for use with Kolab. Kolab is working but does not have SSL configured, so I figured I might set that up. I followed the instructions provided on the corresponding Kolab howto-page (<https://docs.kolab.org/howtos/secure-kolab-server.html>), but got stuck on the section where you set up the cyrus-IMAP server. When running: ``` openssl s_client -showcerts -connect example.org:993 ``` I get the following output (truncated): ``` CONNECTED(00000003) depth=0 OU = Domain Control Validated, OU = EssentialSSL Wildcard, CN = *.example.org verify error:num=20:unable to get local issuer certificate verify return:1 depth=0 OU = Domain Control Validated, OU = EssentialSSL Wildcard, CN = *.example.org verify error:num=27:certificate not trusted verify return:1 depth=0 OU = Domain Control Validated, OU = EssentialSSL Wildcard, CN = *.example.org verify error:num=21:unable to verify the first certificate verify return:1 --- Certificate chain 0 s:/OU=Domain Control Validated/OU=EssentialSSL Wildcard/CN=*.example.org i:/C=GB/ST=Greater Manchester/L=Salford/O=COMODO CA Limited/CN=COMODO RSA Domain Validation Secure Server CA -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- --- Server certificate subject=/OU=Domain Control Validated/OU=EssentialSSL Wildcard/CN=*.example.org issuer=/C=GB/ST=Greater Manchester/L=Salford/O=COMODO CA Limited/CN=COMODO RSA Domain Validation Secure Server CA --- No client certificate CA names sent --- SSL handshake has read 2019 bytes and written 421 bytes --- ... 
Start Time: 1502097786 Timeout : 300 (sec) Verify return code: 21 (unable to verify the first certificate) --- * OK [CAPABILITY IMAP4rev1 LITERAL+ ID ENABLE AUTH=PLAIN AUTH=LOGIN SASL-IR] example.org Cyrus IMAP git2.5+0-Debian-2.5~dev2015021301-0~kolab2 server ready ``` If I specify the -CApath to the certificates it does however work and I get the verify return code: 0 (ok), but only if I run the command while logged into the server through ssh. While setting up my mail in Thunderbird I can access my mailbox, but have to first add a security exception because the certificate has an "Unknown Identity". The certificate does however work flawlessly on port 443 for https without specifying the -CApath. Any help would be greatly appreciated.
2017/08/09
[ "https://serverfault.com/questions/867734", "https://serverfault.com", "https://serverfault.com/users/429905/" ]
I solved the problem by checking that the order of my intermediate bundle file was correctly formatted and changed it from a .ca-bundle to a .pem and added the following line to my imapd.conf: ``` tls_ca_path: /etc/ssl/certs ```
If you get the following error when running `openssl s_client -showcerts -connect example.com:443`: > > verify error:num=20:unable to get local issuer certificate > > > You need to make sure you include the [CA Bundle](https://comodosslstore.com/resources/comodo-ca-bundle-certificate-chain/) certificates. > > When Comodo CA issues an SSL certificate, it will send along a specific Comodo CA bundle of intermediate certificates to install alongside it. These certificates create what is called a certificate chain. The end user certificate was signed using one of the intermediates, which was signed using one of the roots. When a browser arrives at a website it will attempt to build the certificate chain and chain the SSL certificate it’s being presented with back to one of the roots in its trust store. > > > Make sure you've downloaded these files from the certificate provider. The most convenient way (especially for [Nginx](https://nginx.org/en/docs/http/configuring_https_servers.html#chains)) is to merge all of them into one, for example by appending the CA bundle to the main CRT (note the `>>` — a single `>` would overwrite your certificate instead of appending to it): ``` $ cat STAR_example_com.ca-bundle >> STAR_example_com.crt ``` If you don't have a single `.ca-bundle` file (CA Bundle), but multiple files, merge them all into one. Then install `STAR_example_com.crt` as usual. Example for Nginx: ``` ssl_certificate /etc/nginx/ssl/STAR_example_com.crt; ssl_certificate_key /etc/nginx/ssl/STAR_example_com.key; ```
19,601,318
Is there anything I can add to emacs that will make as much as possible in as many modes as possible colorized, including bold and italic?
2013/10/26
[ "https://Stackoverflow.com/questions/19601318", "https://Stackoverflow.com", "https://Stackoverflow.com/users/279695/" ]
In addition to using [color or custom *themes*](http://www.emacswiki.org/emacs/ColorTheme), as @Link mentioned, some modes provide for multiple levels of such syntax highlighting (called font-locking). See user option `font-lock-maximum-decoration`. And some 3rd-party libraries specifically add more highlighting, sometimes by adding more font-lock levels. A good example of this is library [**Dired+**](http://www.emacswiki.org/cgi-bin/wiki/DiredPlus), which provides much more highlighting, and more control over highlighting, than does the out-of-the-box Dired mode.
Perhaps you are looking for [colour or custom *themes*](http://www.emacswiki.org/emacs/ColorTheme)? I'm not sure if you could do bold or italic, but I'm pretty sure there may be a plugin for that.
19,601,318
Is there anything I can add to emacs that will make as much as possible in as many modes as possible colorized, including bold and italic?
2013/10/26
[ "https://Stackoverflow.com/questions/19601318", "https://Stackoverflow.com", "https://Stackoverflow.com/users/279695/" ]
In addition to using [color or custom *themes*](http://www.emacswiki.org/emacs/ColorTheme), as @Link mentioned, some modes provide for multiple levels of such syntax highlighting (called font-locking). See user option `font-lock-maximum-decoration`. And some 3rd-party libraries specifically add more highlighting, sometimes by adding more font-lock levels. A good example of this is library [**Dired+**](http://www.emacswiki.org/cgi-bin/wiki/DiredPlus), which provides much more highlighting, and more control over highlighting, than does the out-of-the-box Dired mode.
This is derived from a popular twilight theme, which can just be inserted into your `.emacs` file and then modified by you in any manner you see fit: ``` (set-mouse-color "sienna1") (set-cursor-color "#DDDD00") (custom-set-faces '(default ((t (:background "#141414" :foreground "#CACACA")))) '(blue ((t (:foreground "blue")))) '(border-glyph ((t (nil)))) '(buffers-tab ((t (:background "#141414" :foreground "#CACACA")))) '(font-lock-builtin-face ((t (:foreground "#CACACA")))) '(font-lock-comment-face ((t (:foreground "#5F5A60")))) '(font-lock-constant-face ((t (:foreground "#CF6A4C")))) '(font-lock-doc-string-face ((t (:foreground "DarkOrange")))) '(font-lock-function-name-face ((t (:foreground "#9B703F")))) '(font-lock-keyword-face ((t (:foreground "#CDA869")))) '(font-lock-preprocessor-face ((t (:foreground "Aquamarine")))) '(font-lock-reference-face ((t (:foreground "SlateBlue")))) '(ruby-string-delimiter-face ((t (:foreground "#5A6340")))) '(ruby-regexp-delimiter-face ((t (:foreground "orange")))) '(ruby-heredoc-delimiter-face ((t (:foreground "#9B859D")))) '(ruby-op-face ((t (:foreground "#CDA869")))) '(font-lock-regexp-grouping-backslash ((t (:foreground "#E9C062")))) '(font-lock-regexp-grouping-construct ((t (:foreground "red")))) '(minibuffer-prompt ((t (:foreground "#5F5A60")))) '(ido-subdir ((t (:foreground "#CF6A4C")))) '(ido-first-match ((t (:foreground "#8F9D6A")))) '(ido-only-match ((t (:foreground "#8F9D6A")))) '(mumamo-background-chunk-submode ((t (:background "#222222")))) '(linum ((t (:background "#141314" :foreground "#2D2B2E")))) '(hl-line ((t (:background "#212121")))) '(region ((t (:background "#373446")))) '(yas/field-highlight-face ((t (:background "#27292A")))) '(mode-line ((t (:background "grey75" :foreground "black" :height 0.8)))) '(mode-line-inactive ((t (:background "grey10" :foreground "grey40" :box (:line-width -1 :color "grey20") :height 0.8)))) '(magit-item-highlight ((t (:background "#191930")))) '(magit-diff-add ((((class color) 
(background dark)) (:foreground "green")))) '(org-hide ((((background dark)) (:foreground "#141414")))) '(outline-4 ((t (:foreground "#8F8A80")))) '(diff-removed ((((class color) (background dark)) (:foreground "orange")))) '(diff-added ((((class color) (background dark)) (:foreground "green")))) '(font-lock-string-face ((t (:foreground "#8F9D6A")))) '(font-lock-type-face ((t (:foreground "#9B703F")))) '(font-lock-variable-name-face ((t (:foreground "#7587A6")))) '(font-lock-warning-face ((t (:background "#EE799F" :foreground "red")))) '(gui-element ((t (:background "#D4D0C8" :foreground "black")))) '(region ((t (:background "#27292A")))) '(highlight ((t (:background "#111111")))) '(highline-face ((t (:background "SeaGreen")))) '(left-margin ((t (nil)))) '(text-cursor ((t (:background "yellow" :foreground "black")))) '(toolbar ((t (nil)))) '(underline ((nil (:underline nil)))) '(zmacs-region ((t (:background "snow" :foreground "blue")))) ) ```
15,877,854
I have the following data frame "j" and want to convert it to a matrix of zeros and ones, like below, but I'm looking for an easier way to do the conversion in R. The matrix represents the positions of the values of the data frame. For example, the matrix is (81x3): if I have a "1" in the df, a "1" will be written in the first column of the matrix; if I have a "2" in the df, a "1" will be written in the second column of the matrix... thanks! ``` g <- h <- i <- c(1:3) j<-expand.grid(g,h,i) j Var1 Var2 Var3 1 1 1 1 2 2 1 1 3 3 1 1 4 1 2 1 5 2 2 2 . . . m<-matrix(0,81,3) m[1,1]=m[1,1]+1;m[2,1]=m[2,1]+1;m[3,1]=m[3,1]+1 m[4,2]=m[4,2]+1;m[5,1]=m[5,1]+1;m[6,1]=m[6,1]+1 m[7,3]=m[7,3]+1;m[8,1]=m[8,1]+1;m[9,1]=m[9,1]+1 m[10,1]=m[10,1]+1;m[11,2]=m[11,2]+1;m[12,1]=m[12,1]+1 m[13,2]=m[13,2]+1;m[14,2]=m[14,2]+1;m[15,1]=m[15,1]+1 m[16,2]=m[16,2]+1;m[17,2]=m[17,2]+1;m[18,2]=m[18,2]+1 head(m) [,1] [,2] [,3] [1,] 1 0 0 [2,] 1 0 0 [3,] 1 0 0 [4,] 0 1 0 [5,] 1 0 0 [6,] 1 0 0 [7,] 0 0 1 [8,] 1 0 0 [9,] 1 0 0 [10,] 1 0 0 [11,] 0 1 0 [12,] 1 0 0 [13,] 0 1 0 [14,] 0 1 0 [15,] 0 1 0 ```
2013/04/08
[ "https://Stackoverflow.com/questions/15877854", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2257002/" ]
Feels quirky, but this is what I came up with: ``` g <- h <- i <- c(1:3) j<-expand.grid(g,h,i) tmp<-c(0,0,0) t(sapply(t(j),function(x,tmp){ tmp[x]<-1;tmp }, tmp)) ```
@ndoogan's approach works, but it seems like it would make much more sense to just use matrix indexing for this: Here's your "j" `data.frame`: ``` j <- expand.grid(replicate(3, 1:3, FALSE)) ``` And your empty `matrix`, "m": ``` m <- matrix(0, 81, 3) ``` The "i" would just be `1:nrow(m)`, and you can turn your "j" `data.frame` into a `vector` with a simple `c(t(j))`. The `t` is required to get the order you show, otherwise `unlist` could also be used. Here it is in practice: ``` m[cbind(1:81, c(t(j)))] <- 1 head(m, 10) # [,1] [,2] [,3] # [1,] 1 0 0 # [2,] 1 0 0 # [3,] 1 0 0 # [4,] 0 1 0 # [5,] 1 0 0 # [6,] 1 0 0 # [7,] 0 0 1 # [8,] 1 0 0 # [9,] 1 0 0 # [10,] 1 0 0 ```
63,970,864
I'm using Preact for the first time. I simply created a new project with preact-cli and this default template: <https://github.com/preactjs-templates/default>. In `app.js` I'm trying to use this code: ```js import { Router } from 'preact-router'; import Header from './header'; import Home from '../routes/home'; import Profile from '../routes/profile'; // I added this function function sleep(ms) { return new Promise(resolve => setTimeout(resolve, ms)); } const App = async () => { // I added "async" and the "{" in this line await sleep(3000) // I added this line return ( // I added this line <div id="app"> <Header /> <Router> <Home path="/" /> <Profile path="/profile/" user="me" /> <Profile path="/profile/:user" /> </Router> </div> ) } // I added this line export default App; ``` But unfortunately the browser gives me this error: ``` Uncaught Error: Objects are not valid as a child. Encountered an object with the keys {}. ``` **Why?** It works if I do not use `async/await`.
2020/09/19
[ "https://Stackoverflow.com/questions/63970864", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10088259/" ]
*Disclaimer: I work on Preact.* Our debug addon (`preact/debug`) will print this error whenever an invalid object is passed as a child that doesn't match the expected return type of `h/createElement`, usually called `vnode`: ``` const invalidVNode = { foo: 123 }; <div>{invalidVNode}</div> ``` In your case your component function returns a `Promise` which is an object in JavaScript. When Preact renders that component the render function will NOT return a `vnode`, but a Promise instead. That's why the error occurs. Which poses the question: How to do async initialization? ------------------------------- Once triggered, the render process in Preact is always synchronous. A component that returns a `Promise` breaks that contract. The reason it is that way is because you usually want to show at least something, like a spinner, to the user, while the asynchronous initialization is happening. A real world scenario for that would be fetching data via the network for example. ``` import { useEffect } from "preact/hooks"; const App = () => { // useEffect Hook is perfect for any sort of initialization code. // The second parameter is for checking when the effect should re-run. // We only want to initialize once when the component is created so we // pass an empty array so that nothing will be dirty checked. useEffect(() => { doSometThingAsyncHere() }, []); return ( <div id="app"> <Header /> <Router> <Home path="/" /> <Profile path="/profile/" user="me" /> <Profile path="/profile/:user" /> </Router> </div> ) } ```
Reactjs is a component library. At its core it has a function like ``` React.createElement(component, props, ...children) ``` Here the first parameter is the component that you want to render. When you put `await sleep(3000)` in front of the return, the function no longer returns a valid child/HTML element; instead it returns a Promise, which the renderer sees as an invalid object (hence the `{}` in the error message). That's why you are getting this error.
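As a minimal illustration of the point above (a sketch in plain JavaScript with no Preact required — the placeholder string stands in for a real vnode), an `async` function always hands back a `Promise` object rather than its return value:

```javascript
// Any async function returns a Promise, even when its body returns a plain value.
// A renderer that expects a vnode therefore receives an object it cannot render.
const syncComponent = () => "fake-vnode";
const asyncComponent = async () => "fake-vnode";

console.log(typeof syncComponent());              // "string"
console.log(asyncComponent() instanceof Promise); // true
```

This is why wrapping the component body in `async` breaks rendering: the renderer receives the `Promise` object itself, not the eventually resolved value.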
542,671
I would like to implement the 100Mbps single pair automotive ethernet specified in 802.3bw, otherwise known as **100BASE-T1**. I was told that in order to troubleshoot and test on this standard, the scope must be capable of 1 Ghz. Is this accurate? Why would this be necessary as opposed to an oscilloscope that is well above 100MHz such as one that is 200MHz, but still 1 GSa/s ?
2021/01/14
[ "https://electronics.stackexchange.com/questions/542671", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/146587/" ]
Sounds like you want a protocol tester more than you want an oscilloscope. But oh well, that's not the question here. > > I was told that in order to troubleshoot and test on this standard, the scope must be capable of 1 GHz > > > No. Even wikipedia will tell you that it only requires CAT3 cabling, and that doesn't guarantee a bandwidth even close to that. --- Ah, with your comment: > > Only looking for network traffic analysis at this point > > > yeah, well, then an oscilloscope in itself is no use at all. You'll need a device that speaks that protocol. For some higher-end, there might be add-ons that implement such a protocol decoder. But that again has nothing to do with the scope's 1 GHz bandwidth or not – it's something that extends the signal processing capabilities of the scope, and "converts" it into a protocol analyzer. Since this is an ethernet standard, in principle, a network card connecting your device A to a PC, which sniffs and forwards the packets in both directions, and another network card for PC <-> device B would sound like a saner investment than an oscilloscope.
There are two aspects to Ethernet connectivity: signal integrity and content. You’d use an oscilloscope with suitable differential probes to capture the raw signals and analyze them with (usually) an Ethernet signal analysis package on the oscilloscope. That’s probably over $10k worth of equipment - if you aren’t in that line of work, you’d do a rental. Without that, you won’t probably be able to verify that the signal is fully within specs since you won’t have bit error rate measurements and such, but at least you can get an eye diagram and make sure it fits the masks prescribed for the physical layer you chose. For 100BASE-T-anything I’d probably look for a 5Gs/s at least - otherwise there are way too few samples per clock period to make it convincing (or even usable if you are diagnosing signal integrity issues vs. merely confirming that things work). The content aspect is taken care of by wireshark and a PC :)
3,657,111
How can I verify if a file is binary or text without opening the file?
2010/09/07
[ "https://Stackoverflow.com/questions/3657111", "https://Stackoverflow.com", "https://Stackoverflow.com/users/396841/" ]
Schrödinger's cat, I'm afraid. There is no way to determine the contents of a file without opening it. The filesystem stores no metadata relating to the contents. If not opening the file is not a hard requirement, then there are a number of solutions available to you. **Edit:** It has been suggested in a number of comments and answers that `file(1)` is a good way of determining the contents. Indeed it is. However, `file(1)` *opens* the file, which was prohibited in the question. See the penultimate line in the following example: ``` > echo 'This is not a pipe' > file.jpg && strace file file.jpg 2>&1 | grep file.jpg execve("/usr/bin/file", ["file", "file.jpg"], [/* 56 vars */]) = 0 lstat64("file.jpg", {st_mode=S_IFREG|0644, st_size=19, ...}) = 0 stat64("file.jpg", {st_mode=S_IFREG|0644, st_size=19, ...}) = 0 open("file.jpg", O_RDONLY|O_LARGEFILE) = 3 write(1, "file.jpg: ASCII text\n", 21file.jpg: ASCII text ```
There is no way of being certain without looking inside the file. However, you don't have to open it with an editor and see for yourself to have a clue. You may want to look into the `file` command: <http://linux.die.net/man/1/file>
3,657,111
How can I verify if a file is binary or text without opening the file?
2010/09/07
[ "https://Stackoverflow.com/questions/3657111", "https://Stackoverflow.com", "https://Stackoverflow.com/users/396841/" ]
The correct way to determine the type of a file is to use the file(1) command. You also need to be aware that UTF-8 encoded files are "text" files, but may contain non-ASCII data. Other encodings also have this issue. In the case of text encoded with a [code page](http://en.wikipedia.org/wiki/Code_page), it may not be possible to unambiguously determine if a file is text or not. The file(1) command will look at the structure of a file to try and determine what it contains - from the file(1) man page: > > The type printed will usually contain > one of the words ***text*** (the file > contains only printing characters and > a few common control characters and is > probably safe to read on an ASCII > terminal), ***executable*** (the file contains > the result of compiling a program in a > form understandable to some UNIX kernel > or another), or ***data*** meaning anything > else (data is usually ‘binary’ or non-printable). > > > With regard to different character encodings, the file(1) man page has this to say: > > If a file does not match any of the > entries in the magic file, it is > examined to see if it seems to be a text file. ASCII, ISO-8859-x, non- > ISO 8-bit extended-ASCII character sets (such as those used on > Macintosh > and IBM PC systems), UTF-8-encoded Unicode, UTF-16-encoded > Unicode, and > EBCDIC character sets can be distinguished by the different ranges > and > sequences of bytes that constitute printable text in each set. > If a file > passes any of these tests, its character set is reported. ASCII, > ISO-8859-x, UTF-8, and extended-ASCII files are identified as > ‘text’ > because they will be mostly readable on nearly any terminal; > UTF-16 and > EBCDIC are only ‘character data’ because, while they contain text, it > is > text that will require translation before it can be read. > > > So, some text will be identified as **text**, but some may be identified as **character data**. 
You will need to determine yourself if this matters to your application and take appropriate action.
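As a quick illustration of the behaviour described above (assuming the `file` utility is installed, as it is on virtually all Linux systems, and using `/tmp` for scratch files), you can compare its verdict on a plainly textual file versus one containing raw non-printable bytes:

```shell
# Create one small text file and one file of non-printable bytes,
# then let file(1) classify each by inspecting their contents.
printf 'hello world\n' > /tmp/sample.txt
printf '\000\001\002\377' > /tmp/sample.bin

file /tmp/sample.txt   # typically reports: ASCII text
file /tmp/sample.bin   # typically reports: data
```

Since `file` reaches its verdict by inspecting byte contents, the exact wording of its output can vary between versions and magic databases.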
There is no way of being certain without looking inside the file. However, you don't have to open it with an editor and see for yourself to have a clue. You may want to look into the `file` command: <http://linux.die.net/man/1/file>
3,657,111
How can I verify if a file is binary or text without opening the file?
2010/09/07
[ "https://Stackoverflow.com/questions/3657111", "https://Stackoverflow.com", "https://Stackoverflow.com/users/396841/" ]
Schrödinger's cat, I'm afraid. There is no way to determine the contents of a file without opening it. The filesystem stores no metadata relating to the contents. If not opening the file is not a hard requirement, then there are a number of solutions available to you. **Edit:** It has been suggested in a number of comments and answers that `file(1)` is a good way of determining the contents. Indeed it is. However, `file(1)` *opens* the file, which was prohibited in the question. See the penultimate line in the following example: ``` > echo 'This is not a pipe' > file.jpg && strace file file.jpg 2>&1 | grep file.jpg execve("/usr/bin/file", ["file", "file.jpg"], [/* 56 vars */]) = 0 lstat64("file.jpg", {st_mode=S_IFREG|0644, st_size=19, ...}) = 0 stat64("file.jpg", {st_mode=S_IFREG|0644, st_size=19, ...}) = 0 open("file.jpg", O_RDONLY|O_LARGEFILE) = 3 write(1, "file.jpg: ASCII text\n", 21file.jpg: ASCII text ```
If you are attempting to do this from a command shell then the `file` command will take a guess at what filetype it is. If it is text then it will generally include the word text in its description. I am not aware of any 100% method of determining this but the file command is probably the most accurate.
3,657,111
How can I verify if a file is binary or text without opening the file?
2010/09/07
[ "https://Stackoverflow.com/questions/3657111", "https://Stackoverflow.com", "https://Stackoverflow.com/users/396841/" ]
Schrödinger's cat, I'm afraid. There is no way to determine the contents of a file without opening it. The filesystem stores no metadata relating to the contents. If not opening the file is not a hard requirement, then there are a number of solutions available to you. **Edit:** It has been suggested in a number of comments and answers that `file(1)` is a good way of determining the contents. Indeed it is. However, `file(1)` *opens* the file, which was prohibited in the question. See the penultimate line in the following example: ``` > echo 'This is not a pipe' > file.jpg && strace file file.jpg 2>&1 | grep file.jpg execve("/usr/bin/file", ["file", "file.jpg"], [/* 56 vars */]) = 0 lstat64("file.jpg", {st_mode=S_IFREG|0644, st_size=19, ...}) = 0 stat64("file.jpg", {st_mode=S_IFREG|0644, st_size=19, ...}) = 0 open("file.jpg", O_RDONLY|O_LARGEFILE) = 3 write(1, "file.jpg: ASCII text\n", 21file.jpg: ASCII text ```
In unix, a file is just some bytes. So, without opening the file, you cannot figure out with 100% certainty whether it's ASCII or binary. You can just use the tools available to you and dig deeper to make it foolproof. 1. file 2. cat -v
3,657,111
How can I verify if a file is binary or text without opening the file?
2010/09/07
[ "https://Stackoverflow.com/questions/3657111", "https://Stackoverflow.com", "https://Stackoverflow.com/users/396841/" ]
The correct way to determine the type of a file is to use the file(1) command. You also need to be aware that UTF-8 encoded files are "text" files, but may contain non-ASCII data. Other encodings also have this issue. In the case of text encoded with a [code page](http://en.wikipedia.org/wiki/Code_page), it may not be possible to unambiguously determine if a file is text or not. The file(1) command will look at the structure of a file to try and determine what it contains - from the file(1) man page: > > The type printed will usually contain > one of the words ***text*** (the file > contains only printing characters and > a few common control characters and is > probably safe to read on an ASCII > terminal), ***executable*** (the file contains > the result of compiling a program in a > form understandable to some UNIX kernel > or another), or ***data*** meaning anything > else (data is usually ‘binary’ or non-printable). > > > With regard to different character encodings, the file(1) man page has this to say: > > If a file does not match any of the > entries in the magic file, it is > examined to see if it seems to be a text file. ASCII, ISO-8859-x, non- > ISO 8-bit extended-ASCII character sets (such as those used on > Macintosh > and IBM PC systems), UTF-8-encoded Unicode, UTF-16-encoded > Unicode, and > EBCDIC character sets can be distinguished by the different ranges > and > sequences of bytes that constitute printable text in each set. > If a file > passes any of these tests, its character set is reported. ASCII, > ISO-8859-x, UTF-8, and extended-ASCII files are identified as > ‘text’ > because they will be mostly readable on nearly any terminal; > UTF-16 and > EBCDIC are only ‘character data’ because, while they contain text, it > is > text that will require translation before it can be read. > > > So, some text will be identified as **text**, but some may be identified as **character data**. 
You will need to determine yourself if this matters to your application and take appropriate action.
If you are attempting to do this from a command shell then the `file` command will take a guess at what filetype it is. If it is text then it will generally include the word text in its description. I am not aware of any 100% method of determining this but the file command is probably the most accurate.
3,657,111
How can I verify if a file is binary or text without opening the file?
2010/09/07
[ "https://Stackoverflow.com/questions/3657111", "https://Stackoverflow.com", "https://Stackoverflow.com/users/396841/" ]
The correct way to determine the type of a file is to use the file(1) command. You also need to be aware that UTF-8 encoded files are "text" files, but may contain non-ASCII data. Other encodings also have this issue. In the case of text encoded with a [code page](http://en.wikipedia.org/wiki/Code_page), it may not be possible to unambiguously determine if a file is text or not. The file(1) command will look at the structure of a file to try and determine what it contains - from the file(1) man page: > > The type printed will usually contain > one of the words ***text*** (the file > contains only printing characters and > a few common control characters and is > probably safe to read on an ASCII > terminal), ***executable*** (the file contains > the result of compiling a program in a > form understandable to some UNIX kernel > or another), or ***data*** meaning anything > else (data is usually ‘binary’ or non-printable). > > > With regard to different character encodings, the file(1) man page has this to say: > > If a file does not match any of the > entries in the magic file, it is > examined to see if it seems to be a text file. ASCII, ISO-8859-x, non- > ISO 8-bit extended-ASCII character sets (such as those used on > Macintosh > and IBM PC systems), UTF-8-encoded Unicode, UTF-16-encoded > Unicode, and > EBCDIC character sets can be distinguished by the different ranges > and > sequences of bytes that constitute printable text in each set. > If a file > passes any of these tests, its character set is reported. ASCII, > ISO-8859-x, UTF-8, and extended-ASCII files are identified as > ‘text’ > because they will be mostly readable on nearly any terminal; > UTF-16 and > EBCDIC are only ‘character data’ because, while they contain text, it > is > text that will require translation before it can be read. > > > So, some text will be identified as **text**, but some may be identified as **character data**. 
You will need to determine yourself if this matters to your application and take appropriate action.
In unix, a file is just some bytes. So, without opening the file, you cannot figure out with 100% certainty whether it's ASCII or binary. You can just use the tools available to you and dig deeper to make it foolproof. 1. file 2. cat -v
46,732,891
I'm not certain if the title is the right way to word what I'm asking, sorry if it's not, but what I'm trying to do is create a memory match game using GUI. I have an array, and I've got the button printing an element from the array at random, but, the issue is, that I can have the same element printing multiple times. Is there a way to remove that element from being selected once it's used? If there isn't a way to do that, any ideas on how I could go about getting it to use each element only once? This is my current code: ``` package MemoryMatching; import java.awt.BorderLayout; import java.awt.GridLayout; import java.awt.event.ActionEvent; import java.awt.event.ActionListener; import java.util.Random; import javax.swing.JButton; import javax.swing.JFrame; import javax.swing.JPanel; public class MemoryGUI extends JFrame implements MemoryMatch, ActionListener{ JPanel mainPanel, boardPanel; JButton [][] gridButtons = new JButton[3][4]; char cardArray[] = new char[12]; int numInPlay; public MemoryGUI(){ cardArray[0] = 'A'; cardArray[1] = 'A'; cardArray[2] = 'B'; cardArray[3] = 'B'; cardArray[4] = 'C'; cardArray[5] = 'C'; cardArray[6] = 'D'; cardArray[7] = 'D'; cardArray[8] = 'E'; cardArray[9] = 'E'; cardArray[10] = 'F'; cardArray[11] = 'F'; mainPanel = new JPanel(); mainPanel.setLayout(new BorderLayout()); boardPanel = new JPanel(); boardPanel.setLayout(new GridLayout(4,3)); setBoard(); mainPanel.add(boardPanel, BorderLayout.CENTER); add(mainPanel); } @Override public void actionPerformed(ActionEvent e) { JButton btnClicked = (JButton) e.getSource(); btnClicked.setEnabled(false); char randomChar = cardArray[(int)new Random().nextInt(cardArray.length)]; btnClicked.setText(""+ randomChar); faceUp(); } @Override public void setBoard() { for(int x = 0; x < cardArray.length; x++) { } for(int row=0; row<gridButtons.length; row++){ for(int col=0; col<gridButtons[row].length;col++){ gridButtons[row][col] = new JButton(); gridButtons[row][col].addActionListener(this); 
gridButtons[row][col].setText("No peeking"); boardPanel.add(gridButtons[row][col] ); faceDown(); } } } @Override public void isWinner() { // TODO Auto-generated method stub } @Override public void isMatch() { } @Override public void faceUp() { for(int x = 0; x < cardArray.length; x++) { for(int y = 0; y < cardArray[x]; y++) { } } } @Override public void faceDown() { } } ``` what I'm currently getting is something like A B A A F B F D D E F C rather than: B A C D E F A B C F E D The first example has three A's and one C, rather than two of each as in the second example. If possible I'd like not to be given the code outright, but a push in the right direction.
2017/10/13
[ "https://Stackoverflow.com/questions/46732891", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8098654/" ]
> > Is there a way to get the SCM checkout credentials from what is specified in the GUI level of the job, so i don't have to hard code the credential ID in the script. > > > Yes. If you are using **Pipeline script from SCM**, you can get the Credentials ID with the following snippet: ``` scm.getUserRemoteConfigs()[0].getCredentialsId() ``` This is useful, for example, when you want to dynamically checkout a *Jenkins Library* using the same credentials you configure in the *Pipeline script from SCM* GUI. ``` library identifier: 'sample-jenkins-library@master', retriever: modernSCM( [$class: 'GitSCMSource', remote: 'https://github.com/repo/sample-jenkins-library.git', credentialsId: scm.getUserRemoteConfigs()[0].getCredentialsId()]) ``` *Please keep in mind that libs loaded this way aren't considered trusted. Not exactly part of the question, but very valuable info if you got here with the intention of implementing exactly that!*
It's not possible to get the library credentials ID within the pipeline script. The best you can do is get the version (branch name) for the library. For example, `env.getProperty("library.<NAME>.version")` where `<NAME>` is the name of your shared library. I had to migrate jobs to a new Jenkins instance and ran into this same issue. The only silver lining is that I now reference credential IDs using string constants where possible so that future updates are simple. Job migration is a tedious process, and managing credentials can be difficult once you have many of them defined.
46,732,891
I'm not certain if the title is the right way to word what I'm asking (sorry if it's not), but what I'm trying to do is create a memory match game using a GUI. I have an array, and I've got each button printing an element from the array at random, but the issue is that the same element can be printed multiple times. Is there a way to stop an element from being selected again once it's used? If there isn't a way to do that, any ideas on how I could go about getting it to use each element only once? This is my current code: ``` package MemoryMatching; import java.awt.BorderLayout; import java.awt.GridLayout; import java.awt.event.ActionEvent; import java.awt.event.ActionListener; import java.util.Random; import javax.swing.JButton; import javax.swing.JFrame; import javax.swing.JPanel; public class MemoryGUI extends JFrame implements MemoryMatch, ActionListener{ JPanel mainPanel, boardPanel; JButton [][] gridButtons = new JButton[3][4]; char cardArray[] = new char[12]; int numInPlay; public MemoryGUI(){ cardArray[0] = 'A'; cardArray[1] = 'A'; cardArray[2] = 'B'; cardArray[3] = 'B'; cardArray[4] = 'C'; cardArray[5] = 'C'; cardArray[6] = 'D'; cardArray[7] = 'D'; cardArray[8] = 'E'; cardArray[9] = 'E'; cardArray[10] = 'F'; cardArray[11] = 'F'; mainPanel = new JPanel(); mainPanel.setLayout(new BorderLayout()); boardPanel = new JPanel(); boardPanel.setLayout(new GridLayout(4,3)); setBoard(); mainPanel.add(boardPanel, BorderLayout.CENTER); add(mainPanel); } @Override public void actionPerformed(ActionEvent e) { JButton btnClicked = (JButton) e.getSource(); btnClicked.setEnabled(false); char randomChar = cardArray[(int)new Random().nextInt(cardArray.length)]; btnClicked.setText(""+ randomChar); faceUp(); } @Override public void setBoard() { for(int x = 0; x < cardArray.length; x++) { } for(int row=0; row<gridButtons.length; row++){ for(int col=0; col<gridButtons[row].length;col++){ gridButtons[row][col] = new JButton(); gridButtons[row][col].addActionListener(this); gridButtons[row][col].setText("No peeking"); boardPanel.add(gridButtons[row][col] ); faceDown(); } } } @Override public void isWinner() { // TODO Auto-generated method stub } @Override public void isMatch() { } @Override public void faceUp() { for(int x = 0; x < cardArray.length; x++) { for(int y = 0; y < cardArray[x]; y++) { } } } @Override public void faceDown() { } } ``` What I'm currently getting is something like A B A A F B F D D E F C rather than: B A C D E F A B C F E D The first example has three As and one C, rather than two of each as in the second example. If possible I'd prefer not to be given the code outright, but rather a push in the right direction.
2017/10/13
[ "https://Stackoverflow.com/questions/46732891", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8098654/" ]
> > Is there a way to get the SCM checkout credentials from what is specified in the GUI level of the job, so i don't have to hard code the credential ID in the script. > > > Yes. If you are using **Pipeline script from SCM**, you can get the Credentials ID with the following snippet: ``` scm.getUserRemoteConfigs()[0].getCredentialsId() ``` This is useful, for example, when you want to dynamically checkout a *Jenkins Library* using the same credentials you configure in the *Pipeline script from SCM* GUI. ``` library identifier: 'sample-jenkins-library@master', retriever: modernSCM( [$class: 'GitSCMSource', remote: 'https://github.com/repo/sample-jenkins-library.git', credentialsId: scm.getUserRemoteConfigs()[0].getCredentialsId()]) ``` *Please keep in mind that libs loaded this way aren't considered trusted. Not exactly part of the question, but very valuable info if you got here with the intention of implementing exactly that!*
You could just migrate all the credentials from the old Jenkins instance to the new one before you start configuring the jobs. Then delete what you don't use later, if needed. <https://support.cloudbees.com/hc/en-us/articles/115001634268-How-to-migrate-credentials-to-a-new-Jenkins-instance->
122,749
I am making quite a few binaries, scripts, etc. that I want to install easily (using my own rpms). Since I want them accessible to everyone, my intuition would be to put them in /usr/bin; * no need to change PATH However, my executables then disappear in a pool of all the others; how can I easily find all the executables I put there again? I was thinking of: * a subdirectory in /usr/bin (I know I cannot do this; just to illustrate my thinking) * another directory (/opt/myself/bin) and linking each executable to /usr/bin (lots of work) * another directory (/opt/myself/bin) and linking the directory to /usr/bin (is this possible?) What would be the "best, most Linux-compliant way" to do this? EDIT: we had a discussion on this in the company and came up with this sub-optimal option: put binaries in /usr/bin/company with a symbolic link from /usr/bin. I'm not thrilled with this solution (discussion ongoing)
2014/04/02
[ "https://unix.stackexchange.com/questions/122749", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/64031/" ]
If you bundle your binaries into your own RPMs then it's trivial to get a list of what they are and where they were installed. ### Example ``` $ rpm -ql httpd| head -10 /etc/httpd /etc/httpd/conf /etc/httpd/conf.d /etc/httpd/conf.d/README /etc/httpd/conf.d/autoindex.conf /etc/httpd/conf.d/userdir.conf /etc/httpd/conf.d/welcome.conf /etc/httpd/conf.modules.d /etc/httpd/conf.modules.d/00-base.conf ``` I would suggest putting your executables in either `/usr/bin` or `/usr/local/bin` and rolling your own RPM. It's pretty trivial to do this and by managing your software deployment using an RPM you'll be able to label a bundle with a version number further easing the configuration management of your software as you deploy it. ### Determining which RPMs are "mine"? You can build your RPMs using some known information that could then be agreed upon prior to doing the building. I often build packages on systems that are owned by my domain so it's trivial to find RPMs by simply searching through all the RPMs that were built on host X.mydom.com. ### Example ``` $ rpm -qi httpd Name : httpd Version : 2.4.7 Release : 1.fc19 Architecture: x86_64 Install Date: Mon 17 Feb 2014 01:53:15 AM EST Group : System Environment/Daemons Size : 3865725 License : ASL 2.0 Signature : RSA/SHA256, Mon 27 Jan 2014 11:00:08 AM EST, Key ID 07477e65fb4b18e6 Source RPM : httpd-2.4.7-1.fc19.src.rpm Build Date : Mon 27 Jan 2014 08:39:13 AM EST Build Host : buildvm-20.phx2.fedoraproject.org Relocations : (not relocatable) Packager : Fedora Project Vendor : Fedora Project URL : http://httpd.apache.org/ Summary : Apache HTTP Server Description : The Apache HTTP Server is a powerful, efficient, and extensible web server. ``` This would be the `Build Host` line within the RPMs. ### The use of /usr/bin/company? I would probably discourage the use of a location such as this. Mainly because it requires all your systems to have their `$PATH` augmented to include it and is non-standard. 
Customizing things has always been a "rite of passage" for every wannabe Unix admin, but I always discourage it unless absolutely necessary. The biggest issue with customizations like this is that they become a burden, both in maintaining your environment and in bringing new people up to speed on how to use it. ### Can I just get a list of files from RPM? Yes, you can achieve this, but it will require 2 calls to RPM. The first builds a list of packages that were built on host X.mydom.com. After getting this list you'll need to re-call RPM, querying for the files owned by each of these packages. You can achieve this using this one-liner: ``` $ rpm -ql $(rpm -qa --queryformat "%-30{NAME}%{BUILDHOST}\n" | \ grep X.mydom.com | awk '{print $1}') | head -10 /etc/pam.d/run_init /etc/sestatus.conf /usr/bin/secon /usr/bin/semodule_deps /usr/bin/semodule_expand /usr/bin/semodule_link /usr/bin/semodule_package /usr/bin/semodule_unpackage /usr/sbin/fixfiles /usr/sbin/genhomedircon ```
Binaries not part of the system or distribution usually go in ``` /usr/local/bin ``` The directory is usually in the standard `$PATH`, so your binaries will be found.
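The reason `/usr/local/bin` "just works" is that it is normally one of the colon-separated `$PATH` entries. A tiny Python sketch of that membership check (the `dir_on_path` helper is hypothetical, shown only to illustrate how PATH lookup decides whether a directory's binaries are found):

```python
def dir_on_path(directory, path):
    """Return True if `directory` is one of the entries in the given PATH string."""
    return directory in path.split(":")

# Example with an explicit PATH string rather than the live environment:
print(dir_on_path("/usr/local/bin", "/usr/local/bin:/usr/bin:/bin"))  # True
print(dir_on_path("/opt/myself/bin", "/usr/local/bin:/usr/bin:/bin"))  # False
```

A directory such as `/opt/myself/bin` would only be found after `$PATH` is augmented, which is exactly the step `/usr/local/bin` lets you skip.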
122,749
I am making quite a few binaries, scripts, etc. that I want to install easily (using my own rpms). Since I want them accessible to everyone, my intuition would be to put them in /usr/bin; * no need to change PATH However, my executables then disappear in a pool of all the others; how can I easily find all the executables I put there again? I was thinking of: * a subdirectory in /usr/bin (I know I cannot do this; just to illustrate my thinking) * another directory (/opt/myself/bin) and linking each executable to /usr/bin (lots of work) * another directory (/opt/myself/bin) and linking the directory to /usr/bin (is this possible?) What would be the "best, most Linux-compliant way" to do this? EDIT: we had a discussion on this in the company and came up with this sub-optimal option: put binaries in /usr/bin/company with a symbolic link from /usr/bin. I'm not thrilled with this solution (discussion ongoing)
2014/04/02
[ "https://unix.stackexchange.com/questions/122749", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/64031/" ]
If you bundle your binaries into your own RPMs then it's trivial to get a list of what they are and where they were installed. ### Example ``` $ rpm -ql httpd| head -10 /etc/httpd /etc/httpd/conf /etc/httpd/conf.d /etc/httpd/conf.d/README /etc/httpd/conf.d/autoindex.conf /etc/httpd/conf.d/userdir.conf /etc/httpd/conf.d/welcome.conf /etc/httpd/conf.modules.d /etc/httpd/conf.modules.d/00-base.conf ``` I would suggest putting your executables in either `/usr/bin` or `/usr/local/bin` and rolling your own RPM. It's pretty trivial to do this and by managing your software deployment using an RPM you'll be able to label a bundle with a version number further easing the configuration management of your software as you deploy it. ### Determining which RPMs are "mine"? You can build your RPMs using some known information that could then be agreed upon prior to doing the building. I often build packages on systems that are owned by my domain so it's trivial to find RPMs by simply searching through all the RPMs that were built on host X.mydom.com. ### Example ``` $ rpm -qi httpd Name : httpd Version : 2.4.7 Release : 1.fc19 Architecture: x86_64 Install Date: Mon 17 Feb 2014 01:53:15 AM EST Group : System Environment/Daemons Size : 3865725 License : ASL 2.0 Signature : RSA/SHA256, Mon 27 Jan 2014 11:00:08 AM EST, Key ID 07477e65fb4b18e6 Source RPM : httpd-2.4.7-1.fc19.src.rpm Build Date : Mon 27 Jan 2014 08:39:13 AM EST Build Host : buildvm-20.phx2.fedoraproject.org Relocations : (not relocatable) Packager : Fedora Project Vendor : Fedora Project URL : http://httpd.apache.org/ Summary : Apache HTTP Server Description : The Apache HTTP Server is a powerful, efficient, and extensible web server. ``` This would be the `Build Host` line within the RPMs. ### The use of /usr/bin/company? I would probably discourage the use of a location such as this. Mainly because it requires all your systems to have their `$PATH` augmented to include it and is non-standard. 
Customizing things has always been a "rite of passage" for every wannabe Unix admin, but I always discourage it unless absolutely necessary. The biggest issue with customizations like this is that they become a burden, both in maintaining your environment and in bringing new people up to speed on how to use it. ### Can I just get a list of files from RPM? Yes, you can achieve this, but it will require 2 calls to RPM. The first builds a list of packages that were built on host X.mydom.com. After getting this list you'll need to re-call RPM, querying for the files owned by each of these packages. You can achieve this using this one-liner: ``` $ rpm -ql $(rpm -qa --queryformat "%-30{NAME}%{BUILDHOST}\n" | \ grep X.mydom.com | awk '{print $1}') | head -10 /etc/pam.d/run_init /etc/sestatus.conf /usr/bin/secon /usr/bin/semodule_deps /usr/bin/semodule_expand /usr/bin/semodule_link /usr/bin/semodule_package /usr/bin/semodule_unpackage /usr/sbin/fixfiles /usr/sbin/genhomedircon ```
An obvious suggestion is to name your binaries or your packages in a special way. So, for example, you could prefix them with `cm-`, per your initials as given in this post. If you are installing rpms they need to go into `/usr/bin` (if they are user-level executables), per the FHS. They should not go into `/usr/local/bin`, for example. That is for local installs only. For the record, I don't find the idea of putting binaries in a special directory and linking them appealing at all, though I suppose such things are sometimes done. Bear in mind also that if you need to find out which binaries belong to which package, you can just query the packaging system.
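To see why a naming prefix pays off when auditing later, here is a small Python sketch; the directory is a temporary stand-in for /usr/bin, and the `cm-` prefix is the hypothetical one suggested above:

```python
import glob
import os
import tempfile

# Simulate a /usr/bin-like directory holding both system tools and
# binaries installed with the (hypothetical) "cm-" prefix.
bindir = tempfile.mkdtemp()
for name in ["ls", "grep", "cm-deploy", "cm-backup"]:
    open(os.path.join(bindir, name), "w").close()

# Thanks to the naming convention, one glob recovers exactly "our" executables.
mine = sorted(os.path.basename(p) for p in glob.glob(os.path.join(bindir, "cm-*")))
print(mine)  # ['cm-backup', 'cm-deploy']
```

On a real system the same idea is just `ls /usr/bin/cm-*`, and the packaging system can answer the reverse question of which package owns a given file.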
565,916
> > Calculate the indefinite integral > $$\int e^{\sin^2(x)+ \cos^2(x)}\,dx.$$ > > > Not sure how to do this
2013/11/13
[ "https://math.stackexchange.com/questions/565916", "https://math.stackexchange.com", "https://math.stackexchange.com/users/108618/" ]
Well, it's not hard at all. $$\int e^{\sin^2(x)+\cos^2(x)}\,dx= \int e^1\,dx,$$ since $\sin^2(x)+\cos^2(x)=1$. So $e^1$ is a constant and you can pull it out of the integral, which will leave you with $e\int\,dx$, which is just $ex+C$. Hope this helps.
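The whole computation rests on the exponent being identically 1. A quick numerical sanity check in Python (not part of the original answer) confirms the integrand is the constant $e$ at every sampled point:

```python
import math

# The exponent sin^2(x) + cos^2(x) collapses to 1 for every x,
# so the integrand exp(sin^2(x) + cos^2(x)) is the constant e everywhere.
samples = [0.0, 0.5, 1.3, 2.7, -4.2]
values = [math.exp(math.sin(x) ** 2 + math.cos(x) ** 2) for x in samples]
assert all(abs(v - math.e) < 1e-12 for v in values)
print("integrand equals e at every sampled point")
```

Since the integrand is constant, its antiderivative is simply that constant times $x$, plus $C$.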
**Hint**: $$\sin^2(x) + \cos^2(x) = 1$$ so therefore $$e^{\sin^2(x) + \cos^2(x)} = ....?$$
68,034,282
I am trying to load data from parquet file in AWS S3 into snowflake table. But getting the below error. Could you please help. ``` SQL compilation error: PARQUET file format can produce one and only one column of type variant or object or array. Use CSV file format if you want to load more than one column. ``` Parquet file schema ``` |-- uuid: string (nullable = true) |-- event_timestamp: timestamp (nullable = true) |-- params: array (nullable = true) | |-- element: struct (containsNull = true) | | |-- id: string (nullable = true) | | |-- name: string (nullable = true) | | |-- type: string (nullable = true) | | |-- value: string (nullable = true) ``` Here is the sample data. uuid,event\_timestamp,params 3f230ea5-dd52-4cf9-bdde-b79201eb1001,2020-05-10 17:06:21.524,[{id=501, type=custom, name=filtering, value=true}, {id=502, type=custom, name=select, value=false}] **snowflake table** ``` create or replace table temp_log ( uuid string, event_timestamp timestamp, params array); ``` I am using the below copy command to load data ``` copy into temp_log from '<<s3 path>>' pattern = '*.parquet' storage_integration = <<integration object> file_format = ( type = parquet compression = snappy ) ; ```
2021/06/18
[ "https://Stackoverflow.com/questions/68034282", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13274071/" ]
This documentation explains how to load parquet data into multiple columns: [Loading Parquet](https://docs.snowflake.com/en/user-guide/script-data-load-transform-parquet.html) **UPDATE** I'm not sure if the comment below is a response to my answer and, if it is, what the relevance of it is? Did you read the document and, if you did, what part of it do you still have questions about? You need to have your data in a stage (presumably an external stage in your case), or possibly in an external table, and then load from that into your table using "COPY INTO table FROM (SELECT..." with the $1:.. notation allowing you to select the appropriate elements from you parquet structure. From the documentation: ``` /* Load the Parquet data into the relational table. */ /* */ /* A SELECT query in the COPY statement identifies a numbered set of columns in the data files you are */ /* loading from. Note that all Parquet data is stored in a single column ($1). */ /* */ /* Cast element values to the target column data type. */ copy into cities from (select $1:continent::varchar, $1:country:name::varchar, $1:country:city.bag::variant from @sf_tut_stage/cities.parquet); ```
This issue was resolved after creating the table as below: ``` create or replace table temp_log ( logcontent VARIANT); ```
68,034,282
I am trying to load data from parquet file in AWS S3 into snowflake table. But getting the below error. Could you please help. ``` SQL compilation error: PARQUET file format can produce one and only one column of type variant or object or array. Use CSV file format if you want to load more than one column. ``` Parquet file schema ``` |-- uuid: string (nullable = true) |-- event_timestamp: timestamp (nullable = true) |-- params: array (nullable = true) | |-- element: struct (containsNull = true) | | |-- id: string (nullable = true) | | |-- name: string (nullable = true) | | |-- type: string (nullable = true) | | |-- value: string (nullable = true) ``` Here is the sample data. uuid,event\_timestamp,params 3f230ea5-dd52-4cf9-bdde-b79201eb1001,2020-05-10 17:06:21.524,[{id=501, type=custom, name=filtering, value=true}, {id=502, type=custom, name=select, value=false}] **snowflake table** ``` create or replace table temp_log ( uuid string, event_timestamp timestamp, params array); ``` I am using the below copy command to load data ``` copy into temp_log from '<<s3 path>>' pattern = '*.parquet' storage_integration = <<integration object> file_format = ( type = parquet compression = snappy ) ; ```
2021/06/18
[ "https://Stackoverflow.com/questions/68034282", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13274071/" ]
This documentation explains how to load parquet data into multiple columns: [Loading Parquet](https://docs.snowflake.com/en/user-guide/script-data-load-transform-parquet.html) **UPDATE** I'm not sure if the comment below is a response to my answer and, if it is, what the relevance of it is? Did you read the document and, if you did, what part of it do you still have questions about? You need to have your data in a stage (presumably an external stage in your case), or possibly in an external table, and then load from that into your table using "COPY INTO table FROM (SELECT..." with the $1:.. notation allowing you to select the appropriate elements from you parquet structure. From the documentation: ``` /* Load the Parquet data into the relational table. */ /* */ /* A SELECT query in the COPY statement identifies a numbered set of columns in the data files you are */ /* loading from. Note that all Parquet data is stored in a single column ($1). */ /* */ /* Cast element values to the target column data type. */ copy into cities from (select $1:continent::varchar, $1:country:name::varchar, $1:country:city.bag::variant from @sf_tut_stage/cities.parquet); ```
In my case the error message was raised because I was running the command as ``` COPY INTO my_db.my_schema.my_table FROM ( SELECT * FROM @my_stage ) FILE_FORMAT = ( TYPE = PARQUET ); ``` Instead, each column should be specified as `$1:my_column` in the `SELECT` statement, for example: ``` COPY INTO my_db.my_schema.my_table FROM ( SELECT $1:my_column1, $1:my_column2, $1:my_column3 FROM @my_stage ) FILE_FORMAT = ( TYPE = PARQUET ); ```
68,034,282
I am trying to load data from parquet file in AWS S3 into snowflake table. But getting the below error. Could you please help. ``` SQL compilation error: PARQUET file format can produce one and only one column of type variant or object or array. Use CSV file format if you want to load more than one column. ``` Parquet file schema ``` |-- uuid: string (nullable = true) |-- event_timestamp: timestamp (nullable = true) |-- params: array (nullable = true) | |-- element: struct (containsNull = true) | | |-- id: string (nullable = true) | | |-- name: string (nullable = true) | | |-- type: string (nullable = true) | | |-- value: string (nullable = true) ``` Here is the sample data. uuid,event\_timestamp,params 3f230ea5-dd52-4cf9-bdde-b79201eb1001,2020-05-10 17:06:21.524,[{id=501, type=custom, name=filtering, value=true}, {id=502, type=custom, name=select, value=false}] **snowflake table** ``` create or replace table temp_log ( uuid string, event_timestamp timestamp, params array); ``` I am using the below copy command to load data ``` copy into temp_log from '<<s3 path>>' pattern = '*.parquet' storage_integration = <<integration object> file_format = ( type = parquet compression = snappy ) ; ```
2021/06/18
[ "https://Stackoverflow.com/questions/68034282", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13274071/" ]
This documentation explains how to load parquet data into multiple columns: [Loading Parquet](https://docs.snowflake.com/en/user-guide/script-data-load-transform-parquet.html) **UPDATE** I'm not sure if the comment below is a response to my answer and, if it is, what the relevance of it is? Did you read the document and, if you did, what part of it do you still have questions about? You need to have your data in a stage (presumably an external stage in your case), or possibly in an external table, and then load from that into your table using "COPY INTO table FROM (SELECT..." with the $1:.. notation allowing you to select the appropriate elements from you parquet structure. From the documentation: ``` /* Load the Parquet data into the relational table. */ /* */ /* A SELECT query in the COPY statement identifies a numbered set of columns in the data files you are */ /* loading from. Note that all Parquet data is stored in a single column ($1). */ /* */ /* Cast element values to the target column data type. */ copy into cities from (select $1:continent::varchar, $1:country:name::varchar, $1:country:city.bag::variant from @sf_tut_stage/cities.parquet); ```
I use these 2 SQL statements to load the data into the table. First, use this SQL to generate the `CREATE TABLE` statement: ``` with cols as ( select COLUMN_NAME || ' ' || TYPE col from table( infer_schema( location=>'@LANDING/myFile.parquet' , file_format=>'LANDING.default_parquet' ) ) ), temp as ( select 'create or replace table myTable (' col1 union select listagg(col, ',') col1 from cols union select ') ' col1 ) select listagg(col1) from temp ``` Second, use this SQL to generate the `COPY INTO` statement that loads the data into the table: ``` with cols as ( select expression from table( infer_schema( location=>'@LANDING/myFile.parquet' , file_format=>'LANDING.default_parquet' ) ) ), temp as ( select 'copy into myTable from ( select ' col1 union select listagg(expression, ',') col1 from cols union select 'from @LANDING/myFile.parquet ) ' col1 ) select listagg(col1) from temp ```
64,008,128
I have the following table structure: ``` +------------+------------+--------------+-------------+ | Column One | Column Two | Column Three | Column Four | +------------+------------+--------------+-------------+ | 1001 | 6000 | 3000 | 200 | +------------+------------+--------------+-------------+ | 1001 | 6000 | 3000 | 200 | +------------+------------+--------------+-------------+ | 1001 | 6000 | 3000 | 200 | +------------+------------+--------------+-------------+ | 1001 | 6000 | 3000 | 202 | +------------+------------+--------------+-------------+ | 2001 | 6000 | 3000 | 100 | +------------+------------+--------------+-------------+ | 2001 | 6000 | 3000 | 100 | +------------+------------+--------------+-------------+ | 2001 | 6000 | 3000 | 100 | +------------+------------+--------------+-------------+ | 2001 | 6000 | 3000 | 102 | +------------+------------+--------------+-------------+ ``` As you can see, my table has a number of duplicate rows where the values of Columns One-Four are identical. However, it should be noted that it also has a number of 'nearly' duplicate rows where only the values of Columns One-Three are identical. I need a query that removes duplicate rows, keeping only one copy of each fully identical row. My expected result would be: ``` +------------+------------+--------------+-------------+ | Column One | Column Two | Column Three | Column Four | +------------+------------+--------------+-------------+ | 1001 | 6000 | 3000 | 200 | +------------+------------+--------------+-------------+ | 1001 | 6000 | 3000 | 202 | +------------+------------+--------------+-------------+ | 2001 | 6000 | 3000 | 100 | +------------+------------+--------------+-------------+ | 2001 | 6000 | 3000 | 102 | +------------+------------+--------------+-------------+ ``` I have the following query, but it's not returning any results: ``` SELECT * FROM TABLE_NAME WHERE Column One NOT IN (SELECT min(Column One) FROM TABLE_NAME GROUP BY Column One, Column Two, Column Three, Column Four); ```
2020/09/22
[ "https://Stackoverflow.com/questions/64008128", "https://Stackoverflow.com", "https://Stackoverflow.com/users/394601/" ]
You can use an elasticsearch **input** on *employees\_data*. In your filters, use the elasticsearch **filter** on *transaction\_data*: ``` input { elasticsearch { hosts => "localhost" index => "employees_data" query => '{ "query": { "match_all": { } } }' sort => "code:desc" scroll => "5m" docinfo => true } } filter { elasticsearch { hosts => "localhost" index => "transaction_data" query => "code:\"%{[code]}\"" fields => { "Month" => "Month", "payment" => "payment" } } } output { elasticsearch { hosts => ["localhost"] index => "join1" } } ``` And send your new documents to your third index with the elasticsearch **output**. You'll have 3 Elasticsearch connections and the result can be a little slow, but it works.
As far as I know, this cannot be done using the Elasticsearch APIs alone. To handle this, you need to set a unique ID for the documents that are related. For example, the code that you mentioned in your question can be a good ID for the documents. So you can reindex the first index into the third one, then use the UPDATE API to update those documents: read documents from the second index and update their counterparts by ID in the third index. I hope I could help.
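To make the ID-based join concrete, here is a minimal sketch of the merge logic in plain Python (illustration only: the `join_by_code` helper is hypothetical, the field names are borrowed from the sample data in the neighboring answer, and nothing here talks to a real cluster):

```python
# Join "transaction" docs with "employee" docs on the shared `code` field,
# mimicking what reindex + per-ID updates would produce in the third index.
def join_by_code(employees, transactions):
    by_code = {e["code"]: e for e in employees}  # unique ID -> employee doc
    joined = []
    for t in transactions:
        doc = dict(t)                            # start from the transaction fields
        doc.update(by_code.get(t["code"], {}))   # merge in the matching employee fields
        joined.append(doc)
    return joined

employees = [{"code": 1, "name": "xyz", "city": "Mumbai"}]
transactions = [{"code": 1, "Month": "June", "payment": 78000}]
print(join_by_code(employees, transactions))
```

Against a live cluster, the same per-document merge is what `POST /join1/_update/{code}` with a `doc` body performs for each transaction.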
64,008,128
I have the following table structure: ``` +------------+------------+--------------+-------------+ | Column One | Column Two | Column Three | Column Four | +------------+------------+--------------+-------------+ | 1001 | 6000 | 3000 | 200 | +------------+------------+--------------+-------------+ | 1001 | 6000 | 3000 | 200 | +------------+------------+--------------+-------------+ | 1001 | 6000 | 3000 | 200 | +------------+------------+--------------+-------------+ | 1001 | 6000 | 3000 | 202 | +------------+------------+--------------+-------------+ | 2001 | 6000 | 3000 | 100 | +------------+------------+--------------+-------------+ | 2001 | 6000 | 3000 | 100 | +------------+------------+--------------+-------------+ | 2001 | 6000 | 3000 | 100 | +------------+------------+--------------+-------------+ | 2001 | 6000 | 3000 | 102 | +------------+------------+--------------+-------------+ ``` As you can see, my table has a number of duplicate rows where the values of Columns One-Four are identical. However, it should be noted that it also has a number of 'nearly' duplicate rows where only the values of Columns One-Three are identical. I need a query that removes duplicate rows, keeping only one copy of each fully identical row. My expected result would be: ``` +------------+------------+--------------+-------------+ | Column One | Column Two | Column Three | Column Four | +------------+------------+--------------+-------------+ | 1001 | 6000 | 3000 | 200 | +------------+------------+--------------+-------------+ | 1001 | 6000 | 3000 | 202 | +------------+------------+--------------+-------------+ | 2001 | 6000 | 3000 | 100 | +------------+------------+--------------+-------------+ | 2001 | 6000 | 3000 | 102 | +------------+------------+--------------+-------------+ ``` I have the following query, but it's not returning any results: ``` SELECT * FROM TABLE_NAME WHERE Column One NOT IN (SELECT min(Column One) FROM TABLE_NAME GROUP BY Column One, Column Two, Column Three, Column Four); ```
2020/09/22
[ "https://Stackoverflow.com/questions/64008128", "https://Stackoverflow.com", "https://Stackoverflow.com/users/394601/" ]
You don't need Logstash to do this, Elasticsearch itself supports that by leveraging the [`enrich processor`](https://www.elastic.co/guide/en/elasticsearch/reference/current/ingest-enriching-data.html). First, you need to create an enrich policy (use the smallest index, let's say it's `employees_data` ): ``` PUT /_enrich/policy/employee-policy { "match": { "indices": "employees_data", "match_field": "code", "enrich_fields": ["name", "city"] } } ``` Then you can execute that policy in order to create an enrichment index ``` POST /_enrich/policy/employee-policy/_execute ``` When the enrichment index has been created and populated, the next step requires you to create an ingest pipeline that uses the above enrich policy/index: ``` PUT /_ingest/pipeline/employee_lookup { "description" : "Enriching transactions with employee data", "processors" : [ { "enrich" : { "policy_name": "employee-policy", "field" : "code", "target_field": "tmp", "max_matches": "1" } }, { "script": { "if": "ctx.tmp != null", "source": "ctx.putAll(ctx.tmp); ctx.remove('tmp');" } } ] } ``` Finally, you're now ready to create your target index with the joined data. Simply leverage the `_reindex` API combined with the ingest pipeline we've just created: ``` POST _reindex { "source": { "index": "transaction_data" }, "dest": { "index": "join1", "pipeline": "employee_lookup" } } ``` After running this, the `join1` index will contain exactly what you need, for instance: ``` { "_index" : "join1", "_type" : "_doc", "_id" : "0uA8dXMBU9tMsBeoajlw", "_score" : 1.0, "_source" : { "code":1, "name": "xyz", "city": "Mumbai", "Month": "June", "payment": 78000 } } ```
As far as I know, this cannot be done using the Elasticsearch APIs alone. To handle this, you need to set a unique ID for the documents that are related. For example, the code that you mentioned in your question can be a good ID for the documents. So you can reindex the first index into the third one, then use the UPDATE API to update those documents: read documents from the second index and update their counterparts by ID in the third index. I hope I could help.
64,008,128
I have the following table structure: ``` +------------+------------+--------------+-------------+ | Column One | Column Two | Column Three | Column Four | +------------+------------+--------------+-------------+ | 1001 | 6000 | 3000 | 200 | +------------+------------+--------------+-------------+ | 1001 | 6000 | 3000 | 200 | +------------+------------+--------------+-------------+ | 1001 | 6000 | 3000 | 200 | +------------+------------+--------------+-------------+ | 1001 | 6000 | 3000 | 202 | +------------+------------+--------------+-------------+ | 2001 | 6000 | 3000 | 100 | +------------+------------+--------------+-------------+ | 2001 | 6000 | 3000 | 100 | +------------+------------+--------------+-------------+ | 2001 | 6000 | 3000 | 100 | +------------+------------+--------------+-------------+ | 2001 | 6000 | 3000 | 102 | +------------+------------+--------------+-------------+ ``` As you can see, my table has a number of duplicate rows where the values of Columns One-Four are identical. However, it should be noted that it also has a number of 'nearly' duplicate rows where only the values of Columns One-Three are identical. I need a query that removes duplicate rows, keeping only one copy of each fully identical row. My expected result would be: ``` +------------+------------+--------------+-------------+ | Column One | Column Two | Column Three | Column Four | +------------+------------+--------------+-------------+ | 1001 | 6000 | 3000 | 200 | +------------+------------+--------------+-------------+ | 1001 | 6000 | 3000 | 202 | +------------+------------+--------------+-------------+ | 2001 | 6000 | 3000 | 100 | +------------+------------+--------------+-------------+ | 2001 | 6000 | 3000 | 102 | +------------+------------+--------------+-------------+ ``` I have the following query, but it's not returning any results: ``` SELECT * FROM TABLE_NAME WHERE Column One NOT IN (SELECT min(Column One) FROM TABLE_NAME GROUP BY Column One, Column Two, Column Three, Column Four); ```
2020/09/22
[ "https://Stackoverflow.com/questions/64008128", "https://Stackoverflow.com", "https://Stackoverflow.com/users/394601/" ]
You can use the elasticsearch **input** on *employees\_data*. In your filters, use the elasticsearch **filter** on *transaction\_data*: ``` input { elasticsearch { hosts => "localhost" index => "employees_data" query => '{ "query": { "match_all": { } } }' sort => "code:desc" scroll => "5m" docinfo => true } } filter { elasticsearch { hosts => "localhost" index => "transaction_data" query => "code:\"%{[code]}\"" fields => { "Month" => "Month", "payment" => "payment" } } } output { elasticsearch { hosts => ["localhost"] index => "join1" } } ``` And send your new document to your third index with the elasticsearch **output**. You'll have 3 Elasticsearch connections and the result can be a little slow, but it works.
You don't need Logstash to do this, Elasticsearch itself supports that by leveraging the [`enrich processor`](https://www.elastic.co/guide/en/elasticsearch/reference/current/ingest-enriching-data.html). First, you need to create an enrich policy (use the smallest index, let's say it's `employees_data` ): ``` PUT /_enrich/policy/employee-policy { "match": { "indices": "employees_data", "match_field": "code", "enrich_fields": ["name", "city"] } } ``` Then you can execute that policy in order to create an enrichment index ``` POST /_enrich/policy/employee-policy/_execute ``` When the enrichment index has been created and populated, the next step requires you to create an ingest pipeline that uses the above enrich policy/index: ``` PUT /_ingest/pipeline/employee_lookup { "description" : "Enriching transactions with employee data", "processors" : [ { "enrich" : { "policy_name": "employee-policy", "field" : "code", "target_field": "tmp", "max_matches": "1" } }, { "script": { "if": "ctx.tmp != null", "source": "ctx.putAll(ctx.tmp); ctx.remove('tmp');" } } ] } ``` Finally, you're now ready to create your target index with the joined data. Simply leverage the `_reindex` API combined with the ingest pipeline we've just created: ``` POST _reindex { "source": { "index": "transaction_data" }, "dest": { "index": "join1", "pipeline": "employee_lookup" } } ``` After running this, the `join1` index will contain exactly what you need, for instance: ``` { "_index" : "join1", "_type" : "_doc", "_id" : "0uA8dXMBU9tMsBeoajlw", "_score" : 1.0, "_source" : { "code":1, "name": "xyz", "city": "Mumbai", "Month": "June", "payment": 78000 } } ```
2,651,022
I have created my custom MembershipProvider. I have used an instance of the class DBConnect within this provider to handle database functions. Please look at the code below: ``` public class SGIMembershipProvider : MembershipProvider { #region "[ Property Variables ]" private int newPasswordLength = 8; private string connectionString; private string applicationName; private bool enablePasswordReset; private bool enablePasswordRetrieval; private bool requiresQuestionAndAnswer; private bool requiresUniqueEmail; private int maxInvalidPasswordAttempts; private int passwordAttemptWindow; private MembershipPasswordFormat passwordFormat; private int minRequiredNonAlphanumericCharacters; private int minRequiredPasswordLength; private string passwordStrengthRegularExpression; private MachineKeySection machineKey; **private DBConnect dbConn;** #endregion ....... public override bool ChangePassword(string username, string oldPassword, string newPassword) { if (!ValidateUser(username, oldPassword)) return false; ValidatePasswordEventArgs args = new ValidatePasswordEventArgs(username, newPassword, true); OnValidatingPassword(args); if (args.Cancel) { if (args.FailureInformation != null) { throw args.FailureInformation; } else { throw new Exception("Change password canceled due to new password validation failure."); } } SqlParameter[] p = new SqlParameter[3]; p[0] = new SqlParameter("@applicationName", applicationName); p[1] = new SqlParameter("@username", username); p[2] = new SqlParameter("@password", EncodePassword(newPassword)); bool retval = **dbConn.ExecuteSP("User_ChangePassword", p);** return retval; } //ChangePassword public override void Initialize(string name, NameValueCollection config) { if (config == null) { throw new ArgumentNullException("config"); } ...... 
ConnectionStringSettings ConnectionStringSettings = ConfigurationManager.ConnectionStrings[config["connectionStringName"]]; if ((ConnectionStringSettings == null) || (ConnectionStringSettings.ConnectionString.Trim() == String.Empty)) { throw new ProviderException("Connection string cannot be blank."); } connectionString = ConnectionStringSettings.ConnectionString; **dbConn = new DBConnect(connectionString); dbConn.ConnectToDB();** ...... } //Initialize ...... } // SGIMembershipProvider ``` I have instantiated `dbConn` object within `Initialize()` event. My problem is that how could i dispose off this object when object of `SGIMembershipProvider` is disposed off. I know the GC will do this all for me, but I need to explicitly dispose off that object. Even I tried to override `Finalize()` but there is no such overridable method. I have also tried to create destructor for `SGIMembershipProvider`. Can anyone provide me solution.
2010/04/16
[ "https://Stackoverflow.com/questions/2651022", "https://Stackoverflow.com", "https://Stackoverflow.com/users/123514/" ]
From what I can see, [`MembershipProvider`](http://msdn.microsoft.com/en-us/library/system.web.security.membershipprovider.aspx) is not `IDisposable` (nor is [`ProviderBase`](http://msdn.microsoft.com/en-us/library/system.configuration.provider.providerbase.aspx)), so we're really talking about garbage collection here, not disposal. You should be able to add your own finalizer (`~SGIMembershipProvider() {}`), but that should *only* talk to *unmanaged* objects - the other *managed* objects will *also* be collected, after all (and should handle their own unmanaged objects, if any - which is rare).
For this specific scenario, I would recommend directly creating your DB connection inside of each method you need it in, instead of storing a reference to it. Wrap it in a `using` statement and let the framework dispose of it for you. The reason I recommend this approach is that you aren't saving any resources by hanging on to that connection object. Connection pooling will handle reusing existing open connections for you. Connection pooling works at the unmanaged code level. When you close/dispose of a managed Connection object, it doesn't necessarily close the physical unmanaged connection; it simply returns the connection to the connection pool. The unmanaged code controlling the pool will then determine what to do with it (close it, keep it open for a bit, or hand it to another consumer that needs it). Likewise, when you create a managed connection, you are not necessarily creating a new connection from scratch; you may just be reusing an existing one. I do think the framework needs to make this object disposable, however. I can think of plenty of other situations in which I'd like to reuse something in the provider and dispose of it in the end.
61,132,631
When working with Java streams, we can use a collector to produce a collection such as a list. For example, here we make a stream of the `Month` enum objects, and for each one generate a `String` holding the localized name of the month. We collect the results into a `List` of type `String` by calling [`Collectors.toList()`](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/stream/Collectors.html#toList()). ``` List < String > monthNames = Arrays .stream( Month.values() ) .map( month -> month.getDisplayName( TextStyle.FULL , Locale.CANADA_FRENCH ) ) .collect( Collectors.toList() ) ; ``` > > monthNames.toString(): [janvier, février, mars, avril, mai, juin, juillet, août, septembre, octobre, novembre, décembre] > > > To make that [list unmodifiable](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/List.html#unmodifiable), we can call [`List.copyOf`](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/List.html#copyOf(java.util.Collection)) in Java 10 and later. ``` List < String > monthNamesUnmod = List.copyOf( monthNames ); ``` ➥ Is there a way for the stream with collector to produce an unmodifiable list without me needing to wrap a call to `List.copyOf`?
2020/04/10
[ "https://Stackoverflow.com/questions/61132631", "https://Stackoverflow.com", "https://Stackoverflow.com/users/642706/" ]
`Collectors.toUnmodifiableList` =============================== Yes, there is a way: [`Collectors.toUnmodifiableList`](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/stream/Collectors.html#toUnmodifiableList()) Like [`List.copyOf`](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/List.html#copyOf(java.util.Collection)), this feature is built into [Java 10](https://en.wikipedia.org/wiki/Java_version_history#Java_SE_10) and later. In contrast, `Collectors.toList` appeared with the debut of [`Collectors`](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/stream/Collectors.html) in [Java 8](https://en.wikipedia.org/wiki/Java_version_history#Java_SE_8). In your example code, just change that last part [`toList`](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/stream/Collectors.html#toList()) to `toUnmodifiableList`. ``` List < String > monthNames = Arrays .stream( Month.values() ) .map( month -> month.getDisplayName( TextStyle.FULL , Locale.CANADA_FRENCH ) ) .collect( Collectors.toUnmodifiableList() ) // Call `toUnmodifiableList`. ; ``` `Set` and `Map` too ------------------- The [`Collectors`](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/stream/Collectors.html) utility class offers options for collecting into an unmodifiable `Set` or `Map` as well as `List`. 
* [`Collectors.toUnmodifiableList()`](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/stream/Collectors.html#toUnmodifiableList()) * [`Collectors.toUnmodifiableSet()`](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/stream/Collectors.html#toUnmodifiableSet()) * [`Collectors.toUnmodifiableMap()`](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/stream/Collectors.html#toUnmodifiableMap(java.util.function.Function,java.util.function.Function)) (or [with `BinaryOperator`](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/stream/Collectors.html#toUnmodifiableMap(java.util.function.Function,java.util.function.Function,java.util.function.BinaryOperator)))
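As a quick sanity check (a minimal, self-contained sketch; the class name and sample strings are just for illustration), a list collected with `Collectors.toUnmodifiableList()` rejects mutation at runtime:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class ToUnmodifiableDemo {
    public static void main(String[] args) {
        // Requires Java 10+ for Collectors.toUnmodifiableList().
        List<String> names = Arrays.stream(new String[] { "january", "february", "march" })
                .collect(Collectors.toUnmodifiableList());

        try {
            names.add("april"); // structural change is rejected
            System.out.println("modified");
        } catch (UnsupportedOperationException e) {
            System.out.println("UnsupportedOperationException");
        }
    }
}
```

Running `main` prints `UnsupportedOperationException`, confirming the collected list blocks structural changes.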
In Java 8 we could use [`Collectors.collectingAndThen`](https://docs.oracle.com/javase/8/docs/api/java/util/stream/Collectors.html#collectingAndThen-java.util.stream.Collector-java.util.function.Function-). ``` List < String > monthNames = Arrays .stream( Month.values() ) .map( month -> month.getDisplayName( TextStyle.FULL , Locale.CANADA_FRENCH ) ) .collect( Collectors.collectingAndThen(Collectors.toList(), Collections::unmodifiableList) ) ; ```
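To illustrate the Java 8 approach end to end (a minimal, self-contained sketch; the class name and sample strings are just for illustration), the list produced by `collectingAndThen` with `Collections::unmodifiableList` rejects mutation the same way:

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.stream.Collectors;

public class CollectingAndThenDemo {
    public static void main(String[] args) {
        // Collect into a List, then wrap it so callers cannot modify it (Java 8+).
        List<String> names = Arrays.stream(new String[] { "january", "february", "march" })
                .map(String::toUpperCase)
                .collect(Collectors.collectingAndThen(
                        Collectors.toList(),
                        Collections::unmodifiableList));

        try {
            names.add("APRIL"); // structural change is rejected
            System.out.println("modified");
        } catch (UnsupportedOperationException e) {
            System.out.println("UnsupportedOperationException");
        }
    }
}
```

Running `main` prints `UnsupportedOperationException`; note the wrapper is a view over the collected list, which is fine here because the underlying list never escapes.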
4,629,946
Authlogic hasn't been updated in a few months and, while it seems to work in Rails 3, still has a ton of deprecation warnings. Is there a particularly good fork of it I can/should use instead? I'm tempted to fork it and maintain an "authlogic-rails3" gem.
2011/01/07
[ "https://Stackoverflow.com/questions/4629946", "https://Stackoverflow.com", "https://Stackoverflow.com/users/168143/" ]
I'm using this and seems to be fine. <https://github.com/odorcicd/authlogic/tree/rails3>
I've created my own, which I'm pondering naming authlogic-rails3: <http://github.com/jjb/authlogic>
55,787,018
Using .NET Core 3 preview 4, the "API" template for a F# ASP.NET MVC project fails to build. This is without any changes to the template whatsoever. [!["API" template for a F# ASP.NET MVC project](https://i.stack.imgur.com/cWJ87.png)](https://i.stack.imgur.com/cWJ87.png) This is the code that fails: ```fs type Startup private () = member this.ConfigureServices(services: IServiceCollection) = // Add framework services. services.AddControllers().AddNewtonsoftJson() |> ignore ``` With error > > ...\Startup.fs(23,35): error FS0039: The field, constructor or member 'AddNewtonsoftJson' is not defined. Maybe you want one of the following: AddNewtonsoftJsonProtocol > > > It seems that there are [changes coming for this](https://github.com/aspnet/AspNetCore/issues/7438) - is it just being worked on and unusable right now?
2019/04/21
[ "https://Stackoverflow.com/questions/55787018", "https://Stackoverflow.com", "https://Stackoverflow.com/users/304174/" ]
In order to switch ASP.NET Core 3.0 back to use JSON.NET, you will need to reference the [`Microsoft.AspNetCore.Mvc.NewtonsoftJson` NuGet package](https://www.nuget.org/packages/Microsoft.AspNetCore.Mvc.NewtonsoftJson). That will contain the `AddNewtonsoftJson` extension method. In C#, this would look like this: ``` services.AddControllers() .AddNewtonsoftJson(); ``` So assuming that I understand enough of F#, I would say that your call would be correct if you have the package referenced in your project.
For me this helped: 1. Code in Startup.cs `services.AddControllers().AddNewtonsoftJson(x => x.SerializerSettings.ReferenceLoopHandling = Newtonsoft.Json.ReferenceLoopHandling.Ignore);` 2. Upgrade all Nuget Packages to 3.1.8 (3.1.3 was not working)
55,787,018
Using .NET Core 3 preview 4, the "API" template for a F# ASP.NET MVC project fails to build. This is without any changes to the template whatsoever. [!["API" template for a F# ASP.NET MVC project](https://i.stack.imgur.com/cWJ87.png)](https://i.stack.imgur.com/cWJ87.png) This is the code that fails: ```fs type Startup private () = member this.ConfigureServices(services: IServiceCollection) = // Add framework services. services.AddControllers().AddNewtonsoftJson() |> ignore ``` With error > > ...\Startup.fs(23,35): error FS0039: The field, constructor or member 'AddNewtonsoftJson' is not defined. Maybe you want one of the following: AddNewtonsoftJsonProtocol > > > It seems that there are [changes coming for this](https://github.com/aspnet/AspNetCore/issues/7438) - is it just being worked on and unusable right now?
2019/04/21
[ "https://Stackoverflow.com/questions/55787018", "https://Stackoverflow.com", "https://Stackoverflow.com/users/304174/" ]
In order to switch ASP.NET Core 3.0 back to use JSON.NET, you will need to reference the [`Microsoft.AspNetCore.Mvc.NewtonsoftJson` NuGet package](https://www.nuget.org/packages/Microsoft.AspNetCore.Mvc.NewtonsoftJson). That will contain the `AddNewtonsoftJson` extension method. In C#, this would look like this: ``` services.AddControllers() .AddNewtonsoftJson(); ``` So assuming that I understand enough of F#, I would say that your call would be correct if you have the package referenced in your project.
It works for me. Install the NewtonsoftJson package from NuGet: `dotnet add package Microsoft.AspNetCore.Mvc.NewtonsoftJson --version 3.1.0` (version 3.1.0 works for ASP.NET Core 3.0) and use the following code: ``` services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Version_3_0) .AddNewtonsoftJson(opt => { opt.SerializerSettings.ReferenceLoopHandling = Newtonsoft.Json.ReferenceLoopHandling.Ignore; }); ``` Hope it works fine. Thanks.
55,787,018
Using .NET Core 3 preview 4, the "API" template for a F# ASP.NET MVC project fails to build. This is without any changes to the template whatsoever. [!["API" template for a F# ASP.NET MVC project](https://i.stack.imgur.com/cWJ87.png)](https://i.stack.imgur.com/cWJ87.png) This is the code that fails: ```fs type Startup private () = member this.ConfigureServices(services: IServiceCollection) = // Add framework services. services.AddControllers().AddNewtonsoftJson() |> ignore ``` With error > > ...\Startup.fs(23,35): error FS0039: The field, constructor or member 'AddNewtonsoftJson' is not defined. Maybe you want one of the following: AddNewtonsoftJsonProtocol > > > It seems that there are [changes coming for this](https://github.com/aspnet/AspNetCore/issues/7438) - is it just being worked on and unusable right now?
2019/04/21
[ "https://Stackoverflow.com/questions/55787018", "https://Stackoverflow.com", "https://Stackoverflow.com/users/304174/" ]
In order to switch ASP.NET Core 3.0 back to use JSON.NET, you will need to reference the [`Microsoft.AspNetCore.Mvc.NewtonsoftJson` NuGet package](https://www.nuget.org/packages/Microsoft.AspNetCore.Mvc.NewtonsoftJson). That will contain the `AddNewtonsoftJson` extension method. In C#, this would look like this: ``` services.AddControllers() .AddNewtonsoftJson(); ``` So assuming that I understand enough of F#, I would say that your call would be correct if you have the package referenced in your project.
Add package: Microsoft.AspNetCore.Mvc.NewtonsoftJson Package details: <https://www.nuget.org/packages/Microsoft.AspNetCore.Mvc.NewtonsoftJson> Call `AddNewtonsoftJson()` extension method as mentioned below ``` // This method gets called by the runtime. Use this method to add services to the container. public void ConfigureServices(IServiceCollection services) { services.AddControllersWithViews().AddNewtonsoftJson(); } ```
55,787,018
Using .NET Core 3 preview 4, the "API" template for a F# ASP.NET MVC project fails to build. This is without any changes to the template whatsoever. [!["API" template for a F# ASP.NET MVC project](https://i.stack.imgur.com/cWJ87.png)](https://i.stack.imgur.com/cWJ87.png) This is the code that fails: ```fs type Startup private () = member this.ConfigureServices(services: IServiceCollection) = // Add framework services. services.AddControllers().AddNewtonsoftJson() |> ignore ``` With error > > ...\Startup.fs(23,35): error FS0039: The field, constructor or member 'AddNewtonsoftJson' is not defined. Maybe you want one of the following: AddNewtonsoftJsonProtocol > > > It seems that there are [changes coming for this](https://github.com/aspnet/AspNetCore/issues/7438) - is it just being worked on and unusable right now?
2019/04/21
[ "https://Stackoverflow.com/questions/55787018", "https://Stackoverflow.com", "https://Stackoverflow.com/users/304174/" ]
Add package: Microsoft.AspNetCore.Mvc.NewtonsoftJson Package details: <https://www.nuget.org/packages/Microsoft.AspNetCore.Mvc.NewtonsoftJson> Call `AddNewtonsoftJson()` extension method as mentioned below ``` // This method gets called by the runtime. Use this method to add services to the container. public void ConfigureServices(IServiceCollection services) { services.AddControllersWithViews().AddNewtonsoftJson(); } ```
For me this helped: 1. Code in Startup.cs `services.AddControllers().AddNewtonsoftJson(x => x.SerializerSettings.ReferenceLoopHandling = Newtonsoft.Json.ReferenceLoopHandling.Ignore);` 2. Upgrade all Nuget Packages to 3.1.8 (3.1.3 was not working)
55,787,018
Using .NET Core 3 preview 4, the "API" template for a F# ASP.NET MVC project fails to build. This is without any changes to the template whatsoever. [!["API" template for a F# ASP.NET MVC project](https://i.stack.imgur.com/cWJ87.png)](https://i.stack.imgur.com/cWJ87.png) This is the code that fails: ```fs type Startup private () = member this.ConfigureServices(services: IServiceCollection) = // Add framework services. services.AddControllers().AddNewtonsoftJson() |> ignore ``` With error > > ...\Startup.fs(23,35): error FS0039: The field, constructor or member 'AddNewtonsoftJson' is not defined. Maybe you want one of the following: AddNewtonsoftJsonProtocol > > > It seems that there are [changes coming for this](https://github.com/aspnet/AspNetCore/issues/7438) - is it just being worked on and unusable right now?
2019/04/21
[ "https://Stackoverflow.com/questions/55787018", "https://Stackoverflow.com", "https://Stackoverflow.com/users/304174/" ]
Add package: Microsoft.AspNetCore.Mvc.NewtonsoftJson Package details: <https://www.nuget.org/packages/Microsoft.AspNetCore.Mvc.NewtonsoftJson> Call `AddNewtonsoftJson()` extension method as mentioned below ``` // This method gets called by the runtime. Use this method to add services to the container. public void ConfigureServices(IServiceCollection services) { services.AddControllersWithViews().AddNewtonsoftJson(); } ```
It works for me. Install the NewtonsoftJson package from NuGet: `dotnet add package Microsoft.AspNetCore.Mvc.NewtonsoftJson --version 3.1.0` (version 3.1.0 works for ASP.NET Core 3.0) and use the following code: ``` services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Version_3_0) .AddNewtonsoftJson(opt => { opt.SerializerSettings.ReferenceLoopHandling = Newtonsoft.Json.ReferenceLoopHandling.Ignore; }); ``` Hope it works fine. Thanks.
334,844
I have installed Magento 2.4 successfully on localhost. When I try to open the link (http://localhost/magento3/) it shows "not found", even though all the Magento directories are in that same folder (magento3). Please let me know how to fix this issue. [![enter image description here](https://i.stack.imgur.com/4UxuK.png)](https://i.stack.imgur.com/4UxuK.png) [![enter image description here](https://i.stack.imgur.com/yA0Q9.png)](https://i.stack.imgur.com/yA0Q9.png) Thanks
2021/03/30
[ "https://magento.stackexchange.com/questions/334844", "https://magento.stackexchange.com", "https://magento.stackexchange.com/users/72786/" ]
I suggest you use the bulk API endpoints, for better performance. The tier price update endpoint is ``` {{host}}/rest/async/bulk/V1/products/tier-prices/ ``` or ``` {{host}}/rest/all/async/bulk/V1/products/tier-prices/ ``` or ``` {{host}}/rest/<storecode>/async/bulk/V1/products/tier-prices/ ``` Here is an example of the JSON payload ``` [ { "prices": [ { "price": 66, "price_type": "fixed", "website_id": 0, "sku": "juice-lemon", "customer_group": "Gold", "quantity": 1 }, { "price": 55, "price_type": "fixed", "website_id": 0, "sku": "juice-lemon", "customer_group": "Silver", "quantity": 1 }, { "price": 44, "price_type": "fixed", "website_id": 0, "sku": "juice-orange", "customer_group": "Black", "quantity": 1 } ] } ] ``` You can find the format and field explanations in the official tier-price REST documentation <https://devdocs.magento.com/guides/v2.4/rest/modules/catalog-pricing.html#manage-tier-prices> Here is the documentation about bulk endpoints and how they work <https://devdocs.magento.com/guides/v2.4/rest/bulk-endpoints.html>
There is no built in bulk update option for updating tier pricing. > > Rather than entering tier prices manually for each product, it can be > more efficient to import the pricing data. > > > Have a look at the documentation for importing tier price data here <https://docs.magento.com/user-guide/system/data-import-price-tier.html> You will also find many third party import tools available that enable you to import pricing data. Furthermore it is relatively easy to create your own import scripts for modifying data like this yourself. And finally, believe it or not, MAGMI is still an option for Magento 2. <http://wiki.magmi.org/index.php/Tier_price_importer>
334,844
I have installed Magento 2.4 successfully in localhost. When I tried to open the link (http://localhost/magento3/) it's showing not found. But all the Magento directories in the same folder(magento3). Please any let me know how to fix this issue. [![enter image description here](https://i.stack.imgur.com/4UxuK.png)](https://i.stack.imgur.com/4UxuK.png) [![enter image description here](https://i.stack.imgur.com/yA0Q9.png)](https://i.stack.imgur.com/yA0Q9.png) Thanks
2021/03/30
[ "https://magento.stackexchange.com/questions/334844", "https://magento.stackexchange.com", "https://magento.stackexchange.com/users/72786/" ]
we recently published a module that does exactly what you need. You can find it here <https://marketplace.magento.com/customgento-module-mass-update-tier-prices-m2.html>.
There is no built in bulk update option for updating tier pricing. > > Rather than entering tier prices manually for each product, it can be > more efficient to import the pricing data. > > > Have a look at the documentation for importing tier price data here <https://docs.magento.com/user-guide/system/data-import-price-tier.html> You will also find many third party import tools available that enable you to import pricing data. Furthermore it is relatively easy to create your own import scripts for modifying data like this yourself. And finally, believe it or not, MAGMI is still an option for Magento 2. <http://wiki.magmi.org/index.php/Tier_price_importer>
606,985
So, if I run a program through the menus in gnome-shell, is there a way to view `stdout` and `stderr`? Or is there some kind of hack to achieve this functionality? Or is everything just sent to `/dev/null`?
2013/06/13
[ "https://superuser.com/questions/606985", "https://superuser.com", "https://superuser.com/users/102044/" ]
Usually, `gdm`/session start-up scripts redirect `stderr` & `stdout` to either: ``` ~/.xsession-errors ``` or ``` ~/.cache/gdm/session.log ``` With `systemd` and recent `gdm` versions, everything is redirected to `systemd journal`, so one way to get that output is: ``` journalctl -b _PID=$(pgrep gnome-session) ```
The command suggested by don\_crissti didn't show anything for me, but I just do: ``` journalctl -f ``` in a terminal tab that I always leave open (and opens automatically on boot) so I have realtime feedback of all logging from systemd on my computer. If desired, you can use the match filters from journalctl to limit the noise, but for now I like to have everything at hand.
14,911,356
I've just installed Titanium and the Android SDK for development. In my project I have an index.html, but it's not loading that when I do a build; it keeps loading a 'welcome to titanium' HTML page which, for the life of me, I just can't find anywhere to see where it's being loaded from. How the heck do I set *my* index.html to be the one that is loaded when the app first loads? I have tried adding index2.html (index2 as a test) but it's still loading this welcome to titanium URL, even when I make a new blank project.
2013/02/16
[ "https://Stackoverflow.com/questions/14911356", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2058234/" ]
Sounds like you created a default Alloy project; all the `app/views/index.xml` file does is load another view, probably called FirstView or something like that. Look through your `views` directory inside the `app` directory for another `.xml` file. The structure of Alloy is that the `index.xml` file is loaded first no matter what, so it is not even recognizing your `index2.xml`. I would highly recommend you go through the [Alloy Quick Start](http://docs.appcelerator.com/titanium/latest/#!/guide/Alloy_Quick_Start) to get the general concepts first.
I think you are working with Titanium Alloy. If so, your file should be index.xml and not index.html. index.xml contains a `<Window>` as a child node of the `<Alloy>` node. For an Alloy project you can find the index.xml file, and for that view there is a controller file in the folder app/controllers/index.js. In the index.js file there must be the following line ``` index.open() ``` this line will open the index.xml file in your app. **Note:** if you have given an id attribute to the window node in the xml, then you should use `$.<id>.open()`. This should work fine.
14,911,356
ive just installed titanium and the android sdk for development. In my project i have an index.html but its not loading that when i do a build, it keeps loading a 'welcome to titanium' html page which for the life of me i just can't find anywhere to see where its being loaded from. How the heck do i set *my* index.html to be the one that is loaded when the app first loads? I have tried adding index2.html (index2 as a test) but its still loading this welcome to titanium url even when i make a new blank project.
2013/02/16
[ "https://Stackoverflow.com/questions/14911356", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2058234/" ]
Sounds like you created a default alloy project, all the `app/controllers/index.xml` file does is load another controller, probably called FirstView or something like that. Look through your `views` directory inside the `app` directory for another `.xml` file. The structure of Alloy is that the `index.xml` file is loaded first no matter what, so it is not even recognizing your `index2.xml`. I would highly recommend you go through the [Alloy Quick Start](http://docs.appcelerator.com/titanium/latest/#!/guide/Alloy_Quick_Start) to get the general concepts first.
It should be `index.xml` and body should be like this: ``` <Alloy> <Window id="xyz"> </Window> </Alloy> ``` Then there should be `index.js` file where you have to call this xml file by id: ``` $.xyz.open(); ```
14,911,356
ive just installed titanium and the android sdk for development. In my project i have an index.html but its not loading that when i do a build, it keeps loading a 'welcome to titanium' html page which for the life of me i just can't find anywhere to see where its being loaded from. How the heck do i set *my* index.html to be the one that is loaded when the app first loads? I have tried adding index2.html (index2 as a test) but its still loading this welcome to titanium url even when i make a new blank project.
2013/02/16
[ "https://Stackoverflow.com/questions/14911356", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2058234/" ]
I think you are working with Titanium Alloy. If so, your file should be index.xml and not index.html. index.xml contains a `<Window>` as a child node of the `<Alloy>` node. For an Alloy project you can find the index.xml file, and for that view there is a controller file in the folder app/controllers/index.js. In the index.js file there must be the following line ``` index.open() ``` this line will open the index.xml file in your app. **Note:** if you have given an id attribute to the window node in the xml, then you should use `$.<id>.open()`. This should work fine.
It should be `index.xml` and body should be like this: ``` <Alloy> <Window id="xyz"> </Window> </Alloy> ``` Then there should be `index.js` file where you have to call this xml file by id: ``` $.xyz.open(); ```
206,737
I have a question about comparing the points within two plots. I would like to compare two plots and find the minimum distance among their points, in order to find the nearest/common points (i.e., those ones with minimum or zero-distance) and plot it (overlapping). What I did was to extract the coordinates of the respectively points. But I do not know how to compare them and/or the two plots. I used the following line of code, but the result is completely different from what I am looking for. ``` Outer[EuclideanDistance, seq1, seq2, 1] // Flatten seq1 = {{160.5, 262.5}, {105.5, 241.5}, {247.5, 241.5}, {333.5, 220.5}, {34.5, 199.5}, {239.5, 178.5}, {58.5, 136.5}, {159.5, 73.5}, {281.5, 178.5}, {124.5, 262.5}, {196.5, 152.5}, {92.5, 194.5}, {153.5, 239.5}, {120.5, 236.5}, {105.5, 173.5}, {88.5, 131.5}, {26.5, 110.5}, {96.5, 110.5}, {152, 89.5}, {2.5, 68.5}, {49.5, 47.5}, {281.5, 221.5}, {217.5, 200.5}, {172.5, 158.5}, {296.5, 179.5}, {51.5, 300.5}, {60.5, 279.5}, {171.5, 279.5}, {311, 216}, {350.5, 216.5}, {83.5, 153.5}, {239.5, 132.5}, {75.5, 111.5}, {79.5, 195.5}, {110.5, 195.5}, {126.5, 195.5}, {183.5, 153.5}, {49.5, 90.5}, {53.5, 158.5}, {111.5, 216.5}, {244.5, 258.5}, {110.5, 69.5}, {221.5, 237.5}, {276.5, 237.5}, {147.5, 299.5}, {165.5, 195.5}, {84.5, 299.5}, {92.5, 299.5}, {21.5, 257.5}, {29.5, 257.5}, {77.5, 89.5}, {60.5, 68.5}, {68.5, 47.5}, {76.5, 47.5}, {139.5, 257.5}, {36.5, 175.5}, {185.5, 175.5}, {99.5, 154.5}, {43.5, 133.5}, {43.5, 70.5}, {129.5, 70.5}, {4.5, 49.5}, {254.5, 195.5}, {264.5, 90.5}, {342.5, 90.5}, {175.5, 215.5}, {214.5, 215.5}, {307.5, 174.5}, {230.5, 258.5}, {144.5, 216.5}, {42.5, 153.5}, {190.5, 132.5}, {42.5, 111.5}, {66.5, 90.5}, {121.5, 90.5}, {96.5, 69.5}, {174.5, 48.5}, {228.5, 278.5}} seq2 = {{160.5, 262.5}, {105.5, 241.5}, {247.5, 241.5}, {333.5, 220.5}, {34.5, 199.5}, {239.5, 178.5}, {58.5, 136}, {159.5, 73.5}, {281.5, 178.5}, {128, 262.5}, {196.5, 152.5}, {92, 194.5}, {153.5, 239.5}, {120.5, 236.5}, {105.5, 173.5}, {88.5, 
131.5}, {26.5, 110.5}, {96.5, 110.5}, {152.5, 89.5}, {2.5, 68.5}, {49.5, 47.5}, {281.5, 221.5}, {217.5, 200.5}, {172.5, 158.5}, {296.5, 179.5}, {51.5, 300.5}, {60.5, 279.5}, {171.5, 279.5}, {311.5, 216.5}, {350.5, 216.5}, {83.5, 153.5}, {239.5, 132.5}, {75.5, 111.5}, {79.5, 195.5}, {110.5, 200.5}, {126.5, 195.5}, {183.5, 153.5}, {49.5, 90.5}, {53.5, 158.5}, {111.5, 216.5}, {244.5, 258.5}, {114.5, 69.5}, {221.5, 237.5}, {276.5, 237.5}, {147.5, 299.5}, {165.5, 195.5}, {84.5, 299.5}, {92.5, 299.5}, {22.5, 257.5}, {29.5, 257.5}, {77.5, 89.5}, {60.5, 68.5}, {68.5, 47.5}, {76.5, 47.5}, {139.5, 257.5}, {36.5, 175.5}, {185.5, 175.5}, {99.5, 154.5}, {43.5, 133.5}, {43.5, 70.5}, {129.5, 70.5}, {4.5, 49.5}, {254.5, 195.5}, {264.5, 90.5}, {342.5, 90.5}, {175.5, 215.5}, {214.5, 215.5}, {307.5, 174.5}, {230.5, 258.5}, {144.5, 216.5}, {42.5, 153.5}, {190.5, 132.5}, {42.5, 111.5}, {66.5, 90.5}, {121.5, 90.5}, {96.5, 69.5}, {174.5, 48.5}, {228.5, 278.5}} ``` [![enter image description here](https://i.stack.imgur.com/e3Bc4.png)](https://i.stack.imgur.com/e3Bc4.png) [![enter image description here](https://i.stack.imgur.com/u9OaQ.png)](https://i.stack.imgur.com/u9OaQ.png) The result should show the points on the plot equal (almost in common) between the two plots. Could you please help me?
2019/09/23
[ "https://mathematica.stackexchange.com/questions/206737", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/67505/" ]
Since most of the points in your two data sets coincide and none of them has a nearest neighbor that is very far away when the total scale of the two plots is taken into consideration, I recommend visualizing the spacial relation between the points by plotting the points of one dataset and showing the offset of the nearest point in the other by color. Like so: Data ``` seq1 = {{160.5, 262.5}, {105.5, 241.5}, {247.5, 241.5}, {333.5, 220.5}, {34.5, 199.5}, {239.5, 178.5}, {58.5, 136.5}, {159.5, 73.5}, {281.5, 178.5}, {124.5, 262.5}, {196.5, 152.5}, {92.5, 194.5}, {153.5, 239.5}, {120.5, 236.5}, {105.5, 173.5}, {88.5, 131.5}, {26.5, 110.5}, {96.5, 110.5}, {152, 89.5}, {2.5, 68.5}, {49.5, 47.5}, {281.5, 221.5}, {217.5, 200.5}, {172.5, 158.5}, {296.5, 179.5}, {51.5, 300.5}, {60.5, 279.5}, {171.5, 279.5}, {311, 216}, {350.5, 216.5}, {83.5, 153.5}, {239.5, 132.5}, {75.5, 111.5}, {79.5, 195.5}, {110.5, 195.5}, {126.5, 195.5}, {183.5, 153.5}, {49.5, 90.5}, {53.5, 158.5}, {111.5, 216.5}, {244.5, 258.5}, {110.5, 69.5}, {221.5, 237.5}, {276.5, 237.5}, {147.5, 299.5}, {165.5, 195.5}, {84.5, 299.5}, {92.5, 299.5}, {21.5, 257.5}, {29.5, 257.5}, {77.5, 89.5}, {60.5, 68.5}, {68.5, 47.5}, {76.5, 47.5}, {139.5, 257.5}, {36.5, 175.5}, {185.5, 175.5}, {99.5, 154.5}, {43.5, 133.5}, {43.5, 70.5}, {129.5, 70.5}, {4.5, 49.5}, {254.5, 195.5}, {264.5, 90.5}, {342.5, 90.5}, {175.5, 215.5}, {214.5, 215.5}, {307.5, 174.5}, {230.5, 258.5}, {144.5, 216.5}, {42.5, 153.5}, {190.5, 132.5}, {42.5, 111.5}, {66.5, 90.5}, {121.5, 90.5}, {96.5, 69.5}, {174.5, 48.5}, {228.5, 278.5}}; seq2 = {{160.5, 262.5}, {105.5, 241.5}, {247.5, 241.5}, {333.5, 220.5}, {34.5, 199.5}, {239.5, 178.5}, {58.5, 136}, {159.5, 73.5}, {281.5, 178.5}, {128, 262.5}, {196.5, 152.5}, {92, 194.5}, {153.5, 239.5}, {120.5, 236.5}, {105.5, 173.5}, {88.5, 131.5}, {26.5, 110.5}, {96.5, 110.5}, {152.5, 89.5}, {2.5, 68.5}, {49.5, 47.5}, {281.5, 221.5}, {217.5, 200.5}, {172.5, 158.5}, {296.5, 179.5}, {51.5, 300.5}, {60.5, 279.5}, {171.5, 
279.5}, {311.5, 216.5}, {350.5, 216.5}, {83.5, 153.5}, {239.5, 132.5}, {75.5, 111.5}, {79.5, 195.5}, {110.5, 200.5}, {126.5, 195.5}, {183.5, 153.5}, {49.5, 90.5}, {53.5, 158.5}, {111.5, 216.5}, {244.5, 258.5}, {114.5, 69.5}, {221.5, 237.5}, {276.5, 237.5}, {147.5, 299.5}, {165.5, 195.5}, {84.5, 299.5}, {92.5, 299.5}, {22.5, 257.5}, {29.5, 257.5}, {77.5, 89.5}, {60.5, 68.5}, {68.5, 47.5}, {76.5, 47.5}, {139.5, 257.5}, {36.5, 175.5}, {185.5, 175.5}, {99.5, 154.5}, {43.5, 133.5}, {43.5, 70.5}, {129.5, 70.5}, {4.5, 49.5}, {254.5, 195.5}, {264.5, 90.5}, {342.5, 90.5}, {175.5, 215.5}, {214.5, 215.5}, {307.5, 174.5}, {230.5, 258.5}, {144.5, 216.5}, {42.5, 153.5}, {190.5, 132.5}, {42.5, 111.5}, {66.5, 90.5}, {121.5, 90.5}, {96.5, 69.5}, {174.5, 48.5}, {228.5, 278.5}}; ``` Offset plot ``` nf = Nearest[seq1] clusters = GroupBy[seq2, {# - nf[#][[1]]} &]; colors = ColorData[24] /@ Range @ Length @ clusters Legended[ Graphics[ {AbsolutePointSize[10], MapThread[{#1, Point[#2]} &, {colors, Values @ clusters}]}, Frame -> True, ImageSize -> Large], SwatchLegend[colors, Keys @ clusters, LegendLabel -> "Offset"]] ``` [![plot](https://i.stack.imgur.com/sDDTC.png)](https://i.stack.imgur.com/sDDTC.png) Second thoughts Perhaps using `ListPlot` with markers makes for a better visualization. ``` nf = Nearest[seq1]; clusters = GroupBy[seq2, {# - nf[#][[1]]} &]; ListPlot[Values @ clusters, PlotMarkers -> {Automatic, 15}, PlotLegends -> PointLegend[HoldForm /@ (Keys @ clusters)[[All, 1]], LegendLabel -> "Offset"], Frame -> True, ImageSize -> Large] ``` [![plot](https://i.stack.imgur.com/kle9O.png)](https://i.stack.imgur.com/kle9O.png)
``` distance = DistanceMatrix[seq1, seq2] points = Position[distance, x_ /; x < 0.1] ListPlot[{seq1[[First /@ points]], seq2[[Last /@ points]]}] ```
65,286,995
I'm attempting to make a widget extension with two text labels and an image. The two labels need to be in the top left and the image in the bottom right. I've been unable to get this working correctly. Is there a proper way of doing this without having to use a Spacer() or an overlay image which isn't what I need. Very new to SwiftUI. Any help or pointers would be greatly appreciated. ``` VStack (alignment: .leading){ Text("Title") .font(.headline) .fontWeight(.regular) .lineLimit(1) Text("subtitle") .font(.title3) .fontWeight(.bold) .lineLimit(1) .allowsTightening(true) HStack { Spacer(minLength: 15) Image("fish") .resizable() .frame(width: 120, height: 80) } } .padding(.top) .padding(.leading) ``` [![enter image description here](https://i.stack.imgur.com/OfHPp.png)](https://i.stack.imgur.com/OfHPp.png)
2020/12/14
[ "https://Stackoverflow.com/questions/65286995", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14601731/" ]
There are a bunch of ways to build a layout like that. What you have is almost working, you just need a `Spacer` between the text and image to shove the image down: ``` VStack (alignment: .leading) { Text("Title") .font(.headline) .fontWeight(.regular) .lineLimit(1) Text("subtitle") .font(.title3) .fontWeight(.bold) .lineLimit(1) .allowsTightening(true) Spacer(minLength: 0) // <-- add spacer here HStack { Spacer(minLength: 15) Image("fish") .resizable() .frame(width: 120, height: 80) } } .padding(.top) .padding(.leading) ``` The only other change I would suggest is avoiding the hard-coded image size, since widgets are different sizes on different phones. You could have the image expand to fit the available width while preserving your image’s 3:2 aspect ratio like this: ``` VStack(alignment: .leading) { Text("Title") .font(.headline) .fontWeight(.regular) Text("subtitle") .font(.title3) .fontWeight(.bold) .allowsTightening(true) Spacer(minLength: 0) Image("fish") .resizable() .aspectRatio(3/2, contentMode: .fit) } .lineLimit(1) .padding(.leading) .padding(.top) ``` Or if you want the fixed-size image and would prefer to have the text overlap it on smaller screens, you could use a `ZStack` like this: ``` ZStack(alignment: .bottomTrailing) { Image("fish") .resizable() .frame(width: 120, height: 80) VStack (alignment: .leading) { Text("Title") .font(.headline) .fontWeight(.regular) Text("subtitle") .font(.title3) .fontWeight(.bold) .allowsTightening(true) } .lineLimit(1) .padding() .frame(maxWidth: .infinity, maxHeight: .infinity, alignment: .topLeading) } ``` As I said, there are a bunch of ways to solve this!
I think I've managed to get this working, well as close as I could get it. Seems to be a bit better than before. Mildly satisfied but working now. ``` ZStack { VStack (alignment: .leading){ Text("Title") .font(.body) .fontWeight(.bold) .lineLimit(1) .allowsTightening(true) Text("subtitle") .font(.title) .fontWeight(.regular) HStack(alignment: .bottom){ Spacer() Image("fish") .resizable() .frame(width: 120, height: 80) .frame(maxWidth: .infinity, maxHeight: .infinity, alignment: .bottomTrailing) .offset(x: 5, y: -10) // padding for images with frame bigger than actual image } .frame(minWidth: 0, maxWidth: .infinity, minHeight: 0, maxHeight: .infinity, alignment: .leading) } .padding(12) } .background(Color(UIColor.secondarySystemBackground)) ```
55,020,721
I want to insert an image to the database using servlet. i have created images folder inside web pages and i want sent the to that folder. here is my code. **CompanyReg.jsp** ``` <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no"> <meta name="description" content="Colorlib Templates"> <meta name="author" content="Colorlib"> <meta name="keywords" content="Colorlib Templates"> <title>New Member?</title> <link href="resources/CompanyReg/vendor/mdi-font/css/material-design-iconic-font.min.css" rel="stylesheet" media="all"> <link href="resources/CompanyReg/vendor/font-awesome-4.7/css/font-awesome.min.css" rel="stylesheet" media="all"> <link href="https://fonts.googleapis.com/css?family=Poppins:100,100i,200,200i,300,300i,400,400i,500,500i,600,600i,700,700i,800,800i,900,900i" rel="stylesheet"> <link href="resources/CompanyReg/vendor/select2/select2.min.css" rel="stylesheet" media="all"> <link href="resources/CompanyReg/vendor/datepicker/daterangepicker.css" rel="stylesheet" media="all"> <link href="resources/CompanyReg/css/main.css" rel="stylesheet" media="all"> <link rel="shortcut icon" href="resources/CompanyReg/favicon.ico"> <link rel="stylesheet" type="text/css" href="resources/CompanyReg/css/normalize.css" /> <link rel="stylesheet" type="text/css" href="resources/CompanyReg/css/demo.css" /> <link rel="stylesheet" type="text/css" href="resources/CompanyReg/css/component.css" /> </head> <body> <div class="page-wrapper bg-gra-02 p-t-130 p-b-100 font-poppins"> <div class="wrapper wrapper--w680"> <div class="card card-4"> <div class="card-body"> <h2 class="title">Registration Form</h2> <form method="POST" action="CompanyReg" enctype='multipart/form-data'> <div class="row row-space"> <div class="col-2"> <div class="input-group"> <label class="label">Company Name</label> <input class="input--style-4" type="text" name="company_name"> </div> </div> </div> <div class="row 
row-space"> <div class="col-2"> <div class="input-group"> <label class="label">Established Date</label> <div class="input-group-icon"> <input class="input--style-4 js-datepicker" type="text" name="est_year"> <i class="zmdi zmdi-calendar-note input-icon js-btn-calendar"></i> </div> </div> </div> </div> <div class="row row-space"> <div class="col-2"> <div class="input-group"> <label class="label">Address</label> <input class="input--style-4" type="text" name="address"> </div> </div> <div class="col-2"> <div class="input-group"> <label class="label">Email</label> <input class="input--style-4" type="text" name="email"> </div> </div> <div class="col-2"> <div class="input-group"> <label class="label">Phone Number</label> <input class="input--style-4" type="text" name="phone"> </div> </div> <div class="col-2"> <div class="input-group"> <label class="label">Password</label> <input class="input--style-4" type="Password" name="password"> </div> </div> <div class="col-2"> <div class="input-group"> <label class="label">Repeat Password</label> <input class="input--style-4" type="Password" name="repeatPassword"> </div> </div> </div> <div class="col-2"> <div class="input-group"> <label class="label">Upload Company Photo</label> <div class="box"> <input type="file" name="photo" id="file-1" class="inputfile inputfile-1" data-multiple-caption="{count} files selected" multiple style="display: none;" /> <label for="file-1"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="17" viewBox="0 0 20 17"><path d="M10 0l-5.2 4.9h3.3v5.1h3.8v-5.1h3.3l-5.2-4.9zm9.3 11.5l-3.2-2.1h-2l3.4 2.6h-3.5c-.1 0-.2.1-.2.1l-.8 2.3h-6l-.8-2.2c-.1-.1-.1-.2-.2-.2h-3.6l3.4-2.6h-2l-3.2 2.1c-.4.3-.7 1-.6 1.5l.6 3.1c.1.5.7.9 1.2.9h16.3c.6 0 1.1-.4 1.3-.9l.6-3.1c.1-.5-.2-1.2-.7-1.5z"/></svg> <span>Choose a file&hellip;</span></label> </div> </div> </div> <div class="p-t-15"> <button class="btn btn--radius-2 btn--blue" type="submit">Submit</button> </div> </form> </div> </div> </div> </div> <script 
src="resources/CompanyReg/vendor/jquery/jquery.min.js"></script> <script src="resources/CompanyReg/vendor/select2/select2.min.js"></script> <script src="resources/CompanyReg/vendor/datepicker/moment.min.js"></script> <script src="resources/CompanyReg/vendor/datepicker/daterangepicker.js"></script> <script src="resources/CompanyReg/js/global.js"></script> </body> </html> ``` **this is my servlet code** ``` package Company; import java.io.File; import java.io.IOException; import java.io.InputStream; import java.io.PrintWriter; import java.sql.Connection; import java.sql.DriverManager; import java.sql.PreparedStatement; import java.sql.ResultSet; import javax.servlet.ServletException; import javax.servlet.annotation.MultipartConfig; import javax.servlet.http.HttpServlet; import javax.servlet.http.HttpServletRequest; import javax.servlet.http.HttpServletResponse; import javax.servlet.http.Part; @MultipartConfig(maxFileSize = 16177215) public class CompanyReg extends HttpServlet { private static final String SAVE_DIR ="images"; protected void processRequest(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException { response.setContentType("text/html;charset=UTF-8"); try (PrintWriter out = response.getWriter()) { /* TODO output your page here. You may use following sample code. 
*/ out.println("<!DOCTYPE html>"); out.println("<html>"); out.println("<head>"); out.println("<title>Servlet CompanyReg</title>"); out.println("</head>"); out.println("<body>"); out.println("<h1>Servlet CompanyReg at " + request.getContextPath() + "</h1>"); out.println("</body>"); out.println("</html>"); } } // <editor-fold @Override protected void doPost(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException { String savePath = "F:\\JAVA EE Final\\Java Final\\web" + File.separator +SAVE_DIR; File fileSaveDir = new File(savePath); String company_name = request.getParameter("company_name"); String company_email = request.getParameter("email"); String est_date = request.getParameter("est_year"); String company_address = request.getParameter("address"); String company_pasword = request.getParameter("password"); String company_contactno = request.getParameter("phone"); Part part = request.getPart("photo"); String filename = extractFileName(part); try { Class.forName("com.mysql.jdbc.Driver"); Connection con= DriverManager.getConnection("jdbc:mysql://localhost:3306/javanew","root",""); PreparedStatement pst = con.prepareStatement(" insert into company (Name,Email,Est_date,Address,Password,ContactNo,Photo) values(?,?,?,?,?,?,?)"); pst.setString(1, company_name); pst.setString(2, company_email); pst.setString(3, est_date); pst.setString(4, company_address); pst.setString(5, company_pasword); pst.setString(6, company_contactno); String filePath = savePath + File.separator + filename; pst.setString(7, filePath); int rs=pst.executeUpdate(); if(rs>0) { getServletContext().getRequestDispatcher("").forward(request, response); } } catch (Exception e) { PrintWriter out = response.getWriter(); out.print(e); } } @Override public String getServletInfo() { return "Short description"; } private String extractFileName(Part part) { String contenDisp = part.getHeader("content-disposition"); String [] items = contenDisp.split(";"); for (String s : 
items) { if (s.trim().startsWith("filename")) { return s.substring(s.indexOf("=") + 2, s.length()-1); } } return ""; } ``` } **web.xml** ``` <?xml version="1.0" encoding="UTF-8"?> <web-app version="3.1" xmlns="http://xmlns.jcp.org/xml/ns/javaee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee http://xmlns.jcp.org/xml/ns/javaee/web-app_3_1.xsd"> <servlet> <servlet-name>CompanyReg</servlet-name> <servlet-class>Company.CompanyReg</servlet-class> </servlet> <servlet-mapping> <servlet-name>CompanyReg</servlet-name> <url-pattern>/CompanyReg</url-pattern> </servlet-mapping> <session-config> <session-timeout> 30 </session-timeout> </session-config> </web-app> ``` My database shows the path correctly, but the image cannot be seen in that folder. So please tell me what to do.
2019/03/06
[ "https://Stackoverflow.com/questions/55020721", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9960448/" ]
Yes, that's because the image is not in that folder. When you submit your form you need to copy the image into the folder you created. Here is some sample code to copy the image to your folder (`FilenameUtils` here comes from the Apache Commons IO library): ``` public void copyFile(String fileName,String fileType, InputStream in) { try { //relativeWebPath is the path to the folder you created in your web directory File file = getUniqueFilename(new File(relativeWebPath+"/"+fileName)); try ( // write the inputStream to a FileOutputStream OutputStream out = new FileOutputStream(new File(relativeWebPath + "/"+file.getName()))) { int read = 0; byte[] bytes = new byte[1024]; while ((read = in.read(bytes)) != -1) { out.write(bytes, 0, read); } in.close(); out.flush(); } } catch (IOException e) { System.out.println(e); } } //returns a file with a unique name in case an image with the same name //already exists in the folder private static File getUniqueFilename( File file ) { String baseName = FilenameUtils.getBaseName( file.getName() ); String extension = FilenameUtils.getExtension( file.getName() ); int counter = 1; while(file.exists()) { file = new File( file.getParent(), baseName + "-" + (counter++) + "." + extension ); } return file; } ```
``` Part filePart = request.getPart("photo"); //Retrieves <input type="file" name="photo"> fileName = Paths.get(filePart.getSubmittedFileName()).getFileName().toString(); // MSIE fix. uploadedInputStream = filePart.getInputStream(); ``` Try this code.
365,019
I was thinking recently about what might happen if you were to place a block of material in the middle of a complete vacuum. Obviously there's not going to be a way to ever achieve such a scenario but what would happen if you were to put a block of let's say steel at 100C in a vacuum such that the block is not in contact with any material connected to the containment and have it such that outside energy is minimized. I assume the block would lose heat/vibrational energy but what would be the mechanism for such an energy loss and what time scale would it take for the block to reach let's say 0C? Let me know if there's anything I can add to make the question more clear.
2017/10/25
[ "https://physics.stackexchange.com/questions/365019", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/165190/" ]
The block of steel would lose energy via black-body radiation. All objects at a temperature above absolute zero radiate, according to the principles of black-body radiation. A steel block at 100 degrees C will radiate in the infrared. A typical black-body spectrum is shown below. Notice how the peak frequency gets lower as the object's temperature decreases. Radiation passes through a vacuum without any problem. [![enter image description here](https://i.stack.imgur.com/rzbLu.gif)](https://i.stack.imgur.com/rzbLu.gif)
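To get a feel for the numbers, the Stefan–Boltzmann law gives the net power such a block radiates. The sketch below is illustrative only — the block's size and emissivity are assumed values, not given in the question:

```python
# Net radiated power of a hot body in vacuum via the Stefan-Boltzmann law:
#   P = emissivity * sigma * A * (T^4 - T_env^4)
# The block dimensions and emissivity below are illustrative assumptions.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiated_power(area_m2, t_kelvin, t_env_kelvin=0.0, emissivity=1.0):
    """Net power in watts radiated by a grey body into its surroundings."""
    return emissivity * SIGMA * area_m2 * (t_kelvin**4 - t_env_kelvin**4)

# A hypothetical 10 cm steel cube (six 0.01 m^2 faces) at 100 C = 373.15 K,
# treated as a black body radiating into empty, cold surroundings:
p = radiated_power(area_m2=6 * 0.01, t_kelvin=373.15)
print(f"net radiated power: {p:.0f} W")  # roughly 66 W for these assumptions
```

So even at 100 °C the block sheds only a few tens of watts, which is why radiative cooling in vacuum is slow compared with conduction or convection.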
The sun is a block of material in the middle of a complete vacuum. Although it's not in contact with any material, it loses energy through radiation. Radiation does not need an intermediate medium in order to transfer energy. I also believe there are ways to achieve the same thing with a block of steel at a given temperature suspended in a vacuum by magnetic forces. You can easily find the rate of radiation of any material in the literature.
365,019
I was thinking recently about what might happen if you were to place a block of material in the middle of a complete vacuum. Obviously there's not going to be a way to ever achieve such a scenario but what would happen if you were to put a block of let's say steel at 100C in a vacuum such that the block is not in contact with any material connected to the containment and have it such that outside energy is minimized. I assume the block would lose heat/vibrational energy but what would be the mechanism for such an energy loss and what time scale would it take for the block to reach let's say 0C? Let me know if there's anything I can add to make the question more clear.
2017/10/25
[ "https://physics.stackexchange.com/questions/365019", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/165190/" ]
The block of steel would lose energy via black-body radiation. All objects at a temperature above absolute zero radiate, according to the principles of black-body radiation. A steel block at 100 degrees C will radiate in the infrared. A typical black-body spectrum is shown below. Notice how the peak frequency gets lower as the object's temperature decreases. Radiation passes through a vacuum without any problem. [![enter image description here](https://i.stack.imgur.com/rzbLu.gif)](https://i.stack.imgur.com/rzbLu.gif)
An answer cannot be given unless you make a statement about the vacuum, which can have electromagnetic waves (radiation) travelling through it. Your block will lose energy by radiating it away as electromagnetic waves (mainly infrared), but at the same time it might also be receiving radiation from whatever is outside it. You will also have to consider what the surface of your block is like, because that is a factor in how much energy your block emits and absorbs. The rate of energy emission is proportional to the fourth power of the temperature of your block (in kelvin). This is called [Stefan’s law](https://en.m.wikipedia.org/wiki/Stefan%E2%80%93Boltzmann_law). If the surroundings radiate much less than your block, then you need only consider the radiation emitted from your block. To estimate the time you will need to do an integration, as the rate of energy loss by radiation depends on the temperature of the block; remember to work with temperatures in kelvin.
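To make that integration concrete, here is a rough sketch (not part of the original answer — the block's mass, heat capacity, surface area, and emissivity are all assumed values). With an energy balance m·c·dT/dt = −εσA·T⁴ for cold surroundings, integrating gives t = m·c/(3εσA) · (T_f⁻³ − T_i⁻³):

```python
# Illustrative radiative cooling-time estimate for a block in vacuum.
# Energy balance: m*c * dT/dt = -eps*sigma*A * T^4 (surroundings near 0 K),
# which integrates to t = m*c / (3*eps*sigma*A) * (T_f**-3 - T_i**-3).
# All block parameters are assumptions for illustration, not from the question.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def cooling_time(mass_kg, heat_cap_j_kg_k, area_m2, t_start_k, t_end_k,
                 emissivity=1.0):
    """Seconds for a grey body to cool radiatively from t_start_k to t_end_k."""
    k = emissivity * SIGMA * area_m2
    return mass_kg * heat_cap_j_kg_k / (3.0 * k) * (t_end_k**-3 - t_start_k**-3)

# A hypothetical 10 cm steel cube: ~7.9 kg, c ~ 490 J/(kg K), 0.06 m^2 surface,
# cooling from 100 C (373.15 K) to 0 C (273.15 K) as a perfect black body:
t = cooling_time(7.9, 490.0, 0.06, 373.15, 273.15)
print(f"about {t / 3600:.1f} hours")  # on the order of a few hours
```

A real block has emissivity below 1 and receives some radiation back from its surroundings, so the actual time would be somewhat longer.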
365,019
I was thinking recently about what might happen if you were to place a block of material in the middle of a complete vacuum. Obviously there's not going to be a way to ever achieve such a scenario but what would happen if you were to put a block of let's say steel at 100C in a vacuum such that the block is not in contact with any material connected to the containment and have it such that outside energy is minimized. I assume the block would lose heat/vibrational energy but what would be the mechanism for such an energy loss and what time scale would it take for the block to reach let's say 0C? Let me know if there's anything I can add to make the question more clear.
2017/10/25
[ "https://physics.stackexchange.com/questions/365019", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/165190/" ]
The block of steel would lose energy via black-body radiation. All objects at a temperature above absolute zero radiate, according to the principles of black-body radiation. A steel block at 100 degrees C will radiate in the infrared. A typical black-body spectrum is shown below. Notice how the peak frequency gets lower as the object's temperature decreases. Radiation passes through a vacuum without any problem. [![enter image description here](https://i.stack.imgur.com/rzbLu.gif)](https://i.stack.imgur.com/rzbLu.gif)
The amount of energy will depend on the mass of the block and what it is made up of. A small body will lose energy at such a slow rate that it can outlast our sun.
27,972,844
The problem is that I have a large text file. Let it be ``` a=c("atcgatcgatcgatcgatcgatcgatcgatcgatcg") ``` I need to compare every 3rd symbol in this text with a value (e.g. `'c'`) and, if it matches, add `1` to a counter `i`. I thought of using `grep`, but it seems this function wouldn't suit my purpose. So I need your help or advice. More than that, I want to extract certain values from this string into a vector. For example, I want to extract symbols 4:10, e.g. ``` a=c("atcgatcgatcgatcgatcgatcgatcgatcgatcg") [1] "gatcgatcga" ``` Thank you in advance. P.S. I know it's not the best idea to write the script I need in R, but I'm curious whether it's possible to write it in an adequate way.
2015/01/15
[ "https://Stackoverflow.com/questions/27972844", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3445783/" ]
**Edited to provide a solution that's fast for much larger strings:** If you have a very long string (on the order of millions of nucleotides), the lookbehind assertion in my original answer (below) is too slow to be practical. In that case, use something more like the following, which: (1) splits the string apart between every character; (2) uses the characters to fill up a three row matrix; and then (3) extracts the characters in the 3rd row of the matrix. This takes on the order of 0.2 seconds to process a 3-million character long string. ``` ## Make a 3-million character long string a <- paste0(sample(c("a", "t", "c", "g"), 3e6, replace=TRUE), collapse="") ## Extract the third codon of each triplet n3 <- matrix(strsplit(a, "")[[1]], nrow=3)[3,] ## Check that it works sum(n3=="c") # [1] 250431 table(n3) # n3 # a c g t # 250549 250431 249008 250012 ``` --- **Original answer:** I might use `substr()` in both cases. ``` ## Split into codons. (The "lookbehind assertion", "(?<=.{3})" matches at each ## inter-character location that's preceded by three characters of any type.) codons <- strsplit(a, "(?<=.{3})", perl=TRUE)[[1]] # [1] "atc" "gat" "cga" "tcg" "atc" "gat" "cga" "tcg" "atc" "gat" "cga" "tcg" ## Extract 3rd nucleotide in each codon n3 <- sapply(codons, function(X) substr(X,3,3)) # atc gat cga tcg atc gat cga tcg atc gat cga tcg # "c" "t" "a" "g" "c" "t" "a" "g" "c" "t" "a" "g" ## Count the number of 'c's sum(n3=="c") # [1] 3 ## Extract nucleotides 4-10 substr(a, 4,10) # [1] "gatcgat" ```
Compare every third character with `"c"`: ``` grepl("^(.{2}c)*.{0,2}$", a) # [1] FALSE ``` Extract characters 4 to 10: ``` substr(a, 4, 10) # [1] "gatcgat" ```
27,972,844
The problem is that I have a large text file. Let it be ``` a=c("atcgatcgatcgatcgatcgatcgatcgatcgatcg") ``` I need to compare every 3rd symbol in this text with a value (e.g. `'c'`) and, if it matches, add `1` to a counter `i`. I thought of using `grep`, but it seems this function wouldn't suit my purpose. So I need your help or advice. More than that, I want to extract certain values from this string into a vector. For example, I want to extract symbols 4:10, e.g. ``` a=c("atcgatcgatcgatcgatcgatcgatcgatcgatcg") [1] "gatcgatcga" ``` Thank you in advance. P.S. I know it's not the best idea to write the script I need in R, but I'm curious whether it's possible to write it in an adequate way.
2015/01/15
[ "https://Stackoverflow.com/questions/27972844", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3445783/" ]
This is a simple approach using R primitives: ``` sum("c"==(strsplit(a,NULL))[[1]][c(FALSE,FALSE,TRUE)]) [1] 3 # this is the right answer. ``` The Boolean pattern `c(FALSE,FALSE,TRUE)` is replicated to be as long as the input string and then is used to index it. It can be adjusted to match a different element or for a longer length (for those with extended codons). --- Probably not performant enough for entire genomes, but perfect for casual use.
Compare every third character with `"c"`: ``` grepl("^(.{2}c)*.{0,2}$", a) # [1] FALSE ``` Extract characters 4 to 10: ``` substr(a, 4, 10) # [1] "gatcgat" ```
27,972,844
The problem is that I have a large text file. Let it be ``` a=c("atcgatcgatcgatcgatcgatcgatcgatcgatcg") ``` I need to compare every 3rd symbol in this text with a value (e.g. `'c'`) and, if it matches, add `1` to a counter `i`. I thought of using `grep`, but it seems this function wouldn't suit my purpose. So I need your help or advice. More than that, I want to extract certain values from this string into a vector. For example, I want to extract symbols 4:10, e.g. ``` a=c("atcgatcgatcgatcgatcgatcgatcgatcgatcg") [1] "gatcgatcga" ``` Thank you in advance. P.S. I know it's not the best idea to write the script I need in R, but I'm curious whether it's possible to write it in an adequate way.
2015/01/15
[ "https://Stackoverflow.com/questions/27972844", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3445783/" ]
**Edited to provide a solution that's fast for much larger strings:** If you have a very long string (on the order of millions of nucleotides), the lookbehind assertion in my original answer (below) is too slow to be practical. In that case, use something more like the following, which: (1) splits the string apart between every character; (2) uses the characters to fill up a three row matrix; and then (3) extracts the characters in the 3rd row of the matrix. This takes on the order of 0.2 seconds to process a 3-million character long string. ``` ## Make a 3-million character long string a <- paste0(sample(c("a", "t", "c", "g"), 3e6, replace=TRUE), collapse="") ## Extract the third codon of each triplet n3 <- matrix(strsplit(a, "")[[1]], nrow=3)[3,] ## Check that it works sum(n3=="c") # [1] 250431 table(n3) # n3 # a c g t # 250549 250431 249008 250012 ``` --- **Original answer:** I might use `substr()` in both cases. ``` ## Split into codons. (The "lookbehind assertion", "(?<=.{3})" matches at each ## inter-character location that's preceded by three characters of any type.) codons <- strsplit(a, "(?<=.{3})", perl=TRUE)[[1]] # [1] "atc" "gat" "cga" "tcg" "atc" "gat" "cga" "tcg" "atc" "gat" "cga" "tcg" ## Extract 3rd nucleotide in each codon n3 <- sapply(codons, function(X) substr(X,3,3)) # atc gat cga tcg atc gat cga tcg atc gat cga tcg # "c" "t" "a" "g" "c" "t" "a" "g" "c" "t" "a" "g" ## Count the number of 'c's sum(n3=="c") # [1] 3 ## Extract nucleotides 4-10 substr(a, 4,10) # [1] "gatcgat" ```
This is a simple approach using R primitives: ``` sum("c"==(strsplit(a,NULL))[[1]][c(FALSE,FALSE,TRUE)]) [1] 3 # this is the right answer. ``` The Boolean pattern `c(FALSE,FALSE,TRUE)` is replicated to be as long as the input string and then is used to index it. It can be adjusted to match a different element or for a longer length (for those with extended codons). --- Probably not performant enough for entire genomes, but perfect for casual use.
176,409
I've made some changes to the master page in SharePoint Designer. After that, I tried to roll back to the original master page, but it shows the error > > Something went wrong > > > I need to get my original master page back.
2016/04/11
[ "https://sharepoint.stackexchange.com/questions/176409", "https://sharepoint.stackexchange.com", "https://sharepoint.stackexchange.com/users/53154/" ]
The page layouts need to reside in the master page gallery (under \_catalogs/masterpage). You could create a feature with a module to deploy them there, and then make your Features A and B dependent on this one via a [feature dependency](https://msdn.microsoft.com/en-us/library/ee231535.aspx).
The \_layouts directory doesn't store page layouts, they need to be provisioned to the master page gallery. The best option in your case would be to write Feature A with a module that would provision your page layout to the master page gallery and then you can write your Feature B with Feature dependency as Fran suggested in his answer. Alternatively in Feature B, you can write your Feature Activation code to check whether the Feature A is activated already and whether the page layout already exists.
56,481
I need to understand message queue services. The services I know of are Amazon SQS and IronMQ. What are they exactly? When should I use either one of them? Can you provide a real-world example?
2013/12/26
[ "https://webmasters.stackexchange.com/questions/56481", "https://webmasters.stackexchange.com", "https://webmasters.stackexchange.com/users/34751/" ]
Your website doesn't have 100,000 keywords. Your *pages* have *X* number of keywords. Google isn't ranking your entire website. They're ranking each *page*. Thus your meta keywords (which aren't used for ranking purposes anymore and haven't been for a very long time) should focus on the keywords for that *page*. It should **not** be the keywords for the entire website.
``` I have 100000 keywords on my website. But I can't put them in Meta keyword tag ``` Google doesn't use the keywords meta tag in web search. [Here](http://www.mattcutts.com/blog/keywords-meta-tag-in-web-search/) is the news from Matt Cutts. If you make a function that displays 20 keywords in the footer of your website, "randomly" on each page refresh, Google will consider it [Keyword stuffing](https://support.google.com/webmasters/answer/66358?hl=en), and it will negatively impact your website. Why are you stuffing your 100000 keywords using some `functions`? Rather, create pages with your content and let Google discover your keywords. Don't write content for keywords; write only for users, and don't duplicate your content. If you have a website that is useful for your visitors, you don't need to worry about your 100000 keywords.
56,481
I need to understand message queue services. Available services I know out there are Amazon SQS and IronMQ. What are they exactly? When should I use either one of them? Can you provide a real-world example?
2013/12/26
[ "https://webmasters.stackexchange.com/questions/56481", "https://webmasters.stackexchange.com", "https://webmasters.stackexchange.com/users/34751/" ]
Your website doesn't have 100,000 keywords. Your *pages* have *X* number of keywords. Google isn't ranking your entire website. They're ranking each *page*. Thus your meta keywords (which aren't used for ranking purposes anymore and haven't been for a very long time) should focus on the keywords for that *page*. It should **not** be the keywords for the entire website.
If you are looking for the best way to proceed, I would say (if you insist): create unique and compelling content around the keywords (put 2-4 per piece of content), create content for all 10,000 of them (you may hire content writers for this), and put it on your website. If you want to proceed the way you just described (even knowing all the SEO unfriendliness of that method) and are only looking for ways to overcome the duplicate content trouble, I would say: 1. Put a 301 redirect to the same page, and/or 2. Add a `rel="canonical"` tag to the content.
50,980,143
I have an issue when I want to create my emulator using Tizen Studio version 2.4 on Ubuntu 14.04 LTS [![enter image description here](https://i.stack.imgur.com/uzJ9N.png)](https://i.stack.imgur.com/uzJ9N.png) I cannot see the platforms for Tizen TV; anyway, when I choose any platform and click OK [![enter image description here](https://i.stack.imgur.com/BStfN.png)](https://i.stack.imgur.com/BStfN.png) a dialog appears without any message, and nothing happens. How can I resolve this issue? I want to create a **Tizen TV emulator**.
2018/06/22
[ "https://Stackoverflow.com/questions/50980143", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5676318/" ]
Finally, I found the solution: I installed TV Extensions-40 in the Extension SDK tab. It created my TV emulator for me [![enter image description here](https://i.stack.imgur.com/xXRNj.png)](https://i.stack.imgur.com/xXRNj.png)
You have to visit the configuration section in the Package Manager and then enable the SDK for TV.
52,592,479
Even after applying CSS styles for when it's disabled, my anchor still acts like a hyperlink: it changes colour when hovered on. [![enter image description here](https://i.stack.imgur.com/TByid.png)](https://i.stack.imgur.com/TByid.png) [![enter image description here](https://i.stack.imgur.com/1GQlq.png)](https://i.stack.imgur.com/1GQlq.png) I've spent some time on this already and am almost giving up on this one. I want the magnifying glass not to change colour at all when hovering over it. This is the anchor ``` <a href="" class="postcode-search-icon clickable" ng-click="searchPostcode()" ng-disabled="true" title="Search Postcode"> </a> ``` And my current CSS attempt to fix it ``` .postcode-search-icon[disabled], .postcode-search-icon[disabled]:hover { text-decoration: none; cursor: not-allowed; background-color: transparent; } ``` What am I doing wrong? In case you're wondering, the clickable class is just this, so it doesn't matter ``` .clickable { cursor: pointer; } ``` **@edit** Looks like applying color: (the original colour) makes a temporary workaround until I find something better.
2018/10/01
[ "https://Stackoverflow.com/questions/52592479", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9559251/" ]
It seems like your CSS selector is wrong. The `disabled` attribute only works with input fields, not with anchors. ``` input[disabled="disabled"], input.disabled { /* whatever you want */ } ``` Besides that, I don't know how you handle the addition of the `clickable` class; you need to handle that in order not to override styles.
If you are using Angular, you should be able to use a conditional class with the [ngClass attribute](https://angular.io/api/common/NgClass). Not sure if you are using Angular 2, 3, 4, 5, or JS (here's the [JS link for ng-class](https://docs.angularjs.org/api/ng/directive/ngClass)). I think I would make the clickable item into a button, as well. ```css .bright:hover { color: #0066ff; cursor: pointer; } .dim:hover { color: #ccc; cursor: default; } ``` ```html <button ng-class="{bright: enabled, dim: disabled}"><i class="search-icon"></i> Search</button> ```
154,953
I have installed `tomcat7` (using `apt-get install`) and whenever I want to start `tomcat7` it says : ``` * no JDK found - please set JAVA_HOME ``` I have set `JAVA_HOME` in my `bash.bashrc` and also in `~/.bashrc` and when I issue `echo $JAVA_HOME` I clearly see that this variable is pointing to my jdk's root folder. Can someone help me with this please? Thanks
2012/06/23
[ "https://askubuntu.com/questions/154953", "https://askubuntu.com", "https://askubuntu.com/users/7111/" ]
You can try this: Just press `Ctrl`+`Alt`+`T` on your keyboard to open Terminal. When it opens, run the command below. ``` gksudo gedit /etc/default/tomcat7 ``` When the file opens, uncomment the line that sets the JAVA\_HOME variable. ![enter image description here](https://i.stack.imgur.com/1i7vl.png) Save and restart tomcat7 server.
Just add the following line in /etc/default/tomcat7 where the JAVA\_HOME variable is defined ``` JAVA_HOME=/usr/lib/jvm/java-7-oracle ``` then run the command ``` sudo service tomcat7 restart ```
154,953
I have installed `tomcat7` (using `apt-get install`) and whenever I want to start `tomcat7` it says : ``` * no JDK found - please set JAVA_HOME ``` I have set `JAVA_HOME` in my `bash.bashrc` and also in `~/.bashrc` and when I issue `echo $JAVA_HOME` I clearly see that this variable is pointing to my jdk's root folder. Can someone help me with this please? Thanks
2012/06/23
[ "https://askubuntu.com/questions/154953", "https://askubuntu.com", "https://askubuntu.com/users/7111/" ]
You can try this: Just press `Ctrl`+`Alt`+`T` on your keyboard to open Terminal. When it opens, run the command below. ``` gksudo gedit /etc/default/tomcat7 ``` When the file opens, uncomment the line that sets the JAVA\_HOME variable. ![enter image description here](https://i.stack.imgur.com/1i7vl.png) Save and restart tomcat7 server.
Tomcat will not actually use your JAVA\_HOME environment variable, but look in some predefined locations and in the JAVA\_HOME variable set inside the startup script, as other answers point out. If you don't like messing with the tomcat startup script, you could create a symlink for your preferred Java installation, which will be picked up by tomcat. For example: ``` ln -s /usr/lib/jvm/java-8-oracle /usr/lib/jvm/default-java ```
154,953
I have installed `tomcat7` (using `apt-get install`) and whenever I want to start `tomcat7` it says : ``` * no JDK found - please set JAVA_HOME ``` I have set `JAVA_HOME` in my `bash.bashrc` and also in `~/.bashrc` and when I issue `echo $JAVA_HOME` I clearly see that this variable is pointing to my jdk's root folder. Can someone help me with this please? Thanks
2012/06/23
[ "https://askubuntu.com/questions/154953", "https://askubuntu.com", "https://askubuntu.com/users/7111/" ]
You can try this: Just press `Ctrl`+`Alt`+`T` on your keyboard to open Terminal. When it opens, run the command below. ``` gksudo gedit /etc/default/tomcat7 ``` When the file opens, uncomment the line that sets the JAVA\_HOME variable. ![enter image description here](https://i.stack.imgur.com/1i7vl.png) Save and restart tomcat7 server.
Open a terminal ``` echo $JAVA_HOME ``` Copy the result. Then ``` sudo -H gedit /etc/default/tomcat7 ``` Replace the line `#JAVA_HOME=/usr/lib/jvm/openjdk-6-jdk` with `JAVA_HOME=` set to the output you copied from `$JAVA_HOME`.
154,953
I have installed `tomcat7` (using `apt-get install`) and whenever I want to start `tomcat7` it says : ``` * no JDK found - please set JAVA_HOME ``` I have set `JAVA_HOME` in my `bash.bashrc` and also in `~/.bashrc` and when I issue `echo $JAVA_HOME` I clearly see that this variable is pointing to my jdk's root folder. Can someone help me with this please? Thanks
2012/06/23
[ "https://askubuntu.com/questions/154953", "https://askubuntu.com", "https://askubuntu.com/users/7111/" ]
You can try this: Just press `Ctrl`+`Alt`+`T` on your keyboard to open Terminal. When it opens, run the command below. ``` gksudo gedit /etc/default/tomcat7 ``` When the file opens, uncomment the line that sets the JAVA\_HOME variable. ![enter image description here](https://i.stack.imgur.com/1i7vl.png) Save and restart tomcat7 server.
Adding to the answer of Mitch (the accepted answer above), check your `/usr/lib/jvm/` directory. Usually, Java is installed right there. You might have Oracle Java installed, or a more recent version of Java. Just check the directories at `/usr/lib/jvm/` and use the one your Java is in. For me, it was: ``` /usr/lib/jvm/java-8-oracle ``` So, replacing ``` #JAVA_HOME=/some/directory ``` with the uncommented line ``` JAVA_HOME=/usr/lib/jvm/java-8-oracle ``` did the job for me.
154,953
I have installed `tomcat7` (using `apt-get install`) and whenever I want to start `tomcat7` it says : ``` * no JDK found - please set JAVA_HOME ``` I have set `JAVA_HOME` in my `bash.bashrc` and also in `~/.bashrc` and when I issue `echo $JAVA_HOME` I clearly see that this variable is pointing to my jdk's root folder. Can someone help me with this please? Thanks
2012/06/23
[ "https://askubuntu.com/questions/154953", "https://askubuntu.com", "https://askubuntu.com/users/7111/" ]
Tomcat will not actually use your JAVA\_HOME environment variable, but look in some predefined locations and in the JAVA\_HOME variable set inside the startup script, as other answers point out. If you don't like messing with the tomcat startup script, you could create a symlink for your preferred Java installation, which will be picked up by tomcat. For example: ``` ln -s /usr/lib/jvm/java-8-oracle /usr/lib/jvm/default-java ```
Just add the following line in /etc/default/tomcat7 where the JAVA\_HOME variable is defined ``` JAVA_HOME=/usr/lib/jvm/java-7-oracle ``` then run the command ``` sudo service tomcat7 restart ```
154,953
I have installed `tomcat7` (using `apt-get install`) and whenever I want to start `tomcat7` it says : ``` * no JDK found - please set JAVA_HOME ``` I have set `JAVA_HOME` in my `bash.bashrc` and also in `~/.bashrc` and when I issue `echo $JAVA_HOME` I clearly see that this variable is pointing to my jdk's root folder. Can someone help me with this please? Thanks
2012/06/23
[ "https://askubuntu.com/questions/154953", "https://askubuntu.com", "https://askubuntu.com/users/7111/" ]
Open a terminal ``` echo $JAVA_HOME ``` Copy the result. Then ``` sudo -H gedit /etc/default/tomcat7 ``` Replace the line `#JAVA_HOME=/usr/lib/jvm/openjdk-6-jdk` with `JAVA_HOME=` set to the output you copied from `$JAVA_HOME`.
Just add the following line in /etc/default/tomcat7 where the JAVA\_HOME variable is defined ``` JAVA_HOME=/usr/lib/jvm/java-7-oracle ``` then run the command ``` sudo service tomcat7 restart ```
154,953
I have installed `tomcat7` (using `apt-get install`) and whenever I want to start `tomcat7` it says : ``` * no JDK found - please set JAVA_HOME ``` I have set `JAVA_HOME` in my `bash.bashrc` and also in `~/.bashrc` and when I issue `echo $JAVA_HOME` I clearly see that this variable is pointing to my jdk's root folder. Can someone help me with this please? Thanks
2012/06/23
[ "https://askubuntu.com/questions/154953", "https://askubuntu.com", "https://askubuntu.com/users/7111/" ]
Just add the following line in /etc/default/tomcat7 where the JAVA\_HOME variable is defined ``` JAVA_HOME=/usr/lib/jvm/java-7-oracle ``` then run the command ``` sudo service tomcat7 restart ```
Adding to the answer of Mitch (the accepted answer above), check your `/usr/lib/jvm/` directory. Usually, Java is installed right there. You might have Oracle Java installed, or a more recent version of Java. Just check the directories at `/usr/lib/jvm/` and use the one your Java is in. For me, it was: ``` /usr/lib/jvm/java-8-oracle ``` So, replacing ``` #JAVA_HOME=/some/directory ``` with the uncommented line ``` JAVA_HOME=/usr/lib/jvm/java-8-oracle ``` did the job for me.
154,953
I have installed `tomcat7` (using `apt-get install`) and whenever I want to start `tomcat7` it says : ``` * no JDK found - please set JAVA_HOME ``` I have set `JAVA_HOME` in my `bash.bashrc` and also in `~/.bashrc` and when I issue `echo $JAVA_HOME` I clearly see that this variable is pointing to my jdk's root folder. Can someone help me with this please? Thanks
2012/06/23
[ "https://askubuntu.com/questions/154953", "https://askubuntu.com", "https://askubuntu.com/users/7111/" ]
Tomcat will not actually use your JAVA\_HOME environment variable, but look in some predefined locations and in the JAVA\_HOME variable set inside the startup script, as other answers point out. If you don't like messing with the tomcat startup script, you could create a symlink for your preferred Java installation, which will be picked up by tomcat. For example: ``` ln -s /usr/lib/jvm/java-8-oracle /usr/lib/jvm/default-java ```
Adding to the answer of Mitch (the accepted answer above), check your `/usr/lib/jvm/` directory. Usually, Java is installed right there. You might have Oracle Java installed, or a more recent version of Java. Just check the directories at `/usr/lib/jvm/` and use the one your Java is in. For me, it was: ``` /usr/lib/jvm/java-8-oracle ``` So, replacing ``` #JAVA_HOME=/some/directory ``` with the uncommented line ``` JAVA_HOME=/usr/lib/jvm/java-8-oracle ``` did the job for me.
154,953
I have installed `tomcat7` (using `apt-get install`) and whenever I want to start `tomcat7` it says : ``` * no JDK found - please set JAVA_HOME ``` I have set `JAVA_HOME` in my `bash.bashrc` and also in `~/.bashrc` and when I issue `echo $JAVA_HOME` I clearly see that this variable is pointing to my jdk's root folder. Can someone help me with this please? Thanks
2012/06/23
[ "https://askubuntu.com/questions/154953", "https://askubuntu.com", "https://askubuntu.com/users/7111/" ]
Open a terminal ``` echo $JAVA_HOME ``` Copy the result. Then ``` sudo -H gedit /etc/default/tomcat7 ``` Replace the line `#JAVA_HOME=/usr/lib/jvm/openjdk-6-jdk` with `JAVA_HOME=` set to the output you copied from `$JAVA_HOME`.
Adding to the answer of Mitch (the accepted answer above), check your `/usr/lib/jvm/` directory. Usually, Java is installed right there. You might have Oracle Java installed, or a more recent version of Java. Just check the directories at `/usr/lib/jvm/` and use the one your Java is in. For me, it was: ``` /usr/lib/jvm/java-8-oracle ``` So, replacing ``` #JAVA_HOME=/some/directory ``` with the uncommented line ``` JAVA_HOME=/usr/lib/jvm/java-8-oracle ``` did the job for me.
51,459,555
I'm trying to replicate this shortcut for easily generating an adder while separating the output carry and the result: ``` reg [31:0] op_1; reg [31:0] op_2; reg [31:0] sum; reg carry_out; always @(posedge clk) { carry_out, sum } <= op_1 + op_2; ``` In my case, it's Ok to use the nonstandard `ieee.STD_LOGIC_UNSIGNED`. I would prefer not to use VHDL-2008 features as I am stuck with Xilinx ISE, at the moment.
2018/07/21
[ "https://Stackoverflow.com/questions/51459555", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2180200/" ]
Too bad that VHDL 2008 is not an option. If it was, the following would make it: ``` library ieee; use ieee.std_logic_1164.all; use ieee.numeric_std.all; entity foo is port( clk: in std_ulogic; a, b: in u_unsigned(31 downto 0); s: out u_unsigned(31 downto 0); co: out std_ulogic ); end entity foo; architecture rtl of foo is begin process(clk) begin if rising_edge(clk) then (co, s) <= ('0' & a) + ('0' & b); end if; end process; end architecture rtl; ``` But as you are stuck with older versions, the following should work: ``` library ieee; use ieee.std_logic_1164.all; use ieee.numeric_std.all; entity foo is port( clk: in std_ulogic; a, b: in unsigned(31 downto 0); s: out unsigned(31 downto 0); co: out std_ulogic ); end entity foo; architecture rtl of foo is begin process(clk) variable tmp: unsigned(32 downto 0); begin if rising_edge(clk) then tmp := ('0' & a) + ('0' & b); s <= tmp(31 downto 0); co <= tmp(32); end if; end process; end architecture rtl; ```
You have to extend op\_1 and op\_2 to 33 bits (adding a zero at the top, as the numbers are unsigned; for signed values you would replicate the top bit). The result (sum) must also be 33 bits. Do a normal 33+33-bit addition (with all the conversions in VHDL), then split off sum[32] as the carry.
49,718,008
First of all, before any of you marks this post as a duplicate, please check my code in case the solution isn't already implemented. OK, so I've been trying to resolve this issue for what's now close to three weeks and still can't wrap my head around it. I'm trying to make an HTML signature for a company and I'm almost at the end of it. Because they require their own fonts to be used, I use image slices in nested tables. ![https://i.imgur.com/bVZc2zE.png](https://i.imgur.com/bVZc2zE.png) Although when I send the signature from Outlook to Outlook, it looks pretty much the way it should, opening it in any web mail service shows it with additional spacing and the table cells larger than intended: ![https://i.imgur.com/wTbx2t8.png](https://i.imgur.com/wTbx2t8.png) What's going on is actually well visible when I select the whole table in the browser: ![https://i.imgur.com/2sc1rGM.png](https://i.imgur.com/2sc1rGM.png) I've tried every possible thing to make it right, but so far nothing has helped me. Here's my code: ``` <table border="0" cellspacing="0" cellpadding="0" style="border-collapse:collapse !important;"> <tbody> <tr> <td style="border-collapse:collapse !important;Margin:0 !important;border:0 !important;padding:0 !important;vertical-align:top !important;" valign="top"> <img src="images/uni_1.png" style="Margin:0 !important;border:0 !important;padding:0 !important;display: block !important;vertical-align:top !important;" alt="Logo Dekre" border="0" valign="top"/> </td> <td style="border-collapse:collapse !important;Margin:0 !important;border:0 !important;padding:0 !important;vertical-align:top !important;" valign="top"> <table width="140" border="0" cellspacing="0" cellpadding="0" style="border-collapse:collapse !important;"> <tbody> <tr> <td style="border-collapse:collapse !important;Margin:0 !important;border:0 !important;padding:0 !important;vertical-align:top !important;" valign="top"> <img width="140" style="Margin:0 !important;border:0 !important;padding:0 
!important;display: block !important;vertical-align:top !important;" src="images/uni_2.png" valign="top" alt="" border="0"/> </td> </tr> <tr> <td style="border-collapse:collapse !important;Margin:0 !important;border:0 !important;padding:0 !important;vertical-align:top !important;" valign="top"> <a href="mailto:mailovaadresa"> <img width="140" style="Margin:0 !important;border:0 !important;padding:0 !important;display: block !important;vertical-align:top !important;" src="images/uni_3.png" valign="top" alt="" border="0" /> </a> </td> </tr> <tr> <td style="border-collapse:collapse !important;Margin:0 !important;border:0 !important;padding:0 !important;vertical-align:top !important;" valign="top"> <a href="http://www.dekre.cz/"> <img width="140" style="Margin:0 !important;border:0 !important;padding:0 !important;display: block !important;vertical-align:top !important;" src="images/uni_4.png" valign="top" alt="" border="0" /> </a> </td> </tr> </tbody> </table> </td> </tr> </tbody> </table> <img src="images/zapati500.png" /> ```
2018/04/08
[ "https://Stackoverflow.com/questions/49718008", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2359063/" ]
First of all, the images need to be hosted somewhere (cloud, website, etc.) to be displayed when someone opens your mail, since you cannot attach them every time in the email content. I advise you to use plain text, since some webmail services or even email clients can block the images, and therefore your contact information may be suppressed (or make a text version beside the HTML one). Fonts can be integrated in an HTML mail (hosted remotely), but it will increase the mail size and the display as intended may not be guaranteed. Their font looks like Coda from Google... Secondly, you need to define the height of the TD that hosts the image, and then on the image (use inline style) try style="max-height:XXpx" where XX is the same size as the TD that hosts the image (the height that you defined already). Putting !important in inline style isn't mandatory, since inline styles are the first to be executed no matter how many CSS and other rules there are.
The solution to your problem is simple: you didn't add the height and width to all of your images. I guessed at the height of the image on the left (87px), set the three images on the right at a width of 140 and a height of 29 (87/3=29), and ran the results through Litmus.com. With the added heights and widths for the images, every single email client shows the signature in the same way. Use this code: ``` <table border="0" cellspacing="0" cellpadding="0" style="border-collapse:collapse !important;"> <tbody> <tr> <td style="border-collapse:collapse !important;Margin:0 !important;border:0 !important;padding:0 !important;vertical-align:top !important;" valign="top"> <img height="87" width="43" src="images/uni_1.png" style="Margin:0 !important;border:0 !important;padding:0 !important;display: block !important;vertical-align:top !important; background: yellow;" alt="Logo Dekre" border="0" valign="top" /> </td> <td style="border-collapse:collapse !important;Margin:0 !important;border:0 !important;padding:0 !important;vertical-align:top !important;" valign="top"> <table width="140" border="0" cellspacing="0" cellpadding="0" style="border-collapse:collapse !important;"> <tbody> <tr> <td style="border-collapse:collapse !important;Margin:0 !important;border:0 !important;padding:0 !important;vertical-align:top !important;" valign="top"> <img width="140" height="29" style="Margin:0 !important;border:0 !important;padding:0 !important;display: block !important;vertical-align:top !important; background: red;" src="images/uni_2.png" valign="top" alt="" border="0" /> </td> </tr> <tr> <td style="border-collapse:collapse !important;Margin:0 !important;border:0 !important;padding:0 !important;vertical-align:top !important;" valign="top"> <a href="mailto:mailovaadresa"> <img width="140" height="29" style="Margin:0 !important;border:1 !important;padding:0 !important;display: block !important;vertical-align:top !important; background: green;" src="images/uni_3.png" valign="top" alt="" 
border="0" /> </a> </td> </tr> <tr> <td style="border-collapse:collapse !important;Margin:0 !important;border:0 !important;padding:0 !important;vertical-align:top !important;" valign="top"> <a href="http://www.dekre.cz/"> <img width="140" height="29" style="Margin:0 !important;border:0 !important;padding:0 !important;display: block !important;vertical-align:top !important; background: blue;" src="images/uni_4.png" valign="top" alt="" border="0" /> </a> </td> </tr> </tbody> </table> </td> </tr> </tbody> </table> ``` As @Valentin R. mentions, you need to host the images somewhere. You can host them in the WordPress install on the domain. Copy those image paths to the code above and it will work everywhere. Good luck with selling the smart walls.
30,386,989
I am trying to use the League of Legends API and request data on a certain user. I use the line ``` var user = getUrlVars()["username"].replace("+", " "); ``` to store the username. However, when I do the XMLHttpRequest with that username, it'll put %20 instead of a space. ``` y.open("GET", "https://na.api.pvp.net/api/lol/na/v1.4/summoner/by-name/"+user, false); ``` Edit: When I run this code with a user that has no space in their name it works, however when they have a space in their name it says the user is undefined. For example, if I was looking for the user "the man", it would do a get at ``` https://na.api.pvp.net/api/lol/na/v1.4/summoner/by-name/the%20man ``` But the correct request URL is ``` https://na.api.pvp.net/api/lol/na/v1.4/summoner/by-name/the man ```
2015/05/22
[ "https://Stackoverflow.com/questions/30386989", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4093160/" ]
When you're creating a URL, you should use `encodeURIComponent` to encode all the special characters properly: ``` y.open("GET", "https://na.api.pvp.net/api/lol/na/v1.4/summoner/by-name/"+encodeURIComponent(user), false); ```
What you're experiencing is correct behaviour and is called URL encoding. HTTP requests have to conform to certain standards. The first line is always made up of three parts delimited by a space: 1. Method (GET, POST, etc.) 2. Path (i.e. /api/lol/na/v1.4/summoner/by-name/the%20man) 3. HTTP version (HTTP/1.1, HTTP/1.0, etc.) This is usually followed by HTTP headers, which I'll leave out for the time being since they are beyond the scope of your question (if interested, read this <https://www.rfc-editor.org/rfc/rfc7230>). So a normal request looks like this: ``` GET /api/lol/na/v1.4/summoner/by-name/the%20man HTTP/1.1 Host: na.api.pvp.net User-Agent: Mozilla ... ``` With regards to your original question, the reason the library is URL encoding the space to `%20` is that you cannot have a space character in the request line. Otherwise, you would throw off most HTTP message parsers, because `man` would take the place of the HTTP version, like so: ``` GET /api/lol/na/v1.4/summoner/by-name/the man HTTP/1.1 Host: na.api.pvp.net User-Agent: Mozilla ... ``` In most cases, servers will return a 400 Bad Request response because they wouldn't understand what HTTP version `man` refers to. However, there is nothing to fear here: most server-side applications/frameworks automatically decode the `%20` or `+` to a space prior to processing the data in the HTTP request. So even though your URL looks unusual, the server side will process it as `the man`. Finally, one last thing to note. You shouldn't be using `String.replace()` to URL decode your messages. Instead, you should be using decodeURIComponent() and encodeURIComponent() for decoding and encoding URI components, respectively. For example: ``` var user = getUrlVars()["username"].replace("+", " "); ``` becomes ``` var user = decodeURIComponent(getUrlVars()["username"]); ``` This ensures that usernames containing special characters (like `/`, which would be URL encoded as `%2f`) are also properly decoded. Hope this helps!
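To make the encoding round trip concrete, here is a small standalone JavaScript sketch (the summoner name and URL are just example values, not tied to the League API):

```javascript
// encodeURIComponent escapes every reserved character, so it is safe
// for building a path segment; decodeURIComponent reverses it exactly.
var name = "the man";

var encoded = encodeURIComponent(name);
console.log(encoded); // "the%20man" -- the space becomes %20

var url = "https://na.api.pvp.net/api/lol/na/v1.4/summoner/by-name/" + encoded;
console.log(url);

// What a server-side framework effectively does before handing you the value:
var decoded = decodeURIComponent(encoded);
console.log(decoded); // back to "the man"
```

Note that `decodeURI()` deliberately leaves reserved characters such as `%2f` encoded, while `decodeURIComponent()` decodes them too, which is why the latter is the better fit for a single path segment.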
30,386,989
I am trying to use the League of Legends API and request data on a certain user. I use the line ``` var user = getUrlVars()["username"].replace("+", " "); ``` to store the username. However, when I do the XMLHttpRequest with that username, it'll put %20 instead of a space. ``` y.open("GET", "https://na.api.pvp.net/api/lol/na/v1.4/summoner/by-name/"+user, false); ``` Edit: When I run this code with a user that has no space in their name it works, however when they have a space in their name it says the user is undefined. For example, if I was looking for the user "the man", it would do a get at ``` https://na.api.pvp.net/api/lol/na/v1.4/summoner/by-name/the%20man ``` But the correct request URL is ``` https://na.api.pvp.net/api/lol/na/v1.4/summoner/by-name/the man ```
2015/05/22
[ "https://Stackoverflow.com/questions/30386989", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4093160/" ]
When you're creating a URL, you should use `encodeURIComponent` to encode all the special characters properly: ``` y.open("GET", "https://na.api.pvp.net/api/lol/na/v1.4/summoner/by-name/"+encodeURIComponent(user), false); ```
Actually there are no "spaces" in the summoner names on Riot's side. So: ``` https://na.api.pvp.net/api/lol/na/v1.4/summoner/by-name/the man ``` Becomes: ``` https://na.api.pvp.net/api/lol/na/v1.4/summoner/by-name/theman ``` Have a look at this: <https://developer.riotgames.com/discussion/community-discussion/show/jomoRum7> I am unsure how + is handled (in fact I don't think you're able to have a + in your name). All you have to do is remove the spaces. For "funny" characters, just request them with the funny character in them, and Riot returned it fine. ``` https://euw.api.pvp.net/api/lol/euw/v1.4/summoner/by-name/Trøyer?api_key=<insert your own> ``` will auto-correct to ``` https://euw.api.pvp.net/api/lol/euw/v1.4/summoner/by-name/Tr%C3%B8yer?api_key=<insert your own> ``` and you generally don't even have to decode it. (I used JS as my language to fetch it; if you use something else, your results may require the decoded value)
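A minimal sketch of the normalization described above, in plain JavaScript (the `normalizeSummonerName` helper and the sample value are hypothetical, not part of the Riot API):

```javascript
// Riot ignores spaces in summoner names, so strip them all before
// building the request URL. A global regex replaces every match,
// unlike String.replace with a plain string, which replaces only the first.
function normalizeSummonerName(name) {
  return name.replace(/\s+/g, "");
}

var user = "the man";
var url = "https://na.api.pvp.net/api/lol/na/v1.4/summoner/by-name/" +
  encodeURIComponent(normalizeSummonerName(user));
console.log(url); // ends with ".../summoner/by-name/theman"
```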
30,386,989
I am trying to use the League of Legends API and request data on a certain user. I use the line ``` var user = getUrlVars()["username"].replace("+", " "); ``` to store the username. However, when I do the XMLHttpRequest with that username, it'll put %20 instead of a space. ``` y.open("GET", "https://na.api.pvp.net/api/lol/na/v1.4/summoner/by-name/"+user, false); ``` Edit: When I run this code with a user that has no space in their name it works, however when they have a space in their name it says the user is undefined. For example, if I was looking for the user "the man", it would do a get at ``` https://na.api.pvp.net/api/lol/na/v1.4/summoner/by-name/the%20man ``` But the correct request URL is ``` https://na.api.pvp.net/api/lol/na/v1.4/summoner/by-name/the man ```
2015/05/22
[ "https://Stackoverflow.com/questions/30386989", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4093160/" ]
Actually there are no "spaces" in the summoner names on Riot's side. So: ``` https://na.api.pvp.net/api/lol/na/v1.4/summoner/by-name/the man ``` Becomes: ``` https://na.api.pvp.net/api/lol/na/v1.4/summoner/by-name/theman ``` Have a look at this: <https://developer.riotgames.com/discussion/community-discussion/show/jomoRum7> I am unsure how + is handled (in fact I don't think you're able to have a + in your name). All you have to do is remove the spaces. For "funny" characters, just request them with the funny character in them, and Riot returned it fine. ``` https://euw.api.pvp.net/api/lol/euw/v1.4/summoner/by-name/Trøyer?api_key=<insert your own> ``` will auto-correct to ``` https://euw.api.pvp.net/api/lol/euw/v1.4/summoner/by-name/Tr%C3%B8yer?api_key=<insert your own> ``` and you generally don't even have to decode it. (I used JS as my language to fetch it; if you use something else, your results may require the decoded value)
What you're experiencing is correct behaviour and is called URL encoding. HTTP requests have to conform to certain standards. The first line is always made up of three parts delimited by a space: 1. Method (GET, POST, etc.) 2. Path (i.e. /api/lol/na/v1.4/summoner/by-name/the%20man) 3. HTTP version (HTTP/1.1, HTTP/1.0, etc.) This is usually followed by HTTP headers, which I'll leave out for the time being since they are beyond the scope of your question (if interested, read this <https://www.rfc-editor.org/rfc/rfc7230>). So a normal request looks like this: ``` GET /api/lol/na/v1.4/summoner/by-name/the%20man HTTP/1.1 Host: na.api.pvp.net User-Agent: Mozilla ... ``` With regards to your original question, the reason the library is URL encoding the space to `%20` is that you cannot have a space character in the request line. Otherwise, you would throw off most HTTP message parsers, because `man` would take the place of the HTTP version, like so: ``` GET /api/lol/na/v1.4/summoner/by-name/the man HTTP/1.1 Host: na.api.pvp.net User-Agent: Mozilla ... ``` In most cases, servers will return a 400 Bad Request response because they wouldn't understand what HTTP version `man` refers to. However, there is nothing to fear here: most server-side applications/frameworks automatically decode the `%20` or `+` to a space prior to processing the data in the HTTP request. So even though your URL looks unusual, the server side will process it as `the man`. Finally, one last thing to note. You shouldn't be using `String.replace()` to URL decode your messages. Instead, you should be using decodeURIComponent() and encodeURIComponent() for decoding and encoding URI components, respectively. For example: ``` var user = getUrlVars()["username"].replace("+", " "); ``` becomes ``` var user = decodeURIComponent(getUrlVars()["username"]); ``` This ensures that usernames containing special characters (like `/`, which would be URL encoded as `%2f`) are also properly decoded. Hope this helps!
56,159,327
I have written a function that converts a date format (e.g. April 16, into 16.04.). It does the job, but unfortunately it doesn't convert days lower than 10 (April 5, is not converted into 05.04.). Any idea why that is? Thanks. ```js var replaceArry = [ [/January 1, /gi, '01.01.'], [/January 2, /gi, '02.01.'], [/January 3, /gi, '03.01.'], [/January 4, /gi, '04.01.'], [/January 5, /gi, '05.01.'], [/January 6, /gi, '06.01.'], [/January 7, /gi, '07.01.'], [/January 8, /gi, '08.01.'], [/January 9, /gi, '09.01.'], [/January 10, /gi, '10.01.'], [/January 11, /gi, '11.01.'], [/January 12, /gi, '12.01.'], [/January 13, /gi, '13.01.'], [/January 14, /gi, '14.01.'], [/January 15, /gi, '15.01.'], [/January 16, /gi, '16.01.'], [/January 17, /gi, '17.01.'], [/January 18, /gi, '18.01.'], [/January 19, /gi, '19.01.'], [/January 20, /gi, '20.01.'], [/January 21, /gi, '21.01.'], [/January 22, /gi, '22.01.'], [/January 23, /gi, '23.01.'], [/January 24, /gi, '24.01.'], [/January 25, /gi, '25.01.'], [/January 26, /gi, '26.01.'], [/January 27, /gi, '27.01.'], [/January 28, /gi, '28.01.'], [/January 29, /gi, '29.01.'], [/January 30, /gi, '30.01.'], [/January 31, /gi, '31.01.'], [/February 1, /gi, '01.02.'], [/February 2, /gi, '02.02.'], [/February 3, /gi, '03.02.'], [/February 4, /gi, '04.02.'], [/February 5, /gi, '05.02.'], [/February 6, /gi, '06.02.'], [/February 7, /gi, '07.02.'], [/February 8, /gi, '08.02.'], [/February 9, /gi, '09.02.'], [/February 10, /gi, '10.02.'], [/February 11, /gi, '11.02.'], [/February 12, /gi, '12.02.'], [/February 13, /gi, '13.02.'], [/February 14, /gi, '14.02.'], [/February 15, /gi, '15.02.'], [/February 16, /gi, '16.02.'], [/February 17, /gi, '17.02.'], [/February 18, /gi, '18.02.'], [/February 19, /gi, '19.02.'], [/February 20, /gi, '20.02.'], [/February 21, /gi, '21.02.'], [/February 22, /gi, '22.02.'], [/February 23, /gi, '23.02.'], [/February 24, /gi, '24.02.'], [/February 25, /gi, '25.02.'], [/February 26, /gi, '26.02.'], [/February 27, /gi, '27.02.'], 
[/February 28, /gi, '28.02.'], [/February 29, /gi, '29.02.'], [/February 30, /gi, '30.02.'], [/February 31, /gi, '31.02.'], [/March 1, /gi, '01.03.'], [/March 2, /gi, '02.03.'], [/March 3, /gi, '03.03.'], [/March 4, /gi, '04.03.'], [/March 5, /gi, '05.03.'], [/March 6, /gi, '06.03.'], [/March 7, /gi, '07.03.'], [/March 8, /gi, '08.03.'], [/March 9, /gi, '09.03.'], [/March 10, /gi, '10.03.'], [/March 11, /gi, '11.03.'], [/March 12, /gi, '12.03.'], [/March 13, /gi, '13.03.'], [/March 14, /gi, '14.03.'], [/March 15, /gi, '15.03.'], [/March 16, /gi, '16.03.'], [/March 17, /gi, '17.03.'], [/March 18, /gi, '18.03.'], [/March 19, /gi, '19.03.'], [/March 20, /gi, '20.03.'], [/March 21, /gi, '21.03.'], [/March 22, /gi, '22.03.'], [/March 23, /gi, '23.03.'], [/March 24, /gi, '24.03.'], [/March 25, /gi, '25.03.'], [/March 26, /gi, '26.03.'], [/March 27, /gi, '27.03.'], [/March 28, /gi, '28.03.'], [/March 29, /gi, '29.03.'], [/March 30, /gi, '30.03.'], [/March 31, /gi, '31.03.'], [/April 1, /gi, '01.04.'], [/April 2, /gi, '02.04.'], [/April 3, /gi, '03.04.'], [/April 4, /gi, '04.04.'], [/April 5, /gi, '05.04.'], [/April 6, /gi, '06.04.'], [/April 7, /gi, '07.04.'], [/April 8, /gi, '08.04.'], [/April 9, /gi, '09.04.'], [/April 10, /gi, '10.04.'], [/April 11, /gi, '11.04.'], [/April 12, /gi, '12.04.'], [/April 13, /gi, '13.04.'], [/April 14, /gi, '14.04.'], [/April 15, /gi, '15.04.'], [/April 16, /gi, '16.04.'], [/April 17, /gi, '17.04.'], [/April 18, /gi, '18.04.'], [/April 19, /gi, '19.04.'], [/April 20, /gi, '20.04.'], [/April 21, /gi, '21.04.'], [/April 22, /gi, '22.04.'], [/April 23, /gi, '23.04.'], [/April 24, /gi, '24.04.'], [/April 25, /gi, '25.04.'], [/April 26, /gi, '26.04.'], [/April 27, /gi, '27.04.'], [/April 28, /gi, '28.04.'], [/April 29, /gi, '29.04.'], [/April 30, /gi, '30.04.'], [/April 31, /gi, '31.04.'], [/May 1, /gi, '01.05.'], [/May 2, /gi, '02.05.'], [/May 3, /gi, '03.05.'], [/May 4, /gi, '04.05.'], [/May 5, /gi, '05.05.'], [/May 6, /gi, '06.05.'], [/May 
7, /gi, '07.05.'], [/May 8, /gi, '08.05.'], [/May 9, /gi, '09.05.'], [/May 10, /gi, '10.05.'], [/May 11, /gi, '11.05.'], [/May 12, /gi, '12.05.'], [/May 13, /gi, '13.05.'], [/May 14, /gi, '14.05.'], [/May 15, /gi, '15.05.'], [/May 16, /gi, '16.05.'], [/May 17, /gi, '17.05.'], [/May 18, /gi, '18.05.'], [/May 19, /gi, '19.05.'], [/May 20, /gi, '20.05.'], [/May 21, /gi, '21.05.'], [/May 22, /gi, '22.05.'], [/May 23, /gi, '23.05.'], [/May 24, /gi, '24.05.'], [/May 25, /gi, '25.05.'], [/May 26, /gi, '26.05.'], [/May 27, /gi, '27.05.'], [/May 28, /gi, '28.05.'], [/May 29, /gi, '29.05.'], [/May 30, /gi, '30.05.'], [/May 31, /gi, '31.05.'], [/June 1, /gi, '01.06.'], [/June 2, /gi, '02.06.'], [/June 3, /gi, '03.06.'], [/June 4, /gi, '04.06.'], [/June 5, /gi, '05.06.'], [/June 6, /gi, '06.06.'], [/June 7, /gi, '07.06.'], [/June 8, /gi, '08.06.'], [/June 9, /gi, '09.06.'], [/June 10, /gi, '10.06.'], [/June 11, /gi, '11.06.'], [/June 12, /gi, '12.06.'], [/June 13, /gi, '13.06.'], [/June 14, /gi, '14.06.'], [/June 15, /gi, '15.06.'], [/June 16, /gi, '16.06.'], [/June 17, /gi, '17.06.'], [/June 18, /gi, '18.06.'], [/June 19, /gi, '19.06.'], [/June 20, /gi, '20.06.'], [/June 21, /gi, '21.06.'], [/June 22, /gi, '22.06.'], [/June 23, /gi, '23.06.'], [/June 24, /gi, '24.06.'], [/June 25, /gi, '25.06.'], [/June 26, /gi, '26.06.'], [/June 27, /gi, '27.06.'], [/June 28, /gi, '28.06.'], [/June 29, /gi, '29.06.'], [/June 30, /gi, '30.06.'], [/June 31, /gi, '31.06.'], [/July 1, /gi, '01.07.'], [/July 2, /gi, '02.07.'], [/July 3, /gi, '03.07.'], [/July 4, /gi, '04.07.'], [/July 5, /gi, '05.07.'], [/July 6, /gi, '06.07.'], [/July 7, /gi, '07.07.'], [/July 8, /gi, '08.07.'], [/July 9, /gi, '09.07.'], [/July 10, /gi, '10.07.'], [/July 11, /gi, '11.07.'], [/July 12, /gi, '12.07.'], [/July 13, /gi, '13.07.'], [/July 14, /gi, '14.07.'], [/July 15, /gi, '15.07.'], [/July 16, /gi, '16.07.'], [/July 17, /gi, '17.07.'], [/July 18, /gi, '18.07.'], [/July 19, /gi, '19.07.'], [/July 20, /gi, '20.07.'], 
[/July 21, /gi, '21.07.'], [/July 22, /gi, '22.07.'], [/July 23, /gi, '23.07.'], [/July 24, /gi, '24.07.'], [/July 25, /gi, '25.07.'], [/July 26, /gi, '26.07.'], [/July 27, /gi, '27.07.'], [/July 28, /gi, '28.07.'], [/July 29, /gi, '29.07.'], [/July 30, /gi, '30.07.'], [/July 31, /gi, '31.07.'], [/August 1, /gi, '01.08.'], [/August 2, /gi, '02.08.'], [/August 3, /gi, '03.08.'], [/August 4, /gi, '04.08.'], [/August 5, /gi, '05.08.'], [/August 6, /gi, '06.08.'], [/August 7, /gi, '07.08.'], [/August 8, /gi, '08.08.'], [/August 9, /gi, '09.08.'], [/August 10, /gi, '10.08.'], [/August 11, /gi, '11.08.'], [/August 12, /gi, '12.08.'], [/August 13, /gi, '13.08.'], [/August 14, /gi, '14.08.'], [/August 15, /gi, '15.08.'], [/August 16, /gi, '16.08.'], [/August 17, /gi, '17.08.'], [/August 18, /gi, '18.08.'], [/August 19, /gi, '19.08.'], [/August 20, /gi, '20.08.'], [/August 21, /gi, '21.08.'], [/August 22, /gi, '22.08.'], [/August 23, /gi, '23.08.'], [/August 24, /gi, '24.08.'], [/August 25, /gi, '25.08.'], [/August 26, /gi, '26.08.'], [/August 27, /gi, '27.08.'], [/August 28, /gi, '28.08.'], [/August 29, /gi, '29.08.'], [/August 30, /gi, '30.08.'], [/August 31, /gi, '31.08.'], [/September 1, /gi, '01.09.'], [/September 2, /gi, '02.09.'], [/September 3, /gi, '03.09.'], [/September 4, /gi, '04.09.'], [/September 5, /gi, '05.09.'], [/September 6, /gi, '06.09.'], [/September 7, /gi, '07.09.'], [/September 8, /gi, '08.09.'], [/September 9, /gi, '09.09.'], [/September 10, /gi, '10.09.'], [/September 11, /gi, '11.09.'], [/September 12, /gi, '12.09.'], [/September 13, /gi, '13.09.'], [/September 14, /gi, '14.09.'], [/September 15, /gi, '15.09.'], [/September 16, /gi, '16.09.'], [/September 17, /gi, '17.09.'], [/September 18, /gi, '18.09.'], [/September 19, /gi, '19.09.'], [/September 20, /gi, '20.09.'], [/September 21, /gi, '21.09.'], [/September 22, /gi, '22.09.'], [/September 23, /gi, '23.09.'], [/September 24, /gi, '24.09.'], [/September 25, /gi, '25.09.'], [/September 26, /gi, 
'26.09.'], [/September 27, /gi, '27.09.'], [/September 28, /gi, '28.09.'], [/September 29, /gi, '29.09.'], [/September 30, /gi, '30.09.'], [/September 31, /gi, '31.09.'], [/October 1, /gi, '01.10.'], [/October 2, /gi, '02.10.'], [/October 3, /gi, '03.10.'], [/October 4, /gi, '04.10.'], [/October 5, /gi, '05.10.'], [/October 6, /gi, '06.10.'], [/October 7, /gi, '07.10.'], [/October 8, /gi, '08.10.'], [/October 9, /gi, '09.10.'], [/October 10, /gi, '10.10.'], [/October 11, /gi, '11.10.'], [/October 12, /gi, '12.10.'], [/October 13, /gi, '13.10.'], [/October 14, /gi, '14.10.'], [/October 15, /gi, '15.10.'], [/October 16, /gi, '16.10.'], [/October 17, /gi, '17.10.'], [/October 18, /gi, '18.10.'], [/October 19, /gi, '19.10.'], [/October 20, /gi, '20.10.'], [/October 21, /gi, '21.10.'], [/October 22, /gi, '22.10.'], [/October 23, /gi, '23.10.'], [/October 24, /gi, '24.10.'], [/October 25, /gi, '25.10.'], [/October 26, /gi, '26.10.'], [/October 27, /gi, '27.10.'], [/October 28, /gi, '28.10.'], [/October 29, /gi, '29.10.'], [/October 30, /gi, '30.10.'], [/October 31, /gi, '31.10.'], [/November 1, /gi, '01.11.'], [/November 2, /gi, '02.11.'], [/November 3, /gi, '03.11.'], [/November 4, /gi, '04.11.'], [/November 5, /gi, '05.11.'], [/November 6, /gi, '06.11.'], [/November 7, /gi, '07.11.'], [/November 8, /gi, '08.11.'], [/November 9, /gi, '09.11.'], [/November 10, /gi, '10.11.'], [/November 11, /gi, '11.11.'], [/November 12, /gi, '12.11.'], [/November 13, /gi, '13.11.'], [/November 14, /gi, '14.11.'], [/November 15, /gi, '15.11.'], [/November 16, /gi, '16.11.'], [/November 17, /gi, '17.11.'], [/November 18, /gi, '18.11.'], [/November 19, /gi, '19.11.'], [/November 20, /gi, '20.11.'], [/November 21, /gi, '21.11.'], [/November 22, /gi, '22.11.'], [/November 23, /gi, '23.11.'], [/November 24, /gi, '24.11.'], [/November 25, /gi, '25.11.'], [/November 26, /gi, '26.11.'], [/November 27, /gi, '27.11.'], [/November 28, /gi, '28.11.'], [/November 29, /gi, '29.11.'], [/November 30, 
/gi, '30.11.'], [/November 31, /gi, '31.11.'], [/December 1, /gi, '01.12.'], [/December 2, /gi, '02.12.'], [/December 3, /gi, '03.12.'], [/December 4, /gi, '04.12.'], [/December 5, /gi, '05.12.'], [/December 6, /gi, '06.12.'], [/December 7, /gi, '07.12.'], [/December 8, /gi, '08.12.'], [/December 9, /gi, '09.12.'], [/December 10, /gi, '10.12.'], [/December 11, /gi, '11.12.'], [/December 12, /gi, '12.12.'], [/December 13, /gi, '13.12.'], [/December 14, /gi, '14.12.'], [/December 15, /gi, '15.12.'], [/December 16, /gi, '16.12.'], [/December 17, /gi, '17.12.'], [/December 18, /gi, '18.12.'], [/December 19, /gi, '19.12.'], [/December 20, /gi, '20.12.'], [/December 21, /gi, '21.12.'], [/December 22, /gi, '22.12.'], [/December 23, /gi, '23.12.'], [/December 24, /gi, '24.12.'], [/December 25, /gi, '25.12.'], [/December 26, /gi, '26.12.'], [/December 27, /gi, '27.12.'], [/December 28, /gi, '28.12.'], [/December 29, /gi, '29.12.'], [/December 30, /gi, '30.12.'], [/December 31, /gi, '31.12.'], ]; var numTerms = replaceArry.length; var txtWalker = document.createTreeWalker ( document.body, NodeFilter.SHOW_TEXT, { acceptNode: function (node) { if (node.nodeValue.trim() ) return NodeFilter.FILTER_ACCEPT; return NodeFilter.FILTER_SKIP; } }, false ); var txtNode = null; while (txtNode = txtWalker.nextNode () ) { var oldTxt = txtNode.nodeValue; for (var J = 0; J < numTerms; J++) { oldTxt = oldTxt.replace (replaceArry[J][0], replaceArry[J][1]); } txtNode.nodeValue = oldTxt; } ```
2019/05/16
[ "https://Stackoverflow.com/questions/56159327", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9389198/" ]
You can remove the entire array and format your dates like this:

```
var date = new Date("April 5");
// getMonth() is zero-based, so pad the 1-based month number
var month = date.getMonth() + 1;
var m = month < 10 ? "0" + month + "." : month + ".";
var d = date.getDate() < 10 ? "0" + date.getDate() + "." : date.getDate() + ".";
var formatted = d + m;
```

`console.log(formatted)` will output `05.04.`
Alternatively, you could use the `Date` and `String` features of JavaScript like this: ```js let dateString = "April 5, 2019"; let customFormat = new Date(dateString) .toLocaleString('en-GB', { month: "2-digit", day: "2-digit"}) .substring(0, 5) .split('/') // .reverse() .join('.'); console.log(customFormat); document.querySelector('body').innerText = customFormat; ``` *The commented `reverse()` function would allow you to change the `day.month` order to `month.day` if you desired.*
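If the dates are embedded in arbitrary page text rather than parseable on their own, the 372-entry table in the question can also be collapsed into a single regex with a month-name lookup, which handles padding for all days at once. A sketch; the `toDotted` helper name is my own, not from the question:

```javascript
// Month names mapped to zero-padded month numbers.
var months = {
  january: '01', february: '02', march: '03', april: '04',
  may: '05', june: '06', july: '07', august: '08',
  september: '09', october: '10', november: '11', december: '12'
};

// One pattern replaces the whole table: capture the month name and the
// day, then rebuild the "DD.MM." form, padding single-digit days.
function toDotted(text) {
  return text.replace(
    /(January|February|March|April|May|June|July|August|September|October|November|December) (\d{1,2}), ?/gi,
    function (match, month, day) {
      return ('0' + day).slice(-2) + '.' + months[month.toLowerCase()] + '.';
    }
  );
}

console.log(toDotted('April 5, '));  // "05.04."
console.log(toDotted('April 16, ')); // "16.04."
```

The same TreeWalker loop from the question can then call `toDotted` once per text node instead of iterating over hundreds of patterns.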
21,379,986
I am trying to change checkout/cart.phtml through a layout update in my module's layout file, i.e. mymodule.xml:

```
<layout>
    <checkout_cart_index>
        <reference name="checkout.cart">
            <action method="setCartTemplate"><value>mymodule/checkout/cart.phtml</value></action>
        </reference>
    </checkout_cart_index>
</layout>
```

But it is not working. Any clues?
2014/01/27
[ "https://Stackoverflow.com/questions/21379986", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2978099/" ]
Ankita, What I'm about to write is the *actual* way to get what you want. While the official answer by John Hickling will work, it is not how Magento intended the main cart template to be modified. Magento deliberately chose to use different methods for setting the cart templates, namely, `setCartTemplate` and `setEmptyTemplate`. They can be seen in Magento's own `app/design/frontend/base/default/layout/checkout.xml`. This was done so that two templates can be managed, each to handle their own condition. The first condition is for a cart with items, while the second condition is for a cart without items. By using the common `setTemplate` method, that distinction will be lost: a cart with items and a cart without items will both display the same template. This is no good. You were so close. You were correct in trying to use the `setCartTemplate` method. That is what you should be using. However, you were missing one essential method call that would allow Magento to even consider using it: you forgot to include the `chooseTemplate` method call. Note Magento's own `checkout.xml` file: ``` <block type="checkout/cart" name="checkout.cart"> <action method="setCartTemplate"><value>checkout/cart.phtml</value></action> <action method="setEmptyTemplate"><value>checkout/cart/noItems.phtml</value></action> <action method="chooseTemplate"/> ``` Look at that last method call, `chooseTemplate`. If you look in `app/code/core/Mage/Checkout/Block/Cart.php` you will see the following method, within which those familiar `setCartTemplate` and `setEmptyTemplate` methods are called, but because they are magic methods, they are not easily searchable in Magento's source, which is problematic for a lot of people: ``` public function chooseTemplate() { $itemsCount = $this->getItemsCount() ? 
$this->getItemsCount() : $this->getQuote()->getItemsCount(); if ($itemsCount) { $this->setTemplate($this->getCartTemplate()); } else { $this->setTemplate($this->getEmptyTemplate()); } } ``` You were missing that `chooseTemplate` method call. This is what your own layout XML file should look like: ``` <checkout_cart_index> <reference name="checkout.cart"> <action method="setCartTemplate"><value>mymodule/checkout/cart.phtml</value></action> <action method="setEmptyTemplate"><value>mymodule/checkout/noItems.phtml</value></action> <action method="chooseTemplate"/> </reference> </checkout_cart_index> ``` I recommend you update your code if it is still under your control. This is how Magento intended the cart templates to be updated. The common `setTemplate` method is too destructive for this task. Granularity was Magento's intention, so updates should maintain that granularity. I also recommend you mark this as the correct answer.
The method is `setTemplate`, not `setCartTemplate`, like so:

```
<layout>
    <checkout_cart_index>
        <reference name="checkout.cart">
            <action method="setTemplate"><value>mymodule/checkout/cart.phtml</value></action>
        </reference>
    </checkout_cart_index>
</layout>
```
21,211
I have developed a method to process images I use for my research. It's nothing revolutionary, but I think it might be useful to others than me and, why not, be worthy of being published somewhere (at least for me to cite when I use it). As it is outside my primary field of research, nobody in my direct lab vicinity can help evaluate its scientific value or novelty (I did a bit of literature research and didn't find any obvious precedent). Normally I would just put it as an appendix in the first article where I use it, but this one is quite long to describe and completely out of my field. It would thus be off-topic in the journals in which I usually publish. I thought of seeking collaboration from someone in my university who works in signal/image processing, but I don't know anyone personally and I foresee possible political issues, authorship quarrels, etc. I would nonetheless like to get some sort of evaluation before submitting to a journal. As an outsider, I wouldn't like to waste an editor's time and make a fool of myself. I have zero experience submitting to the arXiv; by looking at the website it's not clear how/if there is an active system of feedback, even informal. **Is it advisable to submit my methods paper there and expect feedback?**
2014/05/20
[ "https://academia.stackexchange.com/questions/21211", "https://academia.stackexchange.com", "https://academia.stackexchange.com/users/10643/" ]
There is not much of a "formal feedback" system associated with the arxiv. I say "not much" instead of "none" because the arxiv apparently does do some degree of automatic tracking of citations to its papers. For instance [this arxiv submission](http://arxiv.org/abs/1208.0977) contains a link to a MO post in which my paper is (briefly) mentioned. I had not seen that post before, so that was somewhat interesting. However, I don't know how this system works and it seems to be much less systematic than, say, what google scholar does. In particular, I have 25 arxiv preprints and the arxiv itself lists this kind of citation for very few of them, whereas google scholar lists much more. In terms of an informal feedback system: yes, the arxiv works very well for that, in the following organic way: for many academic fields and subfields it is by far the one place to put your preprint in order to get the most (and the most interested) people to read it. This includes publication in most journals, and the implications this has had on some academic fields are immense. E.g. I hear that in theoretical physics -- a fast-moving field in which it is apparently rare to look up a paper written much more than ten years ago -- pretty much everyone who is anyone uploads their preprints to the arxiv, and as a result theoretical physicists almost never go to the library anymore or look through actual journal papers: they don't need to. My field -- mathematics -- seems to be converging to this kind of phenomenon rather more slowly. On the other hand, to **expect feedback** may be putting it a bit strongly. The volume of papers uploaded to the arxiv is vast and rapidly increasing. I just looked at the math.NT arxiv submissions, and last night 15 papers were uploaded. I am a number theorist with broad interests, and if these papers came at a rate of one a day or less, I would probably peruse about half of them. But the current volume forces me to be much more selective. 
The arxiv is great advertising, but all the advertising in the world doesn't guarantee that people will engage with your product rather than the sea of competing products (competing *for their attention*, anyway; they need not be competing with you in the academic sense). In my experience, I most definitely get enough feedback from my arxiv submissions in order to justify uploading them (although there are arguments to be made for doing so even if you never hear directly from anyone about them). It happens that in the last three weeks or so I have uploaded four arxiv submissions. (Since I have 25 altogether, this is obviously a spike in the upload rate. Some other people do this too. Now that I think about it, from an advertising perspective it would probably be better *not* to do this.) Since then I have received comments on two of the four papers. The two papers that I haven't heard from yet are I think perfectly solid and interesting -- in fact, one of the two concerns the Combinatorial Nullstellensatz so probably has broader appeal than most papers I have written, and the other is a really substantial project that I did jointly with my PhD student -- so the fact that I've gotten no feedback about them seems to be mostly random. In summary: yes, posting your papers on the arxiv is a great way to get feedback. Will it *guarantee* feedback? No, guaranteed feedback is exactly what you're buying (so to speak) when you submit to a journal. Other than that it seems impossible to guarantee. I would definitely submit to the arxiv and see what happens. If you hear nothing, then you might try sending a few emails to suspected experts which just point to your arxiv preprint. Having an arxiv preprint versus just enclosing a file adds a certain veneer of legitimacy.
You need to find a way to advertise your work so that people find it on arXiv, use it, and eventually cite it and criticize it. The main advantages of arXiv are:

* the publication timing. When you submit to a closed-review journal or conference, you are months away from an official decision. During this time (which can get long), you need a way to disclose your work properly;
* getting a larger audience. Not everybody is on a university network, and some universities won't have access to all the journals. If your work is on the editor's site only, then more people than you might think will be blocked by the paywall. Furthermore, some people (I know some in Image Processing) will make monthly explorations of arXiv and publish reading lists online, thus giving you a larger audience.
The arXiv is a tool for establishing a presence, but it does not market or disseminate your results. It merely places them in a certain category and presents title, author, and sometimes abstract in a summary fashion, depending on how one uses it. Some researchers get RSS and email updates in their favorite areas about new arXiv submissions, but that should be considered small in number, and not likely to generate interest in your paper. If you want feedback, you need to advertise your own work, on your webpage, at conferences, at society meetings, and other appropriate venues. You can prepare a short version (abstract or highlight only) and include the URL of the arXiv abstract. Use of arXiv does not indicate peer review, but as there is some endorsement system involved in arXiv submissions, there is also some cachet associated with having the URL. When you have gotten some people interested in your work, they too can refer to the URL, and this can lead to more publicizing and hopefully direct feedback on your work. It can also lead to others posting their opinions on their blogs or elsewhere, which can be harder to track.