Q: div without triggering horizontal scroll

I'm trying to create this design for a WP template: http://minus.com/lbi1iH25EcKsu7 Right now I'm at this point: http://www.uncensuredftw.es/plantilla-blueftw/boilerplate/index.html I think you can get the general idea ;) I know...it's my fault: the browser calculates the size of the window from left to right, so if I put a margin it will move the div with the 100% size to the right. But the thing is: I don't know how to make it work :(. I wanted to make the "black bars" with divs (I painted the ones that don't work in red and orange) and the trick worked...but only the left ones work like I want. I'm running out of ideas. I tried everything I could think of, and nothing works. Maybe you can help me? ;) This is the HTML code:

<div class="barraUL"></div><div class="barraDL"></div>
<div class="presentacionbg"></div>
<div class="presentacion">
<div class="barraUR"></div><div class="barraDR"></div>

And this is the CSS:

.barraUL { position: absolute; width: 50%; height: 27px; background-color: black; right: 50%; margin-right: 500px; margin-top: -20px; }
.barraDL { position: absolute; width: 50%; height: 27px; background-color: black; right: 50%; margin-right: 500px; margin-top: 309px; }
/* These next two are the ones that "don't work" */
.barraUR { position: absolute; width: 50%; height: 27px; background-color: red; left: 50%; margin-left: 500px; margin-top: -4px; }
.barraDR { position: absolute; width: 50%; height: 27px; background-color: orange; left: 50%; margin-left: 500px; margin-top: 325px; }

A: The right divs are expanding to 50% of the window width. For a liquid layout where the bars extend to the edge of the window and then cut off, you'd usually make an underlying div (in this case the bars and the black patterned background) and then expand it to 100% of the window. You can't make an additive layout using relative lengths like percent (left div + fixed middle image + right div) with just CSS (especially not with absolute positioning). If you insist on using this, you'll have to overflow: hidden; the html {} or body {} tag after centering your content, and that's just bad practice. I recommend just having two long divs go all the way across the screen under your sprite image.
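For what it's worth, a minimal sketch of that recommendation, assuming the centered content is 1000px wide (matching the 2 x 500px margins above) and using made-up class names:

.bar-top, .bar-bottom {
    position: absolute;
    left: 0;
    width: 100%;            /* span the whole viewport, so nothing sticks out */
    height: 27px;
    background-color: black;
}
.bar-top { top: 0; }
.bar-bottom { top: 329px; }

.presentacion {
    position: relative;     /* stacks above the bars */
    width: 1000px;
    margin: 0 auto;         /* centered, covering the middle of each bar */
}

Because the bars start at left: 0 with width: 100% instead of being pushed past the centre with margins, they never extend beyond the viewport and no horizontal scrollbar appears.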
Q: Issue with accessing the state and method in jest

I've been working on unit tests using Jest in React with HOCs. Currently, I am facing an issue in accessing the state and methods of my class. Check the sample code below.

//Login.Container.js
import React, { Component } from 'react';
import { bindActionCreators } from 'redux';
import { connect } from 'react-redux';
import { withRouter } from 'react-router-dom';

class Login extends Component {
  constructor(props) {
    super(props);
    this.state = {
      isShowLoader: false,
    };
    this.handleSubmit = this.handleSubmit.bind(this);
  }
  handleSubmit() {
    //sample code
  }
  render() {
    return (
      <LoginComponent />
    );
  }
}

function mapStateToProps(state) {
  return { task: state };
}

function mapDispatchToProps(dispatch) {
  return bindActionCreators(Object.assign({}, actions), dispatch);
}

export default withRouter(connect(mapStateToProps, mapDispatchToProps)(Login));
Login.displayName = 'Login';

Sample login component:

//Login.Component.js
const LoginComponent = props => {
  return (<div>hi</div>);
}

My sample test suite using jest and enzyme:

//login.test.js
import { Provider } from 'react-redux';
import { history, store } from '../store';
import { ConnectedRouter } from 'react-router-redux';

describe('>>>Login --- Container', () => {
  let wrapperInner;
  it('should perform login container by using ComponentWrapper', async () => {
    wrapperInner = mount(<Provider store={store}>
      <ConnectedRouter history={history}>
        <Login />
      </ConnectedRouter>
    </Provider>);
    const instance = wrapperInner.instance();
    expect(wrapperInner.state('isShowLoader')).toBe(true);
    const responseJson = await instance.handleSubmit();
  });
});

A: Finally, I found the solution for this issue.

wrapperInner.find("Login").instance() // to access the methods
wrapperInner.find("Login").instance().state // to access the state

For more details check here: https://github.com/airbnb/enzyme/issues/361
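Applied to the test above, the fix looks roughly like this (a sketch; it reuses the store and history fixtures from the question):

it('should perform login container by using ComponentWrapper', async () => {
  wrapperInner = mount(
    <Provider store={store}>
      <ConnectedRouter history={history}>
        <Login />
      </ConnectedRouter>
    </Provider>
  );
  const login = wrapperInner.find('Login').instance(); // unwrap the inner class component
  expect(login.state.isShowLoader).toBe(false);        // read state directly off the instance
  await login.handleSubmit();                          // call the class method directly
});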
Q: with height=100% only fills one 'screenful' Please check the following example: http://www.esaer.com.br/csstest/ If the vertical scrollbar doesn't appear, please resize the window so it does. Problem is: when you scroll down, the portion of the screen that was hidden does not show the blue div background, which has height 100%, even though the red div forces the height of the 'page' overall be greater than one screen. I want the blue background to span the entire page, even if the page is bigger than one screen. How can I make that happen? (I've been suggested a javascript solution already, but would prefer a css-only approach) Thanks in advance! A: "Solved" the issue by forcing a certain height (bigger than the fixed-size element) and living with the limitations. Thank you for all the answers, though, they might be helpful in some other situations!
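A CSS-only sketch of an alternative, assuming the blue element is a direct child of body (the exact fix depends on how the page is structured, which isn't shown here):

html, body {
    height: 100%;       /* give the percentage chain something to resolve against */
}
.blue-background {
    min-height: 100%;   /* at least one screenful... */
    height: auto;       /* ...but allowed to grow with tall content */
}

Using min-height instead of height lets the background grow past one 'screenful' when a child (like the fixed-size red div) makes the page taller.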
Q: ngModel binding for ng-bootstrap radio button does not work in Angular 2 when jQuery script is added The binding works when I have the following code in my Angular 2 app: <!-- script src="node_modules/jquery/dist/jquery.min.js"></script--> <script src="node_modules/tether/dist/js/tether.min.js"></script> <script src="node_modules/bootstrap/dist/js/bootstrap.min.js"></script> But the console in browser gives the following error: bootstrap.min.js:6 Uncaught Error: Bootstrap's JavaScript requires jQuery. jQuery must be included before Bootstrap's JavaScript. at bootstrap.min.js:6 But when I add it (by removing the comment), binding does not work. app.component.html: <div [(ngModel)]="model" ngbRadioGroup name="radioBasic"> <label class="btn btn-primary"> <input type="radio" [value]="1"> Left (pre-checked) </label> <label class="btn btn-primary"> <input type="radio" value="middle"> Middle </label> <label class="btn btn-primary"> <input type="radio" [value]="false"> Right </label> </div> <hr> <pre>{{model}}</pre> A: The whole point of the ng-bootstrap project is to get rid of the dependency on bootstrap.js and jQuery. The reasoning here is that we can drop unneeded dependencies and offer APIs that are more natural in the Angular world. So, as others commented, do NOT add dependencies on Bootstrap's JavaScript (and skip jQuery while you are at it). Follow the ng-bootstrap's installation instructions instead: https://ng-bootstrap.github.io/#/getting-started What you need from Bootstrap is only CSS, no need for JavaScript part.
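A sketch of the ng-bootstrap setup that replaces those script tags (assuming an early Angular 2-era ng-bootstrap release, where forRoot() was still used in the root module; FormsModule provides ngModel):

// app.module.ts
import { NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { FormsModule } from '@angular/forms';
import { NgbModule } from '@ng-bootstrap/ng-bootstrap';
import { AppComponent } from './app.component';

@NgModule({
  imports: [BrowserModule, FormsModule, NgbModule.forRoot()],
  declarations: [AppComponent],
  bootstrap: [AppComponent]
})
export class AppModule {}

Only Bootstrap's CSS stays in the page; the jquery.min.js, tether.min.js and bootstrap.min.js script tags can all be removed.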
Q: Inserting into a Table the result between a variable and a table parameter

Having the following procedure:

CREATE PROCEDURE [dbo].[Gest_Doc_SampleProc]
    @Nome nvarchar(255),
    @Descritivo nvarchar(255),
    @SampleTable AS dbo.IDList READONLY
AS
DECLARE @foo int;
SELECT @foo = a.bar FROM TableA a WHERE a.Nome = @Nome
IF NOT EXISTS (SELECT a.bar FROM TableA a WHERE a.Nome = @Nome)
BEGIN
    INSERT INTO TableA VALUES (@Nome, @Descritivo)
    INSERT INTO TableB VALUES (scope_identity(), @SampleTable)
END

I am trying, as shown, to insert into TableB all the values of SampleTable, together with the scope_identity. SampleTable is as:

CREATE TYPE dbo.SampleTable AS TABLE ( ID INT );
GO

How can I correctly achieve this?

A: The right way to do this type of work is the OUTPUT clause. Although technically not needed for a single-row insert, you might as well learn how to do it correctly. And even what looks like a single-row insert can have an insert trigger that does unexpected things.

CREATE PROCEDURE [dbo].[Gest_Doc_SampleProc]
(
    @Nome nvarchar(255),
    @Descritivo nvarchar(255),
    @SampleTable dbo.IDList READONLY
)
AS
BEGIN
    DECLARE @ids TABLE (id int);
    DECLARE @foo int;

    SELECT @foo = a.bar
    FROM TableA a
    WHERE a.Nome = @Nome;

    IF NOT EXISTS (SELECT 1 FROM TableA a WHERE a.Nome = @Nome)
    BEGIN
        INSERT INTO TableA (Nome, Descritivo)
        OUTPUT Inserted.id -- or whatever the id is called
        INTO @ids
        VALUES (@Nome, @Descritivo);

        INSERT INTO TableB (id, sampletable)
        SELECT i.id, s.ID
        FROM @ids i
        CROSS JOIN @SampleTable s;
    END;
END; -- Gest_Doc_SampleProc

In addition to using OUTPUT, this code also adds column lists to the INSERTs. That is another best practice.
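For illustration, calling the procedure with a table-valued parameter would look like this (a sketch; it assumes the dbo.IDList type has the single ID column shown above):

DECLARE @list dbo.IDList;
INSERT INTO @list (ID) VALUES (1), (2), (3);

EXEC dbo.Gest_Doc_SampleProc
     @Nome = N'Some name',
     @Descritivo = N'Some description',
     @SampleTable = @list;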
Q: Is the design of "Budweiser's bow-tie shaped beer can" feasible & unique from an engineering point of view?

Is the design of "Budweiser's bow-tie shaped beer can" feasible and unique from an engineering point of view? Can there be a better design than it? The shape of the can is inclined at ten degrees and appears easy to crush. Is there any other advantage/disadvantage of this design? Please help me.

A: Budweiser made some design modifications to regain its marketing and branding position, which had seen a huge decline in sales threatening Budweiser's status as America's best-selling beer. The new design emphasized the iconic bowtie and updated its appearance, giving it an eye-catching look and providing a good holding experience, as it was easy to grip. After the can modification: due to the can's slimmer middle and sleek design, it contained a smaller amount of beer than the traditional design. In packaging terms, it took twice as much aluminium as normal cans, which increased its production cost. So the new modification failed to impress customers; instead of focusing on the can's appearance, they should have focused on the taste of the beer and not compromised on the quantity of beer.
Q: creating clearcase dynamic view in jenkins

I am looking for a plugin or extension which can be used to create a ClearCase dynamic view using Jenkins. The existing ClearCase plugin gives this functionality for snapshot views only. This post also gives an idea of using a script for creating a CC view. Has somebody done (or is somebody doing) similar work? It would be nice if I could get some ideas on how to proceed further; a scripted fallback is sketched below. It should be for base ClearCase, not for UCM.

A: Create, maybe not. But the ClearCase plugin allows for using an existing dynamic view.

Optionally, you can use an existing dynamic view, rather than a new snapshot view. To do so, check "Use dynamic view" under the advanced options.

View root: Required for dynamic view use - this is the directory or drive under which dynamic views live. On Unix, this is generally "/view", while on Windows, it's generally "M:\".

Do Not Reset Config Spec: If selected, the dynamic view's config spec won't be changed, regardless of whether it matches the config spec specified in the job configuration.

The plugin itself creates snapshot views in the hudson.plugins.clearcase.ClearToolExec class. You can use similar code for a dynamic view.
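If you do end up scripting the view creation yourself, one option is an "Execute shell" build step in the Jenkins job. A sketch, assuming cleartool is on the agent's PATH; the view-tag naming scheme and the config spec file are placeholders:

# create a base ClearCase dynamic view, letting ClearCase pick a storage location
cleartool mkview -tag jenkins_${JOB_NAME} -stgloc -auto
# start (mount) it under /view on Unix or M:\ on Windows
cleartool startview jenkins_${JOB_NAME}
# apply the job's base ClearCase config spec
cleartool setcs -tag jenkins_${JOB_NAME} configspec.txt

The ClearCase plugin can then be pointed at this existing view as described in the answer.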
Q: Django: FormView: How to add initial logic? I have a FormView with some validations, all works fine. However, I'd like to add some initial logic that would immediately redirect to another view, based on some condition(s) I have. Something like this: class UserFormView(FormView): template_name = 'pages_fixed/users/userform.html' form_class = UserForm def get(self, request): if condition: # if some condition met, want to immediately go to another URL.. return redirect('/dashboard/') else: # condition not met, just display the form.. return request def form_valid(self, form): # Form validates, code away... I can't seem to make it work -- is the get() method not available in FormView? Since I can't pass more than two args into it, either I get infinite redirects or it's missing the form... I know I can convert it to use a View but I'd really like to understand the difference. When is one better than the other? (So far, using FormView seems a lot faster -- in my experience whenever I do return render(request, url) Django takes a while..) A: Don't return the request. You need to return the super's get result. if condition: return redirect('/dashboard/') else: return super(UserFormView, self).get(request)
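Putting the fix together, the corrected view would look something like this sketch (the condition is a placeholder for whatever check you need):

class UserFormView(FormView):
    template_name = 'pages_fixed/users/userform.html'
    form_class = UserForm

    def get(self, request, *args, **kwargs):
        if condition:  # hypothetical check
            # condition met: leave immediately
            return redirect('/dashboard/')
        # condition not met: let FormView build the form and render the template
        return super(UserFormView, self).get(request, *args, **kwargs)

    def form_valid(self, form):
        # Form validates, code away...
        ...

Accepting *args and **kwargs and forwarding them to super() keeps the override compatible with whatever arguments Django passes to the view.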
Q: Get *all* current jobs from python-rq

I'm using python-rq to manage Redis-based jobs and I want to determine which jobs are currently being processed by my workers. python-rq offers a get_current_job function to find 'the current job' for a connection but: I can't get this to work, and I really want a list of all of the jobs which are currently being processed by all workers on all the queues for this connection rather than one job from one queue. Here is my code (which always returns None):

from rq import Queue, get_current_job
redis_url = os.getenv('REDIS_FOO')
parse.uses_netloc.append('redis')
url = parse.urlparse(redis_url)
conn = Redis(host=url.hostname, port=url.port, db=0, password=url.password)
q = Queue(connection=conn)
get_current_job(connection=conn)

Does anyone have any ideas, please, on getting the above code to work but, more importantly, on a way to get a list of all current jobs from all workers on all queues from this connection?

A: Having looked into the source code, I figure this is what you need. There is one more thing you should notice: the number of running jobs is equal to the number of rq workers, because each worker only processes one job at a time.

from rq import Queue
from redis import Redis
from rq.registry import StartedJobRegistry
from jobs import count_words_at_url

redis_conn = Redis()
q = Queue('default', connection=redis_conn)
for i in range(5000):
    job = q.enqueue(count_words_at_url, 'http://nvie.com', ttl=43)

registry = StartedJobRegistry('default', connection=redis_conn)
running_job_ids = registry.get_job_ids()  # Jobs which are exactly running.
expired_job_ids = registry.get_expired_job_ids()
Q: Editing misleading question, possibly against authors intention Situation I recently spotted this question: Police forcing me to install Jingwang spyware app, how to minimize impact? In this post there is a good technical question, but it also provides situational context based on sources. However, and it is not clear to me whether this is intentional or not, the situation description in the post is not consistent with the sources. A broader research suggests that the sources are correct. Without claiming that the description in the question is untrue, at the very least the question could use some added detail to represent the situation more accurately. I tried editing the post, unfortunately this got rejected, by a third person: https://security.stackexchange.com/review/suggested-edits/125580 The reason for rejection is that it goes against the askers intent. Question Assuming that: My edit is not disputed to improve the description Any possible deviation from the authors intent would not result in the author still getting his technical question properly answered Would it now be the right decision to edit the question? A: This is the sort of thing to use comments for: to ask the poster for clarification. As the poster is the one closer to the situation than we are, it makes more sense that they provide the details rather than we, who are only getting the information 2nd hand.
Q: Is it necessary to inject everything with Dagger 2? I am new to Dagger. I have a confusion about what to and what not to inject with dagger. I know that it is necessary to inject Android Framework classes and my classes using Dagger but is it really necessary to inject even basic Java classes like String, StringBuilder etc using dagger or not. public String create(Context context) // Creating Simple objects in the method itself { StringBuilder builder=new StringBuilder(); .... return builder.toString(); } public String create(Context context,StringBuilder builder) // Injecting everything { .... return builder.toString(); } A: Put a different way, the important part of injection is that it lets you manage the state of your objects (including lifecycle, if you want that). So if you have a class with no state (e.g. a Util class that provides some stateless functions), you should never inject that. If your StringBuilder class doesn't need to share state (i.e. use the same StringBuilder across two objects), then you don't need to inject it.
Q: Python: Element assignment in a scipy.sparse.diags matrix I'm currently trying to change a row in a matrix which I created using the scipy.sparse.diags function. However, it returns the following error saying that I cannot assign to this object: TypeError: 'dia_matrix' object does not support item assignment Is there any way around this without having to change the original vectors used to form the tridiagonal matrix? The following is my code: def Mass_Matrix(x0): """Finds the Mass matrix for any non uniform mesh x0""" x0 = np.array(x0) N = len(x0) - 1 h = x0[1:] - x0[:-1] a = np.zeros(N+1) a[0] = h[0]/3 for j in range(1,N): a[j] = h[j-1]/3 + h[j]/3 a[N] = h[N-1]/3 b = h/6 c = h/6 data = [a.tolist(), b.tolist(), c.tolist()] Positions = [0,1,-1] Mass_Matrix = diags(data, Positions, (N+1,N+1)) return Mass_Matrix def Initial_U(x0): #BC here x0 = np.array(x0) h = x0[1:] - x0[:-1] N = len(x0) - 1 Mass = Mass_Matrix(x0) Mass[0] = 0 #ITEM ASSIGNMENT ERROR print Mass.toarray() A: For a sparse matrix defined with your function: x0=np.arange(10) mm=Mass_Matrix(x0) The csr format is the one that is normally used for calculations, such as matrix multiplication, and linalg solve. It does define assignment, but gives an efficiency warning: In [29]: mmr=mm.tocsr() In [30]: mmr[0]=0 /usr/lib/python3/dist-packages/scipy/sparse/compressed.py:690: SparseEfficiencyWarning: Changing the sparsity structure of a csr_matrix is expensive. lil_matrix is more efficient. SparseEfficiencyWarning) lil works fine In [31]: mml=mm.tolil() In [32]: mml[0]=0 Many of the sparse functions and methods convert one format to another to take advantage of their respective strengths. But the developers haven't implemented all possible combinations. You need to read the pros and cons of the various formats, and also note the methods for each.
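Putting the answer together, a minimal round trip looks like this (a sketch reusing the Mass_Matrix function above):

x0 = np.arange(10)
mm = Mass_Matrix(x0)   # dia_matrix: no item assignment
mml = mm.tolil()       # lil_matrix: cheap row assignment
mml[0] = 0             # apply the boundary condition
mm2 = mml.tocsr()      # csr_matrix: efficient for products and solves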
Q: SqlCommand back up Database

I have a C# WinForms application (.NET 2 framework). I need to back up databases from my application. I am trying to do this by executing a SqlCommand asynchronously. The code is executed with no exceptions but I don't get the .bak file in my destination... This is the code:

#region backup DB using T-SQL command
string connString = "Data Source=" + ConfigurationManager.AppSettings.Get("localhost_SQLEXPRESS") + ";Initial Catalog=" + db + ";UserID=" + ConfigurationManager.AppSettings.Get("user") + ";Password=" + ConfigurationManager.AppSettings.Get("password");
SqlConnectionStringBuilder builder = new SqlConnectionStringBuilder(connString);
builder.AsynchronousProcessing = true;
using (SqlConnection sqlConnection1 = new SqlConnection(builder.ConnectionString))
{
    using (SqlCommand cmd = new SqlCommand("BACKUP DATABASE " + db + " TO DISK=" + location + "\\" + ConfigurationManager.AppSettings.Get("DataBaseBackupsFolderName") + "\\" + db + ".bak'", sqlConnection1))
    {
        sqlConnection1.Open();
        IAsyncResult result = cmd.BeginExecuteNonQuery();
        while (!result.IsCompleted)
        {
            Thread.Sleep(100);
        }
    }
}
#endregion

A: In your SQL backup line you seem to be missing a single quote at the beginning of the path to the backup file.

using (SqlCommand cmd = new SqlCommand("BACKUP DATABASE " + db + " TO DISK='" + location + "\\" + ConfigurationManager.AppSettings.Get("DataBaseBackupsFolderName") + "\\" + db + ".bak'", sqlConnection1))
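To avoid quoting mistakes like this altogether, one sketch is to build the path separately so the quotes live in one place, assuming the same configuration keys (note the nested Path.Combine, since .NET 2 has no three-argument overload):

string bakFile = Path.Combine(
    Path.Combine(location, ConfigurationManager.AppSettings.Get("DataBaseBackupsFolderName")),
    db + ".bak");
string sql = "BACKUP DATABASE [" + db + "] TO DISK = '" + bakFile + "'";

using (SqlCommand cmd = new SqlCommand(sql, sqlConnection1))
{
    sqlConnection1.Open();
    IAsyncResult result = cmd.BeginExecuteNonQuery();
    // ...
}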
Q: Batch file to delete first 3 lines of a text file

As the title states, I need a batch file to delete the FIRST 3 lines of a text file. For example:

A
B
C
D
E
F
G

In this example I need A, B and C deleted, along with their lines.

A: more +3 "file.txt" >"file.txt.new"
move /y "file.txt.new" "file.txt" >nul

The above is fast and works great, with the following limitations:

TAB characters are converted into a series of spaces.
The number of lines to be preserved must be less than ~65535. MORE will hang (wait for a key press) if the line number is exceeded.
All lines will be terminated by carriage return and linefeed, regardless of how they were formatted in the source.

The following solution using FOR /F with FINDSTR is more robust, but is much slower. Unlike a simple FOR /F solution, it preserves empty lines. But like all FOR /F solutions, it is limited to a max line length of a bit less than 8191 bytes. Again, all lines will be terminated by carriage return and linefeed.

@echo off
setlocal disableDelayedExpansion
>"file.txt.new" (
  for /f "delims=" %%A in ('findstr /n "^" "file.txt"') do (
    set "ln=%%A"
    setlocal enableDelayedExpansion
    echo(!ln:*::=!
    endlocal
  )
)
move /y "file.txt.new" "file.txt" >nul

If you have my handy-dandy JREPL.BAT regex text processing utility, then you could use the following for a very robust and fast solution. This still will terminate all lines with carriage return and linefeed (\r\n), regardless of original format.

jrepl "^" "" /k 0 /exc 1:3 /f "test.txt" /o -

You can write \n line terminators instead of \r\n by adding the /U option. If you must preserve the original line terminators, then you can use the following variation. This loads the entire source file into a single JScript variable, so the total file size is limited to approximately 1 or 2 gigabytes (I forgot the exact number).

jrepl "(?:.*\n){1,3}([\s\S]*)" "$1" /m /f "test.txt" /o -

A: This should do it:

for /f "skip=3 delims=*" %%a in (C:\file.txt) do (
  echo %%a >>C:\newfile.txt
)
xcopy C:\newfile.txt C:\file.txt /y
del C:\newfile.txt /f /q

That will re-create the file with the first 3 lines removed. To keep the user updated you could integrate messages in the batch file in vbscript style or output messages in the command prompt.

@echo off
echo Removing...
for /f "skip=3 delims=*" %%a in (C:\file.txt) do (
  echo %%a >>C:\newfile.txt
) >nul
echo Lines removed, rebuilding file...
xcopy C:\newfile.txt C:\file.txt /y >nul
echo File rebuilt, removing temporary files
del C:\newfile.txt /f /q >nul
msg * Done!
exit >nul

Hope this helps.

A: Use sed to print only from the 4th line onward (Edit: only if you use Un*x :)

$ sed -n -e '4,$p' in.txt
Q: Slow checkpoint and 15 second I/O warnings on flash storage

For the last couple of weeks we've been working on getting to the root cause of these I/O issues and the slowdown of the checkpoints. At first glance it looks to be clearly an I/O subsystem error, and the SAN admin was to be blamed for it. But recently we changed the SAN to utilize full flash, and as of today the error still pops up, and I have no clue as to why, since every metric I run (wait stats or anything else) to check whether SQL Server is a possible culprit comes back normal. It doesn't really add up. It could also be very likely that something else is chewing the disk and SQL Server is being victimized here...but I am not able to find out what. Dbs are in Availability Groups, and as and when these events occur we do see role changes and flip-overs occurring along with timeouts. Any help in figuring this out would be highly appreciated. Let me know if any further details are needed. Error messages below:

SQL Server has encountered 14212 occurrence(s) of I/O requests taking longer than 15 seconds to complete on file [E:\MSSQL\DATA\ABC.mdf] in database [ABC] (7). The OS file handle is 0x0000000000000D64. The offset of the latest long I/O is: 0x0000641262c000

SQL Server has encountered 5347 occurrence(s) of I/O requests taking longer than 15 seconds to complete on file [E:\MSSQL\DATA\XYZ.mdf] in database [XYZ] (7). The OS file handle is 0x0000000000000D64. The offset of the latest long I/O is: 0x0000506c060000

FlushCache: cleaned up 111476 bufs with 62224 writes in 925084 ms (avoided 19 new dirty bufs) for db 7:0 average throughput: 0.94 MB/sec, I/O saturation: 55144, context switches 98407 last target outstanding: 10240, avgWriteLatency 14171

FlushCache: cleaned up 5616 bufs with 3126 writes in 248687 ms (avoided 3626 new dirty bufs) for db 6:0 average throughput: 0.18 MB/sec, I/O saturation: 10080, context switches 20913 last target outstanding: 2, avgWriteLatency 3

Here's the virtual file stats info over a 30 minute span: And wait stats as well: Here is the note from the system architect: We separate workloads for high I/O-intense workloads (such as DB) so that we only have one per host. The specs for the current host are a Dell R730 with 16 cores of Xeon E5-2620 (2 sockets), 512GB, and 2x10G interconnects for storage. No other VM on the cluster or host is experiencing these issues. Storage for VMs and workloads is on Pure FA-x20.

General System Information: SQL Server 2012 sp3-cu9 (Enterprise Edition). Total RAM: 128 GB. Total DB size: close to 1 TB.

A: Last couple of weeks we've been working on getting to the root cause of these I/O issues and the slowdown of the checkpoints. Sounds good. Have you collected and cut up the minifilter and storport tracing yet? If so, what did it show?

At first glance it looks to be clearly an I/O subsystem error and the SAN admin was to be blamed for it. But recently we changed the SAN to utilize full flash but as of today the error still pops up and I have no clue as to why, since every metric comes back normal. I want to go over two different areas here. The first is that SQL Server itself doesn't actually do anything with I/O; it posts it to Windows using the typical Windows APIs. Whether it is ReadFile, WriteFile, or the vectored I/O versions of those, it's all up to Windows.

SQL Server keeps a list of pending I/O and checks that I/O at various times to get the status if it isn't completed. This is done using, again, the typical Windows asynchronous I/O model. The message is printed when the I/O has been pending and not completed, according to Windows, for over 15 seconds, as we're using the GetOverlappedResult Windows API to check status. This means SQL Server doesn't really have a say in the matter; it's what is being returned via Windows.

The second item is that just because it's all flash and 10 Gb fiber doesn't mean something isn't set up or configured incorrectly, that a driver, filter, or other bug or item isn't hit, or that something isn't physically wrong. Just to get an idea:

Windows config
Windows drivers, such as multi-pathing being set up and on the latest version
Filter drivers (you know, disk devices, antivirus, backup, etc.)
Hypervisors (if any)
HBA drivers
HBA firmware
HBA configuration
Physical cabling
Fiber switching
I/O group connections/SAN/device
Configuration of SAN/device

That's all under SQL Server; it's just that SQL Server is the one telling you about it.

Dbs are in Availability Groups and as and when these events occur we do see role changes and flip-overs occurring along with timeouts. That's really good information to know, although it doesn't necessarily mean it's exactly related. Now, if it only happens when there is a failover, that would narrow the issue down much more, and that would sound to me more like the drivers et al. don't like having a whole lot of mixed I/O thrown at them, as a failover typically results in the redo/undo and resyncs happening, which can spike outstanding I/O.

Any help in figuring this out would be highly appreciated. Unless it's a query or set of queries pushing high IOPs, which it doesn't sound like, as the 30-minute snapshot you have was only 737,465 I/O operations, which averages to 410 IOPs (not that high, especially if it's flash), looking inside of SQL Server isn't going to help with this issue, since SQL Server is the messenger. You'd want to collect, if not already:

Minifilter time spent. This can be done through WPR (XPerf) if you don't have anything else. This can help if the I/O is getting stalled in a filter driver.
Storport trace. This will be the last stop on the way out and the first stop on the way back. Any time between these two readings is time spent outside of Windows... It'll also show you the targets and where the slowness might be on the other end (but isn't always conclusive).

If none of those are helpful in diagnosing or narrowing the scope of the issue, it may be time to open a ticket with Windows Storage support and have all the data already collected so that you all can start on the same page.

A: You mentioned you're checking wait stats and "every other metric." I assume you are seeing high PAGELATCH and WRITELOG waits? Just to double check, have you reviewed sys.dm_io_virtual_file_stats? That's where I would start when getting these 15 second I/O messages. Use Erin Stellato's excellent article "What Virtual Filestats Do, and Do Not, Tell You About I/O Latency" as a guide in terms of what queries to use. Log snapshots of that DMV to a table every 5 or 15 minutes. Look for spikes in average stalls / latency. Look to see if the number of reads / writes, or the average bytes per read / write, has gone up during these spikes. It could be that you have maintenance or user queries that are flooding the I/O subsystem with more traffic than it can handle.
These queries will need to be tuned, or the maintenance tasks need to be broken up or moved to a different time of day. Work with your SAN admin to see if there are any "noisy neighbors" or errors in the SAN that correlate with these times. Compare the SAN setup with other SQL Server boxes - it's possible you have a throughput issue at the physical connection level, or you have caching settings that need to be tweaked, or updates that need to be installed, etc. I realize these are somewhat general steps, but hopefully it gives you some direction of where to go next. Regarding this: We separate workloads for high I/O intense workloads (such as DB) so that we only have one per host...No other VM on the cluster nor host are experiencing these issues I think it makes sense that SQL Server would be the only one seeing these problems, if it's the only one with a high I/O workload on the host - the other servers / applications might not even notice or have any way of reporting if they are experiencing disk latency. The E drive looks particularly problematic in your screenshot of the virtual file stats. Is there anything different about that drive? ...2x10G interconnects for storage You could have a cabling issue. Consider reseating them / making sure they have a solid connection. Possibly swap with different, known-good cables. As mentioned above, have the SAN team review caching settings and other configuration to see if there are any differences with this volume / host vs other SQL Server VMs.
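For the sys.dm_io_virtual_file_stats snapshots mentioned above, a starting-point query looks like this (a sketch; the counters are cumulative since the instance last started, so log snapshots to a table and diff consecutive rows):

SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.num_of_reads,
       vfs.num_of_writes,
       1.0 * vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_latency_ms,
       1.0 * vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_latency_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id = vfs.file_id
ORDER BY avg_write_latency_ms DESC;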
Q: how to handle IRP_MN_QUERY_DEVICE_RELATIONS in USB filter driver

I am new to driver development. I started to develop a USB filter driver for Windows 7 in order to hide some USB device types from the user. I attach my driver to the USB hub and can intercept IRP_MN_QUERY_DEVICE_RELATIONS. I have a few questions:

1 - On IRP_MN_QUERY_DEVICE_RELATIONS (QueryDeviceRelations.Type is BusRelations) I receive a pointer to a DEVICE_RELATIONS struct. As I understand it, the Objects array in the struct should hold pointers to PDOs. But when I test the DO_BUS_ENUMERATED_DEVICE flag (from MSDN: "The operating system sets this flag in each physical device object (PDO). Drivers must not modify this flag."), sometimes I see the flag turned on and sometimes turned off. Does this mean that sometimes I see a PDO and sometimes an FDO? Or is there another explanation for this issue? When I get some PDEVICE_OBJECT, how can I know whether it is a PDO or an FDO?

2 - When the user plugs in some USB device and the filter driver handles IRP_MN_QUERY_DEVICE_RELATIONS, how can I determine which device in the Objects array is the just-plugged-in device, which one was plugged in before, and which one is marked as inactive?

Thanks in advance. Felix.

A: There is an undocumented member DeviceNode in DEVOBJ_EXTENSION; it is not part of WDM.h and NTDDK.h, so it is private to the IO and PnP managers. In any case it is NULL for non-PDOs, so the "unsupported way" is:

if (DeviceObject->DeviceObjectExtension->DeviceNode) {
    // PDO!
} else {
    // non-PDO!
}

I'd prefer not to use it. Instead, you can find the actual device object via IoGetDeviceObjectPointer or by walking through the devobj list starting from the PDRIVER_OBJECT. In order to determine whether a devobj is a PDO, send QDR/TargetDeviceRelation (unref the PDEVICE_OBJECT in the list when done). If it succeeds, the resulting devobj in the QDR will be the PDO of your device. Here is a good explanation of this. Another option is to use DO_BUS_ENUMERATED_DEVICE. Also take into account that this flag does not mean an initialized PDO; it is set before initialization, upon structure allocation.
Q: Integer number too large (android / java) Declaring int constant like private static final int REQUEST_CODE = 0909; this would result into "Integer number too large" while private static final int REQUEST_CODE = 1909; works fine why would the android studio suggest that 0909 is too large but 1909 is fine A: In Java, numbers with a leading 0 are treated as octal numbers. Octal numbers are base 8 and use the digits 0 through 7, while the digits 8 and 9 are not valid. To declare a decimal constant, use: private static final int REQUEST_CODE = 909;
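A quick illustration of the octal interpretation:

int octal = 0755;            // leading 0 = octal literal: 7*64 + 5*8 + 5 = 493
int decimal = 755;           // plain decimal
// int bad = 0909;           // does not compile: 9 is not a valid octal digit
System.out.println(octal);   // prints 493
System.out.println(decimal); // prints 755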
Q: MySQL JOIN with SUM This question probably has been asked a lot of times, therefore, please, excuse me for duplicates, but I just couldn't seem to find something like this nor could I manage to build something similar to what I want to achieve. For example, lets say, I have the following table structure: //tasks +-------+--------------+------+-----+---------+-------+ | Field | Type | Null | Key | Default | Extra | +-------+--------------+------+-----+---------+-------+ | id | int(11) | NO | P | None | AI | | user | int(11) | NO | | None | | | data | varchar(200) | NO | | None | | +-------+--------------+------+-----+---------+-------+ //votes +-------+--------------+------+-----+---------+-------+ | Field | Type | Null | Key | Default | Extra | +-------+--------------+------+-----+---------+-------+ | id | int(11) | NO | P | | AI | | user | int(11) | NO | | | | | item | int(11) | NO | | | | | up | tinyint(1) | NO | | 0 | | | down | tinyint(1) | NO | | 0 | | +-------+--------------+------+-----+---------+-------+ With the following data: //tasks +----+------+------------+ | id | user | data | +----+------+------------+ | 1 | 1 | something | | 2 | 2 | lorem ip | | 3 | 1 | biggy | +----+------+------------+ //votes +----+------+------+----+------+ | id | user | item | up | down | +----+------+------+----+------+ | 1 | 8 | 1 | 1 | 0 | | 2 | 4 | 1 | 1 | 0 | | 3 | 2 | 1 | 0 | 1 | | 4 | 2 | 2 | 1 | 0 | | 5 | 1 | 2 | 1 | 0 | +----+------+------+----+------+ I want to do something similar to: SELECT r.* FROM `tasks` WHERE `user` = '1' r LEFT JOIN (SELECT SUM(t.up) AS up, SUM(t.down) AS down FROM `votes` t WHERE t.item = r.id) r ON r.id = t.item And yes, that's my query so far, but it doesn't work, and I have no idea how to correct that. Basicly, I want to: Select everything from table tasks where user is "x" Join each row from tasks (selected at step 1) with sum of up, sum of down from table votes where item is equal to id from tasks And that should produce something like (ie. user = 1): +----+------+------------+----+------+ | id | user | data | up | down | +----+------+------------+----+------+ | 1 | 1 | something | 3 | 1 | | 3 | 1 | biggy | 0 | 0 | +----+------+------------+----+------+ Well, I hope you guys understand and can help me with this. Thanks in advance! A: The WHERE clause needs to go below joins, and aliases should be declared using AS. This is a (syntactically) corrected version of your query: SELECT r.* FROM `tasks` LEFT JOIN ( SELECT SUM(t.up) AS up, SUM(t.down) AS down FROM `votes` AS t WHERE t.item = r.id ) AS r ON r.id = t.item WHERE `user` = '1' This is how i would do it (untested): SELECT `tasks`.`id`, `tasks`.`user`, `tasks`.`data`, `votes`.`up`, `votes`.`down` FROM `tasks` LEFT JOIN ( SELECT `item`, SUM(`up`) AS `up`, SUM(`down`) AS `down` FROM `votes` GROUP BY `item` ) AS `votes` ON `votes`.`item` = `tasks`.`id` WHERE `tasks`.`user` = 1
Q: AngularJS Accessing DOM element by id inside ng-repeat

I have a situation where I need to manipulate the CSS inside of a controller - I know this is a bit of a no-no, but it is necessary in this instance, as I will explain below. I have a title and a description contained inside a list element in an ng-repeat. I want to give the description a CSS class dependent on the height of the title div. However, when I try to get the height of the 'title' div by id inside the controller, it always just gets the height of the div inside the first element. You can see this explicitly when I get the text of the title. The controller code looks like this:

$scope.getClass = function() {
    var title = document.getElementById('title');
    var titleHeight = window.getComputedStyle(title, null).getPropertyValue("height");
    var titleText = document.getElementById('title').textContent;
    if (titleHeight == '50px') {
        return 'description-small';
    }
    else if (titleHeight == '25px') {
        return 'description-large';
    }
};

The HTML looks like this:

<ul>
  <li class="li-element" ng-repeat="item in itemList">
    <div id="title" class="title" data-ellipsis data-ng-bind="item.title"></div>
    <div class="{{getClass()}}" data-ellipsis data-ng-bind="item.description"></div>
  </li>
</ul>

So, dependent on whether the first item in the list has a title over 1 or 2 lines, all subsequent descriptions get the height required for the first item, regardless of how tall/small its title is. I assume this is something to do with how ng-repeat works, but I'm pretty new to this and have no idea how to get around it - any ideas? I've created a plnkr to show the problem here: http://plnkr.co/edit/G1QEq62CbJ0HpNZ0FfSm

REASON FOR MANIPULATING DOM IN CONTROLLER: I have a title and a description. The title can be 1-2 lines high, depending on the length of the title. If it doesn't fit on 2 lines, I am using the 'data-ellipsis' directive to put ellipsis on the end of the title. Similarly, the description also uses 'data-ellipsis' for any overflow. Between the title and the description, they need to have a combined height of 125px, say. But because we don't know how tall the title will be, we don't know how high to set the description. The 'data-ellipsis' directive depends on the value of the height set in the CSS. Therefore, I need to set the description class dependent on the title height on the fly. If there is another way around this, I'm eager to find out! Here is a link to the data ellipsis directive: https://github.com/dibari/angular-ellipsis

A: The problem is that you're assigning the same id to multiple divs. getElementById returns the first div with id title. The solution is simple: add an index to the id. Using ng-attr-id we can dynamically create ids for every div in the ng-repeat block. The $index variable gives you the current index of the ng-repeat block. Then we pass the $index to the getClass function, allowing getClass to know which title we are talking about.
Plunkr UPDATES TO YOUR CODE ng-repeat <li class="li-element" ng-repeat="item in itemList"> <div ng-attr-id="{{'title'+$index}}" class="title" data-ellipsis data-ng-bind="item.title"></div> <div class="{{getClass($index)}}" data-ellipsis data-ng-bind="item.description"></div> </li> getClass $scope.getClass = function(i) { var title = document.getElementById('title'+i); var titleHeight = window.getComputedStyle(title, null).getPropertyValue("height"); var titleText = title.textContent; if(titleHeight == '50px') { console.log("titleheight = 62px"); console.log("titleText = " + titleText); return 'description-small'; } else if(titleHeight == '25px') { console.log("titleheight = 31px"); console.log("titleText = " + titleText); return 'description-large'; } };
Q: Angular 2 component DOM binds to the property in component before it has been resolved in OnInit method

I am learning Angular 2 and trying to follow their tutorial. Here is the code of the service that returns a "Promise" of a mock object Folder.

import {Injectable, OnInit} from "@angular/core";
import {FOLDER} from "./mock-folder";
import {Folder} from "./folder";

@Injectable()
export class FolderService {
    getFolder(): Promise<Folder> {
        return Promise.resolve(FOLDER);
    }
}

It is declared in the providers of my FolderModule:

import {NgModule} from "@angular/core";
import {CommonModule} from "@angular/common";
import {FolderComponent} from "./folder.component";
import {MaterialModule} from "@angular/material";
import {FolderService} from "./folder.service";

@NgModule({
    imports: [CommonModule, MaterialModule.forRoot()],
    exports: [FolderComponent],
    declarations: [FolderComponent],
    providers: [FolderService]
})
export class FolderModule {
}

The Folder component should import FolderService and use it to obtain the Folder object:

import {Component, OnInit} from "@angular/core";
import {Folder} from "./folder";
import {FolderService} from "./folder.service";

@Component({
    selector: 'folder',
    moduleId: module.id,
    templateUrl: "./folder.component.html"
})
export class FolderComponent implements OnInit {
    folder: Folder;

    constructor(private folderService: FolderService) {
    }

    ngOnInit(): void {
        this.getFolder();
    }

    getFolder() {
        this.folderService.getFolder().then((folder) => this.folder = folder);
    }
}

And yes, I do import my FolderModule in the root module:

@NgModule({
    imports: [BrowserModule, CommonModule, MaterialModule.forRoot(), FolderModule, AppRoutingModule],
    providers: [],
    declarations: [AppComponent, LifeMapComponent, MyPageNotFoundComponent],
    bootstrap: [AppComponent]
})
export class AppModule {
}

Here is the folder component template:

<md-grid-list cols="3" [style.background]="'lightblue'" gutterSize="5px">
    <md-grid-tile *ngFor="let card of folder.cards">{{card.title}}</md-grid-tile>
</md-grid-list>

And here is the error I get in the console:

EXCEPTION: Uncaught (in promise): Error: Error in http://localhost:3000/app/folders/folder.component.html:1:16 caused by: Cannot read property 'cards' of undefined TypeError: Cannot read property 'cards' of undefined

export class Folder {
    public id: number;
    public title: string;
    public cards: Card[];
}

export class Card {
    id: number;
    title: string;
}

A: Voland, this can be solved by using the "Elvis" operator on the collection being iterated over.

<md-grid-tile *ngFor="let card of folder.cards">{{card.title}}</md-grid-tile>

should instead be:

<md-grid-tile *ngFor="let card of folder?.cards">{{card?.title}}</md-grid-tile>

Note the "?" after folder -- this will coerce the whole path to 'null', so it won't iterate. The issue is with the accessor on a null object. You could also declare folder as an empty array [] to prevent this.

EDIT: To any onlookers, note that the Elvis operator is not available in your TypeScript code, as it's not supported by the language. I believe that Angular 2 supports it though, so it is available in your views (really useful for AJAX requests, where your data has not arrived at the point of component instantiation!)
Q: SQL Server Primary Filename with different versions of SQL Server I'm working on a team project and many of us do not have the same version of SQL Server (one person is running SQL Server 2014 and another is running SQL Server 2012 Express). In the DB script, our paths for the primary and log filenames are different as we have different versions on SQL Server. One primary is (SQL Server 2012): (NAME = N'Database_Name', FILENAME = N'c:\Program Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\DATA\Datebase_Name.mdf', SIZE =...) While another may be (SQL Server 2014): (NAME = N'Database_Name', FILENAME = N'c:\Program Files\Microsoft SQL Server\MSSQL14.MSSQLSERVER\MSSQL\DATA\Datebase_Name.mdf', SIZE = ...) Is there a way to make the filename dynamic so it automatically recognizes the path it should go to based upon the version of SQL Server installed? Trying not to repeatedly change the filename depending on who has edited the script last. A: Thank you for the recommendations on using the development environments. I understand this is best, but is not feasible in this situation. More research found an alternative solution for this case. ALTER DATABASE [Datebase_Name] MODIFY FILE ( NAME = N'Datebase_Name', SIZE = 5184KB , MAXSIZE = UNLIMITED, FILEGROWTH = 1024KB ) GO ALTER DATABASE [Datebase_Name] MODIFY FILE ( NAME = N'Datebase_Name_log', SIZE = 832KB , MAXSIZE = 2048GB , FILEGROWTH = 10%) GO
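If you want one script that works on every teammate's instance, an alternative sketch (SQL Server 2012 and later) is to read the instance's own default paths instead of hard-coding them; the database and file names here are the placeholders from the question:

DECLARE @data nvarchar(260) = CAST(SERVERPROPERTY('InstanceDefaultDataPath') AS nvarchar(260));
DECLARE @log  nvarchar(260) = CAST(SERVERPROPERTY('InstanceDefaultLogPath')  AS nvarchar(260));

DECLARE @sql nvarchar(max) =
    N'CREATE DATABASE [Database_Name] ON ' +
    N'(NAME = N''Database_Name'', FILENAME = N''' + @data + N'Database_Name.mdf'') ' +
    N'LOG ON (NAME = N''Database_Name_log'', FILENAME = N''' + @log + N'Database_Name_log.ldf'');';

EXEC (@sql);

Simpler still, omitting the filename clauses entirely (CREATE DATABASE [Database_Name];) lets each instance fall back to its own defaults.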
Q: Seeing uniform continuity from continuity

Let us introduce the definitions with which I am working.

Definition 1 (Continuity) Let $f:A\to\mathbb{R}$. Then $f$ is continuous on $A$ if for all $y\in A$ we have that for all $\epsilon>0$ there exists $\delta>0$ such that for all $x\in A$ with $|x-y|<\delta$, we have $|f(x)-f(y)|<\epsilon$.

Definition 2 (Uniform Continuity) Let $f:A\to\mathbb{R}$. Then $f$ is uniformly continuous on $A$ if for all $\epsilon>0$ there exists $\delta>0$ such that for all $x,y\in A$ with $|x-y|<\delta$ we have $|f(x)-f(y)|<\epsilon$.

I notice that these definitions are very similar. However, I must clarify that I know continuity does not imply uniform continuity. My question is this: suppose I have in front of me some proof that some $f$ is continuous. Uniform continuity does not allow the choice of $\delta$ to depend on the points $x,y$. I wondered whether, if I could see in the proof that $f$ is continuous that the choice of $\delta$ has no dependence on $x,y$, I would in fact have uniform continuity as well. In other words, if I have $\delta$ dependent only on $\epsilon$ in some continuity proof, do I automatically have that $f$ is also uniformly continuous, or does a separate proof need to be written?

A: If in your proof the value of $\delta$ doesn't depend on your choice of $x$, but only on $\varepsilon$, then you could have fixed $x$ after fixing $\varepsilon$ and determining $\delta$, thereby proving uniform continuity of the function. Hence you can basically use the same proof, except for fixing $x$ and $y$ after determining $\delta$.
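To make the point concrete, here is a sketch of such a proof for $f(x)=2x$ on $\mathbb{R}$: given $\epsilon>0$, choose $\delta=\epsilon/2$. Then for all $x,y$ with $|x-y|<\delta$, $$|f(x)-f(y)|=|2x-2y|=2|x-y|<2\delta=\epsilon.$$ Since $\delta$ was chosen before, and independently of, $x$ and $y$, the same argument read as a continuity proof at each fixed $y$ already establishes uniform continuity.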
Q: Creating a constructor that accepts an Object or PrimitiveType as a parameter and then creates a corresponding list? My goal here is as follows: public ListClass{ public List fooList; public ListClass(Object objType) { fooList = new List<(((Object)objType).getClass())>(); fooList.add(objType); } public ListClass(PrimitiveType primType) { fooList = new List<(((Object)objType).getClass())>(); fooList.add(objType); } } ... so pretty much, I want to be able to pass in an object or a primitive as a parameter, create a List or an ArrayList of whatever type that object is, and then add that object to the List or ArrayList that has just been constructed. I can't find any information as to how to go about this online; my best efforts were very quickly thwarted, and so I'm asking here (the code above is what I attempted to make work, but could not get to work). Thanks in advance. A: I want to be able to pass in an object or a primitive as a parameter, create a List or an ArrayList of whatever type that object is Generic types are only used at compile time to enforce type safety. Generic types are not available at runtime due to type erasure. You cannot create a generic type at runtime. See Java generics type erasure: when and what happens? for more details.
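For what it's worth, here is a sketch of the usual alternative: make the class itself generic, so the element type is fixed at compile time instead of being discovered at runtime (primitives are handled by autoboxing):

import java.util.ArrayList;
import java.util.List;

public class ListClass<T> {
    public List<T> fooList;

    public ListClass(T first) {
        fooList = new ArrayList<T>(); // element type comes from the type argument
        fooList.add(first);
    }
}

// usage: the type argument replaces the runtime getClass() lookup
ListClass<String> strings = new ListClass<String>("hello");
ListClass<Integer> ints = new ListClass<Integer>(42); // int is autoboxed to Integer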
Q: Parsing mixed flat file data to write into xls using Python

I have a complex flat file with a huge amount of mixed-type data. I am trying to parse it using Python (the language best known to me) and have succeeded in segregating the data categorically using manual parsing. Now I am stuck at a point where I have the extracted data and need to make it tabular so that I can write it into xls, using pandas or any other lib. I have pasted the data at pastebin, url is https://pastebin.com/qn9J5nUL The data comes in non-tabular and tabular format, out of which I need to discard the non-tabular data and only write the tabular data into xls. To be precise, I want to delete the data below -

ABC Command-----UIP BLOCK:; SE : ABC_UIOP_89TP Report : +ve ABC_UIOP_89TP 2016-09-23 15:16:14 O&M #998459350 %%/*Web=1571835373:;%% ID = 0 Result Ok.

and only utilize data in the below format for the xls (an example, not exact; please refer to the pastebin url to see the complete data format) -

Local Info ID ID Name ID Frequency ID Data My ID 0 XXX_1 0 12 13

A: Since your datafile has a certain pattern, I think you can do it this way.

import pandas as pd

s = []
e = []
with open('data_to_be_parsed.txt') as f:
    datafile = f.readlines()

for idx, line in enumerate(datafile):
    if 'Local' in line:
        s.append(idx)
    if '(Number of results' in line:
        e.append(idx)

maindf = pd.DataFrame()
for i in range(len(s)):
    head = list(datafile[s[i]].split(" "))
    head = [x for x in head if x.strip()]
    tmpdf = pd.DataFrame(columns=head)
    for l_ in range(s[i] + 1, e[i]):
        da = datafile[l_]
        if len(da) > 1:
            data = list(da.split(" "))
            data = [x for x in data if x.strip()]
            tmpdf = tmpdf.append(dict(zip(head, data)), ignore_index=True)
    maindf = pd.concat([maindf, tmpdf])

maindf.to_excel("output.xlsx")
Q: OS X iconutil reports "Invalid iconset" I'm packaging an existing Java Swing application as an OS X app, using JarBundler, and everything's going nicely except for adding the icon. I created PNG files in Photoshop named as follows: icon_16x16.png icon_32x32.png icon_128x128.png These went into a folder called JHOVEicons, then I ran: iconutil -c icns -o JHOVEicons.icns JHOVEicons/ This results in the error message: JHOVEicons/:error: Invalid Iconset. I double-checked that the files are the size they claim to be. I tried adding 256 and 512 sizes, though supposedly a complete set isn't required. It keeps giving the same error. What might I be doing wrong? I'm running on Mountain Lion. A: I figured it out. The directory of icons which I'm converting has to have the .iconset extension.
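In other words, with the names from the question:

mv JHOVEicons JHOVEicons.iconset
iconutil -c icns -o JHOVEicons.icns JHOVEicons.iconset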
Q: Does the C++ standard guarantee that a function return value has a constant address? Consider this program: #include <stdio.h> struct S { S() { print(); } void print() { printf("%p\n", (void *) this); } }; S f() { return {}; } int main() { f().print(); } As far as I can tell, there is exactly one S object constructed here. There is no copy elision taking place: there is no copy to be elided in the first place, and indeed, if I explicitly delete the copy and/or move constructor, compilers continue to accept the program. However, I see two different pointer values printed. This happens because my platform's ABI returns trivially copyable types such as this one in CPU registers, so there is no way with that ABI of avoiding a copy. clang preserves this behaviour even when optimising away the function call altogether. If I give S a non-trivial copy constructor, even if it's inaccessible, then I do see the same value printed twice. The initial call to print() happens during construction, which is before the start of the object's lifetime, but using this inside a constructor is normally valid so long as it isn't used in a way that requires the construction to have finished -- no casting to a derived class, for instance -- and as far as I know, printing or storing its value doesn't require the construction to have finished. Does the standard allow this program to print two different pointer values? Note: I'm aware that the standard allows this program to print two different representations of the same pointer value, and technically, I haven't ruled that out. I could create a different program that avoids comparing pointer representations, but it would be more difficult to understand, so I would like to avoid that if possible. A: T.C. pointed out in the comments that this is a defect in the standard. It's core language issue 1590. It's a subtly different issue than my example, but the same root cause: Some ABIs require that an object of certain class types be passed in a register [...]. The Standard should be changed to permit this usage. The current suggested wording would cover this by adding a new rule to the standard: When an object of class type X is passed to or returned from a function, if each copy constructor, move constructor, and destructor of X is either trivial or deleted, and X has at least one non-deleted copy or move constructor, implementations are permitted to create a temporary object to hold the function parameter or result object. [...] For the most part, this would permit the current GCC/clang behaviour. There is a small corner case: currently, when a type has only a deleted copy or move constructor that would be trivial if defaulted, by the current rules of the standard, that constructor is still trivial if deleted: 12.8 Copying and moving class objects [class.copy] 12 A copy/move constructor for class X is trivial if it is not user-provided [...] A deleted copy constructor is not user-provided, and nothing of what follows would render such a copy constructor non-trivial. So as specified by the standard, such a constructor is trivial, and as specified by my platform's ABI, because of the trivial constructor, GCC and clang create an extra copy in that case too. 
A one-line addition to my test program demonstrates this: #include <stdio.h> struct S { S() { print(); } S(const S &) = delete; void print() { printf("%p\n", (void *) this); } }; S f() { return {}; } int main() { f().print(); } This prints two different addresses with both GCC and clang, even though even the proposed resolution would require the same address to be printed twice. This appears to suggest that while we will get an update to the standard to not require a radically incompatible ABI, we will still need to get an update to the ABI to handle the corner case in a manner compatible with what the standard will require.
Q: How to resolve LazyInitializationException in Spring Data JPA?

I have two classes that have a one-to-many relation. When I try to access the lazily loaded collection I get the LazyInitializationException. I have been searching the web for a while now, and I know that I get the exception because the session that was used to load the class holding the collection is closed. However, I did not find a solution (or at least I did not understand it). Basically I have these classes:

User

@Entity
@Table(name = "user")
public class User {
    @Id
    @GeneratedValue
    @Column(name = "id")
    private long id;

    @OneToMany(mappedBy = "creator")
    private Set<Job> createdJobs = new HashSet<>();

    public long getId() { return id; }
    public void setId(final long id) { this.id = id; }
    public Set<Job> getCreatedJobs() { return createdJobs; }
    public void setCreatedJobs(final Set<Job> createdJobs) { this.createdJobs = createdJobs; }
}

UserRepository

public interface UserRepository extends JpaRepository<User, Long> {}

UserService

@Service
@Transactional
public class UserService {
    @Autowired
    private UserRepository repository;

    boolean usersAvailable = false;

    public void addSomeUsers() {
        for (int i = 1; i < 101; i++) {
            final User user = new User();
            repository.save(user);
        }
        usersAvailable = true;
    }

    public User getRandomUser() {
        final Random rand = new Random();
        if (!usersAvailable) {
            addSomeUsers();
        }
        return repository.findOne(rand.nextInt(100) + 1L);
    }

    public List<User> getAllUsers() {
        return repository.findAll();
    }
}

Job

@Entity
@Table(name = "job")
@Inheritance
@DiscriminatorColumn(name = "job_type", discriminatorType = DiscriminatorType.STRING)
public abstract class Job {
    @Id
    @GeneratedValue
    @Column(name = "id")
    private long id;

    @ManyToOne
    @JoinColumn(name = "user_id", nullable = false)
    private User creator;

    public long getId() { return id; }
    public void setId(final long id) { this.id = id; }
    public User getCreator() { return creator; }
    public void setCreator(final User creator) { this.creator = creator; }
}

JobRepository

public interface JobRepository extends JpaRepository<Job, Long> {}

JobService

@Service
@Transactional
public class JobService {
    @Autowired
    private JobRepository repository;

    public void addJob(final Job job) {
        repository.save(job);
    }

    public List<Job> getJobs() {
        return repository.findAll();
    }

    public void addJobsForUsers(final List<User> users) {
        final Random rand = new Random();
        for (final User user : users) {
            for (int i = 0; i < 20; i++) {
                switch (rand.nextInt(2)) {
                    case 0:
                        addJob(new HelloWorldJob(user));
                        break;
                    default:
                        addJob(new GoodbyeWorldJob(user));
                        break;
                }
            }
        }
    }
}

App

@Configuration
@EnableAutoConfiguration
@ComponentScan
public class App {
    public static void main(final String[] args) {
        final ConfigurableApplicationContext context = SpringApplication.run(App.class);
        final UserService userService = context.getBean(UserService.class);
        final JobService jobService = context.getBean(JobService.class);

        userService.addSomeUsers();                            // Generates some users and stores them in the db
        jobService.addJobsForUsers(userService.getAllUsers()); // Generates some jobs for the users

        final User random = userService.getRandomUser();       // Picks a random user
        System.out.println(random.getCreatedJobs());
    }
}

I have often read that the session has to be bound to the current thread, but I don't know how to do this with Spring's annotation-based configuration. Can someone point out how to do that? P.S. I want to use lazy loading, thus eager loading is not an option.
A: Basically, you need to fetch the lazy data while you are still inside a transaction. If your service classes are @Transactional, everything is fine while you are inside them. Once you leave the service class and try to read the lazy collection, you get that exception, which is what happens in your main() method at the line System.out.println(random.getCreatedJobs());.

Now it comes down to what your service methods need to return. If userService.getRandomUser() is expected to return a user whose jobs are initialized so you can manipulate them, then it is that method's responsibility to fetch them. The simplest way to do it with Hibernate is by calling Hibernate.initialize(user.getCreatedJobs()).

A: Consider using JPA 2.1, with entity graphs. Lazy loading was often an issue with JPA 2.0: you had to define FetchType.LAZY or FetchType.EAGER on the entity and make sure the relation was initialized within the transaction. This could be done either by a specific query that reads the entity, or by accessing the relation within business code (an additional query for each relation). Both approaches are far from perfect; JPA 2.1 entity graphs are a better solution for it:

http://www.thoughts-on-java.org/jpa-21-entity-graph-part-1-named-entity/
http://www.thoughts-on-java.org/jpa-21-entity-graph-part-2-define/

A: You have two options.

Option 1: As mentioned by BetaRide, use the EAGER fetching strategy.

Option 2: After getting the user from the database with Hibernate, add the line below to load the collection elements:

Hibernate.initialize(user.getCreatedJobs())

This tells Hibernate to initialize the collection elements.
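A minimal sketch of the Hibernate.initialize approach applied to the code above; the method name getRandomUserWithJobs is my invention, not part of the original code:

import org.hibernate.Hibernate;

@Service
@Transactional
public class UserService {

    @Autowired
    private UserRepository repository;

    // Returns a user whose jobs are safe to read outside the transaction.
    public User getRandomUserWithJobs() {
        final User user = getRandomUser();
        // Still inside the transaction here, so the lazy proxy can be populated.
        Hibernate.initialize(user.getCreatedJobs());
        return user;
    }
}

main() would then call userService.getRandomUserWithJobs() and could print getCreatedJobs() without triggering the exception.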
{ "pile_set_name": "StackExchange" }
Q: PHP curl response unserialize issue

I am getting "Notice: unserialize(): Error at offset 0 of 1081 bytes" while unserializing a curl response.

Curl request page - ping1.php:

<?php
$ch = curl_init();
$curlConfig = array(
    CURLOPT_URL => "http://example.com/test/curl/ping2.php",
    CURLOPT_POST => true,
    CURLOPT_RETURNTRANSFER => true
);
curl_setopt_array($ch, $curlConfig);
$result = curl_exec($ch);
curl_close($ch);
echo unserialize($result);
?>

Curl response page - ping2.php:

<?php
$data = array('test' => 1, 'testing' => 2);
echo serialize($data);
?>

A: Got your issue.

ERROR

When I ran your code and looked at the result, I got:

string '<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN"> <html><head> <title>413 Request Entity Too Large</title> </head><body> <h1>Request Entity Too Large</h1> The requested resource<br />/experimentation/Stack/stack.php<br /> does not allow request data with POST requests, or the amount of data provided in the request exceeds the capacity limit. <hr> <address>Apache/2.2.22 (Fedora) Server at localhost Port 80</address> </body></html> a:2:{s:4:"test";i:1;s:7:"testing";i:2;}' (length=474)

WHY AM I GETTING THIS ERROR?

You are getting this error because you are using CURLOPT_POST but not sending any POST data. Rather than explaining it here, I will refer you to this post, which is the basis of your issue.

RESOLUTION

CURLOPT_POST is not required since we are not posting any data. Here is your working code:

<?php
$ch = curl_init();
$curlConfig = array(
    CURLOPT_URL => "http://example.com/test/curl/ping2.php",
    CURLOPT_RETURNTRANSFER => true
);
curl_setopt_array($ch, $curlConfig);
$result = curl_exec($ch);
curl_close($ch);
print_r(unserialize($result));
?>
{ "pile_set_name": "StackExchange" }
Q: How to query a field in a related object in a ParseQuery

I'm using Parse.com and I am running a query that obtains objects from a many-to-many relational table (call this table 'RelationTable'). Obviously this table has links to objects in another table (let's call it SubObject). Now, from this query, I need to filter results by searching on a field contained within the SubObject (call this SearchField). Any ideas on how to do this? I already have the includeKey and am trying the '.' operator from SQL to access a field in the subclass, but it's not working. Below is the code I have so far:

ParseQuery<ParseObject> query = ParseQuery.getQuery("RelationTable");
query.include("subObject"); // subObject is the field name where SubObject is stored. Note CAPS difference
query.whereContains("SubObject.SearchField", searchString);

A: You can create a subquery on the SubObject table, and use whereMatchesQuery on your RelationTable query:

ParseQuery<ParseObject> query = ParseQuery.getQuery("RelationTable");
query.include("subObject");

ParseQuery<ParseObject> innerQuery = ParseQuery.getQuery("SubObject");
innerQuery.whereContains("SearchField", searchString);
query.whereMatchesQuery("subObject", innerQuery);
{ "pile_set_name": "StackExchange" }
Q: How can I show an image as the background of a div?

Inside home.html:

<div id="footer">
    this is working!!!
</div>

In the css file:

#footer {
    height: 70px;
    background-image: url('footer.png');
    border: 1px solid #000;
}

I have checked the URL of the image again and again, but I cannot see the image as the background of the footer div. What is the reason? Please help me.

A: The syntax seems to be fine. Put the path relative to the css file, e.g. if your files are located as

WEB-INF
  | css
  |   | your.css
  | images
  |   | footer.png

then use

background-image: url('../images/footer.png');

You could use Firebug to look for the correct path: you can edit the url by putting ../../images, and once the image is visible, update your css file with the same data.
{ "pile_set_name": "StackExchange" }
Q: Interesting question on Graph Theory

In a village there are an equal number of boys and girls of marriageable age. Each boy dates a certain number of girls and each girl dates a certain number of boys. Under what condition is it possible that every boy and girl gets married to one of their dates? (Polygamy and polyandry are not allowed.)

A: What you actually have is a bipartite graph, and you are looking for a perfect matching. The theorem that provides a characterization of such conditions is Hall's marriage theorem:

There exists a perfect matching for the graph if and only if for any set of boys $B$, $|B| \leq |N(B)|$, where $N(B)$ denotes the set of girls that date at least one of the boys from $B$.
{ "pile_set_name": "StackExchange" }
Q: switch php variables between two files on different servers

Ok, so I have a.php on server #1 and b.php on server #2. Let's say that a.php contains:

<?php
$google = 'www.google.com';
?>

and b.php contains:

<?php
require('http://[server #1]/a.php');
echo $google;
?>

Basically that won't work. How do I make it work without editing php.ini? :D

A: Using JSON: read the global variables and export them as a JSON string; extract() is then used on the receiving side to restore the variables into the global scope.

config.php on server A:

// Define your Variables
$google = 'www.google.com';
$string = 'foobar';
$int = 1;
$bool = true;

// JSON Export
echo json_encode(array_diff_key(get_defined_vars(), array_flip(array('_GET','_POST','_COOKIE','_FILES','_ENV','_REQUEST','_SERVER'))));

application.php on server B:

// JSON Import
extract(json_decode(file_get_contents('http://www.domain.com/config.php'), true));

// Now test your Variables
var_dump($google); // Now the $google variable exists
var_dump($string);
var_dump($int);
var_dump($bool);

You can build your own security mechanism into this code to suit your requirements. I would much prefer you consume a specific list of variables on both the transmit and receive ends; this method of using get_defined_vars() and extract() is not ideal, but since you're targeting a specific URL that you're in control of, your risk is minimized.
{ "pile_set_name": "StackExchange" }
Q: Characterization of Brownian Motion (Problem Karatzas/Shreve)

In the book "Brownian Motion and Stochastic Calculus" by Karatzas/Shreve, they state the following problem (chapter 5, problem 4.4): a continuous, adapted process $W = \{W_t, \mathcal{F}_t; 0 \leq t < \infty\}$ is a Brownian motion if and only if
$$ f(W_t) - f(W_0) - \frac{1}{2} \int_0^t f''(W_s)\,\mathrm{d}s, \quad \mathcal{F}_t; \quad 0 \leq t < \infty $$
is a continuous local martingale for every $f \in C^2(\mathbb{R})$.

For the "only if" part, one applies Itô's formula and gets that
$$ f(W_t) - f(W_0) - \frac{1}{2} \int_0^t f''(W_s)\,\mathrm{d}s = \int_0^t f'(W_s)\,\mathrm{d}W_s, $$
which is a continuous local martingale. But I am struggling with the "if" part. Does anyone have a hint on how to prove that? Thank you very much!

Best, Luke

A: If you take $f(x) = x$, you see that $W_t$ is a continuous local martingale. Then, taking $f(x) = x^2$ gives us that $W_t^2 - W_0^2 - t$ is also a continuous local martingale, so that $W_t$ has quadratic variation at time $t$ equal to $t$. Now apply Lévy's characterisation of Brownian motion to conclude.
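A sketch of how the hint assembles into a proof (my wording, not part of the original answer):

% With f(x) = x the hypothesis says (f'' = 0)
M_t := W_t - W_0 \text{ is a continuous local martingale.}
% With f(x) = x^2 it says W_t^2 - W_0^2 - t is a continuous local martingale. Since
M_t^2 - t = \bigl(W_t^2 - W_0^2 - t\bigr) - 2 W_0 M_t,
% and W_0 is F_0-measurable, the right-hand side is a continuous local martingale,
% hence \langle M \rangle_t = t. Lévy's characterisation then gives that M is a
% Brownian motion, so W_t = W_0 + M_t is a Brownian motion started at W_0.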
{ "pile_set_name": "StackExchange" }
Q: USE ROLLUP to compute grand total while displaying month names and sorting by month number

This is adapted from https://www.databasejournal.com/features/mssql/using-the-rollup-cube-and-grouping-sets-operators.html

Here's what I have:

SELECT DATENAME(month, PurchaseDate) PurchaseMonth
     , CASE WHEN DATENAME(month, PurchaseDate) IS NULL THEN 'Grand Total'
            ELSE COALESCE(PurchaseType, 'Monthly Total') END AS PurchaseType
     , SUM(PurchaseAmt) AS SummorizedPurchaseAmt
FROM tPurchaseItem
GROUP BY ROLLUP(DATENAME(month, PurchaseDate), PurchaseType);

This works, but it doesn't sort in chronological order: the months come out in alphabetical order. I want the order to be January, February, etc.

A: DATENAME returns an nvarchar, so the ordering will be that of an nvarchar: 'April' < 'January'. One method would be to change your GROUP BY to DATEPART and derive the month's name from the number:

SELECT CHOOSE(DATEPART(MONTH, PurchaseDate), 'January','February','March','April','May','June','July','August','September','October','November','December') AS PurchaseMonth
     , CASE WHEN DATEPART(MONTH, PurchaseDate) IS NULL THEN 'Grand Total'
            ELSE COALESCE(PurchaseType, 'Monthly Total') END AS PurchaseType
     , SUM(PurchaseAmt) AS SummorizedPurchaseAmt
FROM tPurchaseItem
GROUP BY ROLLUP(DATEPART(MONTH, PurchaseDate), PurchaseType);
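One possible refinement (an untested sketch of my own, not from the original answer): the output order of a ROLLUP is not guaranteed without an ORDER BY, so it may be safer to sort explicitly on the grouped month number and push the grand-total row last:

SELECT CHOOSE(DATEPART(MONTH, PurchaseDate), 'January','February','March','April','May','June','July','August','September','October','November','December') AS PurchaseMonth
     , CASE WHEN DATEPART(MONTH, PurchaseDate) IS NULL THEN 'Grand Total'
            ELSE COALESCE(PurchaseType, 'Monthly Total') END AS PurchaseType
     , SUM(PurchaseAmt) AS SummorizedPurchaseAmt
FROM tPurchaseItem
GROUP BY ROLLUP(DATEPART(MONTH, PurchaseDate), PurchaseType)
ORDER BY GROUPING(DATEPART(MONTH, PurchaseDate)),  -- grand total row sorts last
         DATEPART(MONTH, PurchaseDate);            -- then chronological order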
{ "pile_set_name": "StackExchange" }
Q: How to install the latest version of VLC (2.1.2) on Ubuntu 12.04?

I need to play video in ultra-HD 4K definition (3840×2160 pixels) and my favorite player is VLC. In Ubuntu 12.04 LTS the latest available version of VLC is 2.0.8, which does not have support for this type of video, so I need to update to the latest available version of VLC (2.1.2). How can I install the latest stable version of VLC on Ubuntu 12.04?

A: There's now a third-party PPA that provides the most recent build of VLC for Ubuntu 12.04, 12.10, 13.04 and 13.10. Press Ctrl+Alt+T to open the terminal. When it opens, run the commands below one by one:

sudo add-apt-repository ppa:djcj/vlc-stable
sudo apt-get update
sudo apt-get install vlc

via: http://ubuntuhandbook.org/index.php/2014/02/install-vlc-2-1-2-ubuntu-12-04/

A: Check the official VLC PPA. For the current stable version of VLC that is ppa:videolan/stable-daily.

Add it to your system:

sudo add-apt-repository ppa:videolan/stable-daily

Update and upgrade / install VLC:

sudo apt-get update && sudo apt-get install vlc

To install the latest development version, add the daily-updated master PPA instead:

sudo add-apt-repository ppa:videolan/master-daily

Note: The VLC Stable PPA contains the same version as the Ubuntu 12.04 repositories (2.0.8), while the VLC Master PPA contains 2.2.0 (a development version).
{ "pile_set_name": "StackExchange" }
Q: Can I go for both Tichu types in one round?

The rules seem to imply that people cannot call two or more Tichus of the same kind. Doing so would also take the fun out of the game, as the first player to have a strong hand could simply call enough Tichus to reach the limit and win (or lose, if unlucky) the game. The Grand Tichu, however, is presented as an additional mechanism. Does this mean that I can go for both and score 300 bonus points in a single round?

A: No, you can only call one type of Tichu per hand.

Honestly, I spent an inordinate amount of time looking at different rules (here, here, here, and here). Ultimately, none of these explicitly state whether a player is allowed to call both types of Tichu, but all of them imply it. I finally found one source with this wording after Grand Tichu:

A single player may not call both a "Grand Tichu" and a standard "Tichu".

This seems to explicitly call out what the other rule sets imply: after calling a Grand Tichu, you cannot call a small Tichu, and vice versa (assuming you're calling a small Tichu before you see nine cards). The website these rules come from also has some strategy documents that appear to support its claim that two Tichus cannot be called by a single player.

On a separate note, part of the reason this might not be explicitly called out in many rule sets is that they are translations. I don't read German, but someone might be able to verify that the rules as originally written indicate that only one Tichu can be called, and that the translations failed to pick up on this.
{ "pile_set_name": "StackExchange" }
Q: Finding the minimum value based on another column in pandas

I have to find the channel type that has the minimum value of sales among all 3 channels (Hotel, Cafe or Restaurant) in dataframe df (snapshot attached). So the output would be the channel with the minimum sales. The code I am writing is below:

df1 = df.groupby(['Channel']).sum()
df1 = df1.sum(axis=1)
print(df1[['Channel']].idxmin())

But it raises an index error. Do I need to set Channel as the index first? Or is there any other approach?

A: Change your last line to:

print(df1.idxmin())

After df1 = df1.sum(axis=1), df1 is a Series indexed by Channel, so indexing it with [['Channel']] fails; calling idxmin() directly returns the channel label with the smallest total.
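A self-contained sketch with toy data (the column names other than 'Channel' are my assumptions, since the original snapshot is not shown):

import pandas as pd

# Toy stand-in for the original df.
df = pd.DataFrame({
    'Channel': ['Hotel', 'Cafe', 'Restaurant', 'Hotel', 'Cafe'],
    'Fresh':   [100, 50, 75, 20, 30],
    'Milk':    [10, 5, 8, 2, 3],
})

totals = df.groupby('Channel').sum().sum(axis=1)  # Series indexed by Channel
print(totals.idxmin())  # channel with the smallest total sales
print(totals.min())     # the minimum total itself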
{ "pile_set_name": "StackExchange" }
Q: What kind of redirection is >>&?

I'm looking at a cron file that has the following line:

35 0 * * * /bin/csh -c "/home/abc/.cshrc;/home/abc/appTools/bin/xxx.pl >>& /home/abc/appTools/log/xxx.cronlog"

Is this another form of redirecting STDOUT and STDERR, like 2>&1? Is there any difference between >>& and 2>&1? This command seems to work, unless xxx.cronlog does not already exist.

A: I don't even csh, but the manpage says that it's the same thing as &>> in bash and family: open for appending (the >>) and also redirect stderr instead of just stdout. From http://linux.die.net/man/1/csh:

The forms involving '&' route the diagnostic output into the specified file as well as the standard output. name is expanded in the same way as '<' input filenames are.

>> name
>>& name
>>! name
>>&! name

Like '>', but appends output to the end of name. If the shell variable noclobber is set, then it is an error for the file not to exist, unless one of the '!' forms is given.

That last sentence also explains your observation: with noclobber set, the command fails when xxx.cronlog does not already exist unless one of the '!' forms is used.
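For comparison, a small sketch of equivalent lines (the csh one is what the cron entry already does):

# csh/tcsh: append both stdout and stderr to the log
./xxx.pl >>& /home/abc/appTools/log/xxx.cronlog

# Bourne-family (sh/bash) equivalent of the same redirection
./xxx.pl >> /home/abc/appTools/log/xxx.cronlog 2>&1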
{ "pile_set_name": "StackExchange" }
Q: jQuery Offset returns negative value

I have a scenario like this: in my UI I have a textbox. If I enter a number in the textbox, I need to scroll down to the respective page number. In the DOM I have some divs with the respective ids. If the user enters page number 5, I check the offset of the 5th div in the DOM and get its top value; using scrollTop, the page is scrolled to the 5th div.

Here is the issue: after scrolling down to the 5th div, if I then enter page number 2, the offset top value is negative, so scrollTop jumps back to the top by default. Here is the fiddle. To reproduce this exactly, go to page number 7 and then go to page number 3 or 4.

A: Late answer, in case it helps someone: just use the DOM offsetTop property. That will be the correct position regardless of whatever scrolling has happened before.

$( something )[0].offsetTop;

A: http://jsfiddle.net/Ldfmvwr9/3/

Try it:

topValue = $('#container').scrollTop() + $("#" + parseInt(userPageNum)).offset().top - $("#" + parseInt(userPageNum)).height() / 2;
{ "pile_set_name": "StackExchange" }
Q: How to specify the height of a row?

I want to implement a table view that shows an expandable cell for a specific row, so I created a custom table cell that is expanded when its expandContent is set, as follows:

- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath
{
    static NSString *CellIdentifier = @"Cell";
    CustomCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier];
    NSString *shouldExpand = [self.datasource objectAtIndex:indexPath.row];
    if ([shouldExpand isEqualToString:@"expand"]) {
        [cell setExpandContent:@"Expand"];
    }
    else {
        [cell setTitle:@"a line"];
    }
    return cell;
}

However, in order to tell the table view the row height, I need to implement the following:

- (CGFloat)tableView:(UITableView *)tableView heightForRowAtIndexPath:(NSIndexPath *)indexPath
{
    CustomCell *cell = [self tableView:tableView cellForRowAtIndexPath:indexPath];
    return [cell cellHeight];
}

The problem is that heightForRowAtIndexPath: will call tableView:cellForRowAtIndexPath: 1000 times if my data source contains 1000 items, and that costs too much time. How do I fix this?

A: No, you should compute the size first and return the calculated height; don't call tableView:cellForRowAtIndexPath: from there, as it can result in recursion. Calculate the height directly instead. For example, say you are placing a string inside the table view cell; then you need to measure it:

NSString *string = @"hello world happy coding";
CGSize maxSize = CGSizeMake(280, MAXFLOAT); // set max height
CGSize cellSize = [string sizeWithFont:[UIFont systemFontOfSize:17]
                     constrainedToSize:maxSize
                         lineBreakMode:NSLineBreakByWordWrapping]; // returns the correct height for the text
return cellSize.height + 10; // finally, return your height
{ "pile_set_name": "StackExchange" }
Q: How do I create a nested list from a list of ages?

I am downloading tables of data in CSV format from an online model. The data includes a column for age. My program works fine when all the data in the column has the one age, but now I am downloading data for a large range of ages, so I might have 400 rows of data at age 1 billion years, then 350 at 1.1 billion years, etc. There are around 30,000 rows and 40 columns in my CSV file. I thought I would create nested lists controlled by the age, and then loop through each sub-list. I pick my data up as follows:

log_age = data_upload[:,2]
mass = data_upload[:,5]
log_L = data_upload[:,6]
log_Teff = data_upload[:,7]
log_g = data_upload[:,8]
mbolmag = data_upload[:,24]
Umag = data_upload[:,25]
Bmag = data_upload[:,26]

How would I go about creating nested lists from these individual lists? To generalise the problem: if I have a list as follows,

age = [1,1,1,1,1,1,1,1,1,1.1,1.1,1.1,1.1,1.1,1.1,1.1,1.2,1.2,1.2...]

how do I get it into the following format?

[[1,1,1,1,1,1,1,1,1],[1.1,1.1,1.1,1.1,1.1,1.1,1.1],[1.2,1.2,1.2...]]

I would need to do this for all the lists, using the structure of the age list. I am thinking a list comprehension might be the way to go? I have come across them but don't really know how to use them. There is a function called np.unique which will list the unique numbers in my original list, so I can start with:

unique_age = np.unique(age)
nested_age = [[] for _ in range(len(unique_age))]

I could then repeat this for all the nested lists that I want to create, but then I have to go through each list and convert it to a nested list. Could someone show me how to do this? Thanks

A: I'd generate the result like this:

from collections import Counter

age = [1,1,1,1,1,1,1,1,1,1.1,1.1,1.1,1.1,1.1,1.1,1.1,1.2,1.2,1.2]
c = Counter(age)
result = [[k]*v for k,v in c.items()]
print(result)
# Result would be:
# [[1, 1, 1, 1, 1, 1, 1, 1, 1], [1.1, 1.1, 1.1, 1.1, 1.1, 1.1, 1.1], [1.2, 1.2, 1.2]]

Line 3 groups the list according to its contents: each item of the Counter result looks like a dict entry whose key is an age and whose value is the frequency of that age. Line 4 iterates over the items of the Counter result, gets the keys (k) and values (v), and creates a list of v copies of each key with [k]*v.
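Since the question also asks to split the other columns (mass, log_L, ...) using the structure of the age list, here is an alternative sketch with itertools.groupby, which groups consecutive runs and keeps the original order (the toy mass values are my own):

from itertools import groupby

age  = [1, 1, 1, 1.1, 1.1, 1.2]
mass = [5, 6, 7, 8, 9, 10]

nested_age = [list(g) for _, g in groupby(age)]
# [[1, 1, 1], [1.1, 1.1], [1.2]]

# Split a parallel column with the same structure as the age list:
nested_mass = [[m for _, m in g]
               for _, g in groupby(zip(age, mass), key=lambda t: t[0])]
# [[5, 6, 7], [8, 9], [10]]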
{ "pile_set_name": "StackExchange" }
Q: Using a recursive function to output the Nth number from a Fibonacci pattern that has alternating negative numbers (0, 1, -1, 2, -3, 5 ...)

I'm trying to solve a recursive-function practice problem using Java, and it has me completely stumped: given a small integer n (0 <= n <= 40), you need to find the n-th number of the alternating Fibonacci sequence. The sequence starts with 0, 1, -1, 2, -3, 5, -8, 13, -21, ... So fib(0) = 0, fib(1) = 1 => fib(2) = -1.

I can implement the function to find the Nth Fibonacci number; however, the specific problem requirements are defeating me. Any time I try to introduce some sort of negative number, it ends up messing up the arithmetic instead of altering the final output. My mind keeps coming back to creating some sort of conditional that only triggers on the top-most frame, but I don't think that is something that can be implemented. Does anyone have an idea how to solve this? This is my base function, without any of the negative-number requirements:

public static long fib(long n) {
    if (n == 0) {
        return 0;
    } else if (n == 1) {
        return 1;
    } else if (n == 2) {
        return 1;
    } else {
        return fib(n-2) + fib(n-1);
    }
}

A: You can simply have another function deal with the negative requirement:

public static int AlternatingFibonacci(int n) {
    if (n > 0 && n % 2 == 0)
        return -fib(n); // if n is even and greater than 0
    else
        return fib(n);
}

If you need a single working function, this should do the job:

public static int fib(int n) {
    if (n < 2)
        return n;
    if (n % 2 == 0)
        return -1 * (fib(n - 1) - fib(n - 2));
    else
        return (-1 * fib(n - 1)) + fib(n - 2);
}

What this function does is: when n is even, return fib(n - 1) (which is odd, thus positive) minus fib(n - 2) (which is even, thus negative); the subtraction is a positive value, which you multiply by -1. When n is odd, return -1 * fib(n - 1) (which is even, thus negative) plus fib(n - 2) (which is odd, thus positive).

A: Maybe it's not too late to put this as an answer:

public static long fib(long n) {
    if (n <= 1) {
        return n;
    } else {
        return fib(n-2) - fib(n-1);
    }
}
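Both recursive versions recompute subproblems exponentially; for n up to 40 that is tolerable but slow. A linear-time iterative variant of the same recurrence (my sketch, not from the answers):

public static long fib(long n) {
    if (n == 0) return 0;
    long prev = 0, curr = 1; // fib(0) and fib(1) of the alternating sequence
    for (long i = 2; i <= n; i++) {
        long next = prev - curr; // same recurrence as above: fib(n) = fib(n-2) - fib(n-1)
        prev = curr;
        curr = next;
    }
    return curr;
}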
{ "pile_set_name": "StackExchange" }
Q: How do I read special characters (0x80 ... 0x9F) from the Windows console in C#?

I've finally solved the issue of writing special characters (0x80...0x9F) to the Windows console with the help of David: the output encoding has to be set to UTF-8, and the font used by the console should be something like Consolas. Now I'd like to be able to read back text which contains special characters found in the 0x80...0x9F range (using the Windows-1252 encoding), such as the EURO sign (€):

string text = System.Console.ReadLine();

returns null whenever I type one of the special characters. I tried naively setting the InputEncoding to UTF-8:

System.Console.InputEncoding = System.Text.Encoding.UTF8;

but this does not help.

A: You could set the input code page of your console application to read those special characters. There is a Win32 API called SetConsoleCP to set the input code page. In the following example I use the Windows-1252 code page:

[DllImport("kernel32.dll")]
private static extern bool SetConsoleCP(uint wCodePageID);

static void Main(string[] args)
{
    SetConsoleCP((uint)1252);
    Console.OutputEncoding = Encoding.UTF8;
    System.Console.Out.WriteLine("œil");
    string euro = Console.In.ReadLine();
    Console.Out.WriteLine(euro);
}

EDIT: As L.B. suggested, you could also use Console.InputEncoding = Encoding.GetEncoding(1252). Here is the complete example without interop (note that you could also use the Windows-1252 code page for the output encoding):

static void Main(string[] args)
{
    Console.InputEncoding = Encoding.GetEncoding(1252);
    Console.OutputEncoding = Encoding.GetEncoding(1252);
    System.Console.Out.WriteLine("œil");
    string euro = Console.In.ReadLine();
    Console.Out.WriteLine(euro);
}

END EDIT

Hope this helps.
{ "pile_set_name": "StackExchange" }
Q: How does one control the thrusters on a Hydra in GTA:SA on Android?

The UI provided for the Hydra seems to be the same as for the Dodo, Shamal, etc. (as far as movement is concerned). The thrusters seem to switch automatically between pointing backwards and downwards, perhaps based on speed, but it's not clear how. Is there a way to control them directly? This also makes movement on the ground hard, since the thrusters point downwards.

A: Unfortunately, on Android and iOS there is no way to control this. It may come in the future, as there has been a lot of criticism of the flight controls for mobile devices, but that is only speculation. What I have done for other games is connect a Bluetooth controller to my device, but that is not compatible with all games, and I can't confirm it would work with SA; I have managed it with Vice City, though.
{ "pile_set_name": "StackExchange" }
Q: Entity Framework schedule query

I know that NHibernate has a scheduler that will execute a function at certain intervals, and I want this same functionality with EF. I want to send an email to a user a week after they were initially notified. Is there any way to do this on the server, say check the db once a day and send out emails to everyone that meets the criteria?

A: I used the FluentScheduler package to run queries at specific intervals.
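A minimal sketch of that idea with FluentScheduler; the DbContext, entity properties and SendReminderEmail helper are placeholders of mine, and the exact fluent method names should be checked against the library version you install:

using System;
using System.Linq;
using FluentScheduler;

public class EmailRegistry : Registry
{
    public EmailRegistry()
    {
        // Once a day at 03:00: find users notified a week ago and email them.
        Schedule(() =>
        {
            using (var db = new MyDbContext()) // hypothetical EF context
            {
                var cutoff = DateTime.UtcNow.AddDays(-7);
                var users = db.Users
                              .Where(u => u.NotifiedOn <= cutoff && !u.Reminded)
                              .ToList();
                foreach (var user in users)
                {
                    SendReminderEmail(user); // hypothetical helper
                    user.Reminded = true;
                }
                db.SaveChanges();
            }
        }).ToRunEvery(1).Days().At(3, 0);
    }
}

// At application startup:
// JobManager.Initialize(new EmailRegistry());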
{ "pile_set_name": "StackExchange" }
Q: Why is there a limit on the max number of clauses in a bool query in Elasticsearch?

I want to know why there is a limit on the max clauses in a bool query, namely indices.query.bool.max_clause_count: 1024. Also, is it OK to fire an OR query with, say, 1 million terms in a bool query?

A: The max_clause_count setting isn't specific to Elasticsearch. It's a static Lucene setting, hence it can only be set up in the config file. I think this limit exists to safeguard your search: passing a gigantic query could easily DOS your server. By upping the limit you accept the consequences and the performance implications. The right limit also seems to be debated in the Lucene community itself when you look at their discussions; in one discussion they are even comfortable changing the allowed number to Integer.MAX_VALUE, but again, larger numbers can impact performance. These queries will probably be slower, but it also depends on the kind of data you have. Also profile for evictions in the filter caches. In our use case we query with 50,000 clauses on average and haven't seen much performance impact, but the nature of our clauses is very dense.
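For reference, raising the limit is done in elasticsearch.yml; as a static setting it requires a restart, and 4096 below is only an example value:

# elasticsearch.yml
indices.query.bool.max_clause_count: 4096

A side note of my own, not from the original answer: when the huge OR list is over a single field, a terms query is usually a better fit than thousands of bool should clauses, e.g.:

{ "query": { "terms": { "user_id": ["u1", "u2", "u3"] } } }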
{ "pile_set_name": "StackExchange" }
Q: Questions to help decide which SSL cert to get for three websites on Windows server

First of all, my first Stack Exchange question! I hope I'm at the right place. Secondly, I know TLS is the successor of SSL, but since everywhere I go it is just called SSL, I'm using that term here, unless I'm wrong on that...

Scenario

I need to decide which SSL certificates to get for three websites under a single parent company. Each website has its own identity, and one of them does not want to be known to be under the other (for marketing reasons).

Question 1

Based on my price research, getting multiple single-domain SSL certs is actually cheaper than getting one multi-domain SSL cert? An example is this: https://sslmate.com/pricing (I hope I'm allowed to post the link). If I get three single-domain certs, it's $15.95 times 3. If I get a multi-domain cert with three domains, it's $24.95 times 3.

Question 2

From what I understand, a multi-domain SSL cert allows different domains to be identified as being under the same organisation. If I have no need for this, is there any other advantage to just getting multiple single-domain certs?

Question 3

If the above assumption is correct, is the "multi-domain SSL allows different domains to be identified as under the same organisation" feature limited to EV or OV?

Question 4

While I assume there is no difference between using a Windows server and Linux, I can't find a Google search result that supports this. All I found is that installing SSL is done differently. Does having a Windows server affect the decision of which SSL cert to get?

Question 5

(Sorry, the more I type, the more questions pop up in my head...) Does having SSL affect SMTP/email in any positive way?

A: Some background before getting to the answers.

Certificate Authority

The choice of vendor for a company's certificates is an important one. The certificate provider is what's called a Certificate Authority. Purchasing a certificate creates an implicit trust or custodial relationship between the company and the Certificate Authority. It is similar to choosing a lawyer. If the company has assets and its brands have value, do not choose a CA solely on the basis of price. Research equivalent companies and brands in the relevant sectors and see what CAs they are using (this can be determined by clicking on the lock icon in the browser URL bar).

Terminology

TLS is the slightly more correct way to refer to the more recent improvements to the encryption protocol, but SSL is still colloquially accurate.

Answers:

1. Only one multi-domain certificate needs to be purchased to service three different domains, not three multi-domain certificates. The presence of three domains on the single certificate could be an indication, for those who care to look, that the three domains may be related. However, few people care to look, and any relationship is not obvious to ordinary consumers. If the distinction is branding rather than corporate (3 brands under one company, rather than 3 separate companies), the single multi-domain certificate does not by itself introduce a conflict.

2. Convenience and cost. Certificates are purchased and have to be deployed in web servers and/or load balancers, and the unit of cost and unit of deployment is the individual certificate. Purchasing and deploying one certificate that addresses the needs of every brand (for companies with many brands) is significantly cheaper and more convenient than purchasing and deploying individual certificates for each brand.

3. No, a multi-domain certificate does not need to be EV or OV.

4. No. While the installation process depends on the specific web server or load-balancing software, and therefore the operating system platform, the certificate itself, a file generated by the Certificate Authority, is the same.

5. Having SSL/TLS on a website has no impact on email service. Furthermore, it is possible to use a certificate on a mail server for outgoing and/or inbound mail, but it does not add appreciably to the security attributes of one's email, nor does it make a significant difference in one's email reputation. Email is a different protocol from web (store-and-forward, vs. end-to-end), and the guarantees offered by a sender's or recipient's certificates do not apply to the full path traversed by email messages, as they do with web requests and responses.
{ "pile_set_name": "StackExchange" }
Q: Make HTML button appear pressed with JavaScript?

How can I make an HTML button look pressed via JavaScript? I have a button that sends a network message while pressed and sends another network message when released. This specific named button may be displayed more than once on a page, or on more than one device. So if any of the identical-purpose buttons is pressed, I need to make the rest of the copies look pressed as well. How can I achieve that behaviour?

A: Personally I think it would be overkill to use JavaScript for this; it can be done with CSS. Your styling goes between the { }:

button:active {
    color: red;
    background-color: pink;
    padding: 8px;
}

Warning! This will style all the button elements, not just one. If you'd like only one, use an ID; if more than one, use a class. http://jsfiddle.net/cp4bob0z/

Edit 1 (JS only):

<button id="test" onclick="styleButton(this);">Hello!</button>

function styleButton(element) {
    document.getElementById(element.id).className = "yourClassName";
}
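To address the multi-button requirement in the question (keeping every copy of the same-purpose button looking pressed at once), one sketch is to toggle a shared class on press and release; the .pressed class and data-action attribute are my inventions:

<style>
  button.pressed { color: red; background-color: pink; }
</style>

<button class="net-btn" data-action="send">Send</button>
<button class="net-btn" data-action="send">Send (copy)</button>

<script>
  var buttons = document.querySelectorAll('.net-btn[data-action="send"]');

  // Add or remove the pressed look on every copy of the button.
  function setPressed(on) {
    for (var i = 0; i < buttons.length; i++) {
      buttons[i].classList.toggle('pressed', on);
    }
  }

  for (var i = 0; i < buttons.length; i++) {
    buttons[i].addEventListener('mousedown', function () { setPressed(true); });  // send the "pressed" network message here
    buttons[i].addEventListener('mouseup',   function () { setPressed(false); }); // send the "released" network message here
  }
</script>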
{ "pile_set_name": "StackExchange" }
Q: Can we get the data from five tables having a common id with one query?

I want to get the data from five tables that share a common id with one query. Can we do this? For example, the tables tbl_student, tbl_batch, tbl_section, tbl_level and tbl_faculty all have a common id, college_id. How can I get all the tables' values with one query? If anybody can help me, I would be grateful.

A: If I understand you correctly, that sounds like a join:

select *
from tbl_student st
join tbl_batch ba on ba.college_id = st.college_id
join tbl_section se on se.college_id = st.college_id
join tbl_level le on le.college_id = st.college_id
join tbl_faculty fa on fa.college_id = st.college_id

This is most probably not exactly the way you want to get the data, because the data model would not make much sense. Hopefully you get the idea, though.
{ "pile_set_name": "StackExchange" }
Q: How can I let a div fill 100% width if no other elements are beside it?

I have simple markup like this (a tab menu):

<div class="container">
    <div class="tab1"></div>
    <div class="tab2"></div>
    <div class="tab3"></div>
</div>

That is the case where all elements have an equal width of 33% to fill 100% of the container. Is it possible to apply a general CSS rule for all containers that automatically detects if, for example, there is only one other container, or none, and then adjusts the width of the tabs ("stretch to fit")? Perhaps something with min-width or max-width?

A: Depending on what browsers you need to support, you may be able to use flexbox:

$('.tab').click(function() {
    $(this).css('display', 'none');
});

.container {
    display: flex;
}
.tab {
    border: 1px solid black;
    padding: 5px;
    flex: 1;
    margin: 5px;
}

<p>Click a tab to remove it</p>
<div class="container">
    <div class="tab">Tab 1</div>
    <div class="tab">Tab 2</div>
    <div class="tab">Tab 3</div>
</div>
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>

A: You mean like flexbox?

.container {
    display: flex;
    height: 50px;
    margin-bottom: 1em;
}
[class*="tab"] {
    flex: 1;
    border: 1px solid red;
}

Or CSS tables:

.container {
    display: table;
    height: 50px;
    width: 100%;
}
[class*="tab"] {
    display: table-cell;
    border: 1px solid red;
}

(both snippets use the same HTML: three containers holding one, two and three tab divs respectively)

A: You can use display:table and display:table-cell; see the code below.

HTML:

<div class="container">
    <div class="tab">Tab 1</div>
    <div class="tab">Tab 2</div>
    <div class="tab">Tab 3</div>
</div>

CSS:

.container {
    display: table;
    width: 100%;
}
.tab {
    border: 1px solid black;
    padding: 5px;
    display: table-cell;
    margin: 5px;
}
{ "pile_set_name": "StackExchange" }
Q: How to get a unique user ID for HockeyApp?

In HockeyApp SDK v3.5, they have shifted to a new method of user identification. In previous versions of the SDK, there was a callback method

- (NSString*)userNameForCrashManager:(BITCrashManager *)crashManager

which would set a string identifying all crash reports sent from the client. However, in version 3.5 of the SDK it seems that this is deprecated, and it is preferred that you simply call:

[[BITHockeyManager sharedHockeyManager].authenticator authenticateInstallation];

This sets a unique ID for the user. But how can I access this identifier? I want to attach it to support emails so that I can search for crash reports the user has submitted.

A: You can use the following delegate method to set the userName:

- (NSString *)userNameForHockeyManager:(BITHockeyManager *)hockeyManager componentManager:(BITHockeyBaseManager *)componentManager

This is documented in the header and help of BITHockeyManagerDelegate, and the replacement is also mentioned in the header and help of the BITCrashManagerDelegate documentation. BITAuthenticator is only used for beta distribution, since Apple removed the UDID calls from iOS 7; see the documentation and help. It is automatically disabled in App Store builds and, without further setup, creates anonymous IDs! Please read the mentioned documentation.
{ "pile_set_name": "StackExchange" }
Q: Angular input model conditionally reads from one property, writes to another

I have a Person class where edits made to the person must be verified by an admin user. Each attribute has an "approved" and a "tmp" version. Sometimes the "tmp" version is not set:

person = {first: 'Bob', firstTmp: 'Robert', last: 'Dobbs', lastTmp: undefined}

When displaying the person, I want to display the "tmp" value if it is set, otherwise display the "approved" value. When writing, I want to write to the "tmp" value (unless logged in as an admin). Ideally, this would not require a lot of custom markup, nor writing cover methods for each property (there are around 100 of them). Something like this would be nice:

<input ng-model="person.first" tmp-model="person.firstTmp" bypass-tmp="session.user.isAdmin" />

When displaying the value, display the tmp value if it is defined; otherwise display the approved value. When writing the value, write to the tmp value unless logged in as an admin; admins write directly to the approved value. What's a good, clean way to implement this in Angular? Extend NgModelController somehow? Use a filter/directive on the input? Cover methods? Just do the writing server-side?

A: I will try to go through your options one by one.

Extend NgModelController somehow? I don't think this is a good idea. It won't be nice if something goes wrong and you don't know whether you can even rely on something as basic as ng-model.

Just do the writing server-side? This would seem the easier way (if you already know, or find it easy, to manage it in the back end), although the interaction would need a new request to the server.

Use a filter/directive on the input? I believe this would be the best way, as it is easy to understand what is going on by just taking a look at the markup. It's Angular: you already know that some property like tmp-model is extending the markup.

Cover methods? This would also be easy to implement, and you would be implementing some sort of "business logic" as a validator in your cover method.

Given that I've extended a bit in my answer, I can give you an inline example of the last one.

HTML:

<input ng-model="person.firstTmp"
       ng-init="person.firstTmp = person.firstTmp || person.first"
       ng-change="updateProperty(person, 'first')" />

And on the controller, you could do something like:

$scope.updateProperty = function(person, propertyName) {
    // The temporary property has already been changed, update the original one.
    if ($scope.session.user.isAdmin)
        person[propertyName] = person[propertyName + 'Tmp'];
};
{ "pile_set_name": "StackExchange" }
Q: How can I annex a puppet city in Civilization V? When I conquered a city state I chose to turn it into a puppet. The dialogue promised that I'd be able to annex it at any point. How do I do so? A: Just left click it =) A: As Jens says, if you click on the city name, a dialog box should pop up asking if you want to annex the city, or if you want to leave it as a puppet (and it should indicate the Happiness impact of such a decision). Is this not the case for you?
{ "pile_set_name": "StackExchange" }
Q: JavaScript issue with confirm message

I made a short JavaScript function:

function confirmMessage() {
    var messages = confirm("Are you sure");
    if (messages) {
        // go through
    } else {
        // stop, cancel the action
        return false;
    }
}

HTML:

<a href='?pagina=uitloggen' onclick="confirmMessage()">

When I click cancel, it returns false and jumps to the else statement, but it doesn't stop the action! What is the problem here? Thanks in advance.

A: Your <a> tag's onclick should return the result of the event handler for it to stop propagating:

<a href='?pagina=uitloggen' onclick="return confirmMessage()">
{ "pile_set_name": "StackExchange" }
Q: The right way to authenticate a user in Symfony 4

I'm struggling with Symfony authentication. I've read many manuals, but with no result. I want to understand how to render the login form using bootstrap_4_layout.html.twig so the errors are displayed properly, because when I try to log in it just shows an ugly "Invalid credentials." message. So, my UserLoginType is:

class UserLoginType extends AbstractType
{
    public function buildForm(FormBuilderInterface $builder, array $options)
    {
        $builder
            ->add('username', TextType::class, ['constraints' => array(new Length(array('min' => 3)))])
            ->add('password', PasswordType::class)
        ;
    }

    public function configureOptions(OptionsResolver $resolver)
    {
        $resolver->setDefaults(array(
            'data_class' => User::class,
        ));
    }
}

My security.yaml file is exactly like in the documentation (of course with the additional field names, since I'm using Symfony forms for the rendering). Here is my test login method:

public function login(Request $request, AuthenticationUtils $authenticationUtils)
{
    $error = $authenticationUtils->getLastAuthenticationError();
    $lastUsername = $authenticationUtils->getLastUsername();

    $form = $this->createForm(UserLoginType::class);
    $form->handleRequest($request);
    if ($form->isSubmitted() && $form->isValid()) {
        dump($form->getData());
        die();
    }

    return $this->render('Security/login.html.twig', [
        'form' => $form->createView(),
        'last_username' => $lastUsername,
        'error' => $error,
    ]);
}

And here is my login.html.twig:

{% extends 'layout.html.twig' %}
{% block title %}Login page{% endblock %}
{% block description %}This is login page{% endblock %}
{% form_theme form 'bootstrap_4_layout.html.twig' %}
{% block content %}
    <div class="container">
        {{ form_start(form) }}
        {{ form_widget(form) }}
        <input class="btn btn-primary" type="submit" value="Login" />
        {{ form_end(form) }}
    </div>
{% endblock %}

When I try to log in, I get the plain "Invalid credentials." message; how can I get it rendered as a properly styled form error instead?

A: You are getting the "Invalid credentials." message because Symfony handles the authentication itself; you don't need to process the form yourself. From the docs:

Don't let this controller confuse you. As you'll see in a moment, when the user submits the form, the security system automatically handles the form submission for you. If the user submits an invalid username or password, this controller reads the form submission error from the security system, so that it can be displayed back to the user.

And why would you give an error like "username should be 3 characters or longer" on a login form? I don't think these error messages are needed for a login form. Nevertheless, you might be able to validate the request yourself with:

$errors = $validator->validate($user);

and then pass the errors to your view, or set the errors on the form.
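Since the security system (not your form) reports the failure, one sketch is to render the error variable your controller already passes, styled as a Bootstrap alert; the alert markup is my own, while the messageKey|trans pattern is the one used in Symfony's documentation:

{% block content %}
    <div class="container">
        {% if error %}
            <div class="alert alert-danger">
                {{ error.messageKey|trans(error.messageData, 'security') }}
            </div>
        {% endif %}
        {{ form_start(form) }}
        {{ form_widget(form) }}
        <input class="btn btn-primary" type="submit" value="Login" />
        {{ form_end(form) }}
    </div>
{% endblock %}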
{ "pile_set_name": "StackExchange" }
Q: What is the best way to create a query with a big set of data?

I have about 360 lines of data which I need to turn into 360 queries for SQL. Is there any quick way of putting all the data into one query instead of having to write them all separately? For example, here are just 3 lines of the data I have:

1,81,32
1,101,82
1,60,65

I have to turn this data into queries, so these 3 would look like:

INSERT INTO `Fixtures` (weeks, HomeID, AwayID) Values (1,81,32);
INSERT INTO `Fixtures` (weeks, HomeID, AwayID) Values (1,101,82);
INSERT INTO `Fixtures` (weeks, HomeID, AwayID) Values (2,60,65);

But I have 360 lines of this, so it will be hard to turn them into queries one by one. Is there any quicker way of doing this, like just one query to insert them all?

A: You can usually use INSERT...SELECT syntax. This syntax varies by db, but is often something like:

INSERT INTO `Fixtures` (weeks, HomeID, AwayID)
select 1,81,32
union all
select 3,31,22

Another variation is:

INSERT INTO Fixtures (`weeks`, `HomeID`, `AwayID`) VALUES
(1, 81, 32),
(3, 31, 22);
{ "pile_set_name": "StackExchange" }
Q: How to get the textual representation of GetLastError as a QString?

GetLastError() can somehow be passed to FormatMessageW to get a formatted message. The objective is to end up with a QString. What is the correct and safe way of doing it?

A: This does the trick:

QString getLastErrorMsg()
{
    LPWSTR bufPtr = NULL;
    DWORD err = GetLastError();
    FormatMessageW(FORMAT_MESSAGE_ALLOCATE_BUFFER |
                   FORMAT_MESSAGE_FROM_SYSTEM |
                   FORMAT_MESSAGE_IGNORE_INSERTS,
                   NULL, err, 0, (LPWSTR)&bufPtr, 0, NULL);
    const QString result =
        (bufPtr) ? QString::fromUtf16((const ushort*)bufPtr).trimmed()
                 : QString("Unknown Error %1").arg(err);
    LocalFree(bufPtr);
    return result;
}

One should always specify FORMAT_MESSAGE_IGNORE_INSERTS when calling FormatMessage. Some error messages contain placeholders, which will lead to bugs unless your code passes an argument list; since we're passing NULL, this would be a bug waiting to happen.
{ "pile_set_name": "StackExchange" }
Q: How to return dates without records in SQL Server 2008

I have a SQL query that sums values per day, returning only the days that have values. However, I would also like to show the days that had no records.

Query:

SELECT DATA, SUM(VALOR)
FROM TABELA1
WHERE CONTA = '176087'
GROUP BY DATA

Output:

02/10/2015 36312
05/10/2015 25382
06/10/2015 3655

A: One solution is to create an auxiliary structure with the date range for which you want to generate results. If you choose this approach, you have several alternatives (the list is not exhaustive):

Create a physical table in the database containing, for example, all dates since the year 2000;
Create a temporary table with a date range;
Use a CTE.

Here is a solution using a CTE:

create table tabela1 (
    conta nvarchar(05),
    [data] datetime,
    valor float
);

insert into tabela1(conta, [data], valor) values
('12345', '2016-05-25', 1),
('12345', '2016-05-26', 3),
('12345', '2016-05-26', 1),
('12345', '2016-05-28', 2),
('12345', '2016-05-28', 3);

with Datas as (
    select cast('2016-05-25' as date) [data]
    union all
    select dateadd(dd, 1, t.[data])
    from Datas t
    where dateadd(dd, 1, t.[data]) <= '2016-05-31'
)
-- generates a disposable view with the date range from 25 to 31 May
select d.[data], isnull(sum(valor), 0) sum_valor
from Datas d
left join tabela1 t on t.[data] = d.[data] and t.conta = '12345'
group by d.[data]
order by d.[data]

The statement above will generate the following output:

data        sum_valor
2016-05-25  1
2016-05-26  4
2016-05-27  0
2016-05-28  5
2016-05-29  0
2016-05-30  0
2016-05-31  0

Here is the SQLFiddle.
{ "pile_set_name": "StackExchange" }
Q: How to display the frequency at the top of each factor in a barplot in R

Possible Duplicate: add text to horizontal barplot in R, y-axis at different scale? Annotate values above bars (ggplot faceted)

Using the code below, I'm hoping to display a number above each column that corresponds to the y-value for that column. In other words, I'm trying to get "QnWeight_initial" to display 593 at the top of the gray bar, and so on...

My data:

data <- structure(list(
  V1 = structure(c(2L, 1L), .Label = c("593", "QnWeight_initial"), class = "factor"),
  V2 = structure(c(2L, 1L), .Label = c("566", "Head"), class = "factor"),
  V3 = structure(c(2L, 1L), .Label = c("535", "V1"), class = "factor"),
  V4 = structure(c(2L, 1L), .Label = c("535", "V2"), class = "factor"),
  V5 = structure(c(2L, 1L), .Label = c("535", "V3"), class = "factor"),
  V6 = structure(c(2L, 1L), .Label = c("482", "Left_Leg"), class = "factor"),
  V7 = structure(c(2L, 1L), .Label = c("474", "Left_Antenna"), class = "factor"),
  V8 = structure(c(2L, 1L), .Label = c("237", "Qn_Weight_Loss"), class = "factor"),
  V9 = structure(c(2L, 1L), .Label = c("230", "Days_wrkr_eclosion"), class = "factor"),
  V10 = structure(c(2L, 1L), .Label = c("81", "Growth_all"), class = "factor"),
  V11 = structure(c(2L, 1L), .Label = c("79", "Growth_1_2"), class = "factor"),
  V12 = structure(c(2L, 1L), .Label = c("62", "Growth_1_3"), class = "factor"),
  V13 = structure(c(2L, 1L), .Label = c("60", "Growth_2_3"), class = "factor"),
  V14 = structure(c(2L, 1L), .Label = c("51", "Right_Antenna"), class = "factor"),
  V15 = structure(c(2L, 1L), .Label = c("49", "Left_Leg_Remeasure"), class = "factor"),
  V16 = structure(c(2L, 1L), .Label = c("49", "Right_Leg"), class = "factor"),
  V17 = structure(c(2L, 1L), .Label = c("47", "Head_Remeasure"), class = "factor"),
  V18 = structure(c(2L, 1L), .Label = c("46", "Left_Antenna_Remeasure"), class = "factor")),
  .Names = c("V1", "V2", "V3", "V4", "V5", "V6", "V7", "V8", "V9", "V10",
             "V11", "V12", "V13", "V14", "V15", "V16", "V17", "V18"),
  class = "data.frame", row.names = c(NA, -2L))

dat <- data.frame(fac = unlist(data[1,, drop=FALSE]), freqs = unlist(data[2,, drop=FALSE]))

The plot:

barplot(as.numeric(as.character(dat$freqs)),
        main = "Sample Sizes of Various Fitness Traits",
        xaxt = 'n', xlab = '', width = 0.85, ylab = "Frequency")
par(mar = c(5,8,4,2))
labels <- unlist(data[1,, drop=FALSE])
text(1:18, par("usr")[3] - 0.25, srt = 90, adj = 1, labels = labels, xpd = TRUE, cex = 0.6)

A: You are having problems because dat$freqs is a factor, even though its printed representation 'looks like' it's numeric. (It's almost always helpful to type str(foo), here str(dat) or str(dat$freqs), to have a look at the real structure of the data you're working with.) In any case, once you've converted dat$freqs to class "numeric", constructing the plot becomes straightforward:

## Make the frequencies numbers (rather than factors)
dat$freqs <- as.numeric(as.character(dat$freqs))

## Find a range of y's that'll leave sufficient space above the tallest bar
ylim <- c(0, 1.1*max(dat$freqs))

## Plot, and store x-coordinates of bars in xx
xx <- barplot(dat$freqs, xaxt = 'n', xlab = '', width = 0.85, ylim = ylim,
              main = "Sample Sizes of Various Fitness Traits", ylab = "Frequency")

## Add text at top of bars
text(x = xx, y = dat$freqs, label = dat$freqs, pos = 3, cex = 0.8, col = "red")

## Add x-axis labels
axis(1, at = xx, labels = dat$fac, tick = FALSE, las = 2, line = -0.5, cex.axis = 0.5)
{ "pile_set_name": "StackExchange" }
Q: How to define knex migrations using TypeScript

I would like to define a knex migration using TypeScript (v1.5), but I do not manage to define the up/down callbacks in a type-safe way. This is what I thought might work (but obviously doesn't):

/// <reference path="../../typings/knex/knex.d.ts" />
/// <reference path="../../typings/bluebird/bluebird.d.ts" />

exports.up = function(k : knex.Knex, Promise : Promise<any>) {
};

exports.down = function(k: knex.Knex, Promise : Promise<any>) {
};

The tsc output is: error TS2304: Cannot find name 'knex'. I tried a few variations, including adding import knex = require("knex");, but with no luck. For your reference, the knex.d.ts file declares a module like this:

declare module "knex" {
    // ...
    interface Knex extends QueryInterface {
    }
    interface Knex {
        // ...
    }
}

Any idea? Or am I trying something impossible here?

A: The issue is caused by a knex.d.ts file that does not export the Knex interface itself; it only exports a function to create the Knex instance.

Solution: change the knex.d.ts file to export Knex itself. For this, replace the export = _ statement at the end of the file with this code block:

function Knex( config : Config ) : Knex;
export = Knex;

The working migration looks like this:

/// <reference path="../../typings/knex/knex.d.ts" />
/// <reference path="../../typings/bluebird/bluebird.d.ts" />

import Knex = require("knex");

exports.up = function(knex : Knex, Promise : Promise<any>) {
};

exports.down = function(knex: Knex, Promise : Promise<any>) {
};

Works great this way. I will create a pull request to get the updated knex.d.ts into the DefinitelyTyped repository.
{ "pile_set_name": "StackExchange" }
Q: Error while opening shared object: Sun Grid Engine

My application uses the Sun N1 Grid Engine through the DRMAA API, which is available as the shared object libdrmaa.so. I have been using dlopen and dlsym to access functions of the library, and that works fine. Now, if I instead link against it from the command line, the executable is built, but executing it gives the error "Cannot open shared object file". Can anyone suggest what the reason may be? I am using g++ 2.95.3 for compilation and the machine is Linux x86_64. Thanks

A: Your question and answer are both very confused: if you can link your executable directly against libdrmaa.so, then there is absolutely no good reason to also dlopen that same library (and presumably call dlsym() on its handle as well).
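For what it's worth, that runtime error usually means the dynamic linker cannot locate libdrmaa.so when the program starts (dlopen presumably worked because you passed a full path). A sketch of the usual fixes; the SGE install path below is an assumption:

# Either tell the runtime linker where to look...
export LD_LIBRARY_PATH=/opt/sge/lib/lx24-amd64:$LD_LIBRARY_PATH
./myapp

# ...or bake the search path into the binary at link time:
g++ myapp.o -L/opt/sge/lib/lx24-amd64 -ldrmaa -Wl,-rpath,/opt/sge/lib/lx24-amd64 -o myapp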
{ "pile_set_name": "StackExchange" }
Q: Excel copy data from different workbooks into 1 workbook

I am trying to paste data from different workbooks into one master workbook. So far the copy and pasting of data is working, but when I paste the data into the master workbook, rows are being skipped after each workbook is pasted. The picture below shows the problem: rows 2-3 and 6-12 are being skipped. Below is my code:

Sub Macro1()
'
' Macro1 Macro
'
    Dim wb1 As Workbook
    Set wb1 = ThisWorkbook
    Path = "C:\Users\Tester\Documents\test\"
    Filename = Dir(Path & "*.xls")
    Do While Filename <> ""
        Workbooks.Open Filename:=Path & Filename, ReadOnly:=True
        For Each Sheet In ActiveWorkbook.Sheets
            Sheet.Rows("2:" & Range("A1").End(xlDown).Row).copy _
                wb1.Sheets(1).Range("A" & Range("A1").End(xlDown).Row + 1)
            Application.CutCopyMode = False
        Next Sheet
        Workbooks(Filename).Close
        Filename = Dir()
    Loop
End Sub

I think the problem has something to do with the line wb1.Sheets(1).Range("A" & Range("A1").End(xlDown).Row + 1), but I am not sure how to fix it. Any suggestions? Thank you!

A: You only define the workbook for the range you want to copy to, but inside it the expression Range("A1").End(xlDown).Row does not specify a workbook, so Excel evaluates it against the active file. Try changing your destination to:

wb1.Sheets(1).Range("A" & wb1.Sheets(1).Cells(Rows.Count, 1).End(xlUp).Row + 1)

Your code will then look like:

Sheet.Rows("2:" & Range("A1").End(xlDown).Row).copy _
    wb1.Sheets(1).Range("A" & wb1.Sheets(1).Cells(Rows.Count, 1).End(xlUp).Row + 1)
{ "pile_set_name": "StackExchange" }
Q: How to remove browser history in CodeIgniter?

Hi, I am working on a project (PHP/CodeIgniter, MySQL). In my application I create a session with the user's data (userid, username) after a successful login, and unset the session after signout. Here is the main problem: if I click the browser's back button, I can still see the previously visited pages of my application after signout. How can I prevent this with CodeIgniter? Thanks in advance

A: There are some JavaScript ways of stopping the use of the back button (as Raheel mentioned), but reliable control over this sort of behaviour can be tricky. If a user was on a page and could see it, there is nothing to stop them taking a screenshot, for example. You should also make sure you are sending the correct headers to stop the page being cached:

$this->output->set_header("Last-Modified: " . gmdate("D, d M Y H:i:s") . " GMT");
$this->output->set_header("Cache-Control: no-store, no-cache, must-revalidate");
$this->output->set_header("Cache-Control: post-check=0, pre-check=0", false);
$this->output->set_header("Pragma: no-cache");

Source: http://codeigniter.com/forums/viewthread/137096/#675799

Even with this, I'm not sure if all browsers will obey the headers, but it's a good start.

A: Just add these lines in .htaccess after RewriteEngine on:

<Files *>
    Header set Cache-Control: "private, pre-check=0, post-check=0, max-age=0"
    Header set Expires: 0
    Header set Pragma: no-cache
</Files>

I prefer this because we only need to add it in one place (.htaccess) and it affects the entire application.
{ "pile_set_name": "StackExchange" }
Q: In React, the modal close button is not working

I am using a Bootstrap-style modal in a React project. Here is the link to the package I have installed: https://www.npmjs.com/package/react-responsive-modal

When I click on the open-modal button it works, but when I click on the close button, nothing happens. I am using hooks in my project. Below is my code:

import React, { useState } from 'react'
import Modal from 'react-responsive-modal'

const Login = () => {
    const [open, openModal] = useState(false)

    const onOpenModal = () => {
        openModal({open: true})
    };

    const onCloseModal = () => {
        openModal({open: false})
    };

    return(
        <div>
            <h1>Login Form</h1>
            <button onClick={onOpenModal}>Open modal</button>
            <Modal open={open} onClose={onCloseModal} center>
                <h2>Simple centered modal</h2>
            </Modal>
        </div>
    )
}

export default Login;

A: The issue is that you are setting an object in state:

openModal({open: true})

This stores an object in state, but the state setter expects the direct value that needs to change. Your handlers should be:

const onOpenModal = () => {
    openModal(!open) // This negates the previous state
};

const onCloseModal = () => {
    openModal(!open) // This negates the previous state
};

Demo

You can simplify your code and use just one change handler for your modal:

const Login = () => {
    const [open, openModal] = useState(false)

    const toggleModal = () => {
        openModal(!open)
    };

    return(
        <div>
            <h1>Login Form</h1>
            <button onClick={toggleModal}>Open modal</button>
            <Modal open={open} onClose={toggleModal} center>
                <h2>Simple centered modal</h2>
            </Modal>
        </div>
    )
}

Demo
{ "pile_set_name": "StackExchange" }
Q: Spring boot test unable to autowire service class I am attempting to create a Spring Boot test class which should create the Spring context and autowire the service class for me to test. This is the error I am getting: Caused by: org.springframework.beans.factory.NoSuchBeanDefinitionException: No qualifying bean of type 'com.gobsmack.gobs.base.service.FileImportService' available: expected at least 1 bean which qualifies as autowire candidate. Dependency annotations: {@org.springframework.beans.factory.annotation.Autowired(required=true)} The file structure: The Test class: package com.example.gobs.base.service; import com.example.gobs.base.entity.FileImportEntity; import com.example.gobs.base.enums.FileImportType; import lombok.val; import org.junit.After; import org.junit.Before; import org.junit.Test; import org.junit.runner.RunWith; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.boot.test.autoconfigure.orm.jpa.DataJpaTest; import org.springframework.test.context.junit4.SpringRunner; import java.util.Date; import static org.assertj.core.api.AssertionsForClassTypes.assertThat; @DataJpaTest @RunWith(SpringRunner.class) public class FileImportServiceTest { @Autowired private FileImportService fileImportService; private FileImportEntity entity; The Main application class: package com.example.gobs.base; import org.springframework.boot.SpringApplication; import org.springframework.boot.autoconfigure.SpringBootApplication; /** * Used only for testing. */ @SpringBootApplication public class Main { public static void main(String[] args) { SpringApplication.run(Main.class, args); } } FileImportService interface: package com.example.gobs.base.service; import com.example.gobs.base.entity.FileImportEntity; import com.example.gobs.base.enums.FileImportType; import java.util.List; public interface FileImportService { /** * List all {@link FileImportEntity}s. Which is implemented by: package com.example.gobs.base.service.impl; import com.example.gobs.base.entity.FileImportEntity; import com.example.gobs.base.enums.FileImportType; import com.example.gobs.base.repository.FileImportRepository; import com.example.gobs.base.service.FileImportService; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.stereotype.Service; import org.springframework.transaction.annotation.Transactional; import java.util.List; @Service @Transactional public class FileImportServiceImpl implements FileImportService { @Autowired private FileImportRepository repository; @Override public List<FileImportEntity> listAllFileImportsByType(FileImportType type) { return repository.findAllByType(type.name()); } Why can it not find the implementation? A: The @DataJpaTest annotation doesn't load services into the application context. From the Spring documentation: https://docs.spring.io/spring-boot/docs/current/reference/htmlsingle/#boot-features-testing-spring-boot-applications-testing-autoconfigured-jpa-test You can use the @DataJpaTest annotation to test JPA applications. By default, it scans for @Entity classes and configures Spring Data JPA repositories. If an embedded database is available on the classpath, it configures one as well. Regular @Component beans are not loaded into the ApplicationContext. You could use the @SpringBootTest annotation instead of DataJpaTest. Hope that helps!
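A minimal sketch of the corrected test using @SpringBootTest (the enum constant below is a placeholder, since the real values of FileImportType are not shown in the question):

package com.example.gobs.base.service;

import java.util.List;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.junit4.SpringRunner;

import com.example.gobs.base.entity.FileImportEntity;
import com.example.gobs.base.enums.FileImportType;

import static org.assertj.core.api.AssertionsForClassTypes.assertThat;

@SpringBootTest // boots the full application context, so @Service beans are available
@RunWith(SpringRunner.class)
public class FileImportServiceTest {

    @Autowired
    private FileImportService fileImportService;

    @Test
    public void listsFileImportsByType() {
        // SOME_TYPE is a hypothetical enum value; substitute one of your own
        List<FileImportEntity> result =
                fileImportService.listAllFileImportsByType(FileImportType.SOME_TYPE);
        assertThat(result).isNotNull();
    }
}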
{ "pile_set_name": "StackExchange" }
Q: Hibernate "command denied" error when trying to execute commands against a remote MySQL database Hello. Whenever I try to execute a command (any command at all) against a remote database through Hibernate, I see this error: июн 27, 2016 7:44:51 PM org.hibernate.engine.jdbc.spi.SqlExceptionHelper logExceptions WARN: SQL Error: 1142, SQLState: 42000 июн 27, 2016 7:44:51 PM org.hibernate.engine.jdbc.spi.SqlExceptionHelper logExceptions ERROR: INSERT command denied to user 'a0080939_zzoorm'@'178.215.96.5' for table 'chats' Exception in thread "main" org.hibernate.exception.SQLGrammarException: INSERT command denied to user 'a0080939_zzoorm'@'178.215.96.5' for table 'chats' at org.hibernate.exception.internal.SQLExceptionTypeDelegate.convert(SQLExceptionTypeDelegate.java:82) at org.hibernate.exception.internal.StandardSQLExceptionConverter.convert(StandardSQLExceptionConverter.java:49) at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:125) at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:110) at org.hibernate.engine.jdbc.internal.proxy.AbstractStatementProxyHandler.continueInvocation(AbstractStatementProxyHandler.java:129) at org.hibernate.engine.jdbc.internal.proxy.AbstractProxyHandler.invoke(AbstractProxyHandler.java:81) at com.sun.proxy.$Proxy9.executeUpdate(Unknown Source) at org.hibernate.engine.jdbc.batch.internal.NonBatchingBatch.addToBatch(NonBatchingBatch.java:56) at org.hibernate.persister.entity.AbstractEntityPersister.insert(AbstractEntityPersister.java:3028) at org.hibernate.persister.entity.AbstractEntityPersister.insert(AbstractEntityPersister.java:3469) at org.hibernate.action.internal.EntityInsertAction.execute(EntityInsertAction.java:88) at org.hibernate.engine.spi.ActionQueue.execute(ActionQueue.java:362) at org.hibernate.engine.spi.ActionQueue.executeActions(ActionQueue.java:354) at org.hibernate.engine.spi.ActionQueue.executeActions(ActionQueue.java:275) at org.hibernate.event.internal.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:326) at org.hibernate.event.internal.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:52) at org.hibernate.internal.SessionImpl.flush(SessionImpl.java:1213) at org.hibernate.internal.SessionImpl.managedFlush(SessionImpl.java:402) at org.hibernate.engine.transaction.internal.jdbc.JdbcTransaction.beforeTransactionCommit(JdbcTransaction.java:101) at org.hibernate.engine.transaction.spi.AbstractTransactionImpl.commit(AbstractTransactionImpl.java:175) at ru.syndicategames.mud.Main.main(Main.java:30) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.intellij.rt.execution.application.AppMain.main(AppMain.java:144) Caused by: com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: INSERT command denied to user 'a0080939_zzoorm'@'178.215.96.5' for table 'chats' at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at com.mysql.jdbc.Util.handleNewInstance(Util.java:411) at com.mysql.jdbc.Util.getInstance(Util.java:386) at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1054) at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:4120) at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:4052) at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2503) at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2664) at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2815) at com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:2155) at com.mysql.jdbc.PreparedStatement.executeUpdate(PreparedStatement.java:2458) at com.mysql.jdbc.PreparedStatement.executeUpdate(PreparedStatement.java:2375) at com.mysql.jdbc.PreparedStatement.executeUpdate(PreparedStatement.java:2359) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.hibernate.engine.jdbc.internal.proxy.AbstractStatementProxyHandler.continueInvocation(AbstractStatementProxyHandler.java:122) ... 21 more The problem is that I can execute these same commands without any trouble from MySQL Workbench, for example. There are no restrictions on this user on the remote database. With local databases everything works perfectly. Two examples for comparison, showing how I fill in the connection details. In the hibernate.cfg.xml file (this does not work): <?xml version='1.0' encoding='utf-8'?> <!DOCTYPE hibernate-configuration PUBLIC "-//Hibernate/Hibernate Configuration DTD//EN" "http://www.hibernate.org/dtd/hibernate-configuration-3.0.dtd"> <hibernate-configuration> <session-factory> <property name="connection.url">jdbc:mysql://myhost.ru:3306/a0080939_mud</property> <property name="connection.driver_class">com.mysql.jdbc.Driver</property> <property name="connection.username">a0080939_zzoorm</property> <property name="connection.password">12345</property> <property name="hibernate.connection.useUnicode">true</property> <property name="hibernate.connection.characterEncoding">UTF-8</property> <property name="hibernate.connection.charSet">UTF-8</property> <property name="hibernate.dialect">org.hibernate.dialect.MySQLDialect</property> <property name="hibernate.show_sql">true</property> <property name="hibernate.current_session_context_class">thread</property> <mapping class="ru.syndicategames.mud.dao.Chat"/> </session-factory> In MySQL Workbench (this works) A: The log states clearly who is being denied access: a0080939_zzoorm'@'178.215.96.5. In MySQL you can create more than one user with the same name but with different host addresses. Accordingly, each of those users can have different access rights. When connecting to the database, usually only the user name is specified and the address is substituted implicitly, which can lead to confusion. Connect to the database as root and run the command: select user, host from mysql.user; Most likely you will see several users with the same name a0080939_zzoorm, but with different host values. If you log in to the mysql command line while on the server itself, the address localhost is used. If your software connects to MySQL from a different server, that server's address is used. You can allow the user a0080939_zzoorm to connect from any address and grant full rights with the following command: GRANT ALL PRIVILEGES ON *.* TO a0080939_zzoorm@"%" IDENTIFIED BY '12345' WITH GRANT OPTION; Or grant rights only for INSERT on all tables, and only for the user a0080939_zzoorm with the address 178.215.96.5: GRANT INSERT ON *.* TO a0080939_zzoorm@"178.215.96.5" IDENTIFIED BY '12345' WITH GRANT OPTION; Granting full rights and allowing connections from any address noticeably reduces the security of your database, so be careful.
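To confirm the diagnosis before changing anything, a couple of read-only checks run as root can help (SHOW GRANTS raises an error if the exact user@host pair does not exist, which is itself informative; the localhost entry here is an assumption):

SHOW GRANTS FOR 'a0080939_zzoorm'@'localhost';
SHOW GRANTS FOR 'a0080939_zzoorm'@'178.215.96.5';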
{ "pile_set_name": "StackExchange" }
Q: Way to save associations in mongoose without doing a save dance? Say I have this schema. var Topic = new Schema({ owner: { type: Schema.Types.ObjectId, ref: 'User' }, category: { type: Schema.Types.ObjectId, ref: 'Category' }, title: String, date: Date, lastPost: Date, likes: Number, posts: [{ type: Schema.Types.ObjectId, ref: 'Post' }] }); var Post = new Schema({ topic: { type: Schema.Types.ObjectId, ref: 'Topic' }, body: String, date: Date, owner: { type: Schema.Types.ObjectId, ref: 'User' } }); If I want to save the Topic then add the Topic to the topic association on the Post and then push to the posts array on the Topic object, I have to do this weird dance. exports.create = function (req, res) { var data = req.body.data; var topic = new Topic(); topic.title = data.topic.title; topic.date = new Date(); topic.lastPost = new Date(); topic.save(function (err, topicNew) { if (err) res.send(err, 500); var post = new Post(); post.body = data.post.body; post.topic = topicNew; topicNew.posts.push(post); topic.save(function (err, t) { if (err) res.send(err, 500); post.save(function (err, p) { if (err) res.send(err, 500); return res.json(t); }); }); }); }; I'm not seeing anything in the documentation that would help me around this. Thanks for any help. A: Instantiate both the topic and the post initially. Push the post into the topic before the first topic save. Then save the topic and if that succeeds save the post. MongoDB object IDs are created by the driver right when you do new Post() so you can save that in the topic.posts array before it's saved. That will make your 3-step dance a 2-step dance, but in the grand scheme of things this seems essentially par for the course so I wouldn't set my expectations much lower than this. Very few useful real world routes can be implemented with a single DB command. You can also use middleware functions as a way to get sequential async operations without nesting. You can use the req object to store intermediate results and pass them from one middleware to the next.
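Putting that first suggestion into the original route, a compact sketch (same schemas as above; Mongoose assigns _id at construction time, so the cross-references can be wired up before anything is saved):

exports.create = function (req, res) {
  var data = req.body.data;

  var topic = new Topic({ title: data.topic.title, date: new Date(), lastPost: new Date() });
  var post = new Post({ body: data.post.body, date: new Date(), topic: topic._id });

  topic.posts.push(post._id); // safe: _id already exists before save

  topic.save(function (err) {
    if (err) return res.send(err, 500);
    post.save(function (err) {
      if (err) return res.send(err, 500);
      return res.json(topic);
    });
  });
};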
{ "pile_set_name": "StackExchange" }
Q: Is there a way to get the PREMATCH ($`) and POSTMATCH ($') from pcrecpp? Is there a way to obtain the C++ equivalent of Perl's PREMATCH ($`) and POSTMATCH ($') from pcrecpp? I would be happy with a string, a char *, or index pairs (start position and length) that point at them. StringPiece seems like it might accomplish part of this, but I'm not certain how to get it. In Perl: $_ = "Hello world"; if (/lo\s/) { $pre = $`; #should be "Hel" $post = $'; #should be "world" } In C++ I would have something like: string mystr = "Hello world"; //do I need to map this in a StringPiece? if (pcrecpp::RE("lo\\s").PartialMatch(mystr)) { //should I use Consume or FindAndConsume? //What should I do here to get pre+post matches??? } The plain-C pcre API seems to have the ability to return the vector with the matches including the "end" portion of the string, so I could theoretically extract such pre/post variables, but that seems like a lot of work. I like the simplicity of the pcrecpp interface. Suggestions? Thanks! --Eric A: You could use FullMatch instead of PartialMatch and explicitly capture pre and post yourself, e.g. string pre, match, post; RE("(.*)(lo\\s)(.*)").FullMatch("Hello world", &pre, &match, &post);
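A compilable sketch of that suggestion (assuming the pcrecpp headers are installed and the program is linked with -lpcrecpp):

#include <iostream>
#include <string>
#include <pcrecpp.h>

int main() {
    std::string pre, match, post;
    // The three capture groups play the roles of $`, $& and $'.
    if (pcrecpp::RE("(.*)(lo\\s)(.*)").FullMatch("Hello world", &pre, &match, &post)) {
        std::cout << "pre:  " << pre << "\n";   // "Hel"
        std::cout << "post: " << post << "\n";  // "world"
    }
    return 0;
}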
{ "pile_set_name": "StackExchange" }
Q: What is the best location for a "read me" file on the target machine when deploying an ASP.NET application using an .MSI package? For an ASP.NET web application that is packaged and sold to customers for deployment, what would be the best location for a "read me" file with notes about setup and configuration on the target system? Requirements: The file should not be accessible by users of the web application, only the person doing setup and configuration. The file should be consumable by the MSI installer program, so that it can be displayed as part of the setup wizard UI. The solution should be simple and very low cost. (I don't want an elaborate solution for just a simple text file.) Some thoughts I have are to copy the file to *App_Data* or to bin as those are protected folders by default, and then pull the file in from one of those locations in the setup program. A: The readme should be a separate file that sits beside the MSI on the media you distribute the web app on. This is a standard practice dating back generations, to the dark ages. If you distribute as a download from the web then have a link for the MSI, and a link for the readme. You could also include the same file into the MSI, but arguably that is the wrong place for it as the user has yet to reach the configuration stage, and unless they print it they won't be able to refer to it later in the MSI process (if you have any configuration steps in the MSI). Having the instructions available via the web app is also arguably wrong, as the user may have to do some initial configuration in order to reach the page telling them how to configure the app.... So ship the instructions separately to the MSI, and make sure they look okay and are easily readable when printed out. Remember these pointers: Instructions are not always read Instructions are not always read at the time of installation Instructions are not always read by the same person that does the installation Instructions are not always read from the screen Instructions are not always read correctly, even when they are simple Instructions are not always read (I know that is a duplicate of the first point...) Don't forget to clearly distinguish between pre-install and post-install configuration instructions (even if they are in the same document) - you want to minimize the risk of the end user getting it wrong (which some of them will do no matter how hard you try).
{ "pile_set_name": "StackExchange" }
Q: How do I get the string with name of a class? What method do I call to get the name of a class? A: In [1]: class test(object): ...: pass ...: In [2]: test.__name__ Out[2]: 'test' A: It's not a method, it's a field. The field is called __name__. class.__name__ will give the name of the class as a string. object.__class__.__name__ will give the name of the class of an object. A: I agree with Mr.Shark, but if you have an instance of a class, you'll need to use its __class__ member: >>> class test(): ... pass ... >>> a_test = test() >>> >>> a_test.__name__ Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: test instance has no attribute '__name__' >>> >>> a_test.__class__ <class __main__.test at 0x009EEDE0>
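Worth noting: the built-in type() gives the same result for the instance above, without reaching into __class__ directly:

>>> type(a_test).__name__
'test'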
{ "pile_set_name": "StackExchange" }
Q: SpeechSynthesizer in ASP.NET - async error I would like to be able to generate speech in my ASP.NET app by calling speak.aspx?text=Hello%20world. This would give a response in .wav format. So far I have a blank page with code behind: protected void Page_PreRender(object sender, EventArgs e) { using (var ss = new SpeechSynthesizer()) { MemoryStream str = new MemoryStream(); ss.SetOutputToWaveStream(str); ss.Speak(Server.UrlDecode(Request.QueryString["text"])); Response.AddHeader("Content-Type", "audio/wav"); str.WriteTo(Response.OutputStream); str.Close(); } } However this fails with the message: InvalidOperationException: Asynchronous operations are not allowed in this context. Page starting an asynchronous operation has to have the Async attribute set to true and an asynchronous operation can only be started on a page prior to PreRenderComplete event. If I add Async="true" to the @Page directive, the code runs but a request for the page hangs indefinitely. Please could you let me know what's wrong, and show the correct code/approach to use? Note I can't just use the Google text-to-speech API since it only allows strings of 100 characters or less. Thank you. A: You should probably move the code above into the Page_Load method; there's no real reason for doing what you're doing in PreRender. If you make the page async, then you need to change your programming style. See if this helps: Example of Asynchronous page processing in ASP.net webforms (.NET 2.0)
{ "pile_set_name": "StackExchange" }
Q: How to append json output in html table with editable tr using for loop I am working with the CodeIgniter framework. As per my requirement, I am getting the daily user data from the database and encoding it with json_encode. In PHP we would use a foreach loop to echo the data into a table, but because the response is JSON this data should be appended to the table with a loop in the jQuery script, and this is where I got stuck. How do I append the user data to the table row by row, with editable <tr> elements? Much thanks! Controller code public function get_users() { $result = $this->db->get('users')->result(); header('Content-Type: application/json'); echo json_encode($result); } Jquery ajax to get output in json $.ajax({ url: "<?= base_url('controller/get_users') ?>", type: 'POST', data: {}, success: function (response) { console.log(response); var arr_output=JSON.stringify(response); // For loop code to append each user data in table with editable tr }, error: function (xhr) { alert(xhr.status); } }); Html table <table class="append_data"> <tr> <th>Name</th> <th>mobile no</th> <th>city</th> <th>edit</th> </tr> </table> A: Do you mean to have something along the lines of this? Of course I've not included your source code in my answer, but it's simple enough to essentially copy and paste. In my example I'm making use of a functional style approach, i.e. currying, template literals, and the map function. I guess all you'd really need to do in this scenario is pass the table and the data from the AJAX success function into the render function that I've provided; I'm just assuming that the data returned from the server is an array of objects. // The HTML table. const tbl = document.querySelector('.append_data'); // Some example data. const dummyData = [ {name: 'demo', mobile: '01748329', city: 'NY', edit: 'Something'}, {name: 'test', mobile: '12345789', city: 'WA', edit: 'Something Else'} ]; // A function to produce a HTML table row as a string. const template = d => `<tr> <td>${d.name}</td> <td>${d.mobile}</td> <td>${d.city}</td> <td>${d.edit}</td> </tr>`; // A function that takes a table, returns a function to accept an array of objects. // It will then add the relevant template(s) to the provided table. const render = tbl => d => tbl.innerHTML += d.map(i => template(i)).join(''); // Fire the render function.
render(tbl)(dummyData); <table class="append_data"> <tr> <th>Name</th> <th>mobile</th> <th>city</th> <th>edit</th> </tr> </table> Edit I've also made use of arrow functions in my first example; here's a more beginner-friendly implementation. // The HTML table. var tbl = document.querySelector('.append_data'); // Some example data. var dummyData = [ { name: 'demo', mobile: '01748329', city: 'NY', edit: 'Something' }, { name: 'test', mobile: '12345789', city: 'WA', edit: 'Something Else' } ]; // A function to produce a HTML table row as a string. var template = function template(d) { return '<tr>' + '<td>' + d.name + '</td>' + '<td>' + d.mobile + '</td>' + '<td>' + d.city + '</td>' + '<td>' + d.edit + '</td>' + '</tr>'; }; // A function that takes a table, returns a function to accept an array of objects. // It will then add the relevant template(s) to the provided table. var render = function render(tbl) { return function (d) { return tbl.innerHTML += d.map(function (i) { return template(i); }).join(''); }; }; // Fire the render function.
render(tbl)(dummyData); <table class="append_data"> <tr> <th>Name</th> <th>mobile</th> <th>city</th> <th>edit</th> </tr> </table> Edit 2 If I'm not mistaken, you need to remove JSON.stringify in order for this to work correctly with the code I've provided. Within the code I've provided, you can see my code expects an object to be provided, and if you're using JSON.stringify, then that essentially converts an object to a string... In which case, this code will not work... Maybe you should do a bit more reading into how to use methods like JSON.stringify & JSON.parse.
{ "pile_set_name": "StackExchange" }
Q: PHP equivalent of C# SHA1 Unicode hashing I have a running C# app with user authentication. I encrypt the passwords with the SHA1Managed class using Encoding.Unicode. Now, we are developing a new PHP web app and it seems that PHP sha1 method uses ASCII, so the encrypted passwords are slightly different. I cannot change my C# code as it is running in a number of different servers and all user passwords are already encrypted. This cannot be changed as it would imply asking all users their passwords (which is not viable). I have read the following solutions, however I am not able to solve my problem with them: C# SHA-1 vs. PHP SHA-1...Different Results? In this one they suggest using ASCIIEncoding. However, as I stated before, this cannot be changed. SHA1: different results for PHP and C# and Generate a PHP UTF-16 SHA1 hash to match C# method these one use base64 encoding, and that is not compatible with my C# encoding (I think) php equivalent to following sha1 encryption c# code this one also uses ASCIIEncoding. This is my C# code which CANNOT be changed: byte[] msgBytes = Encoding.Unicode.GetBytes(data); SHA1Managed sha1 = new SHA1Managed(); byte[] hashBytes = sha1.ComputeHash(msgBytes); StringBuilder hashRet = new StringBuilder(""); foreach (byte b in hashBytes) hashRet.AppendFormat("{0:X}", b); return hashRet.ToString(); This is the PHP code implemented so far: $str=mb_convert_encoding($password.$SALT, 'UTF-16LE'); $result=strtoupper(sha1($str)); The result obtained for an example string "1981" in C# D4CEDCADB729F37C30EBF41BC8F2929AF526AD3 In PHP I get: D4CEDCADB729F37C30EBF41BC8F29209AF526AD3 As can be seen, the only difference between each one is the 0 (zero) char. This happens in all strings, but the 0 appears in different locations. EDIT As https://stackoverflow.com/users/1839439/dharman stated, results can't be reproduced exactly as my string has a SALT. However, the results should be different even without the SALT. SOLUTION A user gave a suggestion which solved my problem. Unfortunately, he deleted the answer before I could post that it worked. The problem is in C# when I am parsing the bytes to string in the StringBuilder. The format is {0:X} and any leading 0 in the byte is ignored. For example a 01 would be parsed to 1, a 0D would be parsed to D. Therefore, I had to iterate over pairs of the result obtained in PHP, and remove leading zeros in there. The final code in PHP is the following: $str=mb_convert_encoding($password.$SALT, 'UTF-16LE'); $result=strtoupper(sha1($str)); $array = str_split($result, 2); foreach ($array as &$valor) { $valor = ltrim($valor, '0'); } unset($valor); $result = implode($array); A: You won't like my answer, but here it is. First of all the problem is not with PHP, PHP is simple and it is working correctly. You have got a serious bug in your C# code. It needs fixing one way or the other as soon as possible. This cannot be changed as it would imply asking all users their passwords (which is not viable). Unfortunately this is what must happen. You can do it without causing mass panic though. I assume your DB has some kind of way of knowing whether the password has been reset by administrators; if not then you need to add such column (preferably of type timestamp). Next time the user logs in, they must provide their password, because you have reset all of them, so take that password and rehash it properly and store the new hash in the database. 
It would be wise to use a new column for the new hash, or at least have a way of identifying the corrected hashes. The column should be at least VARCHAR(60), but best would be to have it VARCHAR(255), which should accommodate all popular hashes. Of course, SHA1 is not a suitable hashing method for passwords, even if using salts. Since you need to rehash all passwords anyway, a good idea would be to switch to bcrypt or a similar secure hashing function. Because your original application introduced this bug, you require some workarounds. Supporting the buggy behaviour in the new application does not seem like a good idea, so I would advise not to look for workarounds in PHP. You could of course try to mangle the passwords in the same way as you suggested in the question, but that is just going to drag the same bug into the new application without fixing it at all. Before you start doing anything, you should analyse how many of your passwords are damaged in the database. The correct SHA1 hash length should be 40 characters. If you can find the number of passwords which are less than 40, you are going to know which and how many passwords need to be fixed. Fixing the passwords is going to be difficult, but definitely worth it. A note on character encoding: PHP uses UTF-8 encoding most of the time. It is the most common encoding used on the web, and I would recommend using that for your strings. When it comes to hashing it doesn't matter which encoding the string is in, because hashes are calculated on bytes. This is the reason why you are casting your C# string to bytes in UTF-16 with Encoding.Unicode.GetBytes(data).
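For completeness, the root cause on the C# side is the unpadded hex format specifier; if that code could ever be changed, {0:X2} pads every byte to two hex digits and the PHP zero-stripping workaround above becomes unnecessary:

foreach (byte b in hashBytes)
    hashRet.AppendFormat("{0:X2}", b); // always two hex digits, leading zeros kept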
{ "pile_set_name": "StackExchange" }
Q: Shortest distance between two curves. The shortest distance will be at the point where the tangents to the curves are parallel. I found the derivatives of the curves, but after that I got stuck. A: HINT: $$f^{'}(x)=-4x^2(9x^3+3x+2) \text{ and } g^{'}(x)=e^x-e^{-x}$$ Slopes are equal at $x=0$. Now, $f(0)=-6 \text{ and } g(0)=4$ Thus the distance between the points $(0,-6)$ and $(0,4)$ is $10$. Thus $\frac{\lambda}{2}=5$
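Following the hint, making the computation explicit: $$f'(0) = -4\cdot 0^2\,(9\cdot 0^3 + 3\cdot 0 + 2) = 0, \qquad g'(0) = e^{0} - e^{-0} = 0,$$ so both tangents at $x=0$ are horizontal, hence parallel, and the common normal there is the vertical line $x=0$; the distance along it is $|g(0)-f(0)| = |4-(-6)| = 10$.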
{ "pile_set_name": "StackExchange" }
Q: Define folder depth for the verbose option in Copy-Item cmdlet I'm using the following command to copy a directory tree from one folder to another. Copy-Item $SOURCE $DEST -Filter {PSIsContainer} -Recurse -Force -Verbose The verbose option is correctly showing each folder that is copied. However, I would like to tell the Verbose option to only show the first level of the subfolders that are copied, so deeper subfolders/subfolders/... etc. wouldn't appear. Is that possible? A: Instead of using the -Verbose option, you could use the -PassThru option to process the successfully processed items via the pipeline. In the following example, I am assuming that $DEST is the existing directory in which the newly copied directory will appear. (You cannot call Get-Item on non-existent objects.) $SOURCE = Get-Item "foo" $DEST = Get-Item "bar" Copy-Item $SOURCE $DEST -Filter {PSIsContainer} -Recurse -Force -PassThru | Where-Object { # Get the parent object. The required member is different between # files and directories, which makes this a bit more complex than it # might have been. if ($_.GetType().Name -eq "DirectoryInfo") { $directory = $_.Parent } else { $directory = $_.Directory } # Select objects as required, in this case only allow through # objects where the second level parent is the pre-existing target # directory. $directory.Parent.FullName -eq $DEST.FullName }
{ "pile_set_name": "StackExchange" }
Q: Synchronous Multimaster Replication postgresql pgpool 2 We would like to build a system of two Postgresql 8.4 servers, with pgpool 2 in front that will make all writes go to both systems. In the event of a failure on one of the nodes, it will degrade and pgpool will direct all writes at the remaining node. From there we can manually re-sync everything and bring it all back up. I'm currently doing some testing to this effect, and noticing some interesting things. My script has a simple loop that inserts rows in the db. When I shut down the network interface on one server, the script pauses; pgpool waits for a reply from the down server but doesn't degrade it until somebody else tries to connect. Once somebody else creates a new connection it will then return an error, and degrade the server. Then if I run the script again it will direct the writes to the single remaining server. This seems like a little too much activity needed to degrade a server. Am I missing something? Is this normal? Cheers Mark A: I believe the behavior you are describing is normal for pgpool - it degrades a server when it can't connect to it, but if a connection is already "established" it doesn't die as quickly as you might expect (this is likely to deal with transient network glitches and avoid degrading the environment for a "hiccup"). Out of curiosity, have you considered the streaming log replication available in Postgres 9? The latest implementation is near-real-time, and allows you to run queries on the slave servers. I believe synchronous replication is coming soon (or may have already been implemented). You would need a script or manual process to handle a failure (restart one of the "slave" servers as master and move a virtual IP over), but the implementation is not otherwise substantially different from what you'll get out of pgpool (re-sync when a server fails, etc.) The big advantage here is that you're using stuff that's built in to the Postgres server core, and it got some pretty extensive testing and documenting in the 9.0 beta process. It's my de-facto recommendation if you have no constraints that make it impractical.
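If you do try the built-in streaming replication on 9.0, a rough sketch of the minimal settings involved (parameter names are from the 9.0 documentation; the host and user values are placeholders):

# primary: postgresql.conf
wal_level = hot_standby
max_wal_senders = 3

# standby: postgresql.conf
hot_standby = on

# standby: recovery.conf
standby_mode = 'on'
primary_conninfo = 'host=primary.example.com port=5432 user=replicator'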
{ "pile_set_name": "StackExchange" }
Q: Producer consumer multiple mutex needed to access critical section? There are 3 tools. A consumer needs 2 tools to modify the buffer. If consumer A takes 2 tools and consumer B takes 1, consumer B will have to wait for another tool to be released. I am not sure if I'm thinking about this problem in the right way. The way I interpret it is that I need 3 mutexes and a consumer has to lock 2 out of 3; is this the right idea? I don't think I should be using semaphores because I don't want multiple threads accessing a shared resource at the same time. In a normal producer consumer problem there is just 1 mutex lock, but here I need any 2 of the 3 - how do I approach this? A: Yes, you can use 3 mutexes, but you must be careful to avoid deadlock. To do so, you must establish a well known order to acquire locks: for example, always acquire the lock for the tool with the lowest identifier first. In situations like these, avoiding deadlock is always a matter of acquiring locks in the same order. Some pseudo-code: pthread_mutex_t locks[3]; // 1 lock for each tool critical_function() { /* Acquire 2 tools, t1 and t2 */ first = min(t1, t2); second = max(t1, t2); locks[first].lock(); locks[second].lock(); /* Do work... */ locks[first].unlock(); locks[second].unlock(); } This assumes that you associate an ID to each tool in the range 0-2. But notice, if you will, that no more than one producer / consumer can be working at the same time, since there are only 3 tools. So you might want to use a single mutex instead, lock it, acquire 2 tools, do the work and release it - after all, there is not much possibility for parallelism; you will have at most one producer / consumer working, and the others waiting while holding 1 tool.
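A runnable sketch of the answer's lock-ordering idea in C with pthreads (tool ids 0-2; the buffer-modifying work is left as a comment):

#include <pthread.h>

#define NTOOLS 3

static pthread_mutex_t tool_lock[NTOOLS] = {
    PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER
};

/* Acquire tools t1 and t2 in ascending id order, so two consumers can
 * never end up holding one tool each while waiting on the other's. */
static void use_tools(int t1, int t2)
{
    int first  = t1 < t2 ? t1 : t2;
    int second = t1 < t2 ? t2 : t1;

    pthread_mutex_lock(&tool_lock[first]);
    pthread_mutex_lock(&tool_lock[second]);

    /* ... modify the shared buffer ... */

    pthread_mutex_unlock(&tool_lock[second]);
    pthread_mutex_unlock(&tool_lock[first]);
}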
{ "pile_set_name": "StackExchange" }
Q: Class ::CakeTime in Model cakePHP I'm trying to access the class CakeTime in my Model and CakePHP doesn't seem to recognize it. Here's my code: <?php App::uses('AppModel', 'Model', 'CakeTime', 'Utility'); class Rex extends AppModel { public function month_entries($created){ // build the months that will serve as search criteria $month=CakeTime::format("$created", '%Y-%m'); // get the 1st of the month being searched $month=$month."-01"; // move to the first of the following month $month_plus_un=date('Y-m-d', strtotime("+1 month",strtotime($month))); $count = $this->Article->find('count', array( 'conditions' => array( 'Rex.date_incident >='=> $month, 'Rex.date_incident <'=> $month_plus_un ))); return $count; } I get an error as soon as I call CakeTime. Am I missing something in the syntax? The documentation is unclear on how to call core library utilities in a Model. Thanks! A: Your App::uses(...) declaration is wrong; it should be: App::uses(string $class, string $package), so change: App::uses('AppModel', 'Model', 'CakeTime', 'Utility'); to App::uses('AppModel', 'Model'); App::uses('CakeTime', 'Utility');
{ "pile_set_name": "StackExchange" }
Q: How to hide partition from pantheon files? In Ubuntu I could hide a partition by going into the Disks application, but in Pantheon I can't find that app, so how would I go about hiding an NTFS partition in Pantheon Files? Ubuntu writes something to the /etc/fstab file but I can't remember what it is. Thanks in advance. A: Since you are already familiar with Disks and elementary OS is based on Ubuntu, why not use this application again? Just open Software Center, search for Disks and install it. Start Disks, select the partition you want to hide, click the gear icon and select Edit Mount Options, disable Automatic Mount Options, and make sure Mount at startup and Show in user interface are unchecked. This partition won't be shown after the next login. A: Method 1 (tried): You can use the GParted tool in elementary to achieve what you want. This is not provided by default. To install it, type this in your terminal: sudo apt-get install gparted Steps to make NTFS partitions hidden: Open GParted from the Slingshot menu, right-click on the desired NTFS partition, go to Manage Flags, check the hidden flag, then log in and out or restart. Remember to unmount the partition first. Method 2: Whatever filesystem you mount gets mounted under /media. In order to hide it from Pantheon Files and Nautilus, you need to mount it anywhere other than /media; the optimal choice would be to mount it under /mnt. Following is a link which describes how to do it in Nautilus; it should be similar for Pantheon Files as elementary is a derivative of Ubuntu. (I didn't try this myself.) Link: hide ntfs partition in ubuntu nautilus (check the second and third answers; the first answer uses the disk utility in Ubuntu)
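For method 2, a hypothetical /etc/fstab line that mounts the NTFS partition under /mnt instead of /media (replace the UUID and mount point with your own; find the UUID with sudo blkid):

UUID=0123456789ABCDEF /mnt/windows ntfs-3g defaults,umask=022 0 0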
{ "pile_set_name": "StackExchange" }
Q: If we use the C prefix for classes, should we use it for struct also? Assuming that a project has been using the C class prefix for a long time, and it would be a waste of time to change at a late stage, and that the person who originally wrote the style guide has been hit by a bus, and that there are no structs in the code already... It's a pretty trivial question, but if a C++ code style guide says "use C for class name prefix" then should this be taken to mean also use C for struct prefix also, or should we use something different, like S for example. class CFoo { }; struct CBar { }; ... or ... class CFoo { }; struct Bar { }; A: Simple answer - don't use the C prefix for classes. This is hungarian notation of the most pointless sort. It's probably time to re-write the style guide. Frankly (and speaking as someone who's written several of the things), most such guides are rubbish and/or were written long, long ago and never updated. A: If the style guide doesn't specify, I would (probably) use the "structs are classes with all members public"-rule to use C for structs too, yes. Or I would think "hah, here's a loophope to get around that silly initial rule, yay" and not use it. In other words, this is highly subjective. A: If the code style guide doesn't specify, find code that's been following the style guide and see what's already been done. If there is no code already following the style guide, come to an agreement with everyone involved in the project. If nobody else is involved in the project, just decide and be consistent.
{ "pile_set_name": "StackExchange" }
Q: Flash - comma , and period . keys don't trigger key down I've encountered a rather strange bug in Flash CS5.5: import flash.events.KeyboardEvent; import flash.events.Event; stage.addEventListener(KeyboardEvent.KEY_DOWN,onKeyDwn); function onKeyDwn(e:KeyboardEvent){ trace("Key down!"); } The comma and period keys on a standard US keyboard do not trigger key down events for me. However, if I add a textbox and type in it, it works. A: You need to disable keyboard shortcuts in the Flash Player. Select Test Movie in Flash CS5.5, and when the Flash Player window appears make sure Control -> Disable Keyboard Shortcuts is checked in the menu.
{ "pile_set_name": "StackExchange" }
Q: Save Outlook attachment to a folder and Rename the file with date I'm trying to save the daily system-generated report attached to the e-mail to a folder, and then append the attachment filename with the date (the modified date of the file). I am able to get the file saved to a folder; however, the renaming piece doesn't seem to work for me. Can someone please explain why the renaming piece isn't working? Many thanks! Public Sub saveAttachtoBIFolder(itm As Outlook.MailItem) Dim objAtt As Outlook.Attachment Dim saveFolder As String Dim fso As Object Dim oldName As Object Dim file As String Dim DateFormat As String Dim newName As Object saveFolder = "C:\BI Reports" Set fso = CreateObject("Scripting.FileSystemObject") On Error Resume Next For Each objAtt In itm.Attachments file = saveFolder & "\" & objAtt.DisplayName objAtt.SaveAsFile file Debug.Print "file="; file ' the full file path printed on immediate screen Set oldName = fso.GetFile(file) ' issue seems to start from here DateFormat = Format(oldName.DateLastModified, "yyyy-mm-dd ") newName = DateFormat & objAtt.DisplayName oldName.Name = newName Debug.Print "DateFormat="; DateFormat 'the date format printed on the immediate screen Set objAtt = Nothing Next Set fso = Nothing End Sub A: Your newName needs to be a String, NOT an Object, so declare Dim newName As String. I would also assign objAtt.DisplayName to a string variable. See this example: Set FSO = CreateObject("Scripting.FileSystemObject") For Each objAtt In itm.Attachments File = saveFolder & "\" & objAtt.DisplayName objAtt.SaveAsFile File Debug.Print File ' the full file path printed on immediate screen Set oldName = FSO.GetFile(File) ' issue seems to start from here Debug.Print oldName Dim newName As String Dim AtmtName As String AtmtName = objAtt.DisplayName Debug.Print AtmtName DateFormat = Format(oldName.DateLastModified, "yyyy-mm-dd ") Debug.Print DateFormat newName = DateFormat & " " & AtmtName oldName.Name = newName Debug.Print newName 'the date format printed on the immediate screen Next
{ "pile_set_name": "StackExchange" }
Q: Do older players tend to never bluff? In cash no limit Hold’em, is there a pattern or behavior that older players don’t tend to bluff? A: In general, if you are playing low stakes (1/2 or 1/3) I would assume that all unknown players don't bluff, unless they give you a reason to think otherwise. If they are a bad player, their bluffs will likely be obvious; if they are a decent/good player, they know not to bluff you until they have seen you play for a few hours. I personally would never try to bluff a player at low stakes unless: they showed me they are capable of folding top pair (this usually takes a few hours of play, and sometimes multiple sessions, to get a feel for); or they turned their range faceup and I am confident that they have very few calling hands. To analyze the stereotype that old players don't bluff, I would say that it is dangerous to assume this without any other information than that the villain is old. I have seen old people range from complete spewtard maniacs that blow through 10+ buyins in a few hours to old tight nits that play 1 hand an hour. There are much better indicators to go off of, such as whether or not they are drinking (this implies they are loose and trying to "have a good time"), their general attitude, opening size, etc. And honestly, I have yet to see an accurate "guide" for analyzing player stereotypes. I have concluded in my (limited) experience that this is something that is developed over years of play, and not something that can be simply read in a book.
{ "pile_set_name": "StackExchange" }
Q: Case for seeking quotation to digitize Greenwich Hospital School admission record of John Smyth in about 1755? My 4th great grandfather John Smyth was a Captain's Clerk on the HMS Firm when he married Sarah Osment on 13 Jul 1764 at Stoke Damerel, Devon, England. When he was buried at Bunhill Fields Burial Ground (London, England) on 23 Jan 1806, his age was given as 66 which suggests that he was born about 1740. In the National Archives I have found a record that I suspect may belong to him: Reference: ADM 73/347/31 Description: John Smyth. When admitted to Greenwich Hospital School: Not stated. Parents' names: Jeremiah and Mary Smyth. Applicant baptised 7 May 1741 in Oxwich, County Glamorgan. Date: 1728-1870 Held by: The National Archives, Kew Physical description: 6 document(s) The same page states: This record has not been digitised and cannot be downloaded. Request a quotation for a copy to be digitised or printed and sent to you. I am holding off on asking for that quotation while I try to be more sure that it is the correct John Smyth but if it is then the above documents could be something of a "treasure trove" for me because I found a Greenwich Hospital School admission documents set described at SOG-UK-L Archives as being: Later my 3x gt grandfather applied for 4 of his children to enter the Royal Hospital, Greenwich, as it was called by then. These application records were in ADM64 (1830s). Both sets of records included a lot of information including in my 3 x gt grandfather's application a baptismal certificate for him as well as his parents' marriage certificate (remember this was 1808). All records included details of the father's service up to the time of application including all ships they served on with beginning and end dates. In the case of the ones from the 1830s it also had the names of all other children in the family. From memory they were in alphabetical order, folded like wills and piled into boxes. After serving on the HMS Firm I have been able to find records that John worked in three inter-related government offices (Sick and Hurt Office, Transport Office and Prisoners of War Department) from at least 1775 until his death in 1806 - see Finding records of Transport-office that John Smyth worked for in London prior to 1806? These all seem to be related to the Navy. I think my next step is to get a quotation for the contents of ADM 73/347/31 to be digitised but I am wondering if I have missed taking any obvious steps to establish the identity of my ancestor with this John Smyth first? I am very confident of my line to John Smyth who left an 1806 Will naming my 3rd great grandparents John Stacy and Sarah Osment Smyth. A: As I understand, you have no other specific record that would link John Smyth to Wales. With this in mind I first think you should not get too caught up in the spelling of the name - John Smyth, John Smith, John Smythe. While the Smyth spelling may be less common, the spellings may have been phonetically identical. You should be prepared to see variability in the way the names are spelled, particularly in the eighteenth century and earlier. Parish Registers My first source to explore before ordering the record would be the relevant parish registers. Fortunately in the school document abstract you are given a lot of detail about the person contained in the record - that he was a son of "Jeremiah and Mary Smyth" and "baptised 7 May 1741 in Oxwich, County Glamorgan". The parish registers for Oxwich are indexed on FreeREG for this period.
I note the following baptisms in Oxwich Parish Church: 16 Jan 1737, Jeremiah, s. of Jeremiah & Mary Smith 25 Feb 1739, Matthew, s. of Jeremiah & Mary Smith 26 Apr 1741, John, s. of Jeremiah & Mary Smith 9 Aug 1743, Elizabeth, d. of Jeremiah & Mary Smith It is impossible to say without seeing the original PR and school record why the baptism date does not match perfectly with the school record abstract. However, I think there can be little doubt the two records are for the same John. I cannot see any marriages or burials in Oxwich for this family, which suggests they may have moved to a different parish. I do note a John Smith having children there in the late 1750s and 1760s who you should keep in mind as a possible relative, and a couple of burials for John Smiths in 1786 and 1799. Whether or not John son of Jeremiah & Mary is your John is difficult to say without further information. John Smyth/Smith is a very common name. You could explore further parish records and wills for Jeremiah and Mary Smith, and the siblings of John in the hope that there is something that might definitively link him to your family. Did the rest of this family move to England? If so then it could be promising. Did the name Jeremiah show up in your Smyth family? If so, that could be a useful clue since it is a relatively uncommon name, although the absence of any Jeremiahs does not (of course) rule out a link. TNA Record It is difficult to proceed with this without seeing the school record. It may or may not include information that links him to your family, but without looking you can't know. If you do have a strong suspicion this is your John, and want to order a copy, you can get a quote through the National Archives. However keep in mind that the cost for them to copy the records for you, especially if they consist of many pages, can be prohibitively expensive. It may be more cost effective to hire a professional researcher to look at the record, copy or photograph the relevant/important pages - and they may also be able to look at other records of interest to you as well. There is a list of independent researchers available on the TNA website organized by research area, or if you're lucky you might be able to find someone willing to do a lookup for you on a site like RAOGK (although at a quick glance there aren't many England volunteers anymore).
{ "pile_set_name": "StackExchange" }
Q: What is the word for this type of shade mixed with spots of sunlight? What is the word/phrase which describes that type of shade which one can witness on a sunny day under a tree which doesn't have very thick foliage; the one which is a mixture of shadow spots formed by the leaves and light spots from the sunlight coming between them? A: What about dappled shade? dappled adjective Marked with spots or rounded patches. ‘Nam's yard sat soft-lit under a few swinging lanterns amid dappled shade from the trees.’ (ODO) The best place to be today was in the woods, in the dappled shade of Barming Woods, away from the 27°C heat! (Source)
{ "pile_set_name": "StackExchange" }
Q: applying function across multiple list in R I have a function called computeMASE that I would like to apply to 3 different lists: forecast.list, train.list, test.list. All of them have common elements (ap, wi). I can apply the function to each list individually, as shown in the code below, but when I use mapply to apply the function to all of the lists at once, it doesn't work. Please see below for a reproducible example, and please let me know how to solve this. Many thanks library("forecast") ## Forecast Function for.x <- function(x){ fc <- forecast(ets(x),h=18)$mean return(fc) } ## MASE Function computeMASE <- function(forecast,train,test,period){ # forecast - forecasted values # train - data used for forecasting .. used to find scaling factor # test - actual data used for finding MASE.. same length as forecast # period - in case of seasonal data.. if not, use 1 forecast <- as.vector(forecast) train <- as.vector(train) test <- as.vector(test) n <- length(train) scalingFactor <- sum(abs(train[(period+1):n] - train[1:(n-period)])) / (n-period) et <- abs(test-forecast) qt <- et/scalingFactor meanMASE <- mean(qt) return(meanMASE) } ## Prepare Data train.list <- list(ap = ts(AirPassengers[1:(length(AirPassengers)-18)],start=start(AirPassengers),frequency=12), wi = ts(wineind[1:(length(wineind)-18)],end=end(wineind),frequency=12)) test.list <- list(ap = ts(AirPassengers[(length(AirPassengers)-17):length(AirPassengers)],end=end(AirPassengers),frequency=12), wi = ts(wineind[(length(wineind)-17):length(wineind)],end=end(wineind),frequency=12)) ## Create Forecast forecast.list <- lapply(train.list,for.x) ## Compute MASE for each element in the forecast list k.ap <- computeMASE(forecast.list$ap,train.list$ap,test.list$ap,12) k.wi <- computeMASE(forecast.list$wi,train.list$wi,test.list$wi,12) ## How to apply computeMASE to all the elements in the lists? The below does not work mapply(computeMASE(X,Y,Z,12),X=forecast.list,Y=train.list,Z=test.list) A: The first argument of mapply should be a function. You can "curry" the period argument mapply(function(x,y,z) computeMASE(x,y,z,12), forecast.list, train.list, test.list) or, provide it as another argument (with implicit recycling) mapply(computeMASE, forecast.list, train.list, test.list, 12)
{ "pile_set_name": "StackExchange" }
Q: Bases for the spaces $\mathcal{V}^1(X)$ and $\Omega^1(X)$ For precise definitions of the spaces referenced below, please refer to this question. For $X \subset \mathbb{R}^n$, I understand that the $n$-dimensional tangent space $T_pX$ has a natural/canonical basis $((e_1)_p, \dots, (e_n)_p)$ where each $(e_i)_p$ is the standard basis vector $e_i$ translated to the point $p$. I also understand from linear algebra that the $n$-dimensional cotangent space $T^*_pX$ has a basis which is dual to the canonical basis given above and that the elements of this basis are typically denoted by $(dx^1)_p, \dots, (dx^n)_p$. In the cotangent space, these basis elements can be interpreted as the differentials of the canonical projection functions. Now, in this context, I am trying to parse the following claim from a text where, for notational ease, the point $p$ is suppressed: The list $(e_1, \dots, e_n)$ is a module basis for $\mathcal{V}^1(X)$ and the list $(dx^1, \dots dx^n)$ is a module basis for $\Omega^1(X)$ I'm not sure how to make sense of this statement because, by definition, $$ \mathcal{V}^1(X) = C^1(X,\mathbb{R}^n) \;\;\; \text{and} \;\;\; \Omega^1(X)= C^1(X, (\mathbb{R}^n)^{\prime}) $$ where $(\mathbb{R}^n)^{\prime}$ denotes the continuous dual space of $\mathbb{R}^n$. It seems to me (and indeed, later in the text it is demonstrated) that both of these are infinite dimensional vector spaces. So my question is: Is there a way to interpret the above statement so it makes sense, particularly in light of the fact that these spaces are infinite dimensional (!) ? A: As $\mathbb{R}$-modules (real vector spaces) both $\mathcal{V}^1(X)$ and $\Omega^1(X)$ are usually infinite dimensional, but as modules over the ring $C^1(X,\mathbb{R})$ they are finitely generated and free (with basis as you've listed). It is easy to see why: take any $\omega \in \mathcal{V}^1(X)$, and note that there are $\{ \omega_1, \omega_2, \ldots, \omega_n \} \subseteq C^1(X,\mathbb{R})$ such that $\omega(x) = (\omega_1(x), \omega_2(x), \ldots, \omega_n(x))$ for all $x \in X$, i.e. $\omega = \sum_{i =1}^n \omega_i e_i$. The proof for $\Omega^1(X)$ is similar.
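Spelling out the similar proof for $\Omega^1(X)$: given $\omega \in \Omega^1(X)$, define $\omega_i \in C^1(X,\mathbb{R})$ by $\omega_i(x) = \omega(x)(e_i)$. Then for every $x \in X$ and every $v = \sum_i v_i e_i$ we have $$\omega(x)(v) = \sum_{i=1}^n \omega_i(x)\, v_i = \sum_{i=1}^n \omega_i(x)\,(dx^i)_x(v),$$ i.e. $\omega = \sum_{i=1}^n \omega_i\, dx^i$, so the $dx^i$ generate $\Omega^1(X)$ over the ring $C^1(X,\mathbb{R})$; independence over the ring follows by evaluating any vanishing combination on each $e_i$.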
{ "pile_set_name": "StackExchange" }
Q: How can I change Access hyperlinks with VBA? I've done some extensive research and realize this is not an easy task. I need to change many hyperlinks in different tables from P:\Library\Folder... to I:\Folder... I think I can change the field type to long text, find and replace, then change the type back to hyperlink. A: The table Find/Replace dialog will work on a Hyperlink field if there is no DisplayText component in the hyperlink string. In either case, an SQL UPDATE action will work, like: CurrentDb.Execute "UPDATE table SET field = Replace([field], 'P:\Library\', 'I:\')" It is possible to have hyperlink functionality on forms and reports in Report View without a Hyperlink-type field. Of course this will require some method other than the hyperlink field interface to enter the file path into the text field - probably VBA invoking a file dialog. Clicking hyperlinks will not be possible in the table, but since users should not interact with tables and queries, just forms and reports, this should not be an issue.
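If the links are spread across several tables, a hypothetical DAO sweep can run the same UPDATE against every Hyperlink field (DAO flags them with the dbHyperlinkField attribute; the old and new paths below are the ones from the question):

Sub FixAllHyperlinks()
    Dim db As DAO.Database, tdf As DAO.TableDef, fld As DAO.Field
    Set db = CurrentDb
    For Each tdf In db.TableDefs
        If Left(tdf.Name, 4) <> "MSys" Then   ' skip system tables
            For Each fld In tdf.Fields
                If (fld.Attributes And dbHyperlinkField) <> 0 Then
                    db.Execute "UPDATE [" & tdf.Name & "] SET [" & fld.Name & "] = " & _
                               "Replace([" & fld.Name & "], 'P:\Library\', 'I:\')"
                End If
            Next fld
        End If
    Next tdf
End Sub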
{ "pile_set_name": "StackExchange" }
Q: Simple multi-client echo server

I've been looking at some async comms in C#. As a proof of concept, I've written a simple multi-client echo server. The server allows multiple TCP clients to connect and listens for input from the clients. When it receives a complete line, it forwards the completed line to any other connected clients.

For testing, I used multiple telnet clients to connect and monitor messages, as well as a simple client for sending test messages:

TestClient

using System.Net.Sockets;
using System.Text;

namespace Client
{
    class Program
    {
        static void Main(string[] args)
        {
            using (TcpClient client = new TcpClient("127.0.0.1", 4040))
            {
                SendMessage(client, "Hello\r\nThis is line two\r\nAnd line three\r\n");
                string Line4 = "Finally, Line Four\r\n";
                foreach (var character in Line4)
                {
                    SendMessage(client, character.ToString());
                }
            }
        }

        static void SendMessage(TcpClient client, string messageToSend)
        {
            var buffer = Encoding.ASCII.GetBytes(messageToSend);
            client.GetStream().Write(buffer, 0, buffer.Length);
        }
    }
}

The server itself listens on a known port and keeps running until a line is received from the console. It consists of 4 classes:

LineBufferedClient - maintains state for the client, including async reads.
ClientManager - maintains the list of connected clients.
Server - responsible for listening for incoming connections and accepting them.
Program - simple wrapper that bootstraps the server and waits for the console exit command.

For the moment, as it's a small POC, all the classes are in the same file:

using System;
using System.Collections.Generic;
using System.Net;
using System.Net.Sockets;
using System.Text;
using System.Threading;

namespace ServerPOC
{
    class LineBufferedClient
    {
        public LineBufferedClient(TcpClient client)
        {
            ReadBuffer = new byte[256];
            CurrentLine = new StringBuilder();
            Client = client;
        }

        public TcpClient Client { get; private set; }
        public Byte[] ReadBuffer { get; private set; }
        public StringBuilder CurrentLine { get; set; }
    }

    class ClientManager
    {
        List<LineBufferedClient> _clients = new List<LineBufferedClient>();

        public void Add(TcpClient tcpClient)
        {
            var client = new LineBufferedClient(tcpClient);
            var result = tcpClient.GetStream().BeginRead(client.ReadBuffer, 0, client.ReadBuffer.Length, DataReceived, client);
            _clients.Add(client);
        }

        private void HandleCompleteLine(LineBufferedClient client, string line)
        {
            Console.WriteLine(line);
            var buffer = Encoding.ASCII.GetBytes(line + "\n");
            _clients.ForEach((connectedClient) =>
            {
                if (connectedClient != client)
                    connectedClient.Client.GetStream().Write(buffer, 0, buffer.Length);
            });
        }

        private void DataReceived(IAsyncResult ar)
        {
            var client = ar.AsyncState as LineBufferedClient;
            var bytesRead = client.Client.GetStream().EndRead(ar);
            if (bytesRead > 0)
            {
                var readString = Encoding.UTF8.GetString(client.ReadBuffer, 0, bytesRead);
                while (readString.Contains("\n"))
                {
                    var indexOfNewLine = readString.IndexOf('\n');
                    var left = readString.Substring(0, indexOfNewLine);
                    client.CurrentLine.Append(left);
                    var line = client.CurrentLine.ToString();
                    client.CurrentLine.Clear();
                    if (indexOfNewLine != readString.Length - 1)
                    {
                        readString = readString.Substring(indexOfNewLine + 1);
                    }
                    else
                    {
                        readString = string.Empty;
                    }
                    HandleCompleteLine(client, line);
                }
                if (!string.IsNullOrEmpty(readString))
                {
                    client.CurrentLine.Append(readString);
                }
                client.Client.GetStream().BeginRead(client.ReadBuffer, 0, 256, DataReceived, client);
            }
            else
            {
                _clients.Remove(client);
            }
        }
    }

    class Server
    {
        CancellationTokenSource _cts = new CancellationTokenSource();
        private bool _shutdown = false;
        int _serverPort = 0;
        private Thread _listenerThread;
        private ClientManager _clientManager;

        public Server(ClientManager clientManager)
        {
            _clientManager = clientManager;
        }

        public void Run(int serverPort)
        {
            _serverPort = serverPort;
            _listenerThread = new Thread(ListenLoop);
            _listenerThread.Start();
        }

        public void ListenLoop()
        {
            TcpListener listener = new TcpListener(new IPEndPoint(IPAddress.Any, _serverPort));
            listener.Start();
            while (!_shutdown)
            {
                try
                {
                    var acceptTask = listener.AcceptTcpClientAsync();
                    acceptTask.Wait(_cts.Token);
                    var newClient = acceptTask.Result;
                    _clientManager.Add(newClient);
                }
                catch (OperationCanceledException)
                {
                    // NOP - Shutting down
                }
            }
        }

        public void Stop()
        {
            _shutdown = true;
            _cts.Cancel();
            _listenerThread.Join();
        }
    }

    class Program
    {
        static void Main(string[] args)
        {
            var clientManager = new ClientManager();
            var server = new Server(clientManager);
            server.Run(4040);
            Console.WriteLine("Server running, press Enter to quit.");
            Console.ReadLine();
            server.Stop();
        }
    }
}

Any feedback's welcome. I'm particularly interested in feedback around any scalability issues this approach is likely to encounter, or whether there is a more modern approach in C# for handling multiple clients.

A: There are a couple of critical issues with the original code that need to be addressed.

Exception handling

If the client at the other end shuts down cleanly, then the current approach will usually work OK: the BeginRead operation completes, having read 0 bytes to indicate that the other side is no longer connected. However, if processing takes a period of time, it is possible that another client will attempt to write to the socket before it has been cleaned up. This can be simulated by adding a long-running task stub to the HandleCompleteLine method:

private void HandleCompleteLine(LineBufferedClient client, string line)
{
    Console.WriteLine(line);
    Thread.Sleep(2000); // Simulate long running task
    var buffer = Encoding.ASCII.GetBytes(line + "\n");

Consequently the socket can get into an error state. At a minimum, the following two exceptions should be caught around both reads and writes to the network stream:

IOException - This is thrown, for example, when an attempt is made to write to a socket that has been closed at the other end.
InvalidOperationException - This is thrown when an operation is attempted on a closed socket (if a socket throws an IOException it is put into a closed state, so it will subsequently throw InvalidOperationExceptions in response to read/write requests).

Concurrency

Not all DataReceived calls will take place on the same thread. It's possible that multiple calls (from different clients) could be handled concurrently, particularly if the long-running task described above is introduced. This means that shared/dependent resources need to be protected. With the current implementation the main concern is the _clients list: items could be added to or removed from the list while other threads are doing the same thing or iterating over the list. The list could be protected using locking, or a concurrent collection like ConcurrentDictionary could be used instead.
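As an aside, the locking alternative is only mentioned, not shown. A minimal sketch of what it might look like (my illustration, not the answerer's code):

// Sketch of the locking alternative: guard every access to the shared
// list with a single lock object. Iterating callers (HandleCompleteLine)
// would need to copy the list under the same lock before writing to the sockets.
private readonly object _clientsLock = new object();

public void Add(TcpClient tcpClient)
{
    var client = new LineBufferedClient(tcpClient);
    tcpClient.GetStream().BeginRead(client.ReadBuffer, 0, client.ReadBuffer.Length, DataReceived, client);
    lock (_clientsLock)
    {
        _clients.Add(client);
    }
}

The answer itself opts for ConcurrentDictionary, as follows.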
Making these changes leads to the following more error-tolerant code for the ClientManager:

class ClientManager
{
    ConcurrentDictionary<LineBufferedClient, LineBufferedClient> _clients = new ConcurrentDictionary<LineBufferedClient, LineBufferedClient>();

    public void Add(TcpClient tcpClient)
    {
        var client = new LineBufferedClient(tcpClient);
        var result = tcpClient.GetStream().BeginRead(client.ReadBuffer, 0, client.ReadBuffer.Length, DataReceived, client);
        if (!_clients.TryAdd(client, client))
        {
            throw new InvalidOperationException("Tried to add connection twice");
        }
    }

    private void HandleCompleteLine(LineBufferedClient client, string line)
    {
        Console.WriteLine(line);
        Thread.Sleep(2000);
        var buffer = Encoding.ASCII.GetBytes(line + "\n");
        foreach (var entry in _clients)
        {
            var connectedClient = entry.Value;
            if (connectedClient != client)
            {
                try
                {
                    connectedClient.Client.GetStream().Write(buffer, 0, buffer.Length);
                }
                catch (Exception ex) when (ex is InvalidOperationException || ex is System.IO.IOException)
                {
                    RemoveClient(connectedClient);
                }
            }
        }
    }

    private void DataReceived(IAsyncResult ar)
    {
        var client = ar.AsyncState as LineBufferedClient;
        var bytesRead = client.Client.GetStream().EndRead(ar);
        if (bytesRead > 0)
        {
            var readString = Encoding.UTF8.GetString(client.ReadBuffer, 0, bytesRead);
            while (readString.Contains("\n"))
            {
                var indexOfNewLine = readString.IndexOf('\n');
                var left = readString.Substring(0, indexOfNewLine);
                client.CurrentLine.Append(left);
                var line = client.CurrentLine.ToString();
                client.CurrentLine.Clear();
                if (indexOfNewLine != readString.Length - 1)
                {
                    readString = readString.Substring(indexOfNewLine + 1);
                }
                else
                {
                    readString = string.Empty;
                }
                HandleCompleteLine(client, line);
            }
            if (!string.IsNullOrEmpty(readString))
            {
                client.CurrentLine.Append(readString);
            }
            try
            {
                client.Client.GetStream().BeginRead(client.ReadBuffer, 0, 256, DataReceived, client);
            }
            catch (Exception ex) when (ex is InvalidOperationException || ex is System.IO.IOException)
            {
                RemoveClient(client);
            }
        }
        else
        {
            RemoveClient(client);
        }
    }

    private void RemoveClient(LineBufferedClient client)
    {
        LineBufferedClient ignored;
        _clients.TryRemove(client, out ignored);
    }
}

Break up the ClientManager

Looking at the code above, it seems like the ClientManager is doing rather more than just managing the clients. It's involved in maintaining the list of connected clients, transmitting to them, reading from them, orchestrating the buffering until a new line is received, and performing processing on the received line. This isn't terrible for a proof of concept; however, it feels like some of the responsibility should be shared around going forward. When integrating the POC, line buffering was extracted into its own class, along the lines of the sketch below.
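To illustrate how that extraction might look, here is a minimal sketch; the class name LineBuffer and its API are my assumptions, not the answerer's actual code:

using System;
using System.Text;

// Hypothetical extraction of the line-buffering responsibility from
// ClientManager. Append() collects raw text and fires the callback once
// per completed line, mirroring the loop in DataReceived.
class LineBuffer
{
    private readonly StringBuilder _currentLine = new StringBuilder();

    public void Append(string text, Action<string> onCompleteLine)
    {
        while (text.Contains("\n"))
        {
            var indexOfNewLine = text.IndexOf('\n');
            _currentLine.Append(text.Substring(0, indexOfNewLine));
            onCompleteLine(_currentLine.ToString());
            _currentLine.Clear();
            text = indexOfNewLine != text.Length - 1
                ? text.Substring(indexOfNewLine + 1)
                : string.Empty;
        }
        if (!string.IsNullOrEmpty(text))
        {
            _currentLine.Append(text);
        }
    }
}

With something like this in place, DataReceived shrinks to decoding the bytes and calling client.Buffer.Append(readString, line => HandleCompleteLine(client, line)).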
{ "pile_set_name": "StackExchange" }
Q: Testing if Quicktime and Java are installed?

In Javascript, how can I test whether the user has the QuickTime and Java plugins installed?

A: For Java, you can use navigator.javaEnabled(). Or you can look here: http://www.pinlady.net/PluginDetect/JavaDetect.htm

For QuickTime, you can do:

var QtPlugin = navigator.plugins["Quicktime"];
if (QtPlugin) {
    // QuickTime is installed
}

See here: http://javascript.internet.com/miscellaneous/check-plugins.html
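Plugin display names vary across browsers and versions, so a more tolerant check might scan navigator.plugins for a name substring. This is a sketch of mine, not part of the original answer; the exact plugin names remain browser-dependent:

// Case-insensitive scan of the navigator.plugins list.
function hasPlugin(name) {
    name = name.toLowerCase();
    for (var i = 0; i < navigator.plugins.length; i++) {
        if (navigator.plugins[i].name.toLowerCase().indexOf(name) !== -1) {
            return true; // a plugin whose name contains the substring
        }
    }
    return false;
}

var quickTimeInstalled = hasPlugin("quicktime");
var javaInstalled = navigator.javaEnabled() || hasPlugin("java");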
{ "pile_set_name": "StackExchange" }
Q: Element styled with CSS Flex box appears outside parent element and doesn't vertically align with parent

I have a <section> element with a title, that contains a <div> which holds some text. I need the <div> to appear in the middle of the <section> tag, and the <section> should take up the rest of the space under the header. To the user, the <div> should appear in the centre of the space under the header.

My following code does that to some degree, but it appears off-centre. I think that's because I applied height: 100vh to the <section>, which makes that element longer than the rest of the page. How do I achieve this? I'm trying to create a generic set of styles for the div.message so that I can drop it in when needed and it will appear in the centre of the area below the header.

header {}

.content {
  height: 100vh;
}

.message {
  height: 100%;
  display: flex;
  flex-direction: row;
  flex-wrap: nowrap;
  justify-content: center;
  align-content: stretch;
  align-items: center;
}

.message .text {
  font-size: 20px;
  order: 0;
  flex: 0 1 auto;
  align-self: auto;
}

<header>
  <h1>Header area</h1>
</header>
<section class="content">
  <h2>This is a section</h2>
  <div class="message">
    <p class="text">This section is empty</p>
  </div>
</section>

JSFiddle

A: Here is how I recommend you do it, to get a good responsive layout:

Add a wrapper, the container (you could also use the body).
Make the container a flex column container so the header and content will stack vertically.
Set flex-grow: 1 on content so it takes the remaining space of its parent.
Make the content a flex column container.
Set flex-grow: 1 on message so it takes the remaining space of its parent.
Make the message a flex row container (the default).
Set justify-content: center; align-items: center; on message so its content centers.
Finally, take the h2 out of flow, or else the message won't fill its entire parent's height and therefore won't center vertically in the section.

Note: as the h2 is positioned absolute, the content could also be set as a flex row container, though I chose "column" to make it more obvious compared with the markup structure.

Updated fiddle

Stack snippet

html, body {
  margin: 0;
}

.container {
  display: flex;
  flex-direction: column;
  height: 100vh;
}

header {}

.content {
  position: relative;
  flex-grow: 1;
  display: flex;
  flex-direction: column;
}

.content h2 {
  position: absolute;
  top: 0;
  left: 0;
  width: 100%;
  text-align: center;
}

.message {
  flex-grow: 1;
  display: flex;
  justify-content: center;
  align-items: center;
}

.message .text {
  font-size: 20px;
}

/* styles for this demo */
html, body {
  margin: 0;
}

.container {}

header {
  border: 1px dotted red;
}

.content {
  border: 1px dotted red;
}

.message, .message .text {
  border: 1px dotted red;
}

<div class="container">
  <header>
    <h1>Header area</h1>
  </header>
  <section class="content">
    <h2>This is a section</h2>
    <div class="message">
      <p class="text">This section is empty</p>
    </div>
  </section>
</div>

Based on how you intend to use message, you could also set justify-content: center; align-items: center; on the content (and drop the flex properties on the message).

Fiddle demo 2

Stack snippet

html, body {
  margin: 0;
}

.container {
  display: flex;
  flex-direction: column;
  height: 100vh;
}

header {}

.content {
  position: relative;
  flex-grow: 1;
  display: flex;
  flex-direction: column;
  justify-content: center;
  align-items: center;
}

.content h2 {
  position: absolute;
  top: 0;
  left: 0;
  width: 100%;
  text-align: center;
}

.message {}

.message .text {
  font-size: 20px;
}

/* styles for this demo */
html, body {
  margin: 0;
}

.container {}

header {
  border: 1px dotted red;
}

.content {
  border: 1px dotted red;
}

.message, .text {
  border: 1px dotted red;
}

<div class="container">
  <header>
    <h1>Header area</h1>
  </header>
  <section class="content">
    <h2>This is a section</h2>
    <div class="message">
      <p class="text">This section is empty</p>
    </div>
  </section>
</div>

If the message is only a wrapper for the p, you could drop it altogether.

Fiddle demo 3
{ "pile_set_name": "StackExchange" }
Q: Auto reload included page after changing $_GET

I need to reload an included page after the user clicks a specific link, which changes the tournament id.

info.php

<?php
$tournament = $_GET['tournament'];
$user = $_SESSION['user_id'];
$sql = $conn->query("SELECT *, SUM(points) as total FROM results WHERE tournament_id = '$tournament' and user_id = '$user' ");
$data = mysqli_fetch_object($sql);
echo 'Logged user ' . $_SESSION['name'] . ', score ' . $data->total . ' in tournament ' . $data->tournament_name;
?>

In the main layout I have:

<div id="info">
  <?php include "info.php"; ?>
</div>
<div id="main">Select tournament:
  <a href="competition.html?tournament=1">Tournament 1</a><br />
  <a href="competition.html?tournament=2">Tournament 2</a><br />
</div>

I need to automatically refresh the div where the page "info.php" loads, to get the logged-in user's points for the chosen tournament.

A: You cannot re-execute an included PHP script once the page is generated and loaded. So you have two solutions:

Use AJAX to request info.php from the server and then insert the received content into the DIV using JavaScript:

document.getElementById("info").innerHTML = ReceivedContent;

Use an IFRAME instead of a DIV and reload it when needed.

HTML:

<IFRAME id="info" src="info.php"></IFRAME>

JavaScript:

document.getElementById("info").contentWindow.location.reload();
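The answer's first option is only sketched; a minimal XMLHttpRequest version might look like the following. The "tournament" query parameter mirrors the question's links; everything else here is my assumption:

// Fetch info.php for a given tournament and drop the returned
// HTML into the #info div.
function refreshInfo(tournamentId) {
    var xhr = new XMLHttpRequest();
    xhr.open("GET", "info.php?tournament=" + encodeURIComponent(tournamentId), true);
    xhr.onload = function () {
        if (xhr.status === 200) {
            document.getElementById("info").innerHTML = xhr.responseText;
        }
    };
    xhr.send();
}

Each tournament link could then call it, e.g. onclick="refreshInfo(1); return false;".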
{ "pile_set_name": "StackExchange" }
Q: Integration of Torque for a Circular Current Loop in Magnetic Field

I am trying to derive the formula for the torque on a circular current loop inside a magnetic field. I know the formula is:

$\tau = IAB\sin{\theta}$

where $I$ is the current, $B$ is the magnetic field and $A$ is the area.

My attempt so far:

$d\vec{F} = I\,d\vec{s}\times \vec{B} = IB\,ds\cdot\sin{\alpha}$

Now, if the formula for torque is $\tau=bF\sin{\theta}$, and $b = r\sin{\alpha}$, then

$d\tau = r\sin{\alpha}\cdot IB\sin{\theta}\,ds\cdot \sin{\alpha} = rIB\sin{\theta}\cdot\sin^2{\alpha}\,ds$

Ultimately, if I take the integral of this last equation, I cannot quite understand how to integrate $\sin^2{\alpha}\,ds$. I guess that my underlying misunderstanding lies here: I can tell what the integral of $d\vec{s}\times \vec{B}$ will be, since I know the diameter of the circle. However, I think there is no way to express $\sin{\alpha}$ in terms of $ds$. Am I getting this wrong? Thank you

A: You didn't use vector notation, so it seems to be quite terrible. Also, you've used $M$ for torque (it should be $\tau$) rather than for magnetic moment (which are the generally accepted symbols).

Proof: A circular loop lies in the $x$-$y$ plane with radius $r$ and center at the origin $O$. It carries a constant current in the anti-clockwise direction. There is a uniform magnetic field $\vec B$ directed along the positive $x$-axis. Consider an element $d\vec s$ on the ring at an angle $\theta$, subtending an angle $d\theta$ at the origin. The torque on this element is given by

$$\begin{align}d\tau&=\vec r\times d\vec F=\vec r\times(I\,d\vec s\times\vec B)\\ &=I(r\cos\theta\ \hat i+r\sin\theta\ \hat j)\times\bigg((-r\,d\theta\sin\theta\ \hat i+r\,d\theta\cos\theta\ \hat j)\times(B_0\ \hat i)\bigg)\\ \tau&=I\bigg(\int_0^{2\pi}B_0r^2\cos^2\theta\ d\theta\ (\hat j)-\int_0^{2\pi}B_0r^2\sin\theta\cos\theta\ d\theta\ (\hat i)\bigg)\\ &=I(\pi r^2)B_0\ \hat j=(I\pi r^2\ \hat k)\times(B_0\ \hat i)\\ &=\vec M\times\vec B \end{align}$$

Note: I've skipped the calculation part. Also, you can take $\vec B=B_x\ \hat i+B_y\ \hat j +B_z\ \hat k$; I've taken only the $x$-component for simplicity. The result will remain the same. The same goes for the shape of the conductor: it doesn't matter whether it is a square or a circle.
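For completeness, the two angular integrals that the answer skips are standard results; filling them in (my addition, not part of the original answer):

$$\int_0^{2\pi}\cos^2\theta\,d\theta=\int_0^{2\pi}\frac{1+\cos 2\theta}{2}\,d\theta=\pi,\qquad\int_0^{2\pi}\sin\theta\cos\theta\,d\theta=\frac{1}{2}\int_0^{2\pi}\sin 2\theta\,d\theta=0$$

so only the $\hat j$ term survives, which is how the line $\tau=I(\pi r^2)B_0\ \hat j$ is obtained.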
{ "pile_set_name": "StackExchange" }
Q: Compiling and linking in AngularJS

Even after reading http://docs.angularjs.org/guide/compiler I don't understand the compilation process that AngularJS uses. I don't understand the difference between the two of them, nor when they are invoked. Could someone try to explain it somehow a little easier?

A: Maybe it will help to identify how they are related:

compile(): gives you access to the element before it has been inserted into the DOM (a performance advantage). It returns a link() function. If you specify your own compile function, you can alternatively return an object with pre and post properties and define your pre-link and post-link functions, as described below.

link(): There are actually two "linking" functions, pre and post.

pre is called after your element has scope, but before actually linking to the document and calling post; angular will recursively compile any child elements. When your element's post link function has been called, all child directives have been compiled and linked. So you'd use pre if you needed to make any changes to the scope before children are compiled and inserted.

post is the function returned from compile() by default and occurs after the element has been inserted into the DOM and the proper scope object has been initialized. At this point, any child directives have been compiled and inserted into your element.

Other notes:

When you specify a link() parameter in the directive options:

return {
    replace: ...,
    scope: ...,
    link: function(){...},
};

that function is returned by compile() (unless you also specify a compile() function, in which case the link property is ignored), so if you use the link option, you are saying, "I only need to do stuff after the element has been added to the document and the scope is ready to use." The same is true when you return a function instead of an options object in your directive; that function is your link() function.

When angular is connecting your directive to your document (when it sees your directive attribute or element in your HTML), it will call the compile() function for the instance it is creating. compile() returns a link() function or an object with pre and post link functions.

So say there is a portion of my HTML like so:

<div my-directive class='first'>
    <div my-directive class='second'>
    </div>
</div>

Let's assume you are assigning your own compile function:

angular.module('myapp').directive('myDirective', function(){
    return {
        replace: true,
        compile: function(){
            console.log("i'm compile");
            return {
                pre: function(){
                    console.log("i'm pre-link");
                },
                post: function(){
                    console.log("i'm post-link");
                }
            };
        }
    };
});

When your app runs, you'll see (I added comments for clarity):

"i'm compile"   // class = first
"i'm pre-link"  // class = first
"i'm compile"   // class = second
"i'm pre-link"  // class = second
"i'm post-link" // class = second
"i'm post-link" // class = first
{ "pile_set_name": "StackExchange" }
Q: Is it possible to replace nested loops on different collections with Stream API

I was wondering if it's possible to rewrite nested for loops using java.util.stream in Java 8. Here is a sample data type I'm working with:

class Folder {
    private final String name;
    private final Integer itemCount;

    Folder(String name, Integer itemCount) {
        this.name = name;
        this.itemCount = itemCount;
    }

    public String getName() {
        return this.name;
    }

    public Integer getItemCount() {
        return this.itemCount;
    }
}

Here's the code in action:

List<Folder> oldFolders = new ArrayList<>();
List<Folder> newFolders = new ArrayList<>();

// Fill folder collections with sample data...
oldFolders.add(new Folder("folder1", 2));
oldFolders.add(new Folder("folder2", 4));
newFolders.add(new Folder("folder1", 0));
newFolders.add(new Folder("folder2", 100));

// This part should be rewritten using streams
for (Folder newFolder : newFolders) {
    for (Folder oldFolder : oldFolders) {
        if (newFolder.getName().equals(oldFolder.getName())
                && !newFolder.getItemCount().equals(oldFolder.getItemCount())) {
            // do stuff...
        }
    }
}

P.S.: I've seen other questions on SO, but all of them had one collection, or a collection with its own nested collection, instead of two different collections like in my example. Thanks in advance!

A: That's not much of an improvement, to be fair, unless you can parallelize the first iteration (commented out in this example):

List<String> oldList = new ArrayList<>();
List<String> newList = new ArrayList<>();

oldList
    //.stream()
    //.parallel()
    .forEach(s1 -> newList
        .stream()
        .filter(s2 -> s1.equals(s2)) // could become a parameter Predicate
        .forEach(System.out::println) // could become a parameter Consumer
    );

This replaces the if with a filter and its Predicate, then executes a method on the matches. It would give a solution that can be made dynamic by providing different Predicate and Consumer arguments to the filter and forEach methods. That would be the only reason to work on this conversion.
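Applied to the question's Folder types, the same pairing can be written with flatMap. This is a sketch of mine rather than the answerer's code, and, as the answer suggests, not necessarily clearer than the plain loops:

// The nested loop expressed with flatMap: pair every new folder with
// every old folder, keep only the name matches with differing counts,
// then "do stuff" with each match.
newFolders.stream()
    .flatMap(newFolder -> oldFolders.stream()
        .filter(oldFolder -> newFolder.getName().equals(oldFolder.getName())
            && !newFolder.getItemCount().equals(oldFolder.getItemCount()))
        .map(oldFolder -> newFolder)) // keep the new folder for each match
    .forEach(folder -> System.out.println(folder.getName())); // do stuff...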
{ "pile_set_name": "StackExchange" }
Q: Show Node fields on Drupal Form

I'm working on a form where I have added a Company field as an entity reference of the autocomplete type. Is there any way to show other attributes of the Company content type, like phone number and email, in the form after selecting the company name?

A: If your question pertains to Drupal 7 then you may want to take a look at the Entity Reference Autofill module:

The Entity reference autofill module gives Entity reference fields an option to populate other form fields with data from selected referenced entities.
{ "pile_set_name": "StackExchange" }
Q: Ordering columns in datatable

So, I have this function that takes in a datatable and orders the users by two columns (Rank and OrderCount):

Function DetermineBestUser(ByVal usertable As DataTable)
    Dim bestchoice As DataRow()
    For u = 0 To usertable.Rows.Count - 1
        If Not DoesProcessorNeedOrders(usertable.Rows(u).Item("UserName"), usertable.Rows(u).Item("Amount")) Then
            usertable.Rows(u).Delete()
        End If
    Next
    bestchoice = usertable.Select("", "Rank ASC, OrderCount DESC")
    If IsDBNull(usertable) Then
        Console.WriteLine("No user is qualified for this order at this moment")
    End If
    Return bestchoice(0)(0).ToString
End Function

The problem is that sometimes this function works correctly and gives me the user with the HIGHEST Rank (1 or 2) and the LOWEST OrderCount (0 - 30+). However, sometimes it does not return the correct person. The only thing I've seen that fixes this is changing "OrderCount DESC" to "OrderCount ASC"; however, that change only works for that specific order, and then it's back to returning the wrong person.

I have some test runs that show this in more detail. R1 & R2 = Rank 1 or 2, followed by "OrderCount":

Rank ASC, OrderCount ASC
#1: dane-R2 / 12, jerm-R1 / 15, tulsa-R1 / 5 --- picks jerm (should pick tulsa)
#2: Dane-R2 / 14, Jerm-R2 / 15, Kate-R2 / 15 --- picks Dane
#3: Dane-R2 / 15, Jerm-R2 / 5, Kate-R2 / 5 --- picks Dane (should pick Jerm or Kate)

Rank ASC, OrderCount DESC
#1: dane-R2 / 12, jerm-R1 / 15, tulsa-R1 / 5 --- picks tulsa
#2: Dane-R2 / 14, Jerm-R2 / 15, Kate-R2 / 15 --- picks Jerm (should pick Dane)
#3: Dane-R2 / 15, Jerm-R2 / 5, Kate-R2 / 5 --- picks Jerm

A: It looks like it's treating your OrderCount as a string value instead of a numeric value, and therefore sorting it lexically instead of numerically. What type is the column? You'll have the same problem with your Rank column if you get enough ranks to reach two digits.

Also, for example #2 from your second block, you have this:

#2: Dane-R2 / 14, Jerm-R2 / 15, Kate-R2 / 15 --- picks Jerm (should pick Dane)

In this case, Jerm is a correct pick. The Ranks all match, and so it falls to the OrderCount column, from which it should pick one of the 15's if you're sorting descending.
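If the columns really are strings, typing them numerically fixes the lexical sort. A sketch of mine follows; only the column names come from the question, and the rest of the table definition is assumed:

' Define Rank and OrderCount as Integer so DataTable.Select sorts
' them numerically ("15" no longer sorts before "5").
Dim usertable As New DataTable()
usertable.Columns.Add("UserName", GetType(String))
usertable.Columns.Add("Amount", GetType(Decimal))
usertable.Columns.Add("Rank", GetType(Integer))
usertable.Columns.Add("OrderCount", GetType(Integer))

' Ascending on OrderCount then favours the LOWEST count, matching the
' question's stated intent.
Dim bestchoice As DataRow() = usertable.Select("", "Rank ASC, OrderCount ASC")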
{ "pile_set_name": "StackExchange" }
Q: How to remove hardcoded widths and offsets from vertical drop-down menu After following a couple of tutorials I have managed to build up a CSS-only vertical drop-down menu. However, the widths and absolute offsets are hardcoded and I cannot get them to adjust automatically according to their contents. I wish to avoid hardcoding these because I wish to integrate it into a CMS where I don't know the actual lengths of the menu items. I have created a JSFiddle here showing the menu working: http://jsfiddle.net/nhfHw/2/ The top level items are currently hardcoded to a width of 100px (I wish to make this adjust according to the longest item at that level.) When I tried to remove that, it just expanded the next sub-level all over the screen. #navigation { font-family: Helvetica, Arial, sans-serif; font-size: 14px; color: #707070; line-height: 20px; width: 100px; /* I wish to remove this */ margin-top: 30px; } The x offset of the sub-levels is also hard-coded. I wish them to just adjust according to their parent's width. Their width is also hardcoded to 200px. li:hover .sub-level { background: #D0D0D0; display: block; position: absolute; left: 100px; /* I wish to remove this */ top: 0px; } li:hover .sub-level .sub-level { left: 210px; /* I wish to remove this */ top: 0px; } ul.sub-level li { border: none; float:left; width: 200px; /* I wish to remove this */ } I wish to avoid Javascript if possible. A: A combination of display: inline-block and white-space: nowrap I believe gets you what you're looking for: http://jsfiddle.net/nhfHw/14/ #nav { display: inline-block; font-family: Helvetica, Arial, sans-serif; font-size: 14px; color: #707070; line-height: 20px; margin-top: 30px; } #nav ul li { padding: 1px 5px; list-style: none; white-space: nowrap; display: block; position: relative; } #nav ul li:hover { background: #E0E0E0; } #nav ul li a { text-decoration: none; color: #707070; display: block; } #nav ul ul { display: none; border-left: 1px solid #ccc; position: absolute; left: 100%; top: 0px; } #nav ul li:hover > ul { display: block; }
{ "pile_set_name": "StackExchange" }
Q: Form refresh issue windows form ASP.net I am trying to refresh a form after every 30 minutes if the datagrid is empty. My code is as below: private void Form1_Load(object sender, EventArgs e) { BindDataGrid(); if (dataGrid_FileList.RowCount <=0) { Timer refreshTimer = new Timer(); refreshTimer.Interval = 30000; //30 seconds in milliseconds refreshTimer.Tick += new EventHandler(refreshTimer_Tick); refreshTimer.Start(); } } void refreshTimer_Tick(object sender, EventArgs e) { this.Controls.Clear(); this.InitializeComponent(); BindDataGrid(); if (dataGrid_FileList.RowCount>0) { InhouseDownloadeer_Shown(this, null); } } This code works well when RowCount of datagrid is <=0 but it's continuing even after the datagrid contains rows > 0. How can I prevent refreshTimer_Tick if the datagrid contains rows? A: You can also stop a Timer. Declare the timer variable outside the method. Timer refreshTimer = new Timer(); private void Form1_Load(object sender, EventArgs e) { refreshTimer.Interval = 30000; refreshTimer.Tick += new EventHandler(refreshTimer_Tick); } Now simply call refreshTimer.Stop(); when needed in another method. PS asp.net and winforms are not the same thing.
{ "pile_set_name": "StackExchange" }
Q: selecting parent's second sibling

My markup is something like this. I need to grab the div's content and my current context is the img tag.

<td>
  <IMG style="CURSOR: hand" border=0 alt=Selected align=absMiddle src="/_layouts/images/rbsel.gif">
</td>
<td> </td>
<td>
  <div> Some text </div>
</td>

I am trying to select the parent td of the IMG and then select the second sibling of that parent td in order to get to the div, but it doesn't seem to work. Please help.

$("IMG[src*='/_layouts/images/rbsel.gif'].parent('td').nextAll().eq(1)").hide();

A: You'll get more success with this:

$("IMG[src*='/_layouts/images/rbsel.gif']").parent('td').nextAll().eq(1).hide();

rather than

$("IMG[src*='/_layouts/images/rbsel.gif'].parent('td').nextAll().eq(1)").hide();

The traversal methods (.parent(), .nextAll(), .eq()) are jQuery function calls, not CSS selector syntax, so they belong outside the quoted selector string, chained onto the jQuery object.
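Since the stated goal is to grab the div's content rather than hide it, the same traversal can end in .find() and .text(). A sketch assuming the markup shown in the question:

// Same chain, but reading the inner div's text instead of hiding it.
var content = $("IMG[src*='/_layouts/images/rbsel.gif']")
    .parent('td')  // the <td> containing the image
    .nextAll()     // the following sibling cells
    .eq(1)         // the second sibling (the third <td>)
    .find('div')   // the inner <div>
    .text();       // "Some text"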
{ "pile_set_name": "StackExchange" }