Q: Convex Optimization: Gradient of $\log \det (X)$

In Boyd's convex optimization book there is a step-by-step analysis of the gradient of the so-called log det function. Three confusions:

Is the determinant of a positive definite matrix exactly the product of its eigenvalues, in the same way that the trace is the sum of the eigenvalues?

There is the claim that because $\Delta X$ is small (what does "small" mean here?), the $\lambda_i$ are small. Is there any justification for this claim? After all, we are computing the eigenvalues of $X^{-1/2}\Delta X X^{-1/2}$, not simply of $\Delta X$.

For the first-order approximation $\log(1+\lambda_i) \approx \lambda_i$, I am assuming this is a first-order Maclaurin series?

Thanks!

A: If $A \in S_{++}$ then $\det A$ is the product of the eigenvalues and $\log \det A$ is the sum of their logarithms.

If $\Delta X$ is small then so is $X^{-1/2} \Delta X X^{-1/2}$. This is just a multiplication by two fixed matrices, after all.

You got the Maclaurin series slightly wrong: $\log (1+x) = x - \frac{x^2}{2} + \dots$ for $|x| < 1$. After that correction the argument is OK.
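To see the approximation concretely, here is a small numerical check (not from the book; a NumPy sketch with a randomly generated SPD matrix and an assumed perturbation scale). The first-order claim is $\log\det(X+\Delta X) \approx \log\det X + \operatorname{tr}(X^{-1}\Delta X)$, i.e. the gradient of $\log\det$ at $X$ is $X^{-1}$ (for symmetric $X$):

```python
import numpy as np

rng = np.random.default_rng(0)

# Random symmetric positive definite X and a small symmetric perturbation dX.
A = rng.standard_normal((4, 4))
X = A @ A.T + 4 * np.eye(4)      # SPD by construction: eigenvalues >= 4
B = rng.standard_normal((4, 4))
dX = 1e-5 * (B + B.T) / 2        # small and symmetric

# First-order claim: log det(X + dX) - log det(X) ~= tr(X^{-1} dX)
exact = np.linalg.slogdet(X + dX)[1] - np.linalg.slogdet(X)[1]
approx = np.trace(np.linalg.solve(X, dX))

print(abs(exact - approx))  # second-order small, on the order of ||dX||^2
```

The leftover error shrinks quadratically as dX shrinks, which is exactly what dropping the $-\lambda_i^2/2$ term of the Maclaurin series predicts.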
{ "pile_set_name": "StackExchange" }
Q: Replace image when mouse hovers

I have this setup here for when my mouse hovers over another photo. For some reason the bootply http://www.bootply.com/HoZBYvFSpx isn't working with it, but here is the code there anyway. When I run it, it runs very slowly, and when I scroll past it, it slows the whole browser down. Is there another way to do this without bogging down my browser?

A: The problem is that your pictures are 4752x3168px, which is too big; please reduce them to at most 1600px. Second, if you want to speed up your page further, use CSS to change the picture. Here is a working fiddle: http://jsfiddle.net/DRRnu/

Also, if you want, you can make a jQuery preloader for the images like this: http://jsfiddle.net/DHMQM/29/

    <div id="container">
        <ul id="gallery" class="clearfix">
            <li><p><a href="#"><img src="http://nettuts.s3.amazonaws.com/860_preloaderPlugin/images/1.jpg" /></a></p></li>
            <li><p><a href="#"><img src="http://nettuts.s3.amazonaws.com/860_preloaderPlugin/images/2.jpg" /></a></p></li>
            <li><p><a href="#"><img src="http://nettuts.s3.amazonaws.com/860_preloaderPlugin/images/3.jpg" /></a></p></li>
            <li><p><a href="#"><img src="http://nettuts.s3.amazonaws.com/860_preloaderPlugin/images/4.jpg" /></a></p></li>
        </ul>
    </div>
Q: How to integrate SQLAlchemy and a subclassed Numpy.ndarray smoothly and in a pythonic way?

I would like to store NumPy arrays with annotations (like a name) via SQLAlchemy in a relational database. To do so, I separate the NumPy array from its data via a data transfer object (DTONumpy, as part of MyNumpy). NumPy objects are collected with Container. What would be a nice and pythonic way to modify Container (from the example below) so that it directly provides a list of MyNumpy objects instead of the DTONumpy objects provided by SQLAlchemy?

Here is an illustration of the problem:

    import numpy as np
    import zlib
    import sqlalchemy as sa
    from sqlalchemy.orm import relationship, scoped_session, sessionmaker
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy.types import TypeDecorator, CHAR

    DBSession = scoped_session(sessionmaker())
    Base = declarative_base()

    #### New SQLAlchemy-Type #####################
    class NumpyType(sa.types.TypeDecorator):
        impl = sa.types.LargeBinary

        def process_bind_param(self, value, dialect):
            return zlib.compress(value.dumps(), 9)

        def process_result_value(self, value, dialect):
            return np.loads(zlib.decompress(value))
    ##############################################

    class DTONumpy(Base):
        __tablename__ = 'dtos_numpy'
        id = sa.Column(sa.Integer, primary_key=True)
        amount = sa.Column('amount', NumpyType)
        name = sa.Column('name', sa.String, default='')
        container_id = sa.Column(sa.ForeignKey('containers.id'))
        container_object = relationship(
            "Container",
            uselist=False,
            backref='dto_numpy_objects'
        )

        def __init__(self, amount, name=None):
            self.amount = np.array(amount)
            self.name = name

    class Container(Base):
        __tablename__ = 'containers'
        id = sa.Column(sa.Integer, primary_key=True)
        name = sa.Column(sa.String, unique=True)
        # HERE: how to access DTONumpy BUT as MyNumpy objects in a way that
        # MyNumpy is smoothly integrated into SQLAlchemy?
    class MyNumpy(np.ndarray):
        _DTO = DTONumpy

        def __new__(cls, amount, name=''):
            dto = cls._DTO(amount=amount, name=name)
            return cls.newByDTO(dto)

        @classmethod
        def newByDTO(cls, dto):
            obj = np.array(dto.amount).view(cls)
            obj.setflags(write=False)  # Immutable
            obj._dto = dto
            return obj

        @property
        def name(self):
            return self._dto.name

    if __name__ == '__main__':
        engine = sa.create_engine('sqlite:///:memory:', echo=True)
        DBSession.configure(bind=engine)
        Base.metadata.create_all(engine)
        session = DBSession()

        mn1 = MyNumpy([1, 2, 3], "good data")
        mn2 = MyNumpy([2, 3, 4], "bad data")

        # Save MyNumpy objects
        c1 = Container()
        c1.name = "Test-Container"
        c1.dto_numpy_objects += [mn1._dto, mn2._dto]  # not a good ui
        session.add(c1)
        session.commit()

        # Load MyNumpy objects
        c2 = session.query(Container).filter_by(name="Test-Container").first()
        # Ugly UI:
        mn3 = MyNumpy.newByDTO(c2.dto_numpy_objects[0])
        mn4 = MyNumpy.newByDTO(c2.dto_numpy_objects[1])
        name3 = mn3._dto.name
        name4 = mn4._dto.name

Container should now provide a list of MyNumpy objects, and MyNumpy a reference to the corresponding Container object (the list and the reference would have to take the SQLAlchemy mapping into account):

    type(c2.my_numpy_objects[0]) == MyNumpy
    >>> True
    c2.my_numpy_objects.append(MyNumpy([7, 2, 5, 6], "new data"))
    print c2.dto_numpy_objects[-1].name
    >>> "new data"

A: Using the ListView answer from that question, I came up with the following solution:

First, modify Container by adding a ListView property on top of the SQLAlchemy property dto_numpy_objects:

    def __init__(self, name):
        self.name = name
        """
        At this point, the following code doesn't work:
        ---------------------
        self.my_numpies = ListView(
            self.dto_numpy_objects,  # see `DTO_Numpy.container_object`
            MyNumpy.newByDTO,
            MyNumpy.getDTO)
        ---------------------
        SQLAlchemy seems to change the `dto_numpy_objects` object after the
        init-call. Thus, `my_numpies._data` doesn't reference
        `dto_numpy_objects` anymore.
        One solution is to implement a property that initializes `ListView`
        on first access. See below, property `Container.my_numpies`.
        """

    @property
    def my_numpies(self):
        if not hasattr(self, '_my_numpies'):
            # The following part can not be executed in __init__ (see above)
            self._my_numpies = ListView(
                self.dto_numpy_objects,  # see `DTO_Numpy.container_object`
                MyNumpy.newByDTO,
                MyNumpy.getDTO)
        return self._my_numpies

Second, add a method getDTO which can be used as a new2raw converter for MyNumpy:

    def getDTO(self):
        return self._dto

In order to use the backref container_object also from MyNumpy, implement it as a wrapper by adding the following method:

    def __getattr__(self, attr):
        return getattr(self._dto, attr)

All together, the code looks like this:

    import numpy as np
    import zlib
    import sqlalchemy as sa
    from sqlalchemy.orm import relationship, scoped_session, sessionmaker
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy.types import TypeDecorator, CHAR

    DBSession = scoped_session(sessionmaker())
    Base = declarative_base()

    class ListView(list):
        def __init__(self, raw_list, raw2new, new2raw):
            self._data = raw_list
            self.converters = {'raw2new': raw2new,
                               'new2raw': new2raw}

        def __repr__(self):
            repr_list = [self.converters['raw2new'](item)
                         for item in self._data]
            repr_str = "["
            for element in repr_list:
                repr_str += element.__repr__() + ",\n "
            repr_str = repr_str[:-3] + "]"
            return repr_str

        def append(self, item):
            self._data.append(self.converters['new2raw'](item))

        def pop(self, index):
            self._data.pop(index)

        def __getitem__(self, index):
            return self.converters['raw2new'](self._data[index])

        def __setitem__(self, key, value):
            self._data.__setitem__(key, self.converters['new2raw'](value))

        def __delitem__(self, key):
            return self._data.__delitem__(key)

        def __getslice__(self, i, j):
            return ListView(self._data.__getslice__(i, j),
                            **self.converters)

        def __contains__(self, item):
            return self._data.__contains__(self.converters['new2raw'](item))

        def __add__(self, other_list_view):
            assert self.converters == other_list_view.converters
            return ListView(
                self._data + other_list_view._data,
                **self.converters)

        def __len__(self):
            return len(self._data)

        def __iter__(self):
            return iter([self.converters['raw2new'](item)
                         for item in self._data])

        def __eq__(self, other):
            return self._data == other._data

    #### New SQLAlchemy-Type #####################
    class NumpyType(sa.types.TypeDecorator):
        impl = sa.types.LargeBinary

        def process_bind_param(self, value, dialect):
            return zlib.compress(value.dumps(), 9)

        def process_result_value(self, value, dialect):
            return np.loads(zlib.decompress(value))
    ##############################################

    class DTONumpy(Base):
        __tablename__ = 'dtos_numpy'
        id = sa.Column(sa.Integer, primary_key=True)
        amount = sa.Column('amount', NumpyType)
        name = sa.Column('name', sa.String, default='')
        container_id = sa.Column(sa.ForeignKey('containers.id'))
        container_object = relationship(
            "Container",
            uselist=False,
            backref='dto_numpy_objects'
        )

        def __init__(self, amount, name=None):
            self.amount = np.array(amount)
            self.name = name

        def reprInitParams(self):
            return "(%r, %r)" % (self.amount, self.name)

        def __repr__(self):
            return "%s%s" % (self.__class__.__name__, self.reprInitParams())

    class Container(Base):
        __tablename__ = 'containers'
        id = sa.Column(sa.Integer, primary_key=True)
        name = sa.Column(sa.String, unique=True)

        def __init__(self, name):
            self.name = name
            super(Container, self).__init__()

        @property
        def my_numpies(self):
            if not hasattr(self, '_my_numpies'):
                # The following part can not be executed in __init__ (see above)
                self._my_numpies = ListView(
                    self.dto_numpy_objects,  # see `DTO_Numpy.container_object`
                    MyNumpy.newByDTO,
                    MyNumpy.getDTO)
            return self._my_numpies

    class MyNumpy(np.ndarray):
        _DTO = DTONumpy

        def __new__(cls, amount, name=''):
            dto = cls._DTO(amount=amount, name=name)
            return cls.newByDTO(dto)

        @classmethod
        def newByDTO(cls, dto):
            obj = np.array(dto.amount).view(cls)
            obj.setflags(write=False)  # Immutable
            obj._dto = dto
            return obj

        @property
        def name(self):
            return self._dto.name
        def getDTO(self):
            return self._dto

        def __getattr__(self, attr):
            return getattr(self._dto, attr)

        def __repr__(self):
            return "%s%s" % (self.__class__.__name__, self._dto.reprInitParams())

    if __name__ == '__main__':
        engine = sa.create_engine('sqlite:///:memory:', echo=True)
        DBSession.configure(bind=engine)
        Base.metadata.create_all(engine)
        session = DBSession()

        mn1 = MyNumpy([1, 2, 3], "good data")
        mn2 = MyNumpy([2, 3, 4], "bad data")

        # Save MyNumpy-Objects
        c1 = Container("Test-Container")
        c1.my_numpies.append(mn1)
        c1.my_numpies.append(mn2)
        session.add(c1)
        session.commit()

        # Load MyNumpy-Objects
        c2 = session.query(Container).filter_by(name="Test-Container").first()
        mn3 = c1.my_numpies[0]
        mn4 = c1.my_numpies[1]

For better representation I added DTONumpy.reprInitParams, DTONumpy.__repr__ and MyNumpy.__repr__.

One thing that still doesn't work:

    c1.my_numpies += [mn1, mn2.dto]
Q: Law of Demeter can easily be bypassed?

Is it always possible to work around the Law of Demeter simply by creating more methods? Some people mention that this is not valid (http://wiki.c2.com/?LawOfDemeterIsHardToUnderstand), but it seems like it should be, since the Law of Demeter allows sending messages to any parameter. For example, this violates the Law of Demeter (assuming m1 does not return a1) but can be refactored:

    class C1 {
        m1 (a1: C2) {
            a1.m1().m2() // not allowed
        }
    }

The access to m2 can be refactored into another method:

    class C1 {
        m1 (a1: C2) {
            m2(a1.m1()) // allowed
        }
        m2 (a1: C3) {
            a1.m2()
        }
    }

A: "When your method takes parameters, your method can call methods on those parameters directly."

I'd say the way it's written leaves it open to interpretation whether the whole method's execution stack should be considered or not (to me, it's implied that it should be). If it is, then it's a violation no matter the added level of indirection (m2); if it's not, then the code doesn't violate the rule.

However, this is a guideline, and it won't help developers who don't understand the goal it serves: increasing encapsulation and limiting coupling. There are certainly a multitude of creative ways to create awful models that still comply with the rules and definitions of most design principles, but that's obviously not the point. That's like stating that private modifiers can easily be bypassed using reflection: it's true, but it's pretty hard to do by mistake.

In terms of encapsulation and coupling, there's no distinction between the code you posted and a1.m1().m2(): C1 is coupled to C2 AND to C3, which was acquired through C2. The fact that C2's clients (e.g. C1) need to dig into its internals is a good indicator that its encapsulation could be improved.

Another principle which goes hand in hand with the Law of Demeter is the Tell, Don't Ask principle, which states that objects should be told what to do rather than asked for their state. This also favors better encapsulation and therefore reduces coupling.

One important note to make, though, is that there are two types of objects: data structures and "real objects" (data & behaviors). Data structures (e.g. DTOs -- getter/setter bags) aren't subject to most design principles such as the Law of Demeter, as their role is solely to represent and share data. However, such data structures are usually found at system boundaries (e.g. application services, HTTP APIs, etc.), and having many such objects in a model's core is an object-oriented code smell.
Q: How can I get value in values of item of checkboxlist? c#

I want to match the values from a reader against the checkbox values in order to change the selected items of a CheckBoxList. But it does not work and I don't know what to do. Thanks.

    while (reader.Read())
    {
        CheckBoxList1.Items.FindByValue(reader["malzeme_id"].ToString()).Selected = true;
    }

I tried also:

    while (reader.Read())
    {
        for (int i = 0; i < CheckBoxList1.Items.Count; i++)
        {
            if (CheckBoxList1.Items[i].Value.Equals(reader["malzeme_id"].ToString()))
            {
                CheckBoxList1.Items[i].Selected = Convert.ToBoolean(reader["isSelected"]);
            }
        }
    }

A: This is the first thing I found when I googled how to programmatically select an item in the list. Assuming that the items in your CheckedListBox are strings:

    for (int i = 0; i < checkedListBox1.Items.Count; i++)
    {
        if ((string)checkedListBox1.Items[i] == value)
        {
            checkedListBox1.SetItemChecked(i, true);
        }
    }

Or:

    int index = checkedListBox1.Items.IndexOf(value);
    if (index >= 0)
    {
        checkedListBox1.SetItemChecked(index, true);
    }

This answer was found on this post, and was posted by wdavo.
Q: What exactly triggers a row enhance awakening?

The row enhance awakening says it gives a 10% damage boost when you match a row of 6 orbs of its color. Do the orbs really have to be in a row? What if I match 6 orbs of its color in an L-shape, or in a 2x3 block?

Assuming they don't have to be in a row, does it have to be exactly 6 orbs, or will any match of 6+ orbs work? What if the whole board is that color, so I clear it all in a single match, e.g. by using Raphael and then any heartbreaker? Will that trigger row enhances 0 times (since the match was more than 6 orbs), 1 time (for 1 match), or 5 times (for 5 rows)?

A: I'll address your questions point-by-point:

The orbs do indeed need to be in a row. Just having a chunk of 6 orbs together isn't enough; it needs to span the screen from the left side to the right side.

While they do need to form a row, you can match additional orbs with it as well, as long as the row itself exists within your match. It still counts as a row if, for example, you make a giant L shape consisting of the left column and bottom row. Extra orbs matched with the row will increase the damage by a slight amount as well.

Clearing the entire board as one color counts as a single row: a giant one with 30 orbs. Note that this does do a lot more damage than just a single row, about ~3.86 times more in fact. However, whenever possible you'd want to match multiple rows, since that will tend to do more damage (e.g. 2 separate rows instead of one clump of 12 orbs).

There's a good in-depth explanation here on the calculations behind how the awakenings contribute to your damage, but the general gist of it is that you'll need at least 3 of them before it becomes advantageous to match rows instead of separate groups of three.
Q: FXML set percentWidth on target node directly

Is it possible to set the percentWidth on the target node directly? I tried, but I'm getting many errors if I do so:

    <GridPane>
        <Button fx:id="node" id="node" text="fooBar" GridPane.columnIndex="1"
                GridPane.hgrow="always" GridPane.percentWidth="25"/>
    </GridPane>

that is, instead of:

    <columnConstraints>
        <ColumnConstraints percentWidth="25" />
    </columnConstraints>

A: No. There is no static setPercentWidth(Node, ...) method in GridPane (see the docs). It wouldn't really make sense to allow this anyway; it would enable you to set different percent widths on different nodes even if they were in the same column. Percent width is inherently a property of a column.
Q: Wrong consistency when making "after eight" chocolate

I am following a very simple recipe for making peppermint-stuffed chocolate, like the "After Eight" kind. I have a problem with the uniformity of the mint mass and am looking for advice.

A few spoonfuls of egg white are hand-mixed with 60 g flormelis (icing sugar). Then a few drops of peppermint extract are mixed in. That's it.

The filling tastes good but the texture is very grainy: you can taste and feel the small icing sugar grains, as if the sugar is not fully dissolved. It tastes very strongly of icing sugar, and then the aftertaste is the perfect peppermint taste.

I am looking for a method or recipe to make a smooth version of the mint filling for the chocolates.

A: Recipe requests are off-topic, but I can solve your problem nonetheless: After Eights are filled with soft fondant, which is sugar (often with glucose syrup) and water boiled to the soft-ball stage and then whipped. If you google "poured fondant" you should find enough recipes online.
Q: convert class string to class

I have the code below in my ASP.NET app. I would like to convert the converterName variable to a Class and pass it to the FillRequest<T> method. Is it possible?

    var converterName = HttpContext.Current.Items["ConverterName"] as string;
    FillRequest<Web2ImageEntity>(Request.Params);

Alternatively I could do

    var converterName = HttpContext.Current.Items["ConverterName"] as string;
    if (converterName == "Web2ImageEntity")
        FillRequest<Web2ImageEntity>(Request.Params);

but I have about 20 entity classes and I would like to find a way to keep the code as short as possible.

A: That is not possible, as the generic type needs to be specified at compile time. What you can do is change the FillRequest method to something like the code below and then use reflection to do the desired task:

    FillRequest(string[] params, Type converter)
    {
        // Create object from converter type and call the req method
    }

Or make FillRequest take an interface:

    FillRequest(string[] params, IConverter c)
    {
        // call c methods to convert
    }

Calling this would be something like:

    var type = Type.GetType(converterName);
    FillRequest(Request.Params, (IConverter)Activator.CreateInstance(type));
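The same "class name string to instance" dispatch can be sketched outside .NET as well. Below is a hypothetical Python analogue (class names and methods are made up for illustration) that uses an explicit registry dict in place of Activator.CreateInstance, which also limits which classes can be instantiated by name:

```python
# Hypothetical converter classes, standing in for the ~20 entity classes.
class Web2ImageEntity:
    def fill_request(self, params):
        return "image:" + params

class Web2PdfEntity:
    def fill_request(self, params):
        return "pdf:" + params

# Build the name -> class mapping once, instead of 20 if/else branches.
CONVERTERS = {cls.__name__: cls for cls in (Web2ImageEntity, Web2PdfEntity)}

def fill_request(converter_name, params):
    # Look the class up by its string name and instantiate it,
    # analogous to Type.GetType + Activator.CreateInstance.
    converter = CONVERTERS[converter_name]()
    return converter.fill_request(params)

print(fill_request("Web2ImageEntity", "a=1"))  # -> image:a=1
```

A registry keyed by name keeps the lookup explicit and fails loudly (KeyError) for unknown names, which is often preferable to reflecting over arbitrary type strings.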
Q: Concat two columns with Pandas

Hey, I'm trying to combine two columns in Pandas, but for some reason I'm having trouble doing so. Here is my data:

       OrderId OrderDate  UserId TotalCharges CommonId  PupId PickupDate  Month  Year
    0      262 1/11/2009      47      $ 50.67    TRQKD      2  1/12/2009      1  2009
    1      278 1/20/2009      47      $ 26.60    4HH2S      3  1/20/2009      1  2009
    2      294  2/3/2009      47      $ 38.71    3TRDC      2   2/4/2009      2  2009
    3      301  2/6/2009      47      $ 53.38    NGAZJ      2   2/9/2009      2  2009
    4      302  2/6/2009      47      $ 14.28    FFYHD      2   2/9/2009      2  2009

I want to take the columns "Month" and "Year" and make a new column in the format:

    "2009-1"
    "2009-1"
    "2009-1"
    "2009-2"
    "2009-2"

When I try this:

    df['OrderPeriod'] = df[['Year', 'Month']].apply(lambda x: '-'.join(x), axis=1)

I get this error:

    TypeError                                 Traceback (most recent call last)
    <ipython-input-24-ebbfd07772c4> in <module>()
    ----> 1 df['OrderPeriod'] = df[['Year', 'Month']].apply(lambda x: '-'.join(x), axis=1)

    /Users/robertdefilippi/miniconda2/lib/python2.7/site-packages/pandas/core/frame.pyc in apply(self, func, axis, broadcast, raw, reduce, args, **kwds)
       3970         if reduce is None:
       3971             reduce = True
    -> 3972         return self._apply_standard(f, axis, reduce=reduce)
       3973     else:
       3974         return self._apply_broadcast(f, axis)

    /Users/robertdefilippi/miniconda2/lib/python2.7/site-packages/pandas/core/frame.pyc in _apply_standard(self, func, axis, ignore_failures, reduce)
       4062         try:
       4063             for i, v in enumerate(series_gen):
    -> 4064                 results[i] = func(v)
       4065                 keys.append(v.name)
       4066         except Exception as e:

    <ipython-input-24-ebbfd07772c4> in <lambda>(x)
    ----> 1 df['OrderPeriod'] = df[['Year', 'Month']].apply(lambda x: '-'.join(x), axis=1)

    TypeError: ('sequence item 0: expected string, numpy.int32 found', u'occurred at index 0')

I'm really at a loss as to how to concatenate these two columns. Help?

A: You need to convert the values to strings in order to use join:

    df['OrderPeriod'] = df[['Year', 'Month']]\
        .apply(lambda x: '-'.join(str(value) for value in x), axis=1)
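A vectorized alternative (not from the answer above, just a common pandas idiom) avoids the row-wise apply entirely by casting each column to string first:

```python
import pandas as pd

# Minimal frame with just the two relevant columns (illustrative data).
df = pd.DataFrame({'Year': [2009, 2009, 2009], 'Month': [1, 1, 2]})

# astype(str) makes both columns strings, so `+` concatenates element-wise.
df['OrderPeriod'] = df['Year'].astype(str) + '-' + df['Month'].astype(str)

print(df['OrderPeriod'].tolist())  # -> ['2009-1', '2009-1', '2009-2']
```

Besides fixing the TypeError (no int is ever handed to str.join), this operates on whole columns at once, which is typically much faster than apply(..., axis=1) on large frames.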
Q: Redirect from CustomProductDisplayCmd to 404 page if unavailable product

My custom implementation of a ProductDisplayCmd looks like this...

    public void performExecute() throws ECException {
        super.performExecute();
        (my code here)
    }

Now, if a product is unavailable, the super throws an ECApplicationException with this message:

    com.ibm.commerce.exception.ECApplicationException: The catalog entry number "253739" and part number "9788703055992" is not valid for the current contract.

With an SEO-enabled URL, I get redirected to our custom 404 page ("Gee, sorry, that product is no longer available. Try one of our fantastic alternatives..."):

    http://bktestapp01.tm.dom/shop/sbk/bent-isager-nielsen-efterforskerne

With the old-style URL, I instead get an error page due to an untrapped exception:

    http://bktestapp01.tm.dom/webapp/wcs/stores/servlet/ProductDisplay?langId=100&storeId=10651&catalogId=10013&club=SBK&productId=253739

Since I can catch the exception, I suppose I have the option of manually redirecting to the 404 page, but is that the way to go? In particular: the exception type does not seem to tell me exactly what is wrong, so I might accidentally make a 404 out of another kind of error.

A: Here's what I ended up with: catch the exception from super, then decide if the reason it was thrown is that the product is unavailable. If so, redirect to the 404 page; otherwise re-throw the exception.

Implementation:

    public void performExecute() throws ECException {
        try {
            super.performExecute();
        } catch (final ECApplicationException e) {
            // Let's see if the problem is something that should really be causing a redirect
            makeProductHelperAndRedirectTo404IfProductNotAvailable(e);
            // If we get here, nothing else was thrown
            log.error("The reason super.performExecute threw an ECException is unknown and so we can't recover. Re-throwing it.");
            throw e;
        }
    }

...and in the makeProductblablabla method:

    private ProductDataHelper makeProductHelperAndRedirectTo404IfProductNotAvailable(final ECException cause)
            throws ECSystemException, ECApplicationException {
        final ProductDataHelper productHelper;
        try {
            log.trace("Trying to determine if the reason super.performExecute threw an ECException is that the product is unavailable in the store. The exception is attached to this logline.", cause);
            productHelper = makeProductHelper(getProductId());
            if (productHelper != null) {
                if (!productHelper.isActiveInClub()) {
                    log.trace("Decided that the reason super.performExecute threw an ECException is that the product is unavailable in the store. The exception is attached to this logline. NB! That exception is DISCARDED", cause);
                    final String pn = productHelper.getISBN();
                    final ECApplicationException systemException = new ECApplicationException(
                            ECMessage._ERR_PROD_NOT_EXISTING,
                            this.getClass().getName(),
                            "productIsPublished",
                            new Object[]{ pn });
                    systemException.setErrorTaskName("ProductDisplayErrorView");
                    throw systemException;
                }
            }
            return productHelper;
        } catch (RemoteException e) {
            log.error("I was trying to determine if the reason super.performExecute threw an ECException is that the product is unavailable in the store. The original ECException is attached to this logline. NB! That exception is DISCARDED", cause);
            throw new ECSystemException(ECMessage._ERR_GENERIC, super.getClass().getName(), "performExecute", ECMessageHelper.generateMsgParms(e.getMessage()), e);
        } catch (NamingException e) {
            log.error("I was trying to determine if the reason super.performExecute threw an ECException is that the product is unavailable in the store. The original ECException is attached to this logline. NB! That exception is DISCARDED", cause);
            throw new ECSystemException(ECMessage._ERR_GENERIC, super.getClass().getName(), "performExecute", ECMessageHelper.generateMsgParms(e.getMessage()), e);
        } catch (FinderException e) {
            log.error("I was trying to determine if the reason super.performExecute threw an ECException is that the product is unavailable in the store. The original ECException is attached to this logline. NB! That exception is DISCARDED", cause);
            throw new ECSystemException(ECMessage._ERR_GENERIC, super.getClass().getName(), "performExecute", ECMessageHelper.generateMsgParms(e.getMessage()), e);
        } catch (CreateException e) {
            log.error("I was trying to determine if the reason super.performExecute threw an ECException is that the product is unavailable in the store. The original ECException is attached to this logline. NB! That exception is DISCARDED", cause);
            throw new ECSystemException(ECMessage._ERR_GENERIC, super.getClass().getName(), "performExecute", ECMessageHelper.generateMsgParms(e.getMessage()), e);
        }
    }
Q: checkBox is checked? Looping inside checkbox elements

    int count = listView.getChildCount();
    for (int i = 0; i < count; i++) {
        View child = list.getChildAt(i);
        // check that child..
    }

I wanted to use the code above to see whether all the checkboxes were checked or not. For example, if I have 3 checkboxes I would want something equivalent to:

    if (!list.getChildAt(0) && !list.getChildAt(1) && !list.getChildAt(2)) {
        // do something with all unchecked checkboxes
    }

How do I loop through like this? For one thing, I am not sure about the number of items in my list.

A: Just modify the if statement to check the state of the checkbox:

    int count = listView.getChildCount();
    boolean allUnchecked = true;
    for (int i = 0; i < count; i++) {
        Object child = (Object) listView.getChildAt(i);
        if (child instanceof CheckBox) {
            CheckBox checkBoxChild = (CheckBox) child;
            if (checkBoxChild.isChecked()) {
                allUnchecked = false; // one is checked, sufficient to say that not all are unchecked
                break; // get out of the for loop
            }
        }
    }

allUnchecked will be true if all checkboxes are unchecked, false otherwise.

I'm not an Android developer and I can't find the docs for getChildAt, so I don't know what it returns. If it's an Object you can omit the cast. It's good to check for a null return from getChildAt too.

PS: this is not polished code; take it as pseudo-code showing the logic of deciding whether everything is unchecked. Getting the list of CheckBoxes is your task :)
Q: Erlang: Finding my IP Address

I'm attempting to complete a Load Balancer / Login Server / Game Server setup using Redis for some parts. Load balancing is one of them. In my Redis load-balancing instance I'm using ordered sets. The key is the application name; the members are the IP addresses of the game servers.

Herein lies my issue. I would like to use a public function within Erlang, but I cannot find anything that fits my needs. I'm wondering if I'm overlooking something.

    {ok, L} = inet:getif(),
    IP = element(1, hd(L)),

gives me what I'm looking for. I believe currently it's {192,168,0,14}. But the function is not "public."

    {ok, Socket} = gen_tcp:listen(?PORT_LISTEN_GAME, [{active,once}, {reuseaddr, true}]),
    {ok, {IP, _}} = inet:sockname(Socket),

gives me {0,0,0,0}. I've tried inet:getaddr("owl"), which gives me {127,0,1,1}.

Am I limited to sending messages via TCP and using inet:peername(Socket)? That seems like a lot of work for something so simple. All the different parts of my app are running on the same computer for testing. Is it going to give me back {127,0,0,1}? That wouldn't work. I need to send the IP back to the user (my mobile phone) so they can link up with the proper server. Loopback wouldn't do...

Current code

I would like to thank all the responses. Yes, I noticed Lol4t0's comment just after the New Year, so I changed my code to reflect that. Posting this for the slow people like myself; I have to wrack my brain for a bit before these things click.

    hd([Addr || {_, Opts} <- Addrs,
                {addr, Addr} <- Opts,
                {flags, Flags} <- Opts,
                lists:member(loopback, Flags) =/= true]).

A: We've been successfully using this function to get the first non-local IPv4 address:

    local_ip_v4() ->
        {ok, Addrs} = inet:getifaddrs(),
        hd([
            Addr || {_, Opts} <- Addrs,
            {addr, Addr} <- Opts,
            size(Addr) == 4,
            Addr =/= {127,0,0,1}
        ]).

It can of course be changed to return IPv6 as well if that is what you want.
Q: Error: There is already an open DataReader associated with this Command which must be closed first

I am creating a register form and I have to check whether the email is valid and whether it is already used in my database. I figured out one method, but I have errors in my code. If it shows "something wrong" and I then try a new email address, I get an error at the DataReader; and if I use a new email address directly, I get an error at ExecuteNonQuery. Any thoughts? My methods for validating the email address work fine separately. Thanks.

    public bool IsValidEmail(string emailadress)
    {
        try
        {
            MailAddress m = new MailAddress(emailadress);
            return true;
        }
        catch
        {
            return false;
        }
    }

    public bool good(string emailadress)
    {
        SqlCommand cmd = new SqlCommand("SELECT Email FROM Clients", con);
        SqlDataReader dr = cmd.ExecuteReader(); // error is here
        while (dr.Read())
        {
            if (dr[0].ToString() == textBox6.Text)
            {
                return false;
            }
        }
        return true;
    }

    private void button1_Click(object sender, EventArgs e)
    {
        if (IsValidEmail(textBox6.Text) && good(textBox6.Text))
        {
            SqlCommand cmd1 = new SqlCommand("insert into Clienti(Name, Prename, Adress, Pass, Email) values (@name, @prename, @adress, @pass, @email)", con);
            cmd1.Parameters.AddWithValue("name", textBox1.Text);
            cmd1.Parameters.AddWithValue("prename", textBox2.Text);
            cmd1.Parameters.AddWithValue("adress", textBox3.Text);
            cmd1.Parameters.AddWithValue("pass", textBox4.Text);
            cmd1.Parameters.AddWithValue("email", textBox6.Text);
            cmd1.ExecuteNonQuery(); // error happens here
            cmd1.Dispose();
            MessageBox.Show("success");
        }
        else
            MessageBox.Show("something went wrong");
    }

A: An SqlDataReader should be closed after use; otherwise it keeps the connection locked and you cannot use that connection until you close it. However, you don't need an SqlDataReader to discover whether you have already registered an email address:

    public bool good(string emailadress)
    {
        string query = @"IF EXISTS(SELECT 1 FROM Clients WHERE Email = @email)
                            SELECT 1
                         ELSE
                            SELECT 0";
        SqlCommand cmd = new SqlCommand(query, con);
        cmd.Parameters.Add("@email", SqlDbType.NVarChar).Value = emailadress;
        int result = (int)cmd.ExecuteScalar();
        return (result == 1);
    }

Keep in mind that disposable objects like the SqlCommand, the SqlDataReader and, of uttermost importance, the SqlConnection should be created at the moment you need them and disposed immediately to free valuable resources. The code above with a connection created on the spot would be:

    public bool good(string emailadress)
    {
        string query = @"IF EXISTS(SELECT 1 FROM Clients WHERE Email = @email)
                            SELECT 1
                         ELSE
                            SELECT 0";
        using (SqlConnection con = new SqlConnection(....here put your connection string...))
        using (SqlCommand cmd = new SqlCommand(query, con))
        {
            con.Open();
            cmd.Parameters.Add("@email", SqlDbType.NVarChar).Value = emailadress;
            int result = (int)cmd.ExecuteScalar();
            return (result == 1);
        }
    }
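For readers outside .NET, the same "parameterized EXISTS check" pattern can be sketched with Python's standard-library sqlite3 module. The table and column names mirror the question; the data and the in-memory database are made up for illustration:

```python
import sqlite3

# In-memory database with a Clients table, seeded with one taken address.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Clients (Email TEXT)")
con.execute("INSERT INTO Clients VALUES (?)", ("taken@example.com",))

def good(email):
    # Parameterized query, like the @email parameter in the C# version;
    # never build the SQL string by concatenating user input.
    cur = con.execute(
        "SELECT EXISTS(SELECT 1 FROM Clients WHERE Email = ?)", (email,))
    (exists,) = cur.fetchone()
    return exists == 0  # True when the address is still free

print(good("new@example.com"))    # -> True
print(good("taken@example.com"))  # -> False
```

As in the C# answer, a single scalar query replaces iterating over every row with a reader, so there is no open reader left to lock the connection.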
{ "pile_set_name": "StackExchange" }
Q: default copy constructor Can the (implicit)default copy constructor be called for a class that has already user-defined constructor but that is not the copy constructor? If it is possible then, suppose we define the copy constructor for the class explicitly, now can the (implicit)default constructor be called? A: First, let's clarify our vocabulary a bit. A default constructor is a constructor which can be called without any arguments. A copy constructor is a constructor which can be called with a single argument of the same type. Given this, a "default copy constructor" would be a constructor with a signature something like: class MyClass { public: static MyClass ourDefaultInstance; // default copy constructor... MyClass( MyClass const& other = ourDefaultInstance ); }; Somehow, I don't think that this is what you meant. I think what you're asking about is an implicitly declared or an implicitly defined copy constructor; a copy constructor whose declaration or definition is provided implicitly by the compiler. The compiler will always provide the declaration unless you provide a declaration of something that can be considered a copy constructor. Providing other constructors will not prevent the compiler from implicitly declaring a copy constructor. This is different from the default constructor—any user defined constructor will prevent the compiler from implicitly declaring a default constructor. This means that if you have a user defined copy constructor, the compiler will not implicitly declare a default constructor. The second important point is that you do not call constructors. The compiler calls them in certain well defined contexts: variable definition and type conversion, mainly. The compiler can only call constructors that are declared (including those that are implicitly declared). 
So if you have a user defined constructor (copy or otherwise), and do not define a default constructor, the compiler cannot call the constructor except in contexts where it has arguments to call it with. To summarize what I think your questions are: the compiler will provide an implicit copy constructor even if the class has other user defined constructors, provided none of those constructors can be considered copy constructors. And if you provide a user defined copy constructor, the compiler will not provide an implicitly declared default copy constructor. A: http://www.cplusplus.com/articles/y8hv0pDG/ The default copy constructor exists if you have not defined one. So yes you can call the default copy constructor, if you haven't defined a copy constructor, however if you do define a copy constructor in your class, you will not be able to call the default one. A: There is no such thing as a default copy constructor. There are default constructors and copy constructors and they are different things. The implicitly defined copy constructor (which I think is what you mean by "default copy constructor") will copy non-static members of class type using their copy constructor, not their default constructor. The implicitly defined copy constructor is used when you don't define your own copy constructor.
{ "pile_set_name": "StackExchange" }
Q: Javascript Factorialize returns incorrect result Just wondering if anyone can tell me why this returns 100 and not 120? It should calculate the factorial of the number. function factorialize(num) { for(var i = 1; i <= num; i++ ) { var fact = i+i; total = fact * fact; } return total; } factorialize(5); A: The original loop never accumulates anything: on each pass it overwrites total with (i+i) * (i+i), so the last iteration (i = 5) leaves total at 10 * 10 = 100. If you are trying to calculate the factorial, use this code: function factorialize(num) { var total = 1; // Initialize the total. 0! = 1. for(var i = 1; i <= num; i++ ) { total = total * i; // Multiply the running total by the current index. } return total; }
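For comparison only (the question itself is JavaScript), the same accumulator pattern in Python, checked against the standard library:

```python
import math

def factorialize(num):
    total = 1                    # 0! = 1, so the running product starts at 1
    for i in range(1, num + 1):
        total *= i               # multiply the running total by the index
    return total

print(factorialize(5))                        # 120
print(factorialize(5) == math.factorial(5))   # True
```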
{ "pile_set_name": "StackExchange" }
Q: Issue with SELECT from database with php <?php try{ include("dbconnectie.php"); $query = $db->prepare("SELECT * FROM shop WHERE id_u = :id"); $query->bindParam("id", $_SESSION['id_u']); $query->execute(); $result = $query->fetchALL(PDO::FETCH_ASSOC); echo "<table>"; foreach($result as &$data) { echo "<tr>"; echo "<td>" . $data["brand"] . "</td>"; echo "<td>" . $data["model"] . "</td>"; echo "<td>" . $data["cond"] . "</td>"; echo "<td>" . $data["price"] . "</td>"; echo "</tr>"; } echo "</table>"; } catch(PDOException $e) { die("Error!: " . $e->getMessage()); } ?> <html> <body> <div class="poster"> <img src="<?php echo $data['img_url']; ?>" width='400' height='300' ></img> </div> </body> </html> So in a different file, $_SESSION['id_u'] was defined as id_u, which is in the 'account' table in my database. Now in the 'shop' table I have every sell placement written down with the corresponding user id: "id_u". Now what I'm trying to do is select all the sell placements that are put under that user id, but it's not working. For some reason it just shows a big border with nothing but a broken image icon. Not even the corresponding text. A: Do a session_start(); at the top. Without it, $_SESSION['id_u'] is never populated on this page, so the query matches no rows, the table stays empty, and $data is undefined when the <img> tag is printed.
{ "pile_set_name": "StackExchange" }
Q: Personal Valgrind Anomaly First I would like to thank you in advance for any help in this matter. The Valgrind output pasted below stems from the following single line of C code. for( j=i;j<list->size-1;j++ ) s3->delete_tail( s3 ); However, if I change the line to, let's say, for( j=i;j>=0;j-- ) s3->delete_tail( s3 ); which is just a change in the parameters of the for loop, the errors in the Valgrind output below are not reported. I do not want to be naive about this and think that it has something to do with the for loop. I have tested the delete tail at various points in the program deleting various amounts of data with no errors being reported. So my hunch is that the problem lies somewhere else in my program. I have been looking for hours but can't seem to find it. I am a new programmer, so this can definitely be attributed to my lack of experience. To provide a little more context, here is the surrounding code. MatrixList* gen_parens( MatrixList *list, MatrixList *ret ) { if( list->size==1 ) { } if( list->size==2 ) { } int i=0; //for( i=0;i<list->size;i++ ) { MatrixList *s3 = (MatrixList*)malloc(sizeof(MatrixList)); MatrixList *s2 = (MatrixList*)malloc(sizeof(MatrixList)); set_list_functions( s3 ); set_list_functions( s2 ); list->clone( list, s3 ); list->clone( list, s2 ); int j=0; //for( j=i;j<list->size-1;j++ ) s3->delete_tail( s3 ); for( j=i;j>=0;j-- ) s3->delete_tail( s3 ); for( j=i;j>=0;j-- ) s2->delete_head( s2 ); s3->print( s3 ); s2->print( s2 ); s3->release( s3 ); free( s3 ); s2->release( s2 ); free( s2 ); ret=(MatrixList*)malloc(sizeof(MatrixList)); set_list_functions( ret ); //} return ( ret ); } A link to the source if you need it to help is located at http://matthewh.me/Scripts/c++/matrix_chain/ with password=guest,user=guest.
The Valgrind output with the detected errors from the first line of code in this post is: [mehoggan@desktop matrix_chain]$ valgrind --leak-check=full -v ./main ==3317== Memcheck, a memory error detector ==3317== Copyright (C) 2002-2009, and GNU GPL'd, by Julian Seward et al. ==3317== Using Valgrind-3.5.0 and LibVEX; rerun with -h for copyright info ==3317== Command: ./main ==3317== --3317-- Valgrind options: --3317-- --leak-check=full --3317-- -v --3317-- Contents of /proc/version: --3317-- Linux version 2.6.35.13-92.fc14.i686.PAE ([email protected]) (gcc version 4.5.1 20100924 (Red Hat 4.5.1-4) (GCC) ) #1 SMP Sat May 21 17:33:09 UTC 2011 --3317-- Arch and hwcaps: X86, x86-sse1-sse2 --3317-- Page sizes: currently 4096, max supported 4096 --3317-- Valgrind library directory: /usr/lib/valgrind --3317-- Reading syms from /lib/ld-2.13.so (0x799000) --3317-- Reading debug info from /usr/lib/debug/lib/ld-2.13.so.debug .. --3317-- Reading syms from /home/mehoggan/Subversion/Scripts/c++/matrix_chain/main (0x8048000) --3317-- Reading syms from /usr/lib/valgrind/memcheck-x86-linux (0x38000000) --3317-- object doesn't have a dynamic symbol table --3317-- Reading suppressions file: /usr/lib/valgrind/default.supp --3317-- REDIR: 0x7b0080 (index) redirected to 0x3803dd33 (vgPlain_x86_linux_REDIR_FOR_index) --3317-- Reading syms from /usr/lib/valgrind/vgpreload_core-x86-linux.so (0x4001000) --3317-- Reading syms from /usr/lib/valgrind/vgpreload_memcheck-x86-linux.so (0x4003000) ==3317== WARNING: new redirection conflicts with existing -- ignoring it --3317-- new: 0x007b0080 (index ) R-> 0x04006bb0 index --3317-- REDIR: 0x7b0240 (strlen) redirected to 0x4006fe0 (strlen) --3317-- Reading syms from /lib/libc-2.13.so (0x7ba000) --3317-- Reading debug info from /usr/lib/debug/lib/libc-2.13.so.debug .. 
--3317-- REDIR: 0x8306d0 (rindex) redirected to 0x40069f0 (rindex) --3317-- REDIR: 0x82c010 (malloc) redirected to 0x40066f9 (malloc) --3317-- REDIR: 0x8302b0 (strlen) redirected to 0x400146d (_vgnU_ifunc_wrapper) --3317-- REDIR: 0x837510 (__strlen_sse2_bsf) redirected to 0x4006fa0 (strlen) --3317-- REDIR: 0x82fd70 (strcpy) redirected to 0x4007020 (strcpy) ==3317== Conditional jump or move depends on uninitialised value(s) ==3317== at 0x8048EFB: gen_parens (matrix_list.c:187) ==3317== by 0x80486EC: main (main.c:36) ==3317== Deleteing Tail --3317-- REDIR: 0x82c530 (free) redirected to 0x4005a85 (free) Deleteing Tail Deleteing Tail Deleteing Tail Deleteing Tail --3317-- REDIR: 0x833000 (strchrnul) redirected to 0x4008a20 (strchrnul) A(1X3) B(3X5), C(5X7), D(7X9), E(9X2), F(2X6) Nothing to clean up!!! ==3317== ==3317== HEAP SUMMARY: ==3317== in use at exit: 0 bytes in 0 blocks ==3317== total heap usage: 58 allocs, 58 frees, 824 bytes allocated ==3317== ==3317== All heap blocks were freed -- no leaks are possible ==3317== ==3317== Use --track-origins=yes to see where uninitialised values come from ==3317== ERROR SUMMARY: 6 errors from 1 contexts (suppressed: 12 from 8) ==3317== ==3317== 6 errors in context 1 of 1: ==3317== Conditional jump or move depends on uninitialised value(s) ==3317== at 0x8048EFB: gen_parens (matrix_list.c:187) ==3317== by 0x80486EC: main (main.c:36) ==3317== --3317-- --3317-- used_suppression: 12 dl-hack3-cond-1 ==3317== ==3317== ERROR SUMMARY: 6 errors from 1 contexts (suppressed: 12 from 8) UPDATE I think I found part of the problem, going to keep testing. But I was not initializing the size of the list in main.c. 
I added the following lines to the function void set_list_functions( MatrixList *list ); inside of matrix_list.c void set_list_functions( MatrixList *list ) { list->head = NULL; list->tail = NULL; list->append = append; list->print = print; list->reverse_print = reverse_print; list->delete = delete; list->delete_head = delete_head; list->delete_tail = delete_tail; list->release = release; list->clone = clone; ***list->size = 0;*** } A: I read through a portion of your repository code and tried to keep track of what you are doing. There is a major problem here: void clone_list( MatrixList *from, MatrixList *to ) { if( from->head == NULL ) { to = NULL; } else { *to = *from; to->head = 0; to->tail = 0; to->size = 0; Node *old; for( old=from->head;old!= NULL;old=old->next ) { Matrix *m_copy = clone_matrix(old->M); to->append( to,m_copy ); } } } The first if part with to = NULL will not change your return value at all, because you only assign the pointer locally. If you need to change the pointer, then make the parameter a MatrixList **to and then do a *to = NULL. But I think it may also make sense to simply do a to->size = 0 there (and check to for being NULL beforehand). If from->head was NULL, then list->size will be undefined later, because you never initialize it, which means it could hold any possible value.
{ "pile_set_name": "StackExchange" }
Q: Does 3>&1 imply 4>&3 5>&3 etc.? I'd expect echo foo | tee /proc/self/fd/{3..6} 3>&1 to fail with errors like /proc/self/fd/4: No such file or directory etc., but to my surprise, it outputs foo foo foo foo foo It's like 3>&1 causes all following descriptors to be redirected to stdout, except it doesn't work if I change 3 to something else, like $ echo foo | tee /proc/self/fd/{3..6} 4>&1 tee: /proc/self/fd/3: No such file or directory tee: /proc/self/fd/5: No such file or directory tee: /proc/self/fd/6: No such file or directory foo foo $ echo foo | tee /proc/self/fd/{4..6} 4>&1 tee: /proc/self/fd/5: No such file or directory tee: /proc/self/fd/6: No such file or directory foo foo Is there an explanation for this behavior? A: strace shows this sequence of system calls: $ strace -o strace.log tee /proc/self/fd/{3..6} 3>&1 ... $ cat strace.log ... openat(AT_FDCWD, "/proc/self/fd/3", O_WRONLY|O_CREAT|O_TRUNC, 0666) = 4 openat(AT_FDCWD, "/proc/self/fd/4", O_WRONLY|O_CREAT|O_TRUNC, 0666) = 5 openat(AT_FDCWD, "/proc/self/fd/5", O_WRONLY|O_CREAT|O_TRUNC, 0666) = 6 openat(AT_FDCWD, "/proc/self/fd/6", O_WRONLY|O_CREAT|O_TRUNC, 0666) = 7 ... The first line opens /proc/self/fd/3 and assigns it the next available fd number, 4. /proc/self/fd/3 is a special path. Opening it has an effect similar to duping fd 3: fd 4 points to the same place as fd 3, the tty. The same thing happens for each successive openat() call. When the dust settles fds 4, 5, 6, and 7 are all duplicates of fd 3. 1 → tty 3 → tty 4 → tty 5 → tty 6 → tty 7 → tty Note that the 3>&1 redirection isn't important. What's important is that we're asking tee to open /proc/self/fd/N where N is already in use. We should get the same result if we get rid of 3>&1 and have tee start at /proc/self/fd/2 instead. Let's see: $ echo foo | tee /proc/self/fd/{2..6} foo foo foo foo foo foo Confirmed! Same result. We can also repeat the same fd number over and over. We get the same result when we hit fd 6. 
By the time it reaches the last one it has opened enough descriptors to make the jump to 6 possible. $ echo foo | tee /proc/self/fd/{2,2,2,2,6} foo foo foo foo foo foo
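The reopen-as-duplicate behaviour of /proc/self/fd/N can be reproduced outside tee too. A small Python sketch, Linux-only since it relies on procfs (the file and variable names are mine):

```python
import os
import tempfile

# Open a scratch file; the kernel assigns the lowest free descriptor number.
tmp = tempfile.NamedTemporaryFile(mode="w", delete=False)
fd = tmp.fileno()

# Opening /proc/self/fd/<N> re-opens whatever descriptor N refers to --
# the same mechanism that lets every openat() in tee's strace log reach
# the tty once enough descriptors point there.
dup_fd = os.open(f"/proc/self/fd/{fd}", os.O_WRONLY)
os.write(dup_fd, b"foo\n")
os.close(dup_fd)
tmp.close()

with open(tmp.name) as f:
    print(f.read(), end="")  # foo
os.unlink(tmp.name)
```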
{ "pile_set_name": "StackExchange" }
Q: Count distinct on elastic search How to achieve count distinct function on elastic search type using sql4es driver? Select distinct inv_number , count(1) from invoices; But it returns the total count of the particular invoice number. A: Elasticsearch doesn't support deterministic DISTINCT counts (source). It supports only approximate distinct counters like "cardinality". One way to count distincts in a deterministic way is to aggregate them using "terms" and count buckets from result.
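If you can issue native queries alongside the sql4es driver, the approximate counter the answer mentions is the cardinality aggregation of Elasticsearch's query DSL. A sketch of the request body, built in Python; the field name inv_number comes from the question and the aggregation name is arbitrary:

```python
import json

# Approximate distinct count of inv_number; "size": 0 suppresses the hits
# so only the aggregation result comes back.
body = {
    "size": 0,
    "aggs": {
        "distinct_inv_numbers": {
            "cardinality": {"field": "inv_number"}
        }
    }
}
print(json.dumps(body))
```

POST this to /invoices/_search; the estimate is returned under aggregations.distinct_inv_numbers.value, and a precision_threshold option inside "cardinality" trades memory for accuracy.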
{ "pile_set_name": "StackExchange" }
Q: Dissipative time-stepping scheme for first order in time system When solving semi-discrete equations (originating from finite element models, for example), which are second-order in time of the form \begin{equation} M\ddot d + C\dot d + Kd = F, \end{equation} where $d$ is the solution vector, $M$, $C$ and $K$ are matrices, and $F$ is a vector, one can make use of methods that damp out spurious high-frequency oscillations such as damped versions of the Newmark method, HHT-$\alpha$, etc. If one wishes to solve instead a system of the form \begin{equation} M\dot d + Kd = F, \end{equation} the obvious choice would be to use a generalized trapezoidal method. However, I am looking for a method that exhibits damping out of spurious high-frequency oscillations, as in the second-order case. I don't need a highly accurate method, but it would preferably be an explicit one. A: There is an excellent family of time integration schemes that fit your description called Generalized Single Step Single Solve (GS4). The original work on the implicit methods for first order systems can be found in [1]. 
Here is the implicit algorithm: \begin{equation} C \widetilde{\dot{d}} + K \widetilde{d} = \widetilde{F} \end{equation} where the variables have been approximated between the $n$th and $(n+1)$th timestep as \begin{align} & \widetilde{\dot{d}} = \dot{d}_n + \Lambda_6 W_1 ( \dot{d}_{n+1}- \dot{d}_n) \\ & \widetilde{d} = d_n + \Lambda_4 W_1\Delta t \dot{d}_n + \Lambda_5 W_2\Delta t( \dot{d}_{n+1}- \dot{d}_n) \\ & \widetilde{F} =(1-W_1)F_n+W_1 F_{n+1} \end{align} Now one can solve for $\Delta\dot{d} = \dot{d}_{n+1}- \dot{d}_n$ using \begin{align} ( \Lambda_6 W_1 C &+ \Lambda_5 W_2 \Delta t K) \Delta \dot{d} \\ = & -C \dot{d}_n - K ( d_{n} +\Lambda_4 W_1 \Delta t \dot{d}_n) \\ &+ (1-W_1)F_n+W_1 F_{n+1} \end{align} with the updates \begin{equation} \dot{d}_{n+1} = \dot{d}_n + \Delta \dot{d} \end{equation} \begin{equation} d_{n+1} = d_{n} + \lambda_4 \Delta t \dot{d}_{n} + \lambda_5 \Delta t \Delta \dot{d} \end{equation} where \begin{align} &\Lambda_4 W_1 = \frac{1}{1+\rho_{\infty}} \nonumber \\ &\Lambda_5 W_2 = \frac{1}{(1+\rho_{\infty})(1+\rho_{\infty}^{s})}\nonumber \\ &\Lambda_6 W_1 = \frac{3+\rho_{\infty}+\rho_{\infty}^{s} - \rho_{\infty}\rho_{\infty}^{s}}{2(1+\rho_{\infty})(1+\rho_{\infty}^{s})} \\ &W_1 = \frac{1}{1+\rho_{\infty}} \nonumber \\ &\lambda_4 = 1, \quad \lambda_5 = \frac{1}{1+\rho_{\infty}^{s}} \nonumber \end{align} Admittedly, this may seem like a whole load of parameters and nonsense, but all parameters only depend upon two chosen values $(\rho_{\infty}, \rho_{\infty}^{s})$. The beauty is once you have it programmed you can choose from a whole family of algorithms just by choosing your values for $(\rho_{\infty}, \rho_{\infty}^{s})$. These parameters correspond to the eigenvalues of the amplification matrix of the single-DOF problem. Thus, you can choose the amount of damping (numerical dissipation) simply by choosing them.
Some notable choices: $(\rho_{\infty}, \rho_{\infty}^{s}) = (1,1)$ gives the Crank-Nicolson method (no damping, not for you) and $(\rho_{\infty}, \rho_{\infty}^{s}) = (0,0)$ gives an algorithm equivalent to the highly dissipative Gear's method (aka the two-step backward difference formula). Any choice will give you a second-order, unconditionally stable algorithm. Note the restriction on your choices: $0 \leq \rho_{\infty}^{s} \leq \rho_{\infty} \leq 1$. Now if you want an explicit algorithm, some algorithms have been developed using the same approach that led to the mess above. I don't think they have been published any place highly visible yet, but early work can be found in a master's thesis here [2]. The easiest thing to do to get an explicit scheme, with the nice dissipation properties of the above algorithm, is to turn it into a predictor-corrector method. You, of course, lose the unconditional stability, but you will still have a second-order time integrator. To do so you can replace the $\widetilde{d}$ above with: \begin{align} \widetilde{d} = d_n + \Lambda_4 W_1\Delta t \dot{d}_n \end{align} and lump the $C$ matrix, then march to your heart's content. Everything else stays the same, but the restrictions on the $\rho$ parameters above are lifted (they can be anything). The stability of the algorithm and the amount of dissipation still depend upon this choice. Hover around 0 and you should be okay. [1] http://onlinelibrary.wiley.com/doi/10.1002/nme.3228/full [2] http://conservancy.umn.edu/handle/11299/162393
{ "pile_set_name": "StackExchange" }
Q: TokenMismatchException in VerifyCsrfToken.php line 46 occasionally showing I already set the token in the form: <form action="{{ route('user.store') }}" method="post"> <input type="hidden" name="_token" value="{!! csrf_token() !!}"> <legend>Agregar nuevo usuario.</legend> <div class="form-group"> <label>C&oacute;digo empresa</label> <input type="number" class="form-control input-sm" name="enterprise" id="enterprise"> </div> <div class="form-group"> <label>Nombre</label> <input type="text" class="form-control input-sm" name="name" id="name"> </div> <div class="form-group"> <label>Email</label> <input type="email" class="form-control input-sm" name="email" id="email"> </div> <div class="form-group"> <label>Usuario</label> <input type="text" class="form-control input-sm" name="username" id="username"> </div> <div class="form-group"> <label>Password</label> <input type="password" class="form-control input-sm" name="password" id="password"> </div> <div class="form-group"> <label class="checkbox-inline"> <input type="checkbox" name="create_content" id="create_content"> Crea contenido </label> <label class="checkbox-inline"> <input type="checkbox" name="active" id="active"> Activo </label> </div> <button type="submit" class="btn btn-sm btn-primary" id="btn_Crear">Create</button> </form> Occasionally I'm receiving the TokenMismatchException and I'm not able to post anymore. If I comment out the line //'App\Http\Middleware\VerifyCsrfToken', in the Kernel.php file and try to post, it works. And if I uncomment the same line again, 'App\Http\Middleware\VerifyCsrfToken', I don't receive the TokenMismatchException at first, until it stops working again. I'm not using ajax. Does anyone know why this is happening? A: We had the exact same problem and never found a good solution. We did find a workaround, though. In your .env file, set the session storage to Redis (yes, you have to install Redis on your server then). This worked for us; we never encountered the same problem again.
Note: this works for us, but it is of course not a solution, merely a workaround until someone finds the right one.
{ "pile_set_name": "StackExchange" }
Q: Is there a read only memory in the stack for const variable declared in a function? I know a global const is stored in .rodata. Also, I know variables declared in functions are stored on the stack. However, since const is supposed to be read-only, is there a special section of the stack for them? How are accesses to them controlled? A: What you really should know: If an object is declared as const, the compiler will not easily let you attempt to modify it, and if you get around the compiler, then any attempt to modify the object is undefined behaviour. That's it. Nothing else. Forget about .rodata or anything you learned, what counts is that an attempt to modify a const object is undefined behaviour. What I mean by "the compiler doesn't let you" and getting around it: const int x = 5; x = 6; // Not allowed by compiler int* p = &x; *p = 6; // Not allowed by compiler int* p = (int*)&x; *p = 6; // Allowed by compiler, undefined behaviour. Executing the last statement can crash, or change x to 6, or change x to 999, or leave x unchanged, or make it behave in a schizophrenic way where it is 5 at some times and 6 at other times, including x == x being false.
{ "pile_set_name": "StackExchange" }
Q: How to change the trigonometric identity to sec^2(x)? Evaluate the definite integral: $$\int_{\frac{\pi}{8}}^{\frac{\pi}{4}}(\csc(2\theta)-\cot(2\theta))\, d\theta$$ Finding the derivative gives me this, which is confirmed by the steps in Wolfram Alpha. This is the answer at the last step that I also got. $$-2\csc(2\theta)\cot(2\theta)+2\csc^2(2\theta)$$ But then at the top, Wolfram Alpha says the answer is this: $$\sec^2\theta$$ How did they get that? Edit: I just realized that I'm solving the question wrong; I'm supposed to find the antiderivative and not the derivative. Either way, I wanted to know how the identity was found. A: Steps are: \begin{equation} -2\csc(2\theta)\cot(2\theta) + 2\csc^{2}(2\theta) \end{equation} \begin{equation} -2 \cdot \frac{1}{2\sin(\theta)\cos(\theta)} \cdot \frac{\cos(2\theta)}{\sin(2\theta)} + 2\csc^{2}(2\theta) \end{equation} Simplify, using $\sin(2\theta) = 2\sin(\theta)\cos(\theta)$ and $\cos(2\theta) = 1-2\sin^{2}(\theta)$. \begin{equation} -2 \cdot \frac{1-2\sin^{2}(\theta)}{\sin^{2}(2\theta)} + 2\csc^{2}(2\theta) \end{equation} Then, since $\frac{4\sin^{2}(\theta)}{\sin^{2}(2\theta)} = \frac{1}{\cos^{2}(\theta)}$, \begin{equation} \frac{-2}{\sin^{2}(2\theta)} + \frac{1}{\cos^{2}(\theta)} + 2\csc^{2}(2\theta) \end{equation} \begin{equation} \frac{-2}{\sin^{2}(2\theta)} + \frac{1}{\cos^{2}(\theta)} + \frac{2}{\sin^{2}(2\theta)} \end{equation} Clearly, the $-2$ and $+2$ terms cancel, and you're left with $\frac{1}{\cos^{2}(\theta)}$, which is the same as $\sec^{2}(\theta)$.
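A quick numerical spot-check of the identity in Python (the sample angles are arbitrary):

```python
import math

def lhs(t):
    # -2 csc(2t) cot(2t) + 2 csc^2(2t)
    csc2t = 1.0 / math.sin(2 * t)
    cot2t = math.cos(2 * t) / math.sin(2 * t)
    return -2 * csc2t * cot2t + 2 * csc2t ** 2

def sec_squared(t):
    return 1.0 / math.cos(t) ** 2

print(all(math.isclose(lhs(t), sec_squared(t)) for t in (math.pi / 8, 0.5, 1.0)))  # True
```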
{ "pile_set_name": "StackExchange" }
Q: "unexpected end of file" error in CakePHP I have a CakePHP project on version 1.3.15 and I am trying to run it on XAMPP, but it throws this error as shown in the attached screenshot, and that line corresponds to the closing of my html tag. Does anyone know how I should proceed? <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" lang="<?php echo Configure::read('CurrentLanguage.locale'); ?>"> <head> <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" /> <link rel="apple-touch-icon" sizes="57x57" href="/img/logos/<?php echo $siteConfig['Configuration']['logo_path_color'] ?>/apple-touch-icon-iphone.png" /> <link rel="apple-touch-icon" sizes="72x72" href="/img/logos/<?php echo $siteConfig['Configuration']['logo_path_color'] ?>/apple-touch-icon-ipad.png" /> <link rel="apple-touch-icon" sizes="114x114" href="/img/logos/<?php echo $siteConfig['Configuration']['logo_path_color'] ?>/apple-touch-icon-iphone-retina.png" /> <title> <?php echo ( isset( $page_title ) ) ? $page_title . ' -' : ''; ?> Arquidiocese de São Sebastião do Rio de Janeiro - ArqRio </title> <meta name="revisit-after" content="3 days" /> <meta name="language" content="portuguese" /> <meta name="distribution" content="Global" /> <meta name="robots" content="index, follow" /> <meta name="rating" content="General" /> <meta name="dc.language" content="pt" /> <meta name="geo.country" content="Brasil" /> <meta name="author" content="Phocus Interact - www.phocus.com.br - (12) 3942.5384 - @phocusinteract" /> <meta name="google-site-verification" content="FBDcdDwU0aGZTYbNJu3SZ6beOFTLi7auLLnZIUmF9l4" /> <meta name="msapplication-TileColor" content="#875fb0"/> <meta name="msapplication-TileImage" content="<?php echo $this->base ?>/headerLogo.png"/> <meta name="title" content="<?php echo ( isset( $page_title ) ) ? $page_title .
' -' : ''; ?> Arquidiocese de São Sebastião do Rio de Janeiro - ArqRio" /> <?php if ( isset( $data['meta_tags'] ) ) { echo '<meta property="keywords" content="' . $data['meta_tags'] . '" />' . "\n"; } else { if($this->params['controller']=="news" && $this->params['action']=="index"){ ?> <meta property="keywords" content="Dom Orani, Arcebispo, Papa, Bento XVI, Arquidiocese, Rio de Janeiro, JMJ, Rio 2013, ArqRio, cultura, religião, internacional, São Sebastião, testemunho, fé, notícias, jornalismo" /> <? }else if($this->params['controller']=="article_interviews" && $this->params['action']=="index"){ ?> <meta property="keywords" content="artigos, formação, espiritual, colunistas, padres, arcebispo, Dom Orani, salvação, jovens, famílias, ano da fé, João Paulo II, Bento XVI" /> <? }else if($this->params['controller']=="events" && $this->params['action']=="index"){ ?> <meta property="keywords" content="eventos, festas, católico, arquidiocese, Rio de Janeiro, Niterói, comemoração, folia com cristo, São Sebastião, agenda, arcebispo, Dom Orani, bispos, missa, programação, novena, confraternização" /> <? }else if($this->params['controller']=="events" && $this->params['action']=="event_by_principal"){ ?> <meta property="keywords" content="eventos, festas, católico, arquidiocese, Rio de Janeiro, São Sebastião, agenda, arcebispo, Dom Orani, missa, programação, novena, confraternização" /> <? }else if($this->params['controller']=="pages" && $this->params['action']=="social"){ ?> <meta property="keywords" content=" JMJ, jornada mundial da juventude, jovens, sonho, união, evento, santo, papa, João Paulo II, Bento XVI, 2013, Brasil, discípulos, nações, cruz, Rio2013, redes sociais, facebook, twitter, pinterest, flickr" /> <? }else if($this->params['controller']=="pages" && $this->params['action']=="vicariatos"){ ?> <meta property="keywords" content="vicariatos, vigários, ArqRio, Arquidiocese, Rio de Janeiro, São Sebastião, Jacarepaguá, Leopoldina" /> <? 
}else if($this->params['controller']=="pages" && $this->params['action']=="quemsomos"){ ?> <meta property="keywords" content="Arquidiocese, São Sebastião, Rio de Janeiro, bispos, arcebispo, brasão das armas." /> <? }else if($this->params['controller']=="pages" && $this->params['action']=="oarcebispodomorani"){ ?> <meta property="keywords" content="arcebispo, Dom Orani, Tempesta, história, arquidiocese, bispo, Cisterciense, Rio de Janeiro, São Sebastião, portalum, continente digital do povo de Deus" /> <? }else if($this->params['controller']=="pages" && $this->params['action']=="search_results"){ ?> <meta property="keywords" content=" arcebispo, arquidiocese, bispo, capela, cristão, Dom Orani, fé, igreja, jesus, padre, paróquias, religião, rio de janeiro, são sebastião, vida, Deus, Cristo, catedral, comunhão" /> <? }else if($this->params['controller']=="churches" && $this->params['action']=="index"){ ?> <meta property="keywords" content="igrejas, capelas, bairros, rio de janeiro, históricas, copacabana, leblon, lapa, laranjeiras, aterro, glória, mangueira, manguinhos, tijuca, botafogo, flamengo, urca" /> <? }else if($this->params['controller']=="contacts" && $this->params['action']=="index"){ ?> <meta property="keywords" content="contato, fale conosco, sugestões, arquidiocese, dúvidas, arqRio" /> <? } else { echo '<meta property="keywords" content="' . $siteConfig['Configuration']['site_meta_tags'] . '" />' . "\n"; } } if ( isset( $data['meta_description'] ) ) { echo "\t" . '<meta property="description" content="' . $data['meta_description'] . '" />' . "\n"; echo "\t" . '<meta property="og:description" content="' . $data['meta_description'] . '" />' . "\n"; } else { if($this->params['controller']=="news" && $this->params['action']=="index"){ ?> <meta property="description" content="Central de últimas notícias sobre Dom Orani, Papa Bento XVI, Arquidiocese, JMJ Rio 2013, ArqRio, cultura, religião no Brasil e no mundo." 
/> <meta property="og:description" content="Central de últimas notícias sobre Dom Orani, Papa Bento XVI, Arquidiocese, JMJ Rio 2013, ArqRio, cultura, religião no Brasil e no mundo."/> <? }else if($this->params['controller']=="article_interviews" && $this->params['action']=="index"){ ?> <meta property="description" content="Central de formação e artigos direcionados à família, jovens e conteúdos espirituais. " /> <meta property="og:description" content="Central de formação e artigos direcionados à família, jovens e conteúdos espirituais. "/> <? }else if($this->params['controller']=="events" && $this->params['action']=="index"){ ?> <meta property="description" content="Agenda completa do Arcebispo Dom Orani e bispos eméritos. Além de toda programação de eventos católicos do Rio de Janeiro." /> <meta property="og:description" content="Agenda completa do Arcebispo Dom Orani e bispos eméritos. Além de toda programação de eventos católicos do Rio de Janeiro."/> <? }else if($this->params['controller']=="events" && $this->params['action']=="event_by_principal"){ ?> <meta property="description" content="eventos, festas, católico, arquidiocese, Rio de Janeiro, São Sebastião, agenda, arcebispo, Dom Orani, missa, programação, novena, confraternização" /> <meta property="og:description" content="eventos, festas, católico, arquidiocese, Rio de Janeiro, São Sebastião, agenda, arcebispo, Dom Orani, missa, programação, novena, confraternização"/> <? }else if($this->params['controller']=="pages" && $this->params['action']=="social"){ ?> <meta property="description" content="Mashup de tudo o que acontece nas redes sociais sobre a Jornada Mundial da Juventude no Rio de Janeiro em 2013." /> <meta property="og:description" content="Mashup de tudo o que acontece nas redes sociais sobre a Jornada Mundial da Juventude no Rio de Janeiro em 2013."/> <? 
}else if($this->params['controller']=="pages" && $this->params['action']=="vicariatos"){ ?> <meta property="description" content="Endereços e todas informações sobre os Vicariatos da Arquidiocese do Rio de Janeiro." /> <meta property="og:description" content="Endereços e todas informações sobre os Vicariatos da Arquidiocese do Rio de Janeiro."/> <? }else if($this->params['controller']=="pages" && $this->params['action']=="quemsomos"){ ?> <meta property="description" content="Conheça mais sobre a Arquidiocese de São Sebastião do Rio de Janeiro, sua estrutura, bispos, arcebispo e seu brasão." /> <meta property="og:description" content="Conheça mais sobre a Arquidiocese de São Sebastião do Rio de Janeiro, sua estrutura, bispos, arcebispo e seu brasão."/> <? }else if($this->params['controller']=="pages" && $this->params['action']=="oarcebispodomorani"){ ?> <meta property="description" content="Arcebispo Dom Orani João Tempesta - Que Todos Sejam Um. História e passagens sobre o Arcebispo da Arquidiocese do Rio de Janeiro." /> <meta property="og:description" content="Arcebispo Dom Orani João Tempesta - Que Todos Sejam Um. História e passagens sobre o Arcebispo da Arquidiocese do Rio de Janeiro."/> <? }else if($this->params['controller']=="pages" && $this->params['action']=="search_results"){ ?> <meta property="description" content="Busque e encontre tudo sobre conteúdo católico do Rio de Janeiro e do Brasil. O maior acervo de notícias, vídeos, conteúdo de formação e agenda católica você só encontra na ArqRio." /> <meta property="og:description" content="Busque e encontre tudo sobre conteúdo católico do Rio de Janeiro e do Brasil. O maior acervo de notícias, vídeos, conteúdo de formação e agenda católica você só encontra na ArqRio."/> <? }else if($this->params['controller']=="churches" && $this->params['action']=="index"){ ?> <meta property="description" content="Encontre qualquer igreja ou capela do Rio de Janeiro. Lista com endereços, bairros e mapa de como chegar." 
/> <meta property="og:description" content="Encontre qualquer igreja ou capela do Rio de Janeiro. Lista com endereços, bairros e mapa de como chegar."/> <? }else if($this->params['controller']=="contacts" && $this->params['action']=="index"){ ?> <meta property="description" content="Dúvidas, Sugestões, Críticas ou Solicitações? Fale com a ArqRio." /> <meta property="og:description" content="Dúvidas, Sugestões, Críticas ou Solicitações? Fale com a ArqRio."/> <? } else { echo "\t" . '<meta property="description" content="' . $siteConfig['Configuration']['site_description'] . '" />' . "\n"; echo "\t" . '<meta property="og:description" content="' . $siteConfig['Configuration']['site_description'] . '" />' . "\n"; } } ?> <?php if ( $this->params['controller'] == 'pages' && $this->params['action'] == 'index' ) { ?> <meta property="og:image " content="<?php echo $this->base ?>/headerLogo.png"/> <?php } ?> <meta property="og:title" content=" <?php echo ( isset( $page_title ) ) ? $page_title . ' -' : ''; ?> Arquidiocese de São Sebastião do Rio de Janeiro - ArqRio "/> <meta property="og:site_name" content="ArqRio - Arquidiocese de São Sebastião do Rio de Janeiro" /> <meta property="og:type" content="website" /> <meta property="og:url" content="<?php echo URL . $this->params['url']['url']; ?>" /> <meta name="p:domain_verify" content="7263cb0399209d3d50f273146b7ac27e" /> <meta property="fb:admins" content="1086424376" /> <meta http-equiv="content-language" content="<?php echo Configure::read('CurrentLanguage.locale'); ?>" /> <link rel="shortcut icon" type="image/x-icon" href="/img/logos/<?php echo $siteConfig['Configuration']['logo_path_color'] ?>/favicon.ico" /> <link rel="icon" type="image/png" href="/img/logos/<?php echo $siteConfig['Configuration']['logo_path_color'] ?>/favicon.png" /> <link href="http://fonts.googleapis.com/css?family=Titillium+Web" rel="stylesheet" type="text/css" /> <?php $version = ""; // $version = "?v=" . 
strtotime("now"); echo $html->css('main.css' . $version, 'stylesheet', array('media' => 'screen,print')); echo $html->css('tiles.css' . $version, 'stylesheet', array('media' => 'screen,print')); echo $html->css('dynamic_content.css' . $version, 'stylesheet', array('media' => 'screen,print')); if (isset($page_name)) { if( isset($page_additional_css) ) { foreach($page_additional_css as $current_css) { $current_css = is_array($current_css) ? $current_css : array('name' => $current_css, 'media' => 'screen'); echo $html->css($current_css['name'], 'stylesheet', array('media' => $current_css['media'])); } } $file = new File(CSS_URL . $page_name . '.css'); if ($file->exists()) { echo $html->css( $page_name . ".css" . $version ); } } if( isset( $this->params['url']['print'] ) ) { echo $html->css('print', 'stylesheet', array('media' => 'screen,print')); } else { echo $html->css('print', 'stylesheet', array('media' => 'print')); } ?> <script type="text/javascript"> var baseURL = '<?php echo $this->base; ?>'; var baseURLAllPath = '<?php echo URL; ?>'; var baseURLLanguage = baseURL + '/<?php echo Configure::read('CurrentLanguage.short'); ?>/'; var featureOrder = <?php echo (isset($featuresOrder["FeatureConfiguration"]["data"])) ? $featuresOrder["FeatureConfiguration"]["data"] : '""' ?>; </script> <?php echo $javascript->link('plugins/jquery-1.8.2.min'); echo $javascript->link('plugins/jquery.event.drag-2.2'); echo $javascript->link('plugins/jquery.easing.1.3'); echo $javascript->link('plugins/jquery.mousewheel'); echo "<!--[if lt IE 9]>"; echo $javascript->link('plugins/html5shiv'); echo "<![endif]-->"; echo $javascript->link('plugins/jquery.cycle.all'); echo $javascript->link('main.js' . $version); if( isset($page_additional_js) ) { foreach($page_additional_js as $current_js) { echo $javascript->link($current_js . ( strpos($current_js, "http") === false ? ".js" . $version : "" ) ); } } if (isset($page_name)) { $file = new File(JS_URL . $page_name . 
".js"); if ($file->exists()) { echo $javascript->link($page_name . ".js" . $version); } } ?> <?php echo $javascript->codeBlock($siteConfig['Configuration']['analytics_code']); ?> <?php // FOURSQUARE ?> <script type="text/javascript">(function(){window.___fourSq={'uid':'15351651'};var s=document.createElement('script');s.type='text/javascript';s.src='http://platform.foursquare.com/js/widgets.js';s.async=true;var ph=document.getElementsByTagName('script')[0];ph.parentNode.insertBefore(s,ph);})();</script> </head> <body <?php if( isset($page_category) ) echo "class=\"$page_category\""; ?>> <?php echo $this->element('social_codes'); ?> <?php echo $this->element('header'); ?> <section class="content"> <?php echo $content_for_layout; ?> </section> <?php echo $this->element('footer'); ?> </body> </html> A: Normalized after changing the PHP opening tags from '
Q: What does a number in a module name mean in IntelliJ IDEA project view? I believe that is a "static web" module. Please notice the number "6" on the screenshot. I don't know when it appeared or what it means. A: It's a bookmark. To show and edit bookmarks, use the following menu: Navigate > Bookmarks > Show Bookmarks
Q: How to script out the user-defined table types? I can get the name and definition of all table types using either of the following scripts:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

IF OBJECT_ID('TEMPDB..#RADHE') IS NOT NULL
    DROP TABLE #RADHE;

CREATE TABLE #RADHE
(
    RADHE SYSNAME,
    COLUMN_NAME SYSNAME,
    TYPE_COLUMN SYSNAME,
    PRIMARY KEY CLUSTERED (RADHE, COLUMN_NAME)
);

DECLARE @sql nvarchar(max) = N'',
        @stub nvarchar(max) = N'SELECT [RADHE]=N''$--RADHE--$'', COLUMN_NAME=name, TYPE_COLUMN=system_type_name FROM sys.dm_exec_describe_first_result_set(''DECLARE @tvp $--RADHE--$; SELECT * FROM @tvp;'',null,null) ORDER BY column_ordinal;';

SELECT @sql += REPLACE(@stub, N'$--RADHE--$', QUOTENAME(s.name) + N'.' + QUOTENAME(t.name))
FROM sys.table_types AS t
INNER JOIN sys.schemas AS s ON t.[schema_id] = s.[schema_id];

INSERT INTO #RADHE
EXEC sys.sp_executesql @sql;

SELECT * FROM #RADHE;

SELECT tt.name AS table_type_name,
       c.name AS column_name,
       c.column_id,
       t.name AS type_name,
       c.max_length,
       c.precision,
       c.scale,
       c.collation_name,
       c.is_nullable
FROM sys.columns AS c
JOIN sys.table_types AS tt ON c.object_id = tt.type_table_object_id
JOIN sys.types AS t ON t.user_type_id = c.user_type_id
ORDER BY tt.name, c.column_id;
and I can even GRANT REFERENCES on all user-defined table types using the following script:
SELECT t.name,
       'GRANT REFERENCES ON TYPE::' + SCHEMA_NAME(t.schema_id) + '.' + t.name + ' TO public;' AS command_to_run
FROM sys.types AS t
WHERE t.is_table_type = 1;
But is there a way to script out all table types in a database?
I am looking to script out this table type; please note the constraints and index created with it:
USE TableBackups
GO
IF EXISTS (SELECT * FROM sys.table_types tt WHERE tt.name = N'DistCritGroupData' AND SCHEMA_NAME(tt.schema_id) = N'dbo')
    DROP TYPE dbo.DistCritGroupData

CREATE TYPE [dbo].[DistCritGroupData] AS TABLE
(
    [DistCritTypeId] [int] NOT NULL UNIQUE,
    [ItemAction] [int] NOT NULL,
    [ObjectId] [int] NOT NULL,
    [OperatorType] [int] NOT NULL,
    PRIMARY KEY NONCLUSTERED ( [DistCritTypeId] ASC ),
    INDEX CIX CLUSTERED (ObjectId, OperatorType)
);
A: Honestly, given the amount of time you'll spend writing a version of this that accounts for all possible combinations of indexes, constraints, and default values, and troubleshooting all the combinations you don't expect from your first use case, I don't think you'll ever get that time back, no matter how many table types you have to script. You can always do that from Object Explorer (or Object Explorer Details, for multiple): Just for fun, I ran a trace to see what Management Studio sends to SQL Server in order to generate the create script for that table type, and it was about as pretty as I expected:
exec sp_executesql N'SELECT SCHEMA_NAME(tt.schema_id) AS [Schema], tt.name AS [Name] FROM sys.table_types AS tt INNER JOIN sys.schemas AS stt ON stt.schema_id = tt.schema_id WHERE (tt.name=@_msparam_0 and SCHEMA_NAME(tt.schema_id)=@_msparam_1)', N'@_msparam_0 nvarchar(4000),@_msparam_1 nvarchar(4000)', @_msparam_0=N'DistCritGroupData',@_msparam_1=N'dbo'
exec sp_executesql N'SELECT clmns.column_id AS [ID], clmns.name AS [Name], clmns.is_ansi_padded AS [AnsiPaddingStatus], ISNULL(clmns.collation_name, N'''') AS [Collation], clmns.column_encryption_key_id AS [ColumnEncryptionKeyID], ceks.name AS [ColumnEncryptionKeyName], clmns.is_computed AS [Computed], ISNULL(cc.definition,N'''') AS [ComputedText], s1clmns.name AS [DataTypeSchema], (case when clmns.default_object_id = 0 then N'''' when d.parent_object_id > 0 then N'''' else d.name end) AS
[Default], ISNULL(dc.Name, N'''') AS [DefaultConstraintName], (case when clmns.default_object_id = 0 then N'''' when d.parent_object_id > 0 then N'''' else schema_name(d.schema_id) end) AS [DefaultSchema], clmns.encryption_algorithm_name AS [EncryptionAlgorithm], CAST(clmns.encryption_type AS int) AS [EncryptionType], clmns.generated_always_type AS [GeneratedAlwaysType], ISNULL(clmns.graph_type, 0) AS [GraphType], clmns.is_identity AS [Identity], CAST(ISNULL(ic.seed_value,0) AS numeric(38)) AS [IdentitySeedAsDecimal], CAST(ISNULL(ic.increment_value,0) AS numeric(38)) AS [IdentityIncrementAsDecimal], CAST(0 AS bit) AS [IsClassified], CAST(clmns.is_column_set AS bit) AS [IsColumnSet], CAST(clmns.is_filestream AS bit) AS [IsFileStream], CAST(ISNULL((select TOP 1 1 from sys.foreign_key_columns AS colfk where colfk.parent_column_id = clmns.column_id and colfk.parent_object_id = clmns.object_id), 0) AS bit) AS [IsForeignKey], CAST(clmns.is_masked AS bit) AS [IsMasked], CAST(ISNULL(cc.is_persisted, 0) AS bit) AS [IsPersisted], CAST(clmns.is_sparse AS bit) AS [IsSparse], CAST(CASE WHEN baset.name IN (N''nchar'', N''nvarchar'') AND clmns.max_length <> -1 THEN clmns.max_length/2 ELSE clmns.max_length END AS int) AS [Length], ISNULL((SELECT ms.masking_function FROM sys.masked_columns ms WHERE ms.object_id = clmns.object_id AND ms.column_id = clmns.column_id), N'''') AS [MaskingFunction], ISNULL(ic.is_not_for_replication, 0) AS [NotForReplication], clmns.is_nullable AS [Nullable], CAST(clmns.scale AS int) AS [NumericScale], CAST(clmns.precision AS int) AS [NumericPrecision], CAST(clmns.is_rowguidcol AS bit) AS [RowGuidCol], (case when clmns.rule_object_id = 0 then N'''' else r.name end) AS [Rule], (case when clmns.rule_object_id = 0 then N'''' else schema_name(r.schema_id) end) AS [RuleSchema], ISNULL(baset.name, N'''') AS [SystemType], ISNULL(xscclmns.name, N'''') AS [XmlSchemaNamespace], ISNULL(s2clmns.name, N'''') AS [XmlSchemaNamespaceSchema], ISNULL( (case 
clmns.is_xml_document when 1 then 2 else 1 end), 0) AS [XmlDocumentConstraint], usrt.name AS [DataType] FROM sys.table_types AS tt INNER JOIN sys.schemas AS stt ON stt.schema_id = tt.schema_id INNER JOIN sys.all_columns AS clmns ON clmns.object_id=tt.type_table_object_id LEFT OUTER JOIN sys.column_encryption_keys AS ceks ON (ceks.column_encryption_key_id = clmns.column_encryption_key_id) LEFT OUTER JOIN sys.computed_columns AS cc ON cc.object_id = clmns.object_id and cc.column_id = clmns.column_id LEFT OUTER JOIN sys.types AS usrt ON usrt.user_type_id = clmns.user_type_id LEFT OUTER JOIN sys.schemas AS s1clmns ON s1clmns.schema_id = usrt.schema_id LEFT OUTER JOIN sys.objects AS d ON d.object_id = clmns.default_object_id LEFT OUTER JOIN sys.default_constraints as dc ON clmns.default_object_id = dc.object_id LEFT OUTER JOIN sys.identity_columns AS ic ON ic.object_id = clmns.object_id and ic.column_id = clmns.column_id LEFT OUTER JOIN sys.types AS baset ON (baset.user_type_id = clmns.system_type_id and baset.user_type_id = baset.system_type_id) or ((baset.system_type_id = clmns.system_type_id) and (baset.user_type_id = clmns.user_type_id) and (baset.is_user_defined = 0) and (baset.is_assembly_type = 1)) LEFT OUTER JOIN sys.objects AS r ON r.object_id = clmns.rule_object_id LEFT OUTER JOIN sys.xml_schema_collections AS xscclmns ON xscclmns.xml_collection_id = clmns.xml_collection_id LEFT OUTER JOIN sys.schemas AS s2clmns ON s2clmns.schema_id = xscclmns.schema_id WHERE (tt.name=@_msparam_0 and SCHEMA_NAME(tt.schema_id)=@_msparam_1) ORDER BY [ID] ASC',N'@_msparam_0 nvarchar(4000),@_msparam_1 nvarchar(4000)',@_msparam_0=N'DistCritGroupData',@_msparam_1=N'dbo' exec sp_executesql N'SELECT p.name AS [Name], CAST(p.value AS sql_variant) AS [Value] FROM sys.table_types AS tt INNER JOIN sys.schemas AS stt ON stt.schema_id = tt.schema_id INNER JOIN sys.all_columns AS clmns ON clmns.object_id=tt.type_table_object_id INNER JOIN sys.extended_properties AS p ON 
p.major_id=tt.user_type_id AND p.minor_id=clmns.column_id AND p.class=8 WHERE (clmns.name=@_msparam_0)and((tt.name=@_msparam_1 and SCHEMA_NAME(tt.schema_id)=@_msparam_2)) ORDER BY [Name] ASC',N'@_msparam_0 nvarchar(4000),@_msparam_1 nvarchar(4000),@_msparam_2 nvarchar(4000)',@_msparam_0=N'DistCritTypeId',@_msparam_1=N'DistCritGroupData',@_msparam_2=N'dbo' exec sp_executesql N'SELECT p.name AS [Name], CAST(p.value AS sql_variant) AS [Value] FROM sys.table_types AS tt INNER JOIN sys.schemas AS stt ON stt.schema_id = tt.schema_id INNER JOIN sys.all_columns AS clmns ON clmns.object_id=tt.type_table_object_id INNER JOIN sys.extended_properties AS p ON p.major_id=tt.user_type_id AND p.minor_id=clmns.column_id AND p.class=8 WHERE (clmns.name=@_msparam_0)and((tt.name=@_msparam_1 and SCHEMA_NAME(tt.schema_id)=@_msparam_2)) ORDER BY [Name] ASC',N'@_msparam_0 nvarchar(4000),@_msparam_1 nvarchar(4000),@_msparam_2 nvarchar(4000)',@_msparam_0=N'ItemAction',@_msparam_1=N'DistCritGroupData',@_msparam_2=N'dbo' exec sp_executesql N'SELECT p.name AS [Name], CAST(p.value AS sql_variant) AS [Value] FROM sys.table_types AS tt INNER JOIN sys.schemas AS stt ON stt.schema_id = tt.schema_id INNER JOIN sys.all_columns AS clmns ON clmns.object_id=tt.type_table_object_id INNER JOIN sys.extended_properties AS p ON p.major_id=tt.user_type_id AND p.minor_id=clmns.column_id AND p.class=8 WHERE (clmns.name=@_msparam_0)and((tt.name=@_msparam_1 and SCHEMA_NAME(tt.schema_id)=@_msparam_2)) ORDER BY [Name] ASC',N'@_msparam_0 nvarchar(4000),@_msparam_1 nvarchar(4000),@_msparam_2 nvarchar(4000)',@_msparam_0=N'ObjectId',@_msparam_1=N'DistCritGroupData',@_msparam_2=N'dbo' exec sp_executesql N'SELECT p.name AS [Name], CAST(p.value AS sql_variant) AS [Value] FROM sys.table_types AS tt INNER JOIN sys.schemas AS stt ON stt.schema_id = tt.schema_id INNER JOIN sys.all_columns AS clmns ON clmns.object_id=tt.type_table_object_id INNER JOIN sys.extended_properties AS p ON p.major_id=tt.user_type_id AND 
p.minor_id=clmns.column_id AND p.class=8 WHERE (clmns.name=@_msparam_0)and((tt.name=@_msparam_1 and SCHEMA_NAME(tt.schema_id)=@_msparam_2)) ORDER BY [Name] ASC',N'@_msparam_0 nvarchar(4000),@_msparam_1 nvarchar(4000),@_msparam_2 nvarchar(4000)',@_msparam_0=N'OperatorType',@_msparam_1=N'DistCritGroupData',@_msparam_2=N'dbo' exec sp_executesql N'SELECT p.name AS [Name], CAST(p.value AS sql_variant) AS [Value] FROM sys.table_types AS tt INNER JOIN sys.schemas AS stt ON stt.schema_id = tt.schema_id INNER JOIN sys.extended_properties AS p ON p.major_id=tt.user_type_id AND p.minor_id=0 AND p.class=6 WHERE (tt.name=@_msparam_0 and SCHEMA_NAME(tt.schema_id)=@_msparam_1) ORDER BY [Name] ASC',N'@_msparam_0 nvarchar(4000),@_msparam_1 nvarchar(4000)',@_msparam_0=N'DistCritGroupData',@_msparam_1=N'dbo' exec sp_executesql N'SELECT i.name AS [Name], CAST(ISNULL(si.bounding_box_xmax,0) AS float(53)) AS [BoundingBoxXMax], CAST(ISNULL(si.bounding_box_xmin,0) AS float(53)) AS [BoundingBoxXMin], CAST(ISNULL(si.bounding_box_ymax,0) AS float(53)) AS [BoundingBoxYMax], CAST(ISNULL(si.bounding_box_ymin,0) AS float(53)) AS [BoundingBoxYMin], CAST(case when (i.type=7) then hi.bucket_count else 0 end AS int) AS [BucketCount], CAST(ISNULL(si.cells_per_object,0) AS int) AS [CellsPerObject], CAST(i.compression_delay AS int) AS [CompressionDelay], ~i.allow_page_locks AS [DisallowPageLocks], ~i.allow_row_locks AS [DisallowRowLocks], CASE WHEN ((SELECT tbli.is_memory_optimized FROM sys.tables tbli WHERE tbli.object_id = i.object_id)=1 or (SELECT tti.is_memory_optimized FROM sys.table_types tti WHERE tti.type_table_object_id = i.object_id)=1) THEN ISNULL((SELECT ds.name FROM sys.data_spaces AS ds WHERE ds.type=''FX''), N'''') ELSE CASE WHEN ''FG''=dsi.type THEN dsi.name ELSE N'''' END END AS [FileGroup], CASE WHEN ''FD''=dstbl.type THEN dstbl.name ELSE N'''' END AS [FileStreamFileGroup], CASE WHEN ''PS''=dstbl.type THEN dstbl.name ELSE N'''' END AS [FileStreamPartitionScheme], i.fill_factor AS 
[FillFactor], ISNULL(i.filter_definition, N'''') AS [FilterDefinition], i.ignore_dup_key AS [IgnoreDuplicateKeys], ISNULL(indexedpaths.name, N'''') AS [IndexedXmlPathName], i.is_primary_key + 2*i.is_unique_constraint AS [IndexKeyType], CAST( CASE i.type WHEN 1 THEN 0 WHEN 4 THEN 4 WHEN 3 THEN CASE xi.xml_index_type WHEN 0 THEN 2 WHEN 1 THEN 3 WHEN 2 THEN 7 WHEN 3 THEN 8 END WHEN 4 THEN 4 WHEN 6 THEN 5 WHEN 7 THEN 6 WHEN 5 THEN 9 ELSE 1 END AS tinyint) AS [IndexType], CAST(CASE i.index_id WHEN 1 THEN 1 ELSE 0 END AS bit) AS [IsClustered], i.is_disabled AS [IsDisabled], CAST(CASE WHEN filetableobj.object_id IS NULL THEN 0 ELSE 1 END AS bit) AS [IsFileTableDefined], CAST(ISNULL(k.is_system_named, 0) AS bit) AS [IsSystemNamed], CAST(OBJECTPROPERTY(i.object_id,N''IsMSShipped'') AS bit) AS [IsSystemObject], i.is_unique AS [IsUnique], CAST(ISNULL(si.level_1_grid,0) AS smallint) AS [Level1Grid], CAST(ISNULL(si.level_2_grid,0) AS smallint) AS [Level2Grid], CAST(ISNULL(si.level_3_grid,0) AS smallint) AS [Level3Grid], CAST(ISNULL(si.level_4_grid,0) AS smallint) AS [Level4Grid], ISNULL(s.no_recompute,0) AS [NoAutomaticRecomputation], CAST(ISNULL(INDEXPROPERTY(i.object_id, i.name, N''IsPadIndex''), 0) AS bit) AS [PadIndex], ISNULL(xi2.name, N'''') AS [ParentXmlIndex], CASE WHEN ''PS''=dsi.type THEN dsi.name ELSE N'''' END AS [PartitionScheme], case UPPER(ISNULL(xi.secondary_type,'''')) when ''P'' then 1 when ''V'' then 2 when ''R'' then 3 else 0 end AS [SecondaryXmlIndexType], CAST(ISNULL(spi.spatial_index_type,0) AS tinyint) AS [SpatialIndexType] FROM sys.table_types AS tt INNER JOIN sys.schemas AS stt ON stt.schema_id = tt.schema_id INNER JOIN sys.indexes AS i ON (i.index_id > @_msparam_0 and i.is_hypothetical = @_msparam_1) AND (i.object_id=tt.type_table_object_id) LEFT OUTER JOIN sys.spatial_index_tessellations as si ON i.object_id = si.object_id and i.index_id = si.index_id LEFT OUTER JOIN sys.hash_indexes AS hi ON i.object_id = hi.object_id AND i.index_id = hi.index_id 
LEFT OUTER JOIN sys.data_spaces AS dsi ON dsi.data_space_id = i.data_space_id LEFT OUTER JOIN sys.tables AS t ON t.object_id = i.object_id LEFT OUTER JOIN sys.data_spaces AS dstbl ON dstbl.data_space_id = t.Filestream_data_space_id and (i.index_id < 2 or (i.type = 7 and i.index_id < 3)) LEFT OUTER JOIN sys.xml_indexes AS xi ON xi.object_id = i.object_id AND xi.index_id = i.index_id LEFT OUTER JOIN sys.selective_xml_index_paths AS indexedpaths ON xi.object_id = indexedpaths.object_id AND xi.using_xml_index_id = indexedpaths.index_id AND xi.path_id = indexedpaths.path_id LEFT OUTER JOIN sys.filetable_system_defined_objects AS filetableobj ON i.object_id = filetableobj.object_id LEFT OUTER JOIN sys.key_constraints AS k ON k.parent_object_id = i.object_id AND k.unique_index_id = i.index_id LEFT OUTER JOIN sys.stats AS s ON s.stats_id = i.index_id AND s.object_id = i.object_id LEFT OUTER JOIN sys.xml_indexes AS xi2 ON xi2.object_id = xi.object_id AND xi2.index_id = xi.using_xml_index_id LEFT OUTER JOIN sys.spatial_indexes AS spi ON i.object_id = spi.object_id and i.index_id = spi.index_id WHERE (tt.name=@_msparam_2 and SCHEMA_NAME(tt.schema_id)=@_msparam_3) ORDER BY [Name] ASC',N'@_msparam_0 nvarchar(4000),@_msparam_1 nvarchar(4000),@_msparam_2 nvarchar(4000),@_msparam_3 nvarchar(4000)',@_msparam_0=N'0',@_msparam_1=N'0',@_msparam_2=N'DistCritGroupData',@_msparam_3=N'dbo' exec sp_executesql N'SELECT cstr.name AS [Name], cstr.is_not_for_replication AS [NotForReplication], ~cstr.is_not_trusted AS [IsChecked], ~cstr.is_disabled AS [IsEnabled], CAST(cstr.is_system_named AS bit) AS [IsSystemNamed], CAST(CASE WHEN filetableobj.object_id IS NULL THEN 0 ELSE 1 END AS bit) AS [IsFileTableDefined], cstr.definition AS [Text] FROM sys.table_types AS tt INNER JOIN sys.schemas AS stt ON stt.schema_id = tt.schema_id INNER JOIN sys.check_constraints AS cstr ON cstr.parent_object_id=tt.type_table_object_id LEFT OUTER JOIN sys.filetable_system_defined_objects AS filetableobj ON 
filetableobj.object_id = cstr.object_id WHERE (tt.name=@_msparam_0 and SCHEMA_NAME(tt.schema_id)=@_msparam_1) ORDER BY [Name] ASC',N'@_msparam_0 nvarchar(4000),@_msparam_1 nvarchar(4000)',@_msparam_0=N'DistCritGroupData',@_msparam_1=N'dbo' exec sp_executesql N'SELECT ISNULL(s1tt.name, N'''') AS [Owner], CAST(case when tt.principal_id is null then 1 else 0 end AS bit) AS [IsSchemaOwned], tt.name AS [Name], tt.type_table_object_id AS [ID], SCHEMA_NAME(tt.schema_id) AS [Schema], obj.create_date AS [CreateDate], obj.modify_date AS [DateLastModified], tt.max_length AS [MaxLength], tt.is_nullable AS [Nullable], ISNULL(tt.collation_name, N'''') AS [Collation], CAST(case when tt.is_user_defined = 1 then 1 else 0 end AS bit) AS [IsUserDefined], CAST(tt.is_memory_optimized AS bit) AS [IsMemoryOptimized] FROM sys.table_types AS tt LEFT OUTER JOIN sys.database_principals AS s1tt ON s1tt.principal_id = ISNULL(tt.principal_id, (TYPEPROPERTY(QUOTENAME(SCHEMA_NAME(tt.schema_id)) + ''.'' + QUOTENAME(tt.name), ''OwnerId''))) INNER JOIN sys.schemas AS stt ON stt.schema_id = tt.schema_id LEFT OUTER JOIN sys.objects AS obj ON obj.object_id = tt.type_table_object_id WHERE (tt.name=@_msparam_0 and SCHEMA_NAME(tt.schema_id)=@_msparam_1)',N'@_msparam_0 nvarchar(4000),@_msparam_1 nvarchar(4000)',@_msparam_0=N'DistCritGroupData',@_msparam_1=N'dbo' exec sp_executesql N'SELECT (case ic.key_ordinal when 0 then ic.index_column_id else ic.key_ordinal end) AS [ID], clmns.name AS [Name] FROM sys.table_types AS tt INNER JOIN sys.schemas AS stt ON stt.schema_id = tt.schema_id INNER JOIN sys.indexes AS i ON (i.index_id > @_msparam_0 and i.is_hypothetical = @_msparam_1) AND (i.object_id=tt.type_table_object_id) INNER JOIN sys.index_columns AS ic ON (ic.column_id > 0 and (ic.key_ordinal > 0 or ic.partition_ordinal = 0 or ic.is_included_column != 0)) AND (ic.index_id=CAST(i.index_id AS int) AND ic.object_id=i.object_id) INNER JOIN sys.columns AS clmns ON clmns.object_id = ic.object_id and clmns.column_id 
= ic.column_id WHERE (i.name=@_msparam_2)and((tt.name=@_msparam_3 and SCHEMA_NAME(tt.schema_id)=@_msparam_4)) ORDER BY [ID] ASC',N'@_msparam_0 nvarchar(4000),@_msparam_1 nvarchar(4000),@_msparam_2 nvarchar(4000),@_msparam_3 nvarchar(4000),@_msparam_4 nvarchar(4000)',@_msparam_0=N'0',@_msparam_1=N'0',@_msparam_2=N'CIX',@_msparam_3=N'DistCritGroupData',@_msparam_4=N'dbo' exec sp_executesql N'SELECT clmns.name AS [Name], (case ic.key_ordinal when 0 then ic.index_column_id else ic.key_ordinal end) AS [ID], ic.is_descending_key AS [Descending], ic.is_included_column AS [IsIncluded], CAST(COLUMNPROPERTY(ic.object_id, clmns.name, N''IsComputed'') AS bit) AS [IsComputed] FROM sys.table_types AS tt INNER JOIN sys.schemas AS stt ON stt.schema_id = tt.schema_id INNER JOIN sys.indexes AS i ON (i.index_id > @_msparam_0 and i.is_hypothetical = @_msparam_1) AND (i.object_id=tt.type_table_object_id) INNER JOIN sys.index_columns AS ic ON (ic.column_id > 0 and (ic.key_ordinal > 0 or ic.partition_ordinal = 0 or ic.is_included_column != 0)) AND (ic.index_id=CAST(i.index_id AS int) AND ic.object_id=i.object_id) INNER JOIN sys.columns AS clmns ON clmns.object_id = ic.object_id and clmns.column_id = ic.column_id WHERE (clmns.name=@_msparam_2)and((i.name=@_msparam_3)and((tt.name=@_msparam_4 and SCHEMA_NAME(tt.schema_id)=@_msparam_5)))',N'@_msparam_0 nvarchar(4000),@_msparam_1 nvarchar(4000),@_msparam_2 nvarchar(4000),@_msparam_3 nvarchar(4000),@_msparam_4 nvarchar(4000),@_msparam_5 nvarchar(4000)',@_msparam_0=N'0',@_msparam_1=N'0',@_msparam_2=N'ObjectId',@_msparam_3=N'CIX',@_msparam_4=N'DistCritGroupData',@_msparam_5=N'dbo' exec sp_executesql N'SELECT clmns.name AS [Name], (case ic.key_ordinal when 0 then ic.index_column_id else ic.key_ordinal end) AS [ID], ic.is_descending_key AS [Descending], ic.is_included_column AS [IsIncluded], CAST(COLUMNPROPERTY(ic.object_id, clmns.name, N''IsComputed'') AS bit) AS [IsComputed] FROM sys.table_types AS tt INNER JOIN sys.schemas AS stt ON 
stt.schema_id = tt.schema_id INNER JOIN sys.indexes AS i ON (i.index_id > @_msparam_0 and i.is_hypothetical = @_msparam_1) AND (i.object_id=tt.type_table_object_id) INNER JOIN sys.index_columns AS ic ON (ic.column_id > 0 and (ic.key_ordinal > 0 or ic.partition_ordinal = 0 or ic.is_included_column != 0)) AND (ic.index_id=CAST(i.index_id AS int) AND ic.object_id=i.object_id) INNER JOIN sys.columns AS clmns ON clmns.object_id = ic.object_id and clmns.column_id = ic.column_id WHERE (clmns.name=@_msparam_2)and((i.name=@_msparam_3)and((tt.name=@_msparam_4 and SCHEMA_NAME(tt.schema_id)=@_msparam_5)))',N'@_msparam_0 nvarchar(4000),@_msparam_1 nvarchar(4000),@_msparam_2 nvarchar(4000),@_msparam_3 nvarchar(4000),@_msparam_4 nvarchar(4000),@_msparam_5 nvarchar(4000)',@_msparam_0=N'0',@_msparam_1=N'0',@_msparam_2=N'OperatorType',@_msparam_3=N'CIX',@_msparam_4=N'DistCritGroupData',@_msparam_5=N'dbo' exec sp_executesql N'SELECT (case ic.key_ordinal when 0 then ic.index_column_id else ic.key_ordinal end) AS [ID], clmns.name AS [Name] FROM sys.table_types AS tt INNER JOIN sys.schemas AS stt ON stt.schema_id = tt.schema_id INNER JOIN sys.indexes AS i ON (i.index_id > @_msparam_0 and i.is_hypothetical = @_msparam_1) AND (i.object_id=tt.type_table_object_id) INNER JOIN sys.index_columns AS ic ON (ic.column_id > 0 and (ic.key_ordinal > 0 or ic.partition_ordinal = 0 or ic.is_included_column != 0)) AND (ic.index_id=CAST(i.index_id AS int) AND ic.object_id=i.object_id) INNER JOIN sys.columns AS clmns ON clmns.object_id = ic.object_id and clmns.column_id = ic.column_id WHERE (i.name=@_msparam_2)and((tt.name=@_msparam_3 and SCHEMA_NAME(tt.schema_id)=@_msparam_4)) ORDER BY [ID] ASC',N'@_msparam_0 nvarchar(4000),@_msparam_1 nvarchar(4000),@_msparam_2 nvarchar(4000),@_msparam_3 nvarchar(4000),@_msparam_4 nvarchar(4000)',@_msparam_0=N'0',@_msparam_1=N'0',@_msparam_2=N'PK__TT_DistC__199F41EAA68706DB',@_msparam_3=N'DistCritGroupData',@_msparam_4=N'dbo' exec sp_executesql N'SELECT clmns.name AS 
[Name], (case ic.key_ordinal when 0 then ic.index_column_id else ic.key_ordinal end) AS [ID], ic.is_descending_key AS [Descending], ic.is_included_column AS [IsIncluded], CAST(COLUMNPROPERTY(ic.object_id, clmns.name, N''IsComputed'') AS bit) AS [IsComputed] FROM sys.table_types AS tt INNER JOIN sys.schemas AS stt ON stt.schema_id = tt.schema_id INNER JOIN sys.indexes AS i ON (i.index_id > @_msparam_0 and i.is_hypothetical = @_msparam_1) AND (i.object_id=tt.type_table_object_id) INNER JOIN sys.index_columns AS ic ON (ic.column_id > 0 and (ic.key_ordinal > 0 or ic.partition_ordinal = 0 or ic.is_included_column != 0)) AND (ic.index_id=CAST(i.index_id AS int) AND ic.object_id=i.object_id) INNER JOIN sys.columns AS clmns ON clmns.object_id = ic.object_id and clmns.column_id = ic.column_id WHERE (clmns.name=@_msparam_2)and((i.name=@_msparam_3)and((tt.name=@_msparam_4 and SCHEMA_NAME(tt.schema_id)=@_msparam_5)))',N'@_msparam_0 nvarchar(4000),@_msparam_1 nvarchar(4000),@_msparam_2 nvarchar(4000),@_msparam_3 nvarchar(4000),@_msparam_4 nvarchar(4000),@_msparam_5 nvarchar(4000)',@_msparam_0=N'0',@_msparam_1=N'0',@_msparam_2=N'DistCritTypeId',@_msparam_3=N'PK__TT_DistC__199F41EAA68706DB',@_msparam_4=N'DistCritGroupData',@_msparam_5=N'dbo' exec sp_executesql N'SELECT (case ic.key_ordinal when 0 then ic.index_column_id else ic.key_ordinal end) AS [ID], clmns.name AS [Name] FROM sys.table_types AS tt INNER JOIN sys.schemas AS stt ON stt.schema_id = tt.schema_id INNER JOIN sys.indexes AS i ON (i.index_id > @_msparam_0 and i.is_hypothetical = @_msparam_1) AND (i.object_id=tt.type_table_object_id) INNER JOIN sys.index_columns AS ic ON (ic.column_id > 0 and (ic.key_ordinal > 0 or ic.partition_ordinal = 0 or ic.is_included_column != 0)) AND (ic.index_id=CAST(i.index_id AS int) AND ic.object_id=i.object_id) INNER JOIN sys.columns AS clmns ON clmns.object_id = ic.object_id and clmns.column_id = ic.column_id WHERE (i.name=@_msparam_2)and((tt.name=@_msparam_3 and 
SCHEMA_NAME(tt.schema_id)=@_msparam_4)) ORDER BY [ID] ASC',N'@_msparam_0 nvarchar(4000),@_msparam_1 nvarchar(4000),@_msparam_2 nvarchar(4000),@_msparam_3 nvarchar(4000),@_msparam_4 nvarchar(4000)',@_msparam_0=N'0',@_msparam_1=N'0',@_msparam_2=N'UQ__TT_DistC__199F41EAC5355FAA',@_msparam_3=N'DistCritGroupData',@_msparam_4=N'dbo' exec sp_executesql N'SELECT clmns.name AS [Name], (case ic.key_ordinal when 0 then ic.index_column_id else ic.key_ordinal end) AS [ID], ic.is_descending_key AS [Descending], ic.is_included_column AS [IsIncluded], CAST(COLUMNPROPERTY(ic.object_id, clmns.name, N''IsComputed'') AS bit) AS [IsComputed] FROM sys.table_types AS tt INNER JOIN sys.schemas AS stt ON stt.schema_id = tt.schema_id INNER JOIN sys.indexes AS i ON (i.index_id > @_msparam_0 and i.is_hypothetical = @_msparam_1) AND (i.object_id=tt.type_table_object_id) INNER JOIN sys.index_columns AS ic ON (ic.column_id > 0 and (ic.key_ordinal > 0 or ic.partition_ordinal = 0 or ic.is_included_column != 0)) AND (ic.index_id=CAST(i.index_id AS int) AND ic.object_id=i.object_id) INNER JOIN sys.columns AS clmns ON clmns.object_id = ic.object_id and clmns.column_id = ic.column_id WHERE (clmns.name=@_msparam_2)and((i.name=@_msparam_3)and((tt.name=@_msparam_4 and SCHEMA_NAME(tt.schema_id)=@_msparam_5)))',N'@_msparam_0 nvarchar(4000),@_msparam_1 nvarchar(4000),@_msparam_2 nvarchar(4000),@_msparam_3 nvarchar(4000),@_msparam_4 nvarchar(4000),@_msparam_5 nvarchar(4000)',@_msparam_0=N'0',@_msparam_1=N'0',@_msparam_2=N'DistCritTypeId',@_msparam_3=N'UQ__TT_DistC__199F41EAC5355FAA',@_msparam_4=N'DistCritGroupData',@_msparam_5=N'dbo' I mean, that is just awful. And all it does is return all of those resultsets to SSMS; then there is some C# and/or SMO plumbing that actually steps through all the possible combinations and generates the script. 
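If you would rather drive that plumbing yourself than reimplement it in T-SQL, SMO exposes the same scripting engine as a public object model. A sketch only — the server name, database name, and output file below are my placeholders, and it assumes the SMO assemblies (shipped with SSMS, or via the SqlServer PowerShell module) are installed:

```powershell
# Sketch: script every user-defined table type in a database via SMO.
# 'localhost' and 'TableBackups' are placeholders; adjust for your environment.
Import-Module SqlServer
$srv = New-Object Microsoft.SqlServer.Management.Smo.Server 'localhost'
$db  = $srv.Databases['TableBackups']

foreach ($tt in $db.UserDefinedTableTypes)
{
    # Script() emits the CREATE TYPE batch, inline constraints and indexes included.
    $tt.Script() | Out-File -Append -FilePath '.\TableTypes.sql'
}
```

The decision logic — which of those metadata queries to run, and how to stitch the results into DDL — still lives inside the compiled SMO libraries; you are just invoking it rather than tracing it.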
This is hidden away in application code that is much harder to trace (especially without violating the EULA), so your ability to determine all of the issues they encountered when generating the script is pretty limited. So I think if your goal is to have a nice, handy little piece of T-SQL code that is faster than right-clicking a table name in Object Explorer, you're going to be out of luck. If I didn't have SSMS (or the plan was to create 60,000 table types), I would rather keep track of these creations using a DDL trigger than try to reverse engineer the metadata to generate the original statement. I wrote this tip a long time ago, but the same concepts apply today (you could maybe just filter the types of objects you keep track of): SQL Server DDL Triggers to Track All Database Changes. Borrowing from that, you could create this table:
CREATE TABLE dbo.TableTypeCreationEvents
(
    EventDate datetime NOT NULL DEFAULT sysutcdatetime(),
    EventDDL nvarchar(max),
    SchemaName nvarchar(128),
    ObjectName nvarchar(128)
);
And then this trigger:
CREATE TRIGGER DDLCaptureTableTypeCreations
ON DATABASE
FOR CREATE_TYPE
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @EventData xml = EVENTDATA();
    DECLARE @sch sysname       = @EventData.value(N'(/EVENT_INSTANCE/SchemaName)[1]',  N'nvarchar(128)'),
            @obj sysname       = @EventData.value(N'(/EVENT_INSTANCE/ObjectName)[1]',  N'nvarchar(128)'),
            @s   nvarchar(max) = @EventData.value(N'(/EVENT_INSTANCE/TSQLCommand)[1]', N'nvarchar(max)');

    IF EXISTS (SELECT 1 FROM sys.table_types WHERE name = @obj AND [schema_id] = SCHEMA_ID(@sch))
    BEGIN
        INSERT dbo.TableTypeCreationEvents(EventDDL, SchemaName, ObjectName)
        SELECT @s, @sch, @obj;
    END
END
GO
Now, some time has passed, and you want the script for the most recent version of the type dbo.DistCritGroupData?
No problem:

SELECT TOP (1) EventDDL
FROM dbo.TableTypeCreationEvents
WHERE SchemaName = N'dbo'
AND ObjectName = N'DistCritGroupData'
ORDER BY EventDate DESC;

This will return exactly the same script that you ran when you created the type, and since there is no ALTER TYPE, you don't have to worry about catching a table type up to any changes that have been made since its creation. A: Here is my script to create user-defined table and scalar types: -- http://www.sqlines.com/sql-server-to-oracle/create_type SELECT sch.name AS UDT_SCHEMA_NAME ,userDefinedTypes.name AS UDT_TYPE_NAME , N'IF NOT EXISTS (SELECT * FROM sys.types st JOIN sys.schemas ss ON st.schema_id = ss.schema_id WHERE st.name = N''' + REPLACE(userDefinedTypes.name, '''', '''''') + N''' AND ss.name = N''' + REPLACE(sch.name, '''', '''''') + N''') ' + NCHAR(13) + NCHAR(10) + CASE WHEN userDefinedTypeProperties.IsTableType = 1 THEN N'CREATE TYPE ' + QUOTENAME(sch.name) + '.' + QUOTENAME(userDefinedTypes.name) + ' AS TABLE ( ' + tAllColumns.column_definition + N' ); ' ELSE + N'CREATE TYPE ' + QUOTENAME(sch.name) + '.'
+ QUOTENAME(userDefinedTypes.name) + N' FROM ' + tBaseTypeComputation.baseTypeName + CASE WHEN userDefinedTypeProperties.is_nullable = 0 THEN N' NOT NULL' ELSE N'' END + N'; ' END AS SqlCreateUdt FROM sys.types AS userDefinedTypes INNER JOIN sys.schemas AS sch ON sch.schema_id = userDefinedTypes.schema_id LEFT JOIN sys.table_types AS userDefinedTableTypes ON userDefinedTableTypes.user_type_id = userDefinedTypes.user_type_id LEFT JOIN sys.types AS systemType ON systemType.system_type_id = userDefinedTypes.system_type_id AND systemType.is_user_defined = 0 OUTER APPLY ( SELECT userDefinedTypes.is_nullable ,userDefinedTypes.precision AS NUMERIC_PRECISION ,userDefinedTypes.scale AS NUMERIC_SCALE ,userDefinedTypes.max_length AS CHARACTER_MAXIMUM_LENGTH ,CASE WHEN userDefinedTableTypes.user_type_id IS NULL THEN 0 ELSE 1 END AS IsTableType ,CONVERT(smallint, CASE -- datetime/smalldatetime WHEN userDefinedTypes.system_type_id IN (40, 41, 42, 43, 58, 61) THEN ODBCSCALE(userDefinedTypes.system_type_id, userDefinedTypes.scale) END ) AS DATETIME_PRECISION ) AS userDefinedTypeProperties OUTER APPLY ( SELECT systemType.name + CASE WHEN systemType.name IN ('char', 'varchar', 'nchar', 'nvarchar', 'binary', 'varbinary') THEN N'(' + CASE WHEN userDefinedTypeProperties.CHARACTER_MAXIMUM_LENGTH = -1 THEN 'MAX' ELSE CONVERT ( varchar(4) ,userDefinedTypeProperties.CHARACTER_MAXIMUM_LENGTH ) END + N')' WHEN systemType.name IN ('decimal', 'numeric') THEN N'(' + CONVERT(varchar(4), userDefinedTypeProperties.NUMERIC_PRECISION) + N', ' + CONVERT(varchar(4), userDefinedTypeProperties.NUMERIC_SCALE) + N')' WHEN systemType.name IN ('time', 'datetime2', 'datetimeoffset') THEN N'(' + CAST(userDefinedTypeProperties.DATETIME_PRECISION AS national character varying(36)) + N')' ELSE N'' END AS baseTypeName ) AS tBaseTypeComputation OUTER APPLY ( SELECT ( SELECT -- ,clmns.is_nullable -- ,tComputedProperties.ORDINAL_POSITION -- ,tComputedProperties.COLUMN_DEFAULT CASE WHEN 
tComputedProperties.ORDINAL_POSITION = 1 THEN N' ' ELSE N',' END + QUOTENAME(clmns.name) + N' ' + tComputedProperties.DATA_TYPE + CASE WHEN tComputedProperties.DATA_TYPE IN ('char', 'varchar', 'nchar', 'nvarchar', 'binary', 'varbinary') THEN N'(' + CASE WHEN tComputedProperties.CHARACTER_MAXIMUM_LENGTH = -1 THEN 'MAX' ELSE CONVERT ( varchar(4) ,tComputedProperties.CHARACTER_MAXIMUM_LENGTH ) END + N')' WHEN tComputedProperties.DATA_TYPE IN ('decimal', 'numeric') THEN N'(' + CONVERT(varchar(4), tComputedProperties.NUMERIC_PRECISION) + N', ' + CONVERT(varchar(4), tComputedProperties.NUMERIC_SCALE) + N')' WHEN tComputedProperties.DATA_TYPE IN ('time', 'datetime2', 'datetimeoffset') THEN N'(' + CAST(tComputedProperties.DATETIME_PRECISION AS national character varying(36)) + N')' ELSE N'' END + CASE WHEN tComputedProperties.is_nullable = 0 THEN N' NOT NULL' ELSE N'' END + NCHAR(13) + NCHAR(10) AS [text()] FROM sys.columns AS clmns INNER JOIN sys.types AS t ON t.system_type_id = clmns.system_type_id LEFT JOIN sys.types ut ON ut.user_type_id = clmns.user_type_id OUTER APPLY ( SELECT 33 As bb ,COLUMNPROPERTY(clmns.object_id, clmns.name, 'ordinal') AS ORDINAL_POSITION ,COLUMNPROPERTY(clmns.object_id, clmns.name, 'charmaxlen') AS CHARACTER_MAXIMUM_LENGTH ,COLUMNPROPERTY(clmns.object_id, clmns.name, 'octetmaxlen') AS CHARACTER_OCTET_LENGTH ,CONVERT(nvarchar(4000), OBJECT_DEFINITION(clmns.default_object_id)) AS COLUMN_DEFAULT ,clmns.is_nullable ,t.name AS DATA_TYPE ,CONVERT(tinyint, CASE -- int/decimal/numeric/real/float/money WHEN clmns.system_type_id IN (48, 52, 56, 59, 60, 62, 106, 108, 122, 127) THEN clmns.precision END ) AS NUMERIC_PRECISION ,CONVERT(int, CASE -- datetime/smalldatetime WHEN clmns.system_type_id IN (40, 41, 42, 43, 58, 61) THEN NULL ELSE ODBCSCALE(clmns.system_type_id, clmns.scale) END ) AS NUMERIC_SCALE ,CONVERT(smallint, CASE -- datetime/smalldatetime WHEN clmns.system_type_id IN (40, 41, 42, 43, 58, 61) THEN ODBCSCALE(clmns.system_type_id, clmns.scale) 
END ) AS DATETIME_PRECISION ) AS tComputedProperties WHERE clmns.object_id = userDefinedTableTypes.type_table_object_id ORDER BY tComputedProperties.ORDINAL_POSITION FOR XML PATH(''), TYPE ).value('.', 'nvarchar(MAX)') AS column_definition ) AS tAllColumns WHERE userDefinedTypes.is_user_defined = 1
{ "pile_set_name": "StackExchange" }
Q: Why does Jenkins source code management configuration show only "None"? While trying to configure my project on Jenkins, I need to provide Git, but the Source Code Management option only shows the default "None". How do I add Git? Kindly, someone help me. Thank you. A: Read how to install a plugin: https://wiki.jenkins-ci.org/display/JENKINS/Plugins#Plugins-Howtoinstallplugins Install the Git plugin: https://wiki.jenkins-ci.org/display/JENKINS/Git+Plugin
Q: Why $\sqrt{\sin^2 x}<0.5$ can be transformed in $|\sin x|<0.5$? Why $\sqrt{\sin^2 x}<0.5$ can be transformed in $|\sin x|<0.5$. Then $|\sin x|<0.5$ can be transformed in $-0.5<\sin x<0.5$? What is the proof of the inequality? A: It's hard for me to guess what you mean by "be mentioned", but: $$b>0\;\;\Longrightarrow\;\;|a|<b\Longleftrightarrow -b<a<b$$ So $$|\sin x|<\frac{1}{2}\Longleftrightarrow -\frac{1}{2}<\sin x<\frac{1}{2}\Longleftrightarrow \left\{\begin{array} {}-\frac{\pi}{6}<x<\frac{\pi}{6}&\text{or}\\{}\\\;\;\;\frac{5\pi}{6}<x<\frac{7\pi}{6}\end{array}\right.$$ If you prefer degrees over radians remember: $$\pi\,\text{rad.}=180^\circ\Longrightarrow \frac{\pi}{6}\text{rad.}=30^\circ\,\,\text{and etc.}$$
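A quick numerical sanity check of that characterization (a Python sketch; the grid and tolerance are arbitrary choices): since $|\sin x| < 1/2$ holds exactly when $x$ lies within $\pi/6$ of a multiple of $\pi$, the original $\sqrt{\sin^2 x} < 0.5$ condition can be compared against that description on a sample grid.

```python
import math

def lhs(x):
    return math.sqrt(math.sin(x) ** 2) < 0.5   # the original form sqrt(sin^2 x) < 0.5

def rhs(x):
    d = x % math.pi
    return min(d, math.pi - d) < math.pi / 6   # within pi/6 of a multiple of pi

# Compare on a grid, skipping points too close to the boundary where
# floating-point noise could flip the strict inequality.
for k in range(2000):
    x = -10 + k * 0.01
    d = x % math.pi
    if abs(min(d, math.pi - d) - math.pi / 6) < 1e-6:
        continue
    assert lhs(x) == rhs(x), x
```

The two characterizations agree everywhere away from the boundary points $x = k\pi \pm \pi/6$, where the strict inequality switches over.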
Q: Sidekiq WARN: uninitialized constant ViewDBWorker Rails 4 I'm using Rails 4.1.5 and Sidekiq 3.3.0. I have set the recurrence of this worker to execute every minute. It works fine locally, but it throws an error on the production and staging environments. The worker is still doing its job and saving records; it only logs this warning after one minute. 2016-01-21T10:50:00.594Z 5173 TID-osv5dzz5o WARN: uninitialized constant ViewDBWorker app/workers/view_db_worker.rb:

class ViewDBWorker
  include Sidekiq::Worker
  include Sidetiq::Schedulable

  recurrence { hourly.minute_of_hour((0..59).to_a) } if Rails.env.staging? || Rails.env.production?

  def perform
    begin
      do_process()
    rescue => e
      puts "Error message: #{e.message}"
    end
  end

  def do_process()
    puts 'The worker !!'
  end
end

application.rb config.eager_load_paths += %W(#{config.root}/app/workers) A: You don't need to add directories to the load paths in application.rb if you defined your worker classes in the app/* folder of Rails; Rails will automatically load those files. If you have defined your classes somewhere else, e.g. in the lib/workers/* folder, then you need to add this to your application.rb config.autoload_paths += %W(#{config.root}/lib/workers) config.eager_load_paths += %W(#{config.root}/lib/workers) Along with this, your issue might be stale running Sidekiq processes. You can list your Sidekiq processes (ps aux | grep sidekiq) and kill those, then restart the worker. Reference: https://github.com/mperham/sidekiq/issues/1958 Check the answer of @pkoltermann on this issue
Q: Sudden collapse of kableExtra in [R] I should probably point out that I am still fairly new to working with RMarkdown and the kableExtra R package, but I have a document that was knitable last week and now no longer knits despite no physical changes to the document. The error message I receive is the following Error in save_kable_latex(x, file, latex_header_includes, keep_tex) : We hit an error when trying to use magick to read the generated PDF file. You may check your magick installation and try to use the magick::image_read to read the PDF file manually. It's also possible that you didn't have ghostscript installed. Calls ... -> as_image -> save_kable -> save_kable_latex Execution halted I have tried everything that I can think of by re-installing the magick R package, installing ghostscript (through Homebrew), etc. And the code chunk given below seems to be where the issues are occurring, where tab2 is a data frame with some of its elements being a LaTeX expressions such as "\\sum_x f(x)*\\left ( pe(x) - lcl(x) \\right )". kable( tab2, format="latex", escape=FALSE, align="c", col.names=NULL ) %>% kable_styling( latex_options=c('hold_position') ) %>% footnote( general="Given x successes out of n trials, the holistic Jeffreys $100*(1-\\\\alpha)\\\\%$ Lower $\\\\textit{Credible}$ Limit is the value $p$ such that $\\\\int_0^p \\\\frac{t^{x+0.5-1}(1-t)^{n-x+0.5-1}}{B(x+0.5,n-x+0.5)} dt = \\\\alpha$ where B(a,b) is the Beta function given by $\\\\int_0^1 t^{(x-1)}(1-t)^{(y-1)} dt$.", general_title="", threeparttable = TRUE, footnote_as_chunk=TRUE, escape=FALSE ) %>% as_image( file="tab2.png", width=8, units="in" ) and printed to the PDF later on using the include_graphics() function on a new slide. Any assistance would be greatly appreciated as this is for a work presentation. 
EDIT #1 As requested, here is a Minimum Working Example prob.success <- sample( seq(.5,.99,.01), size=1 ) conf.alpha <- sample( seq(.5,.99,.01), size=1 ) tab1 <- data.frame( x=0:5, f=round(dbinom(0:5,5,prob.success),3) ) %>% mutate( pe=x/5, lcl=qbeta(1-conf.alpha,x+0.5,5-0:5+0.5) ) %>% mutate( lcl=pmin(pe,lcl) ) %>% mutate( delta=pe-lcl ) %>% mutate( f_delta=f*delta ) exp.expr <- "\\sum_x f(x)*\\left ( pe(x) - lcl(x) \\right )" exp.delta <- format( round(sum( tab1$f_delta ),4), nsmall=4 ) tab2 <- tab1 %>% mutate( x=as.character(x), f=format(round(f,4),nsmall=4) ) %>% mutate( pe=format(round(pe,4),nsmall=4) ) %>% mutate( lcl=format(round(lcl,4),nsmall=4) ) %>% mutate( delta=format(round(delta,4),nsmall=3) ) %>% mutate( f_delta=format(round(f_delta,4),nsmall=4) ) %>% rbind( ., data.frame(x="",f="",pe="",lcl="",delta="",f_delta="") ) %>% rbind( ., data.frame(x="", f="", pe="Exp", lcl="Diff", delta="=", f_delta=exp.expr) ) %>% rbind( ., data.frame(x="",f="",pe="",lcl="",delta="=",f_delta=exp.delta) ) %>% rbind( data.frame(x="x",f="f(x)",pe="pe(x)",lcl="lcl(x)",delta="pe(x)-lcl(x)", f_delta="f(x)\\times\\left(pe(x)-lcl(x)\\right)"), . ) EDIT #2 And these are the R packages used in the .Rmd file library( knitr ) library( tibble ) library( magrittr ) library( dplyr ) library( kableExtra ) library( stringr ) library( magick ) A: After several emails with the kableExtra author Hao Zhu, it was suggested that a HTML table (instead of LaTeX) should be used. As a result, the following code was able to render successfully. Many thanks to Hao. 
Changes to the original post include exp.expr <- "$\\sum_x f(x)*\\left ( pe(x) - lcl(x) \\right )$" tab2 <- tab1 %>% mutate( x=as.character(x), f=format(round(f,4),nsmall=4) ) %>% mutate( pe=format(round(pe,4),nsmall=4) ) %>% mutate( lcl=format(round(lcl,4),nsmall=4) ) %>% mutate( delta=format(round(delta,4),nsmall=3) ) %>% mutate( f_delta=format(round(f_delta,4),nsmall=4) ) %>% # rbind( ., data.frame(x="",f="",pe="",lcl="",delta="",f_delta="") ) %>% rbind( ., data.frame(x="", f="", pe="Exp", lcl="Diff", delta="=", f_delta=exp.expr) ) %>% rbind( ., data.frame(x="",f="",pe="",lcl="",delta="=",f_delta=exp.delta) ) # %>% # rbind( data.frame(x="x",f="f(x)",pe="pe(x)",lcl="lcl(x)",delta="pe(x)-lcl(x)", # f_delta="f(x)\\times\\left(pe(x)-lcl(x)\\right)"), . ) tab.cols <- c( "x", "f(x)", "pe(x)", "lcl(x)", "pe(x)-lcl(x)", "$f(x)\\times\\left(pe(x)-lcl(x)\\right)$" ) kable( tab2, format="html", escape=FALSE, align="c", col.names=tab.cols ) %>% kable_styling( "striped", full_width = F, position="center" ) %>% footnote( general="Given x successes out of n trials, the holistic Jeffreys $100*(1-\\alpha)\\%$ Lower *Credible* Limit is the value $p$ such that $\\int_0^p \\frac{t^{x+0.5-1}(1-t)^{n-x+0.5-1}}{B(x+0.5,n-x+0.5)} dt = \\alpha$ where B(a,b) is the Beta function given by $\\int_0^1 t^{(x-1)}(1-t)^{(y-1)} dt$.", general_title="Note:", footnote_as_chunk=TRUE, escape=FALSE )
Q: Check database connectivity using Shell script I am trying to write a shell script to check database connectivity. Within my script I am using the command sqlplus uid/pwd@database-schemaname to connect to my Oracle database. Now I want to save the output generated by this command (before it drops to SQL prompt) in a temp file and then grep / find the string "Connected to" from that file to see if the connectivity is fine or not. Can anyone please help me to catch the output and get out of that prompt and test whether connectivity is fine? A: Use a script like this: #!/bin/sh echo "exit" | sqlplus -L uid/pwd@dbname | grep Connected > /dev/null if [ $? -eq 0 ] then echo "OK" else echo "NOT OK" fi echo "exit" assures that your program exits immediately (this gets piped to sqlplus). -L assures that sqlplus won't ask for password if credentials are not ok (which would make it get stuck as well). (> /dev/null just hides output from grep, which we don't need because the results are accessed via $? in this case)
Q: How to get total count for each star rating? I'm using Woocommerce and I'm trying to get the total for each star rating using the post id (just like in the image below). Each rating is stored on my database as a number from 1 - 5 I just don't know how to go about retrieving the total count for each rating. Any help will be greatly appreciated. A: You can use the function get_rating_count() by specifying on each call the value needed. For example : global $product; $rating_1 = $product->get_rating_count(1); $rating_2 = $product->get_rating_count(2); $rating_3 = $product->get_rating_count(3); $rating_4 = $product->get_rating_count(4); $rating_5 = $product->get_rating_count(5); You can read more about the function
Q: Deploying to iOS 3.0 with XCode 4.4 Does anyone know where I can get a simple project template for deploying apps to iOS 3.0 while using XCode 4.4? edit: Or any other way to create apps with a deployment target of 3.2 or lower with XCode 4.4? edit: And what is the current minimum deployment target the AppStore allows? A: Take a look at this post. They are doing what you require. I hope this helps. Here is a link to a post that will help
Q: Woocommerce show product I am a little confused as to what the issue is here; I am trying to pick out a specific product by doing this:

<?php
ini_set('max_execution_time', 0); //I saw maximum execution time error on your image - this is for that
$args = array(
    'post_status' => 'publish',
    'post_type' => 'product',
    'meta_value' => 'yes',
    'posts_per_page' => 10,
    'product_cat' => 'grammar'
);
$product_query = new WP_Query( $args );
?>
<?php while ( $product_query->have_posts() ) : $product_query->the_post(); global $product; ?>
<?php the_title(); ?>
<?php echo apply_filters( 'woocommerce_short_description', $product->post->post_excerpt ); ?>
<?php endwhile; ?>

But nothing is being produced. I am creating a product under the relevant category and placing content into both the main description and the product short description, but still nothing shows. A: The error is probably 'meta_value' => 'yes',. You need to specify a meta_key as well.

$args = array(
    'post_status' => 'publish',
    'post_type' => 'product',
    'meta_key' => 'my_meta_key',
    'meta_value' => 'yes',
    'posts_per_page' => 10,
    'product_cat' => 'grammar'
);

I have no idea what the meta key should be, so change my_meta_key into what you want. Also make sure that the value for product_cat is correct. It should be the slug of the category.
Q: Check if the given date (in string format) has passed the present date using JS I have a date range like 12/20/12-12/24/12 (start date-end date) in string format. I need to check if the start date has passed the present date. Can it be done using JS or jQuery? The date range is a string. A: Use the power of the Date object!

var myString = "12/20/2012-12/24/2012",
    parts = myString.split("-");

if (Date.parse(parts[0]) < Date.now()) {
    alert("start date has passed the present date");
}

You could as well write new Date(parts[0]) < new Date.
Q: LINQ to SQL, ExecuteQuery, etc This might all be a bit subjective: Our organization has made a strong attempt to adopt LINQ to SQL as our primary data access method and for the most part this works well. (Let’s leave the EF out of the discussion.) Some of our developers find LINQ difficult and migrate back to traditional raw SQL via ExecuteQuery. We also utilize OpenQuery in some of our applications to access data on remote servers. OpenQuery cannot be executed via LINQ and always results in code being executed via ExecuteQuery. As an organization we have also made the decision to move away from Stored Procedures and again relying on LINQ. So, is it fair to say that some queries are just so complex that they can’t be performed with LINQ? We want to avoid business logic in the database so where do we go when you can’t use LINQ? What is the general feeling of ExecuteQuery as a better alternative to ADO.NET Command.Execute()? I think one can make an argument against stored procedures or a the very least that avoiding them is a valid choice but what about querying Views with LINQ as an alternative? Thoughts on where to land the plane on this? What are others doing? Thanks, A: I've yet to find a query that was so "complex" that it could not be expressed in Linq-to-Sql. In fact, I find the Linq-to-Sql code a bit easier to read than some traditional SQL Statements. That said, if this is a complex READ command, then I would suggest that you put it into a VIEW and access the view via LINQ. You would have a bit more control over the joins/and ensuring that the underlying SQL is defined for optimum efficiency. Additionally, you can call Stored Procedures and/or functions in Linq-To-Sql. And the same reasons that you would create a stored procedure in your traditional SQL process is why you would do so in Linq-To-Sql. Let's reiterate that: there is nothing stopping you from running a stored procedure from Linq-to-SQL. 
I do so when I need to impact a number of records (for example, making a batch change on records). A: Use the right tool for the right job. Linq-to-SQL covers a large portion of what you need to do on a daily basis in a typical line-of-business app - so use it there. Devs will get used to it, and will probably also begin to like it - it's really quite powerful and useful! But yes - there are defintily scenarios where a straight-up SQL query will be a lot easier to use - that's fine, no harm done - that's what ExecuteQuery is for. If you have a bunch of CTE's and complicated joins of various types - you might be able to express that in Linq-to-SQL, but it might be just too much effort and hassle to do it if you already have a T-SQL statement that works.... It takes time to get used to a new way of doing things - give it some time! I'm sure most of your dev will migrate to LINQ step by step. Encourage them, give them tips & tricks, help them where you can. But also accept that in same cases, tricky SQL statements might just be too tricky to rewrite in LINQ (if you already have them and they already work - just keep using them).
Q: Overwrite default XMLRPC function from plugin I want to overwrite the "metaWeblog.newMediaObject" xmlrpc call so the file is saved remotely. From mw_newMediaObject in class-wp-xmlrpc-server.php, I see that there is a hook: do_action('xmlrpc_call', 'metaWeblog.newMediaObject'); So I should be able to do something in my plugin like:

add_action ('xmlrpc_call', 'myWewMediaObject');
function myWewMediaObject ($method) {
    if ($method=='metaWeblog.newMediaObject') {
        //perform some custom action
    }
}

However, since the do_action call is at the beginning of the mw_newMediaObject function, I am unsure how to stop the execution after my plugin function exits. Please let me know if I am on the right track and if there is another way to do this. A: Actually, that hook just lets you tie in to the beginning of executing that function, it doesn't let you override anything. If you want to replace this function entirely, I recommend two different options: 1. Don't use it Just define your own XMLRPC method (myNamespace.newMediaObject) and call that instead. 2. Replace it You can tie in to the xmlrpc_methods filter the same way you would to add a new method and can replace the callback for metaWeblog.newMediaObject:

add_filter( 'xmlrpc_methods', 'myMediaHandler' );
function myMediaHandler( $methods ) {
    $methods[ 'metaWeblog.newMediaObject' ] = 'myMediaObject';
    return $methods;
}

function myMediaObject( $args ) {
    // ... custom functionality
}

Just be sure to maintain the same format with the $args array and to call/apply the same actions/filters in your custom method so that you don't run into any surprises.
Q: Rails 3 Auto Select from Collect_select I have the following associations: Town -> has_many :outlets User -> belongs_to :town Outlet -> belongs_to :town, has_many :problems Problem -> belongs_to :outlet In the Outlets Page, I want to be able to click a button that will take me to the New Problem page. In the 'New Problem Page' I have the following collection_select: f.collection_select :outlet_id, Outlet.all, :id, :name However, if I clicked the button that took me here from the Outlet Page, I want the correct outlet to already be selected and greyed out so the user can't change it. I was thinking I could possibly do this with a custom route, that will accept the :outlet_id as a param, but I wasn't sure how to do this, or even if this is the best option. Any help is greatly appreciated. A: In your controller, you have something like @problem = Problem.new try @problem = Problem.new({ outlet_id: params[:outlet_id] }) And the form helpers should take care of it. You could also try to do it as a nested RESTful resource in which case the url would look something like /outlets/5/problems/new and rails would automatically set params[:outlet_id] to 5.. Your route for this would look like: resources :outlets do resources :problems end
Q: What framework combination to choose for developing a Windows 8 hybrid application? I am planning to develop a hybrid application for a Windows tablet. Which framework or tool should I use? Currently, I am thinking of going with a PhoneGap/Sencha combination, but I'm not 100% confident about it as I haven't read much about this combination on the Windows 8 platform (RT/Pro). Please suggest. A: PhoneJS is a paid framework and still has some issues. I would recommend you use the Cordova PhoneGap framework and the extension for VS2013. Also, here is a tutorial about developing for Windows: http://msopentech.com/opentech-projects/apache-cordova/ and here: http://blog.falafel.com/getting-started-with-cordova-and-multi-device-hybrid-app-in-visual-studio/
Q: SQL How to select the value of the end of season (every three months) Suppose I have some data as below:

   code   vol   val  num  test_date
   ------------------------------------------
1  00001   500  0.1  111  20180105
2  00001  1000  0.2  222  20180304
3  00001   200  0.1  111  20180330
4  00001   400  0.3  222  20180601
5  00001   200  0.2  333  20180630

My expected result is:

   code   vol   val  num  test_date
   ------------------------------------------
1  00001   200  0.1  111  20180330
2  00001   200  0.2  333  20180630
3  00001   200  0.2  333  20180928  -- Max(val) only 0928, there is no data in 20180930
4  00001   200  0.2  333  20181231

I would like to select the max(val) for the months in '3, 6, 9, 12'. How can I query this in MySQL? Thanks so much for any advice. A: You can use the QUARTER function available in MySQL.

select *
from test t1
inner join (
    select quarter(test_date) qtr, max(val) val
    from test
    group by quarter(test_date)) t2
  on t2.val = t1.val and t2.qtr = quarter(t1.test_date)

see dbfiddle.
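MySQL's QUARTER() simply maps months 1–3 → 1, 4–6 → 2, 7–9 → 3, and 10–12 → 4. The max-per-quarter grouping logic can be illustrated outside SQL (a Python sketch with made-up rows mirroring the question's data, not MySQL itself):

```python
# QUARTER(date) in MySQL maps months 1-3 -> 1, 4-6 -> 2, 7-9 -> 3, 10-12 -> 4.
def quarter(month):
    return (month - 1) // 3 + 1

rows = [  # (test_date, val) sample data mirroring the question
    (20180105, 0.1), (20180304, 0.2), (20180330, 0.1),
    (20180601, 0.3), (20180630, 0.2),
]

best = {}  # quarter -> (test_date, max val seen so far)
for test_date, val in rows:
    month = (test_date // 100) % 100   # yyyymmdd -> mm
    q = quarter(month)
    if q not in best or val > best[q][1]:
        best[q] = (test_date, val)

print(best)  # {1: (20180304, 0.2), 2: (20180601, 0.3)}
```

Note that, like the JOIN in the answer, a tie on max(val) within a quarter would match more than one row; here the first maximal row wins.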
Q: Errors prevented isopacket load: While loading isopacket `constraint-solver`: Cannot call method 'slice' of null When launching my meteor app with the following command: MONGO_URL="mongodb://localhost:27017/vision-test" PORT=8282 ROOT_URL="sertal.esb.local:8383" meteor -p 8383 I get following error: Errors prevented isopacket load: While loading isopacket `constraint-solver`: packages/meteor/url_server.js:11:1: Cannot call method 'slice' of null at Meteor.absoluteUrl.options (packages/meteor/url_server.js:11:1) at <runJavaScript-2>:1109:4 at <runJavaScript-2>:1194:3 The same error appears when I use the parameters to run the built application. A: the ROOT_URL="sertal.esb.local:8383" parameter is missing the http:// part. The http:// or https:// part of the ROOT_URL is mandatory
Q: Matrix over a finite field? I am trying to solve the following problem: Given is a $3\times 3$ matrix $M$ over $\mathbb{F}_{7}$, such that for every vectors $v,w\in \mathbb{F}_{7}^3\setminus \{0\}$ there exists an integer $n$ with $M^{n}v=w$. Find this $M$. Well, i think i have to look for matrices with full ranks, but i have no idea how and where to start... I've been thinking of idempotent matrices, but the problem is what to do with the vectors $v,w$. I can't just choose them to be (1,1,1). Does anyone have an idea how can this probem be solved? I will be glad to read your comments and remarks. Thank you in advance! A: Let $g$ be a primitive element of the field $E=\Bbb{F}_{7^3}$, and let $M$ represent multiplication by $g$ on $E$ viewed as a 3-dimensional vector space over $\Bbb{F}_7$. Show that this $M$ works.
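A brute-force way to see this concretely (a Python sketch; the modulus polynomial t³ − 2 is just one convenient irreducible cubic over F₇ — 2 is not a cube mod 7 — and the generator is found by search): build F₇³ = F₇[t]/(t³ − 2), pick a primitive element g, form the matrix of multiplication by g in the basis {1, t, t²}, and check that its powers applied to one nonzero vector visit all 342 nonzero vectors. Since the orbit of one nonzero vector is the whole set, every nonzero v has the same orbit, so any nonzero w is reachable from any nonzero v.

```python
p = 7

def mul(a, b):
    # a, b are coefficient triples (c0, c1, c2) for c0 + c1*t + c2*t^2
    prod = [0] * 5
    for i in range(3):
        for j in range(3):
            prod[i + j] = (prod[i + j] + a[i] * b[j]) % p
    # reduce with t^3 = 2 and t^4 = 2t
    return ((prod[0] + 2 * prod[3]) % p, (prod[1] + 2 * prod[4]) % p, prod[2])

def order(a):
    x, n = a, 1
    while x != (1, 0, 0):
        x, n = mul(x, a), n + 1
    return n

# find a primitive element g, i.e. of multiplicative order 7^3 - 1 = 342
candidates = [(c0, c1, c2) for c0 in range(p) for c1 in range(p) for c2 in range(p)]
g = next(a for a in candidates if a != (0, 0, 0) and order(a) == 342)

# column j of M holds the coordinates of g * t^j in the basis {1, t, t^2}
cols = [mul(g, e) for e in [(1, 0, 0), (0, 1, 0), (0, 0, 1)]]
M = [[cols[j][i] for j in range(3)] for i in range(3)]

# iterating M on one nonzero v visits every one of the 342 nonzero vectors
v = (1, 0, 0)
seen = set()
for _ in range(342):
    seen.add(v)
    v = tuple(sum(M[i][j] * v[j] for j in range(3)) % p for i in range(3))
assert len(seen) == 342
print(M)
```

The printed matrix depends on which primitive element the search finds first; any of the 108 primitive elements would do.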
Q: Sqlite Transaction with read and write Is it possible to use a transaction in Android to read a value that will be deleted later in the same transaction? Something like this:

SQLiteDatabase db = helper.getReadableDatabase();
db.beginTransaction();
String whereClause = "COL1 = ? AND COL2 = ?";
// make a SELECT query to read data
Cursor c = db.rawQuery("SELECT * FROM Foo WHERE " + whereClause, whereValues);
...
// read Cursor data etc.
...
db = helper.getWritableDatabase();
// Delete data
db.delete("Foo", whereClause, whereValues);
db.setTransactionSuccessful();
db.endTransaction();

So I want to get the data that I'm going to delete, to notify UI components that the data has been deleted. I work with multiple threads and want to ensure that between querying and deleting no other data in the table has been changed. I could do that in Java with synchronized etc. or lock the whole database (which I would only do as a very last option). So is it possible and safe to use a transaction for this use case? If yes, how do I use getWritableDatabase() and getReadableDatabase() in a write and read transaction? Does using a transaction guarantee that exactly the data queried before will be deleted? And that it's not possible that another data record has been inserted between the SELECT query and the DELETE (for instance by another thread)... I want to avoid that a data record is deleted that was not returned by the previous SELECT query (because it was inserted AFTER the SELECT query and BEFORE the DELETE statement). Example: I have this table called Foo:

ID | COL1 | COL2
---|------|-----
1  | 1    | 1
---|------|-----
2  | 0    | 1
---|------|-----
3  | 1    | 1

Let's assume I use the transaction approach: I would execute SELECT * FROM Foo WHERE COL1 = 1 AND COL2 = 1. As a result I will get the rows with ID = 1 and ID = 3.
Let's assume that right now, before executing the DELETE statement, another thread has inserted a row so that the table looks like this:

ID | COL1 | COL2
---|------|-----
1  | 1    | 1
---|------|-----
2  | 0    | 1
---|------|-----
3  | 1    | 1
---|------|-----
4  | 1    | 1

Next the DELETE statement will be executed: DELETE FROM Foo WHERE COL1 = 1 AND COL2 = 1 So the rows with ID = 1, ID = 3 and ID = 4 are deleted. But the result set of the previous SELECT was the rows with ID = 1 and ID = 3. So I have lost the row with ID = 4 without noticing it. And that's exactly what I'm trying to avoid. I guess that would be a use case for a LOCK, but I don't like LOCKS... A: Try this:

long id;
db = helper.getWritableDatabase();
db.beginTransaction();
Cursor c = db.rawQuery("SELECT * FROM Foo WHERE " + whereClause, whereValues);
try {
    while (c.moveToNext()) {
        id = c.getLong(c.getColumnIndex("_id"));
        db.delete("Foo", "_id = " + id, null);
    }
    db.setTransactionSuccessful();
} catch (Exception e) {
    // Error in between database transaction
} finally {
    db.endTransaction();
}

I am not sure if I can iterate a cursor and delete rows at the same time. If not, you can save the ids in an array first. A: You don't handle the transaction. And, if you use it like that, it's perfectly useless. You should commit (setTransactionSuccessful()) ONLY if no error occurs during the whole data manipulation (in a try...catch). Also note that a transaction is only useful if you do multiple data manipulations. A SELECT does not manipulate any data. So, after all this digression, the answer is... YES, YOU CAN.
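The safest variant of the idea above — select the matching primary keys first, then delete exactly those keys inside the same transaction — can be sketched with Python's sqlite3 (a standalone illustration with a made-up table mirroring the question, not the Android API):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Foo (id INTEGER PRIMARY KEY, col1 INT, col2 INT)")
con.executemany("INSERT INTO Foo (col1, col2) VALUES (?, ?)",
                [(1, 1), (0, 1), (1, 1)])
con.commit()

# One transaction: read the matching rows, then delete them *by primary key*,
# so a row inserted in between can never be deleted without being noticed.
where, args = "col1 = ? AND col2 = ?", (1, 1)
with con:  # opens a transaction; commits on success, rolls back on error
    ids = [r[0] for r in
           con.execute(f"SELECT id FROM Foo WHERE {where} ORDER BY id", args)]
    con.executemany("DELETE FROM Foo WHERE id = ?", [(i,) for i in ids])

print(ids)  # [1, 3] -- exactly the rows we can now report as deleted
remaining = [r[0] for r in con.execute("SELECT id FROM Foo ORDER BY id")]
print(remaining)  # [2]
```

Deleting by the captured keys (rather than re-running the WHERE clause in the DELETE) is what removes the "lost row with ID = 4" hazard, independent of the isolation level.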
Q: Uniqueness of differential equation solutions I need to solve this DE $$y'' - 2x^{-1}y' + 2x^{-2}y = x \sin x \tag{*}$$ I found the complementary functions to be $x^2$ and $x$, and also noticed by guessing that the particular integral is $y = - x \sin x$ so the general solution is $$y = Ax + Bx^2 + -x \sin x$$ But how do I know this is the most general form of the solution? (I'm only told that it is). How do I know there aren't any other functions that satisfy $(*)$? Also why does the solution of an $n$th order homogeneous differential equation always have exactly $n$ arbitrary constants? A: $$(\ast)\iff\left(\frac{y}x\right)''=\sin x=\left(-\sin x\right)''\quad (x\ne0)$$
Q: Using speedglm on a data frame with a deleted factor I am trying to use the speedglm package for R to estimate regression models. In general the results are the same as using base R's glm function, but speedglm delivers unexpected behavior when I completely remove a given factor level from a data.frame. For example, see the code below: dat1 <- data.frame(y=rnorm(100), x1=gl(5, 20)) dat2 <- subset(dat1, x1!=1) glm("y ~ x1", dat2, family="gaussian") Coefficients: (Intercept) x13 x14 x15 -0.2497 0.6268 0.3900 0.2811 speedglm(as.formula("y ~ x1"), dat2) Coefficients: (Intercept) x12 x13 x14 x15 0.03145 -0.28114 0.34563 0.10887 NA Here the two functions deliver different results because factor level x1==1 has been deleted from dat2. Had I used dat1 instead the results would have been identical. Is there a way to make speedglm act like glm when processing data like dat2? A: Droplevels I think is the key. str(droplevels(dat2)) vs. str(dat2) - even though x1==1 is dropped it's still listed in the factor levels So speedglm(as.formula("y ~ x1"), droplevels(dat2)) should equal glm("y ~ x1", dat2, family="gaussian")
Q: What is the correct relationship between light intensity and wavelength? So we looked at the emission line spectra of noble gases in the lab today (hydrogen, neon, helium). And I noticed that the brightest spectral line in the helium spectrum is the yellow one and the faintest was the violet/blue. But isn't wavelength related to energy by $E = \frac{hc}{λ}$? Since the yellow light has a greater wavelength than the violet one, its energy should be lower, and therefore shouldn't it be less intense? Or is there another relationship between intensity and wavelength?
A: The formula you cite relates the wavelength to the energy of an individual photon. The perceived brightness of a light source also depends on the number of photons which are present. The lower-energy lines were brighter because the corresponding states were being excited more frequently: it's easier to drive an atom into less excited states. This means that more yellow photons were being emitted than others. (Addendum: I just also remembered that there could be some influence of human eye sensitivity here, but I don't know whether the violet line in the series is far enough out that our eyes start to have trouble detecting it...)
Q: Send a notification when a question I voted to close gets edited This has happened to me more than once: I see a question with interesting content, but which fails on form: badly explained, no example code, hard to understand, ... In short, a question I am interested in answering. I vote to close the question, leaving a comment explaining my vote. For some reason, I miss the edit to the question. It gets edited and shows up in the active list, but I am not connected at that moment and simply cannot watch the monitor. From there on, the question follows its course: it may be reopened, it may receive answers (some of them, perhaps, similar enough to mine that writing my own no longer makes sense). Since I voted for its closure, I feel morally obliged to vote for its reopening once the reason that led to the closure has been addressed. It is a topic that appeals to me, and one in which I have some knowledge that could help the user and, by extension, the community. Why deny it: I also want a shot at the 15 points of the ✓ accepted answer (or the occasional ▲ upvote). One possible solution would be to implement a mechanism for notifying about new edits; technically, it is similar to the notifications for new messages (comments on posts or in chat) that we already receive. At the very least, this feature should be available as a ☑ checkbox in the vote-to-close dialog. Ideally, it should be available on every question. Of course, I understand that a limit (daily and per user) would have to be set in the latter case, and that this could imply too many changes to the backend databases. If the feature were implemented only in the close dialog, I understand the changes to the system's databases would be minimal: daily close votes are limited, so no more notifications than close votes would ever be needed.
I think relatively simple changes to the backend would suffice. What would you think of a feature like this? Would it be possible to implement it?
A: It's an interesting idea. In the meantime, you can receive notifications for edited posts by using Stack Overflow Extras.
Q: Change element content with onClick in React In my application I have multiple blocks generated dynamically, and each one of them has an onClick event. My goal is to be able to change the contents of the div when the click happens. Is there a way to do this through the event.target property of the onClick event? Or should I create a ref for each div upon creation and then work with refs? Or should I create an array of div elements in component state, and search & modify the element later, re-rendering all divs from the array?
A: Since the blocks are generated dynamically, put the onClick event on the child components.

const Parent = () => {
  return (
    <div>
      <Child content={content} contentAfterClick={content} />
      <Child content={content} contentAfterClick={content} />
    </div>
  );
};

class Child extends Component {
  constructor(props) {
    super(props);
    this.state = {
      read: false,
    };
  }

  render() {
    if (this.state.read) {
      return <div>{this.props.contentAfterClick}</div>;
    }
    return (
      <div onClick={() => this.setState({ read: true })}>
        <div>{this.props.content}</div>
      </div>
    );
  }
}
Q: Storing data to NSUserDefaults In my iPhone app, I have a class called Contact which consists of an ABRecordRef to serve as a reference to a particular contact. I need to store groups of these contacts in NSUserDefaults, but things aren't working out so well since Contact is a custom class. Any ideas of what to do in this case?
A: You cannot use NSUserDefaults for a custom class. From the documentation: The NSUserDefaults class provides convenience methods for accessing common types such as floats, doubles, integers, Booleans, and URLs. A default object must be a property list, that is, an instance of (or for collections a combination of instances of): NSData, NSString, NSNumber, NSDate, NSArray, or NSDictionary. If you want to store any other type of object, you should typically archive it to create an instance of NSData.
Try using NSData. For example, to load custom objects into an array, you can do

NSUserDefaults *currentDefaults = [NSUserDefaults standardUserDefaults];
NSData *dataRepresentingSavedArray = [currentDefaults objectForKey:@"savedArray"];
if (dataRepresentingSavedArray != nil) {
    NSArray *oldSavedArray = [NSKeyedUnarchiver unarchiveObjectWithData:dataRepresentingSavedArray];
    if (oldSavedArray != nil)
        objectArray = [[NSMutableArray alloc] initWithArray:oldSavedArray];
    else
        objectArray = [[NSMutableArray alloc] init];
}

To archive the data, use:

[[NSUserDefaults standardUserDefaults] setObject:[NSKeyedArchiver archivedDataWithRootObject:objectArray] forKey:@"savedArray"];

This will all work so long as your custom object complies with the NSCoding protocol:

- (void)encodeWithCoder:(NSCoder *)coder
{
    [coder encodeObject:label forKey:@"label"];
    [coder encodeInteger:numberID forKey:@"numberID"];
}

- (id)initWithCoder:(NSCoder *)coder
{
    self = [super init];
    if (self != nil) {
        label = [coder decodeObjectForKey:@"label"];
        numberID = [coder decodeIntegerForKey:@"numberID"];
    }
    return self;
}

ABRecord is an opaque C type, so it's not an
object in the sense of Objective-C. That means you cannot extend it, you cannot add a category on it, you cannot message it. The only thing you can do is call the functions described in the ABRecord Reference with the ABRecord as a parameter. You could do two things to keep the information referenced by the ABRecord around:
- Get the ABRecord's ID via ABRecordGetRecordID(). The ABRecordID is defined as int32_t, so you can cast it to an NSInteger and store it wherever you like. You can later get the record back from ABAddressBookGetPersonWithRecordID() or ABAddressBookGetGroupWithRecordID(). However, the record could be changed or even deleted by the user or another app in the meantime.
- Copy all values inside the record to a standard NSObject subclass and use NSCoding as discussed above to store it. You will then, of course, not benefit from changes or additions to the record the user could have made.
Q: Do Indus texts potentially have the oldest Indo-European text that we know of? There are some texts left by an ancient civilization in India. They were written around 2700-1800 BCE. They have not been deciphered yet. Is it possible that the texts were Indo-European? Or could only some of the later texts be Indo-European, because there wasn't contact yet at first? Just curious. If they are, it probably beats Linear A. If it is, the language may be the closest thing to Proto-Indo-European that has been recorded. Note: religious items may show what they believed, and similarities with other IE religions, including Hinduism, could be used as evidence.
A: It is not even established that those symbols represent a language, because statistically they do not "fit" within syllabic, alphabetic, or logographic systems. If we aren't even sure they're language, we cannot possibly guess which language family they'd belong to, although the IVC is sometimes posited to have migrated south, forming the Dravidian cultures, in which case they wouldn't be IE.
A: It is highly dubious that the Indus Signs might encode an Indo-European language, because, according to all probabilities, no Indo-European language was present in that area at the time this system was used. Attempts at "reading" Indo-Aryan in the Indus Signs are mostly motivated by political (nauseating...) agendas, and they do not resist basic counter-checks. The received scientific opinion is that speakers of Indo-Iranian languages arrived in the Indus Valley area after 1500 BCE, that is to say after this civilisation had already entered a process of decay and collapse. The claim that the Indus Signs cannot encode language is doubtless false. The statistical profile of the Signs is consistent with a syllabary of the Japanese type, that is to say a syllabary with quite a lot of signs (< 150).
Besides, it is possible that the system is a mix of phonetic symbols and ideograms, which makes deciphering even more difficult. Some seals clearly seem to be of an accountancy nature, with signs and numbers. Even Sproat, who wrote papers with Farmer and Witzel, is ready to concede that some seals do look like accountancy. The main problem is that the inscriptions are all short and we have no idea what the language(s) is/are. A similar issue exists with Linear A in Crete. The nature of the corpus makes deciphering difficult, and at the same time, it is hardly possible to prove that an attempt at deciphering is correct.
Q: Selecting a table row whose siblings are hidden I have an HTML table with the following structure:

<table>
  <tbody>
    <tr><th>heading1</th></tr>
    <tr style="display: none"> <td>data1</td></tr>
    <tr style="display: none"> <td>data1</td></tr>
  </tbody>
  <tbody>
    <tr><th>heading2</th></tr>
    <tr> <td>data1</td></tr>
    <tr style="display: none"> <td>data1</td></tr>
  </tbody>
</table>

I am looking for a way to select a row in that table that holds a heading but all of whose sibling rows are hidden. In this example, heading2 wouldn't be selected because only one of its sibling rows is hidden.
A:

var tr = $('tr:first-child').filter(function() {
  return $(this).siblings(':hidden').length == $(this).siblings().length;
});

Example: http://jsfiddle.net/u6Qj5/1/
Q: Does "old" literally mean "old" in this context, or is it an intensifier?    "Haven't I told you he's not going?" he hissed. "He's going to Stonewall High and he'll be grateful for it. I've read those letters and he needs all sorts of rubbish –– spell books and wands and ––"    "If he wants ter go, a great Muggle like you won't stop him," growled Hagrid. "Stop Lily an' James Potter's son goin' ter Hogwarts! Yer mad. His name's been down ever since he was born. He's off ter the finest school of witchcraft and wizardry in the world. Seven years there and he won't know himself. He'll be with youngsters of his own sort, fer a change, an' he'll be under the greatest headmaster Hogwarts ever had Albus Dumbled––"    "I AM NOT PAYING FOR SOME CRACKPOT OLD FOOL TO TEACH HIM MAGIC TRICKS!" yelled Uncle Vernon. (Harry Potter and the Sorcerer's Stone) Old is used as an intensifier, say these websites: Webster's #5; Wiktionary #12. So I guess the example's old has the meaning after an adjective crackpot. But I’m not sure, ‘cause the websites seem to kind of restrict the boundary of the use. Can old be used as an intensifier after all sorts of adjectives or adjective phrases? A: I think old has a literal meaning. This definition seems closest. c : of long standing In other words the person has been a fool for a very long time, long enough to prove that that's all he'll ever be.
Q: How to get user_authenticatedID in Application Insights for Azure Functions App? I have been going through several different resources to add this, but it does not seem to be working. I would like to record the authenticated user as a request is made to my Azure Function. I can obtain the authenticated user from the claims principal. I followed the docs to inject the TelemetryConfiguration and instantiate the TelemetryClient with that configuration. I check the claims to see if they are null and, if not, I set TelemetryClient.Context.User.AuthenticatedUserId = claimsPrincipal.Identity.Name. However, in the logs I am unable to see the field being populated.

class {
    private readonly TelemetryClient tc;

    public classConstructor(TelemetryConfiguration config) {
        tc = new TelemetryClient(config);
    }

    public async Task<IActionResult> Run([HttpTrigger(AuthorizationLevel.Anonymous, "Get", Route = "user/session")] HttpRequest req, ILogger log, ClaimsPrincipal claimsPrincipal) {
        // null check on claimsPrincipal
        tc.Context.User.AuthenticatedUserId = claimsPrincipal?.Identity?.Name;
    }
}

startup class {
    builder.Services.AddSingleton<TelemetryConfiguration>(provider => {
        var telemetryConfiguration = new TelemetryConfiguration();
        telemetryConfiguration.InstrumentationKey = "key";
        return telemetryConfiguration;
    });
}

A: I'm not sure where you want to show TelemetryClient.Context.User.AuthenticatedUserId.
If you want to see it in Logs, you need to add: log.LogInformation(tc.Context.User.AuthenticatedUserId); If you want to see it in application insights, you need to use TrackTrace or TrackEvent: tc.TrackEvent("---------track-event" + tc.Context.User.AuthenticatedUserId); tc.TrackTrace("---------track-trace" + tc.Context.User.AuthenticatedUserId); Then you can see them in application insights(shown as below screenshot) Here provide my code for your reference: using System; using System.IO; using System.Threading.Tasks; using Microsoft.AspNetCore.Mvc; using Microsoft.Azure.WebJobs; using Microsoft.Azure.WebJobs.Extensions.Http; using Microsoft.AspNetCore.Http; using Microsoft.Extensions.Logging; using Newtonsoft.Json; using Microsoft.ApplicationInsights; using Microsoft.ApplicationInsights.Extensibility; using System.Security.Claims; using ikvm.runtime; using Microsoft.Azure.WebJobs.Hosting; using System.Linq; using Microsoft.Extensions.DependencyInjection; namespace FunctionApp6 { public class Function1 { private readonly TelemetryClient tc; public Function1(TelemetryConfiguration config) { this.tc = new TelemetryClient(config); } [FunctionName("Function1")] public async Task<IActionResult> Run( [HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)] HttpRequest req, ILogger log, ClaimsPrincipal claimsPrincipal) { tc.Context.User.AuthenticatedUserId = "---userid---"; log.LogInformation(tc.Context.User.AuthenticatedUserId); tc.TrackEvent("---------track-event" + tc.Context.User.AuthenticatedUserId); tc.TrackTrace("---------track-trace" + tc.Context.User.AuthenticatedUserId); string responseMessage = "This HTTP triggered function executed successfully"; return new OkObjectResult(responseMessage); } public class MyStartup : IWebJobsStartup { public void Configure(IWebJobsBuilder builder) { var configDescriptor = builder.Services.SingleOrDefault(tc => tc.ServiceType == typeof(TelemetryConfiguration)); if (configDescriptor?.ImplementationFactory != null) { 
var implFactory = configDescriptor.ImplementationFactory; builder.Services.Remove(configDescriptor); builder.Services.AddSingleton(provider => { if (implFactory.Invoke(provider) is TelemetryConfiguration config) { var newConfig = TelemetryConfiguration.CreateDefault(); newConfig.ApplicationIdProvider = config.ApplicationIdProvider; newConfig.InstrumentationKey = config.InstrumentationKey; return newConfig; } return null; }); } } } } }
Q: Display WordPress Shortcode as Plain Text Is there any way to display a WordPress shortcode as plain text in a post? For example, to show [gallery] just like that. Thank you
A: I think the easiest way is to double the brackets ([[gallery]] renders as the literal text [gallery]), but you could also replace the brackets as follows:

[ with &#91;
] with &#93;
Q: Method can be made static, but should it? Resharper likes to point out multiple functions per asp.net page that could be made static. Does it help me if I do make them static? Should I make them static and move them to a utility class? A: Performance, namespace pollution etc are all secondary in my view. Ask yourself what is logical. Is the method logically operating on an instance of the type, or is it related to the type itself? If it's the latter, make it a static method. Only move it into a utility class if it's related to a type which isn't under your control. Sometimes there are methods which logically act on an instance but don't happen to use any of the instance's state yet. For instance, if you were building a file system and you'd got the concept of a directory, but you hadn't implemented it yet, you could write a property returning the kind of the file system object, and it would always be just "file" - but it's logically related to the instance, and so should be an instance method. This is also important if you want to make the method virtual - your particular implementation may need no state, but derived classes might. (For instance, asking a collection whether or not it's read-only - you may not have implemented a read-only form of that collection yet, but it's clearly a property of the collection itself, not the type.) A: Static methods versus Instance methods 10.2.5 Static and instance members of the C# Language Specification explains the difference. Generally, static methods can provide a very small performance enhancement over instance methods, but only in somewhat extreme situations (see this answer for some more details on that). Rule CA1822 in FxCop or Code Analysis states: "After [marking members as static], the compiler will emit non-virtual call sites to these members which will prevent a check at runtime for each call that ensures the current object pointer is non-null. 
This can result in a measurable performance gain for performance-sensitive code. In some cases, the failure to access the current object instance represents a correctness issue." Utility Class You shouldn't move them to a utility class unless it makes sense in your design. If the static method relates to a particular type, like a ToRadians(double degrees) method relates to a class representing angles, it makes sense for that method to exist as a static member of that type (note, this is a convoluted example for the purposes of demonstration). A: Marking a method as static within a class makes it obvious that it doesn't use any instance members, which can be helpful to know when skimming through the code. You don't necessarily have to move it to another class unless it's meant to be shared by another class that's just as closely associated, concept-wise.
Q: Constraints on sliding windows Let $L\subseteq \Sigma^*$ be a language of finite words and $n>0$ some integer. I would like to know if anything is known on the time and space complexity, with respect to $n$, of checking membership in $L$ for a sliding window of size $n$ over an infinite stream. Here I mean the incremental complexity of moving the window along the stream and computing on the fly the membership of the window, seen as a word, in the language $L$. Example: It is simple to check with space complexity $\mathcal{O}(\log(n))$ and time complexity $\mathcal{O}(1)$ for the language $\Sigma^* ab \Sigma^*$ with $\Sigma=\{a,b\}$. You simply need to remember the last $ab$ factor seen in a register and to increment it by $1$ at every new symbol, which is constant in the appropriate RAM model. Cases of interest (at least for me) include:

- $L$ is the Parity language over $\{0, 1\}$: the number of $1$s is even, or as a regexp: $(0^* + 10^*1)^*$
- $L$ is the Majority language over $\{0,1\}$: there are more $1$s than $0$s.
- $L$ is the language over $\{0,1,e\}$ such that there exists a $0$ before a $1$.
- $L$ is an arbitrary regular language.

In particular, is it necessary to store the whole window for those languages? It seems so, but I fail to see a proof. Even if we allow space $\mathcal{O}(n)$, can we reply in time $\mathcal{O}(1)$?
A: It seems it would depend on your particular model, in particular what information you have access to. From what I infer, you are thinking of the following model:

- you have a memory $m$, for instance of size $O(\log n)$.
- at each step, you read a new letter $a\in\Sigma$ of your stream, and you are allowed to modify your memory $m$.
- you then have to say whether the word under your new sliding window, shifted by $1$, is in $L$, based solely on $m$.

In particular, you don't have access to the letter leaving the sliding window.
If you had this information, the problem would be more symmetrical, and for instance the languages Parity and Majority would be computable with constant and logarithmic memory respectively. Memory constraint In the setting above without this information, here is for instance a proof that linear memory is needed for Parity, so basically you need to remember the whole word. Indeed, if you use sublinear memory, there are two different words $u$ and $v$ of length $n$ that give rise to the same memory state $m$. Then, let $d$ be the first position where $u$ and $v$ differ, and consider that your stream continues with $0^d$. After having read this suffix, you will have the same memory $m'$, so you must answer the same thing in both cases. But since the bit that got out of the window is not the same for $u$ and $v$, your answer is wrong for one of the two. Time constraint It is possible to recognize any regular language $L$ with quasi-constant amortized time. The time complexity is the one from a union-find-delete (UFD) data structure. For instance this paper shows that the operations union, makeset and delete can be done in constant time, and the find operation in inverse Ackermann time (so at most $5$ for any input value making sense in the physical world, see wikipedia on union-find). Let $A$ be a DFA recognizing $L$, with states $Q$, transition function $\delta:Q\times\Sigma\to Q$, and initial state $q_0$. Let $k=|Q|$; since $L$ is fixed, $k$ is a constant. Let $a_0a_1\dots a_{n-1}\in\Sigma^n$ be the word currently under the window. The idea is to maintain a memory structure $m$, giving for each word $u_i=a_i\dots a_{n-1}$ the state $q_i$ reached by $A$ after reading $u_i$. We will also maintain an index $\gamma\in[0,n-1]$, giving the identifier associated with $u_0$. The identifier for $u_j$ is $\gamma+j \mod n$. The UFD structure will store identifiers of all words $u_0\dots u_{n-1}$, and two words $u_i,u_j$ will be in the same partition if and only if $q_i=q_j$.
The common state $q$ labels the partition. So in order to know whether the current word is in $L$, it suffices to identify the label $q$ of the partition of the identifier $\gamma$ (corresponding to the word $u_0$), and check whether $q$ is accepting. When a new letter $a$ is read, we update the memory as follows:

- the identifier $\gamma$ is deleted from the UFD structure.
- a new identifier $\gamma$ is added, and joins the partition labeled $q_0$. A new partition is created if none exists. For this we can use the help of an auxiliary memory of constant size, storing which states are currently labelling partitions, and giving a witness identifier for each one.
- the current index $\gamma$ is updated to $\gamma'=\gamma+1\mod n$.
- all labels of partitions are updated to their $a$-successors: $q'=\delta(q,a)$ for each label $q$. Partitions that get the same label are merged.

All these operations are either constant time or in quasi-constant amortized time.
A: Context Let $\mathcal{L}$ be a fixed regular language and let $(\mathcal{Q}, \Sigma, \delta, q_0, \mathcal{F})$ be an automaton recognizing $\mathcal{L}$. I will suppose in this post that we are working in the RAM model with cells of size logarithmic in the maximal size of the window, and that all the operations regarding the automaton are constant time. Claim We can maintain $\mathcal{L}$ in constant (non-amortized) time for a window of constant size. Let us first show how to maintain membership in $O(\ln(n))$ time, then how this can be improved to constant amortized time, and then finally let us prove how we can obtain the constant non-amortized result. Proof of the $O(\ln(n))$ bound We note $\bar{\delta}(w) : \mathcal{Q}\rightarrow \mathcal{Q}$ the function that associates $q$ with $\delta(q,w)$, which we call the effect of $w$. For the sake of simplicity we identify a letter $c$ with its effect $\bar{\delta}(c)$.
To maintain whether the contents of the window belong to the language, it suffices to maintain the effect of the window, because a word $w$ belongs to $\mathcal{L}$ iff $(\bar{\delta}(w))(q_0) \in \mathcal{F}$. The idea here is to build a segment tree over the infinite word that allows us to query the effect of the sliding window in time $O(\ln(n))$. Note that $\bar{\delta}(w_1 w_2) = \bar{\delta}(w_2) \circ \bar{\delta}(w_1)$, therefore the effect of a word can, indeed, be computed with such a tree. As the stream is unbounded, we cannot use a segment tree directly, as the memory would increase linearly and the depth of the tree would grow logarithmically in the size of the stream that has been read. To counter this problem we can use the following data structure. Let $k$ be such that $2^k \leq n < 2^{k+1}$. We will maintain three binary trees of depth $k$: $A, B$ and $C$. The binary tree $A$ will be rightmost full, $B$ will be either empty or full, while $C$ will be leftmost full (but never full), as depicted here (figure omitted). To move the sliding window we need to remove a letter and add a new one. Let us first cover the removal: if $A$ is empty then we exchange $A$ and $B$. Then we remove the leftmost leaf from $A$. Adding a new letter on $C$ is always possible ($C$ is never full). If after this $C$ becomes full, we exchange $C$ and $B$. It is easy to see that, with our choice of $k$, when we switch $A$ and $B$ because $A$ is empty, then $B$ cannot also be empty, so it must be full. Likewise, when we exchange $B$ and $C$ because $C$ is full, then $B$ cannot also be full, so it is empty. Exchanging trees can be done in $O(1)$. Adding a letter or removing a letter at depth $k$ can be done in $O(k)$: we locate in $O(k)$ the place where the letter should be removed or inserted, we perform the change (accounting for the effect of the added or removed letter), and then we update the annotation of the effects of all parent nodes in $O(k)$ by going upwards in the tree.
Therefore, we have an $O(\ln(n))$ algorithm. Note that reading the effect of the whole sliding window can be done by combining the effect of the roots of $A$, $B$ and $C$ and thus in $O(1)$. Note that this structure does not require the sliding window to be of constant size $n$ but only that we alternate between insertion and deletion (it will be useful for the next proof). Proof of the constant amortized bound The infinite stream will be split into "chunks" $C_1 \dots $ of size $K=O(\ln(n))$. At each step of the computation, the sliding window will start within a chunk $C_i$ and end within a chunk $C_{j}$ ($j-i$ will be roughly equal to $n/K$). We will maintain separately (1) the effect of the part coming from the first chunk $C_i$, (2) the effect from the part of the last chunk $C_{j}$ and (3) the combined effect of all the chunks from the middle $C_{i+1} \dots C_{j-1}$ (see figure below). The overall effect will simply be the combination of the effects of (1), (2), and (3). The effect of (1) can be maintained in the following way: each time our sliding window starts in a new chunk $C_i$ we compute for all suffixes $S$ of $C_i$ the effect of $S$. This can be done in time $O(|C_i|)$ because the effect of $aS$ (where $a$ is a letter and $S$ a word) can be computed by combining the effect of $a$ and $S$. Therefore to maintain (1) we pay a price $K$ each time we enter a new block of size $K$. Once the effects of all the suffixes have been computed we can answer in $O(1)$. In amortized complexity this gives us $O(1)$. The effect of (2) can be maintained in non-amortized $O(1)$ as we only add letters at the end of the current chunk, or restart from an empty chunk. The effect of (3) can be computed using a tree as seem above for the $O(\ln(n))$ bound. Clearly we only add and remove a new chunk every $K$ steps. And it takes $O(\ln(n))$ to make the insertion/deletion therefore we also have the expected amortized complexity. 
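The "effect" abstraction used throughout these proofs is easy to play with in code. Below is a small Python sketch (mine, not from the answers; all names are illustrative) that maintains the effect of the window with the classic two-stack FIFO trick instead of the three trees: each stack entry stores its letter's effect together with the running composition of its segment, which gives amortized O(1) per slide rather than the stronger bounds established in this answer.

```python
class SlidingWindowDFA:
    """Membership of a sliding window in a fixed regular language.
    Stack entries store (letter effect, running composition); amortized O(1)."""

    def __init__(self, n_states, delta, q0, accepting):
        self.n, self.delta, self.q0 = n_states, delta, q0
        self.accepting = set(accepting)
        self.identity = tuple(range(n_states))
        self.front = []  # letters about to leave the window; oldest on top
        self.back = []   # letters recently pushed; newest on top

    def _effect(self, c):
        # the function q -> delta(q, c), tabulated as a tuple
        return tuple(self.delta[(q, c)] for q in range(self.n))

    @staticmethod
    def _compose(f, g):  # apply f first, then g
        return tuple(g[f[q]] for q in range(len(f)))

    def _top(self, stack):
        return stack[-1][1] if stack else self.identity

    def push(self, c):   # a new letter enters on the right
        e = self._effect(c)
        self.back.append((e, self._compose(self._top(self.back), e)))

    def pop(self):       # the oldest letter leaves on the left
        if not self.front:
            while self.back:  # flush, composing suffix effects
                e, _ = self.back.pop()
                self.front.append((e, self._compose(e, self._top(self.front))))
        self.front.pop()

    def window_in_language(self):
        full = self._compose(self._top(self.front), self._top(self.back))
        return full[self.q0] in self.accepting


# Parity over {0,1}: the state flips on '1'; accept when the count of 1s is even.
delta = {(q, c): q ^ (c == '1') for q in (0, 1) for c in '01'}
w = SlidingWindowDFA(2, delta, q0=0, accepting={0})
for c in "101":
    w.push(c)
print(w.window_in_language())  # window "101" has two 1s -> True
w.push('1'); w.pop()           # slide: window becomes "011"
print(w.window_in_language())  # still two 1s -> True
```

Since effects form a monoid under composition, the same skeleton works for any associative aggregation over a sliding window, which is exactly why the tree and chunk constructions in this answer apply to every regular language at once.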
Proof of the constant non-amortized bound In the proof above, we have an algorithm that uses $O(1)$ computation at all steps plus $O(\ln(n))$ every $O(\ln(n))$ steps. The basic idea here is to use a small amount of computation at each of the steps to amortize the $O(\ln(n))$ cost. For the effect of (1), this is easy, when we are in a block $C_i$ we compute the effect of suffixes for the block $C_{i+1}$, for each new letter removed in $C_i$ we compute a new suffix of $C_{i+1}$. This is $O(1)$. For the effect of (2), there is nothing to do as it was already $O(1)$. For the effect of (3), it is more complex. In the algorithm that we used for the amortized complexity, each $O(\ln(n))$ steps we remove and add a chunk. When removing a chunk, we must locate in $O(\ln(n))$ where the chunk to remove is located (note that it may be in binary tree $B$ or even $C$), remove it, and update upwards the effect of the parent nodes. This modification of $O(\ln(n))$ can be performed by doing some $O(1)$ computation step each time the window is modified, resulting in $O(1)$ non-amortized time. Note that, while we perform these operations in place, the effects annotating the nodes of the affected tree $A$, $B$ or $C$ may no longer be valid (as they partly reflect the deletion), but this is not an issue: we only use the effect of the root of the tree to determine whether the current window is in $\mathcal{L}$ or not, and this annotation is modified last, when the deletion has actually taken place. When inserting a chunk, the same reasoning work, but there is an added complication: the value of the inserted chunk, and its effect, is only known at the very end of the insertion, so we cannot do this amortization as-is. To work around this problem, we split our sliding window into four blocks. 
As before we will have a block (1) covering the effect of the suffixes of the first chunk and (2) for the prefixes of the last chunk but (3) will now only cover $C_{i+1}$ up to $C_{j-2}$ and thus we will have a block (4) computing the effect of $C_{j-1}$. The effect of (4) can easily be maintained as it was already computed for (2) (the entire chunk is a suffix of itself). So this ensures that the chunk to insert in the tree, namely (4), is completely known, and we can do a similar amortization. The last subtlety to note is that the amortization of insertions and deletions may end up updating the same tree in-place (e.g., we insert a node in tree $C$, and we remove a node in tree $C$ which will be swapped with $B$ and $A$). However, one can show that this is no problem, because the modifications performed in-place in the amortization step are performed from bottom to top, so operations will be reflected in the right order. Alternatively, we can modify the structure of trees used in the $O(\ln(n))$ bound to use 5 trees instead of 3, and this can ensure that the computations when amortizing insertions and deletions will never take place in the same tree. Case of a variable-size window If we allow arbitrary insertions and deletions and allow the size of the sliding window to evolve freely, then the scheme can probably be adjusted. This would require the use of more trees in the $O(\ln(n))$ algorithm, with the possibility to merge together trees or split trees when the window size changes too much (i.e., its logarithm changes). What is more, the block size in the $O(1)$ amortized algorithm would also need to change; but as these window size changes are rare, we should be able to amortize these complete recomputations of the data structure. A: Here is a second, simpler and more general answer that was obtained after discussing with a3nm. Problem We fix a regular language $\mathcal{L}$ and we are interested in the following word problem. 
At start, we have an empty word, and then we receive updates taking one of the following forms:

- Insert a letter at the beginning of the word
- Insert a letter at the end of the word
- Delete the letter at the beginning of the word
- Delete the letter at the end of the word

After each update we want to know whether the current word is in the language $\mathcal{L}$. Clearly this problem is more general than the one proposed by C.P.

Claim

The problem above can be solved in constant (non-amortized) time per update.

Proof

We note $\bar{\delta}(w) : \mathcal{Q}\rightarrow \mathcal{Q}$ the function that associates $q$ with $\delta(q,w)$, which we call the effect of $w$. For the sake of simplicity we identify a letter $c$ with its effect $\bar{\delta}(c)$. To maintain whether the contents of the window belong to the language, it suffices to maintain the effect of the window, because a word $w$ belongs to $\mathcal{L}$ iff $(\bar{\delta}(w))(q_0) \in \mathcal{F}$. I assume that it is obvious how to maintain the word itself after updates in a way that allows querying the letter at any position in the word (e.g. using an amortized circular buffer).

We define the notion of guardian as a position within the word with a left span and a right span. Let $w_1 \dots w_m$ be our word. A guardian placed at position $p$ with left span $l$ stores a list $b_1 \dots b_l$ of effects where the element $b_i$ corresponds to the effect of $w_{p-i} \dots w_{p-1}$. Similarly, if it has a right span $r$, then it stores a list $a_1 \dots a_r$ with $a_i$ corresponding to the effect of $w_p \dots w_{p+i}$. We say that a guardian is full whenever it spans the whole word.

Properties of guardians:

- It takes constant time to increase the left and right spans of a guardian by a constant number.
- If we have a guardian at some position in a word $w$ that is updated, and if the guardian was not placed at a position that is deleted, we can get in constant time a guardian for the updated word that has the same span (possibly minus one if the span covered a letter that has been deleted).
- We can maintain a full guardian after each update in constant time, as long as its position is not deleted.

Here is the sketch of our algorithm: let $N$ denote the current size of the word. We will maintain a full guardian roughly at the middle of the word (between $N/4$ and $3N/4$). As soon as the full guardian escapes the middle, we start creating a second guardian at position $N/2$. After each update we increase the span of this second guardian by 8. This ensures that the second guardian will become full before our first guardian gets deleted, and that the position of our second guardian will stay in the middle (between $3N/7$ and $4N/7$) as long as it is not full. When the second guardian is full, we replace the first guardian with the second. After each update, we have a full guardian that allows us to answer whether the word belongs to $\mathcal{L}$.

Extension with infix queries

With a3nm we also found out that we can solve an extension of this problem (insert & delete at the beginning and at the end of the word) plus queries of the form "given $(i,j)$, is the factor $w[i:j]$ of the current word $w$ within $\mathcal{L}$?" using Bojańczyk's structure described in Section 2.2 of his paper Factorization Forests. In his paper, Bojańczyk does not describe how the structure can be updated, and doing so would require a little care, but it can be updated in constant time for a fixed language.
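To make the notion of effect concrete, here is a small Python sketch (the automaton and words are invented for the example): it computes $\bar{\delta}(w)$ as a state-to-state mapping, composes the effects of two halves of a word, and reads membership off the composed effect alone, which is exactly what a full guardian allows.

```python
# Hypothetical DFA over {a, b} accepting words that contain "ab".
# States: 0 (nothing seen), 1 (just saw 'a'), 2 (saw "ab", accepting).
DELTA = {
    (0, 'a'): 1, (0, 'b'): 0,
    (1, 'a'): 1, (1, 'b'): 2,
    (2, 'a'): 2, (2, 'b'): 2,
}
STATES = (0, 1, 2)

def effect(word):
    """Effect of `word`: a tuple t with t[q] = delta(q, word)."""
    t = list(STATES)
    for c in word:
        t = [DELTA[(q, c)] for q in t]
    return tuple(t)

def compose(e1, e2):
    """Effect of the concatenation u.v from the effects of u and v."""
    return tuple(e2[q] for q in e1)

def accepts(e, initial=0, final={2}):
    """Membership is read off the effect alone."""
    return e[initial] in final

# The effect of a window equals the composition of the effects of any
# split of the window -- this is what guardians exploit.
w = "babba" + "aab"
assert effect(w) == compose(effect("babba"), effect("aab"))
print(accepts(effect(w)))  # True
```

Each effect is an object of constant size for a fixed language (at most $|\mathcal{Q}|^{|\mathcal{Q}|}$ possible values), which is why composing two of them is an $O(1)$ operation.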
{ "pile_set_name": "StackExchange" }
Q: Python csv not writing to file I am trying to write to a .tsv file using Python's csv module; this is my code so far:

file_name = "test.tsv"
TEMPLATE = "template.tsv"

fil = open(file_name, "w")  # Added suggested change
template = csv.DictReader(open(TEMPLATE, 'r'), delimiter='\t')
new_file = csv.DictWriter(fil, fieldnames=template.fieldnames, delimiter='\t')
new_file.writeheader()

Basically TEMPLATE is a file that will contain the headers for the file, so I read the headers using DictReader and pass the fieldnames to DictWriter. As far as I know the code is fine; the file test.tsv is being created, but for some reason the headers are not being written. Any help as to why this is happening is appreciated, thanks.

A: DictReader's first argument should be a file object (created with open()), cf. http://docs.python.org/py3k/library/csv.html#csv.DictReader
You forgot open() for the TEMPLATE file.

import csv

file_name = "test.tsv"
TEMPLATE = "template.tsv"

fil = open(file_name, "w")
template_file = open(TEMPLATE, 'r')  # you forgot this line, which opens the file
template = csv.DictReader(template_file, delimiter='\t')
new_file = csv.DictWriter(fil, fieldnames=template.fieldnames, delimiter='\t')
new_file.writeheader()
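A side note that may help avoid this class of bug: opening both files in `with` blocks ensures they are closed (and the output flushed) even when an error occurs. This is a sketch of the same fix; it creates a small template file first so the example is self-contained:

```python
import csv

TEMPLATE = "template.tsv"
file_name = "test.tsv"

# Create a small template file so the example is self-contained.
with open(TEMPLATE, "w", newline="") as f:
    f.write("col_a\tcol_b\n")

# Both files are closed automatically when the block exits.
with open(TEMPLATE, "r", newline="") as template_file, \
     open(file_name, "w", newline="") as out_file:
    template = csv.DictReader(template_file, delimiter="\t")
    writer = csv.DictWriter(out_file, fieldnames=template.fieldnames,
                            delimiter="\t")
    writer.writeheader()
```

Passing `newline=""` is what the csv module's documentation recommends for files it reads or writes, so the module controls line endings itself.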
Q: Compute running sum in a window function I have issues with this running sum in Redshift (uses Postgres 8):

select extract(month from registration_time) as month
     , extract(week from registration_time)%4+1 as week
     , extract(day from registration_time) as day
     , count(*) as count_of_users_registered
     , sum(count(*)) over (ORDER BY (1,2,3))
from loyalty.v_user
group by 1,2,3
order by 1,2,3;

The error I get is:

ERROR: 42601: Aggregate window functions with an ORDER BY clause require a frame clause

A: You can run a window function on the result of an aggregate function at the same query level. It's just much simpler to use a subquery in this case:

SELECT *, sum(count_registered_users) OVER (ORDER BY month, week, day) AS running_sum
FROM (
   SELECT extract(month FROM registration_time)::int AS month
        , extract(week FROM registration_time)::int%4+1 AS week
        , extract(day FROM registration_time)::int AS day
        , count(*) AS count_registered_users
   FROM loyalty.v_user
   GROUP BY 1, 2, 3
   ORDER BY 1, 2, 3
   ) sub;

I also fixed the syntax for the expression computing week: extract() returns double precision, but the modulo operator % does not accept double precision numbers. I cast all three to integer while being at it. Like @a_horse commented, you cannot use positional references in the ORDER BY clause of a window function (unlike in the ORDER BY clause of the query). However, you cannot use over (order by registration_time) either in this query, since you are grouping by month, week, day: registration_time is neither aggregated nor in the GROUP BY clause, as would be required. At that stage of the query evaluation, you cannot access the column any more.
You could repeat the expressions of the first three SELECT items in the ORDER BY clause to make it work:

SELECT extract(month FROM registration_time)::int AS month
     , extract(week FROM registration_time)::int%4+1 AS week
     , extract(day FROM registration_time)::int AS day
     , count(*) AS count_registered_users
     , sum(count(*)) OVER (ORDER BY extract(month FROM registration_time)::int
                                  , extract(week FROM registration_time)::int%4+1
                                  , extract(day FROM registration_time)::int) AS running_sum
FROM loyalty.v_user
GROUP BY 1, 2, 3
ORDER BY 1, 2, 3;

But that seems rather noisy. (Performance would be good, though.)

Aside: I do wonder about the purpose behind week%4+1 ...

The whole query might be simpler. Related:
Get the distinct sum of a joined table column
PostgreSQL: running count of rows for a query 'by minute'
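As a quick illustration of what the running sum in the queries above computes (independent of SQL), here is the same cumulative-sum behavior in plain Python, with made-up per-day counts:

```python
from itertools import accumulate

# Hypothetical per-day registration counts, already grouped and ordered
# by (month, week, day) as in the query.
counts = [4, 7, 2, 9, 5]

# sum(...) OVER (ORDER BY ...) yields a cumulative sum over that order.
running = list(accumulate(counts))
print(running)  # [4, 11, 13, 22, 27]
```

Each output row carries the sum of its own count plus all counts that precede it in the window's ORDER BY, which is why the last value equals the grand total.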
Q: Unbelievable strange file creation time problem I have a very strange problem indeed! I wonder if the problem is in the framework, the OS, or maybe it's just me, misunderstanding things...

I have a file, which might have been created a long time ago. I use the file, and then I want to archive it by changing its name. Then I want to create a new file with the same name the old file had before it was renamed. Easy enough!

The problem that really puzzles me is that the newly created file gets the wrong "created" timestamp! That's a problem, since it's that timestamp that I want to use for determining when to archive and create a new file.

I've created a very small sample that shows the problem. For the sample to work, there must be a file 1.txt in the Files folder. Also, the file's creation time must be set back in time (with one of the tools available; I use Nomad.NET).

static void Main(string[] args)
{
    // Create a directory, if it doesn't exist.
    string path = Path.GetDirectoryName(Application.ExecutablePath) + "\\Files";
    Directory.CreateDirectory(path);
    // Create/attach to the 1.txt file
    string filename = path + "\\1.txt";
    StreamWriter sw = File.AppendText(filename);
    sw.WriteLine("testing");
    sw.Flush();
    sw.Close();
    // Rename it...
    File.Move(filename, path + "\\2.txt");
    // Create a new 1.txt
    sw = File.AppendText(filename);
    FileInfo fi = new FileInfo(filename);
    // Observe, the old file's creation date!!
    Console.WriteLine(String.Format("Date: {0}", fi.CreationTime.Date));
    Console.ReadKey();
}

A: This is the result of an arcane "feature" going way back to the old days of Windows. The core details are here: Windows NT Contains File System Tunneling Capabilities (Archive). Basically, this is on purpose. But it's configurable, and an anachronism in most of today's software. I think you can create a new filename first, then rename old->old.1, then new->old, and it'll "work". I don't remember honestly what we did when we ran into this last, a few years back.
A: I recently ran into the same problem described in the question. In our case, if our log file is older than a week, we delete it and start a new one. However, it's been keeping the same date created since 2008. One answer here describes renaming the old file and then creating a new one, hopefully picking up the proper Creation Date. However, that was unsuccessful for us, it kept the old date still. What we used was the File.SetCreationTime method, and as its name suggests, it easily let us control the creation date of the file, allowing us to set it to DateTime.Now. The rest of our logic worked correctly afterwards.
Q: IF formula to show a different result I need a formula to show me a different result if O18>O16: =IF($O$18>$O$16;O19);IF($O$16>=A27;$O$17+A27;"") But it is not working. Thanks in advance.

A: So you want IF O18>O16 THEN O19 ELSE IF O16 > A27 THEN O17+A27 ELSE "" ?

=IF(O18>O16,O19,IF(O16>A27,O17+A27,""))
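The corrected formula's ELSE-IF chaining, mirrored in Python for clarity (cell names become parameters; the sample values are made up):

```python
def result(o18, o16, o19, o17, a27):
    # =IF(O18>O16, O19, IF(O16>A27, O17+A27, ""))
    if o18 > o16:
        return o19
    elif o16 > a27:
        return o17 + a27
    return ""

print(result(5, 3, "first branch", 10, 2))  # first branch
print(result(1, 3, "first branch", 10, 2))  # 12
print(result(1, 3, "first branch", 10, 7))  # (empty string)
```

The key point is that the second IF sits entirely inside the "else" slot of the first one, rather than following it as a separate formula.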
Q: What's an alternative to GWL_USERDATA for storing an object pointer? In the Windows applications I work on, we have a custom framework that sits directly above Win32 (don't ask). When we create a window, our normal practice is to put this in the window's user data area via SetWindowLong(hwnd, GWL_USERDATA, this), which allows us to have an MFC-like callback or a tightly integrated WndProc, depending. The problem is that this will not work on Win64, since LONG is only 32 bits wide. What's a better solution to this problem that works on both 32- and 64-bit systems?

A: SetWindowLongPtr was created to replace SetWindowLong in these instances. Its LONG_PTR parameter allows you to store a pointer for 32-bit or 64-bit compilations.

LONG_PTR SetWindowLongPtr(
    HWND hWnd,
    int nIndex,
    LONG_PTR dwNewLong
);

Remember that the constants have changed too, so usage now looks like:

SetWindowLongPtr(hWnd, GWLP_USERDATA, this);

Also don't forget that to retrieve the pointer, you must now use GetWindowLongPtr:

LONG_PTR GetWindowLongPtr(
    HWND hWnd,
    int nIndex
);

And usage would look like (again, with changed constants):

LONG_PTR lpUserData = GetWindowLongPtr(hWnd, GWLP_USERDATA);
MyObject* pMyObject = (MyObject*)lpUserData;

A: The other alternative is SetProp/RemoveProp (when you are subclassing a window that already uses GWLP_USERDATA). Another good alternative is ATL-style thunking of the WNDPROC; for more info on that, see:
http://www.ragestorm.net/blogs/?cat=20
http://www.hackcraft.net/cpp/windowsThunk/
Q: Algolia: How to implement customized Featured Products in a results page? I want to feature (move to the top of results) certain products on only certain search result pages, with only a single search performed. A Custom Ranking Attribute would boost a product's ranking for all pages, instead of certain pages. A somewhat working solution is to add an attribute like "featuredin":"[searchterm]", and move "featuredin" to the top of Searchable Attributes. However, products with a similar searchterm could be featured on the wrong page. Example: there are products with "featuredin":"iphone", and products with "featuredin":"iphone accessories". Since searching 'iphone' in attribute 'featuredin' will also get hits on products with "featuredin":"iphone accessories", I'm getting iphone accessories featured on iphone search results. This solution could work if there's a way to force a 'true' exact match for an attribute. But I couldn't find something like that. Thanks.

A: There is actually a nice way to implement that behavior using "optional" facet filters (a soon-to-be-released advanced feature - as of 2016/11/15). An "optional facet filter" is a facet filter that doesn't need to match to retrieve a result but that will - by default - make sure the hits that have the facet value are retrieved first (thanks to the filters criterion of Algolia's tie-breaking ranking formula). This is exactly what you want: on every single page where you want some results sharing a featuredin value to be retrieved first, just query the Algolia index with the featuredin:"a value" optional facet filter:

- make sure your featuredin attribute is part of your attributesForFaceting index setting,
- at query time, query the index with index.search('', { optionalFacetFilters: ["featuredin:iphone accessories"] })

You can read more on this (beta) documentation page.
Q: Wordpress Uploaded Images Show Broken Link I can upload images via the WP Photo Seller Plugin, but all the links to the images show as broken. Anyone know what I can change to get them to show? I checked the folder permissions and they seem to be fine. Here is a link: http://pics.teamdance.com/photogallery/gallery67/ A: Turns out the permissions on the server were off. A quick chmod solved the problem.
Q: How do I overload the << operator to print a class member? Here is the class:

class graph {
public:
    graph() {};  // constructor
    graph(int size);
    friend ostream& operator<< (ostream& out, graph g);
private:
    int size;
    bool** grph;
};

This is how I generate the graph:

graph::graph(int size)
{
    grph = new bool*[size];
    for (int i = 0; i < size; ++i)
        grph[i] = new bool[size];
    for (int i = 0; i < size; ++i)
        for (int j = i; j < size; ++j)
        {
            if (i == j)
                grph[i][j] = false;
            else
            {
                cout << prob() << endl;  // marker
                grph[i][j] = grph[j][i] = (prob() < 0.19);
                cout << grph[i][j] << endl;  // marker
            }
        }
    cout << "Graph created" << endl;  // marker
}

The constructor and the prob() function work just fine. I have tested them using the markers. This is where I believe the problem exists. This is the code for the overloaded operator <<:

ostream& operator<< (ostream& out, graph g)
{
    for (int i = 0; i < g.size; ++i)
    {
        for (int j = 0; j < g.size; ++j)
            out << g.grph[i][j] << "\t";
        out << endl;
    }
    return out;
}

Here is how this is called:

graph g(5);
cout << g << endl;

Now, the program compiles just fine, but during execution the graph is not being printed. I have been able to print the graph the same way without overloading the operator, by having the for loops run inside main or by using a class member function. Can anyone help me out? I am using Visual Studio 2015.

A: The loops are never entered, because i < g.size is always false: g.size is 0! You are never actually setting the member variable size to the size entered by the user, so it defaults to 0. You'll have to set it, i.e.:

this->size = size;  // Sets member variable 'size' to 'size' from the function arguments

Also, you didn't specify a copy constructor, so the implicit one will be used. But the implicit one just copies the values, and so the pointer grph will point in 2 objects to the same data.
You're leaking memory, that's why it doesn't matter (technically), but you should implement a proper destructor and copy/move constructor. But because operator<< should only be printing, consider passing by const& instead of by value!
Q: Why does the following program leak memory? I tried to write my first GTK+ program. Compilation went fine, but valgrind says that there are memory leaks. I'm unable to find them, so could anyone say what I am doing wrong? Or is it possible at all to write graphical Linux programs without memory leaks?

#include <gtk/gtk.h>

int main(int argc, char* argv[])
{
    gtk_init(&argc, &argv);
    GtkWidget* window = gtk_window_new(GTK_WINDOW_TOPLEVEL);
    gtk_window_set_title(GTK_WINDOW(window), "Hello World");
    gtk_container_set_border_width(GTK_CONTAINER(window), 60);
    GtkWidget* label = gtk_label_new("Hello, world!");
    gtk_container_add(GTK_CONTAINER(window), label);
    g_signal_connect(window, "destroy", G_CALLBACK(gtk_main_quit), NULL);
    gtk_widget_show_all(window);
    gtk_main();
    return 0;
}

gcc -Wall gtkhello.c -o gtkhello $(pkg-config --cflags --libs gtk+-2.0)
valgrind -v ./gtkhello
...
==9395== HEAP SUMMARY:
==9395==   in use at exit: 538,930 bytes in 6,547 blocks
==9395==   total heap usage: 21,434 allocs, 14,887 frees, 2,964,543 bytes allocated
==9395==
==9395== Searching for pointers to 6,547 not-freed blocks
==9395== Checked 949,656 bytes
==9395==
==9395== LEAK SUMMARY:
==9395==    definitely lost: 4,480 bytes in 30 blocks
==9395==    indirectly lost: 5,160 bytes in 256 blocks
==9395==      possibly lost: 180,879 bytes in 1,716 blocks
==9395==    still reachable: 348,411 bytes in 4,545 blocks
==9395==         suppressed: 0 bytes in 0 blocks
==9395== Rerun with --leak-check=full to see details of leaked memory
==9395==
==9395== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)
==9395== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)

A: You're not doing anything wrong. GTK widgets use reference counting, but in your programme all the references are taken care of, so you're not (manually) leaking anything. So why does Valgrind claim you are? Firstly, GLib has its own "slab" memory allocator, called GSlice, which is generally faster than system malloc for small allocations.
Unfortunately, it confuses Valgrind, but if you set the environment variable G_SLICE=always-malloc, GSlice is effectively turned off. Secondly, you can set the G_DEBUG=gc-friendly, which is supposed to help Valgrind produce more accurate results (although it my experience it generally doesn't make any difference). Both of these environment variables are listed in the GLib documentation. Unfortunately, even if you do both of these things, Valgrind will still report that your app leaks memory. The reason for this is that GTK (and its underlying libraries) allocate some "static" memory at start-up that doesn't get freed until the programme quits. This isn't really a problem, because typically the programme ends as soon as gtk_main() returns, and then the OS frees any remaining resources, so you're not really leaking anything. Valgrind thinks you are though, and for this reason it would be nice to have a gtk_deinit() function, but sadly there isn't one. The best you can do instead is to teach Valgrind to ignore these things, via a suppressions file. The Valgrind page on the Gnome Wiki has details on all this and more.
Q: How to restrict the drag and drop area in a canvas I have a canvas, let's say of dimensions 500x600. I have some controls inside that canvas. The user can rearrange the controls by drag and drop, but I want to restrict the drag and drop to within that canvas. For example: there is a button in the canvas. The user can drag and drop the button anywhere inside the canvas. But if the user tries to drag the button out of the canvas boundaries, it should stick to the canvas boundary. How to achieve this?

A: The signature for startDrag() is:

public function startDrag(lockCenter:Boolean = false, bounds:Rectangle = null):void

The second parameter allows you to pass a Rectangle to act as bounds for your DisplayObject. It won't be dragged outside of this rectangle.
Q: Inheritance - Return extended class from parent class I want to make a super class for database access called DataModel. I have also a SQLiteOpenHelper with generic methods. My problem is to convert the type of result on parent class. I will explain with generic code. Supose an all() method in parent class: public class DataModel { public static ArrayList<DataModel> all(){ ArrayList<DataModel> datos = new ArrayList<DataModel>(); Map<String, String> columns = DataModel.getColumns(); Map<String, Object> columnData = LocalStorageServices.DatabaseService().getAllMatchesFromTable(DataModel.getTableName(), columns); for (Map.Entry<String, Object> entry : columnData.entrySet()) { DataModel auxM = new DataModel(columns); } return datos; } } I have else an extended class: public class User extends DataModel { protected String table = "testing"; } What I need to achieve? To get a list of children instances in order to have the data and the concrete methods of the child. I mean, I would like to call ArrayList<User> appUsers = User.all(); but I cant because DataModel.all() returns a DataModel ArrayList (currently I would have to do ArrayList<DataModel> appUsers = User.all();). The problem here is converting the typo for the children without knowing anything about the concrete child. Tomorrow I could create a new Entry class, Post class, or whatever model I want to create. A: You can make it generic e.g. public class DataModel<T> { public ArrayList<T> all(){ // do generic stuff return datos; } } Does it have to be static? If yes, then you need to use public static <T> ArrayList<T> all() { ... } This way your User class looks like this public class User extends DataModel<User> {...} and you can now use ArrayList<User> appUsers = User.all();
Q: Grails 3 change default service scope In grails 3, the default service scope is Singleton, the documents show it's easy to override this by defining static scope='request' in the service class. Is it possible to change the default service scope for an application similar to the way it is done for controllers in application.groovy? The specific issue is a Service class in a plugin is calling application services (which are designed around request scope). This was working in grails 2, but with the upgrade to grails 3 it no longer does. A: Is it possible to change the default scope for an application similar to the way it is done for controllers in application.groovy? There is no direct support for that, no. You could write a bean definition post processor that could impose that change.
Q: Pointer with java and string variable I gave a value (as a variable) to a String variable. The problem is that when I change this value, it doesn't change the other variable. Code:

String test = "hello";
String myname = test;
test = "how are you ?";

The output of myname is "hello". I want the output to be "how are you ?", the new value of test. How can I do that without assigning the value again, like myname = test;? I don't want to assign the value again because in my code I have lots of variables that got the test variable as a value, and I want the shortest way.

A: First you are assigning the value "hello" (at some place in memory) to test:

test --------------> "hello"

Then, you are setting myname to the same location in memory:

test --------------> "hello"
                    /
myname ----------/

Then, you are assigning the value "how are you?" at a new location in memory to test:

                 -> "hello"
                /
myname --------/

test --------------> "how are you?"

It's all about the pointers.

EDIT AFTER COMMENT
I have no idea what the rest of your code looks like, but if you want to be able to update a single String and have the rest of the references to that String update at the same time, then you need to use a StringBuilder (or StringBuffer if you need synchronization) instead:

StringBuilder test = new StringBuilder("hello");
StringBuilder myname = test;
StringBuilder foo = test;
StringBuilder bar = test;

test.replace(0, test.length(), "how are you?");

Assert.assertEquals("how are you?", test.toString());
Assert.assertEquals("how are you?", myname.toString());
Assert.assertEquals("how are you?", foo.toString());
Assert.assertEquals("how are you?", bar.toString());

Got it from there?

A: It sounds like you need to learn about the difference between a reference variable and the object that a variable refers to. Let's look at what your code does:

String test = "hello";

This assigns the reference variable test to refer to a constant String object with the value "hello".
String myname = test; Now the reference myname refers to the same String object as test does. test = "how are you ?"; Now test refers to a new String object with the value of "how are you ?". Note that this does not change myname, as you have seen. A: The problem is that String is immutable so "how are you ?" is a new object. Thus myname still refers to the old object "hello". Solution 1: Don't use two references. Why not just use test throughout your code if there's only one object? Solution 2: Have the two references point to the same object, but use a mutable object. e.g. use StringBuilder or StringBuffer instead of String.
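For comparison, the same reference-versus-object distinction can be demonstrated in Python, where strings are also immutable and a list plays the role StringBuilder plays above (a sketch, not part of the original answers):

```python
# Immutable string: rebinding one name does not affect the other.
test = "hello"
myname = test
test = "how are you ?"
print(myname)  # hello

# Mutable container: both names see the in-place change.
test = ["hello"]
myname = test
test[0] = "how are you ?"
print(myname[0])  # how are you ?
```

Assignment never copies an object; it only points a name at one. Only an in-place mutation of a shared object is visible through every name bound to it.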
Q: How does the addition of のか to the end of a sentence affect the meaning? I've always had trouble understanding か (question particle) in casual speech. I read that in casual situations, か can be used to give the sentence an exasperated or sarcastic tone. Like in: 負けっかよ! As if I'd lose! I think that I understand that usage reasonably well. What confuses me however is when か is used with a sentence that clearly should be a question (i.e. it has a question mark or a question word). For example: やってみるか? Want to give it a try? I believe I've read that か is unnecessary (and not used) in casual speech to ask a question and questions are simply conveyed through a rising intonation or the addition of の(だ). Could someone explain how is the above sentence different in terms of tone or nuance from the same sentence omitting か 「やってみる?」? Furthermore, on a similar note I believe, the sentence final particle combination のか?! seems to occur frequently. I'm not really sure what to make of this one. I thought の might be the explanatory の but how can one both explain and ask a question? For example: そんな嘘にオレがだまされっと思ってんのか!? Do you think I'd be taken in by a lie like that?! (I'm not confident of this translation) How would that sentence's meaning be affected if it were instead: そんな嘘にオレがだまされっと思ってんの!? or そんな嘘にオレがだまされっと思ってんか!? Thank you very much for reading my post to the end : ). I know this question isn't that specific but any and all help is appreciated. Thanks again! A: Your translation was correct in meaning. そんな嘘にオレがだまされっと思ってんのか!? I think the latter is the more colloquial version of the following: そんな嘘にオレがだまされっと思ってんですか!?(or思ってるんですか) So this: そんな嘘にオレがだまされっと思ってんの!? has the same meaning with perhaps less inquisitive emphasis, while this: そんな嘘にオレがだまされっと思ってんか!? sounds less natural to me. I think one could say the sentence in the last way, but it becomes very slangy, almost to a rarefied extent. 
(perhaps it would sound immature or "country" to a native listener, but I lack the expertise to say) So in conclusion, yes, you can make a sentence a question simply by altering the tone of the words, but obviously adding か makes it a more "complete expression." ex.: これ食べる? You eat this? これを食べますか? Do you eat this?
Q: devExtreme TreeView Expand, ScrollTo and Focus is it possible to expand a treeview, scroll to a node and focus it on one function? $("#buttonTest").dxButton({ text: "Test", onClick: function () { editTreeView.expandItem(editTreeView.element().find(".dx-treeview-item")[0]) var currentNode = $("#editTreeView").find("[data-item-id=" + 80 + "]"); var scrollable = $("#editTreeView").find(".dx-scrollable").dxScrollable("instance"); scrollable.scrollToElement(currentNode); $("#editTreeView").find(".dx-treeview-node").removeClass("dx-state-focused"); var currentNode = $("#editTreeView").find("[data-item-id=" + 80 + "]"); currentNode.focus().addClass("dx-state-focused"); } }); In this example, the tree is opened at the first click and scrolled/focused on the second click. But I want it with one click :) Thanks. A: Seems like there is an issue connected with scrollable height calculation. You can fix it using the setTimeout function like below: $("#buttonTest").dxButton({ text: "Test", onClick: function () { //... setTimeout(function() { scrollable.scrollToElement(currentNode); }, 300); } }); This solution looks like a hack, but still)) I've created the fiddle as well.
Q: Real time chart using jquery Flot that shows time accurately? I am trying to use Flot: http://www.flotcharts.org to create a realtime chart that is updated via ajax. I am basing my code on the following:

var cpu = [], cpuCore = [], disk = [];
var dataset;
var totalPoints = 100;
var updateInterval = 5000;
var now = new Date().getTime();

var options = {
    series: {
        lines: { lineWidth: 1.2 },
        bars: {
            align: "center",
            fillColor: { colors: [{ opacity: 1 }, { opacity: 1 }] },
            barWidth: 500,
            lineWidth: 1
        }
    },
    xaxis: {
        mode: "time",
        tickSize: [60, "second"],
        tickFormatter: function (v, axis) {
            var date = new Date(v);
            if (date.getSeconds() % 20 == 0) {
                var hours = date.getHours() < 10 ? "0" + date.getHours() : date.getHours();
                var minutes = date.getMinutes() < 10 ? "0" + date.getMinutes() : date.getMinutes();
                var seconds = date.getSeconds() < 10 ? "0" + date.getSeconds() : date.getSeconds();
                return hours + ":" + minutes + ":" + seconds;
            } else {
                return "";
            }
        },
        axisLabel: "Time",
        axisLabelUseCanvas: true,
        axisLabelFontSizePixels: 12,
        axisLabelFontFamily: 'Verdana, Arial',
        axisLabelPadding: 10
    },
    yaxes: [
        {
            min: 0,
            max: 100,
            tickSize: 5,
            tickFormatter: function (v, axis) {
                if (v % 10 == 0) {
                    return v + "%";
                } else {
                    return "";
                }
            },
            axisLabel: "CPU loading",
            axisLabelUseCanvas: true,
            axisLabelFontSizePixels: 12,
            axisLabelFontFamily: 'Verdana, Arial',
            axisLabelPadding: 6
        },
        {
            max: 5120,
            position: "right",
            axisLabel: "Disk",
            axisLabelUseCanvas: true,
            axisLabelFontSizePixels: 12,
            axisLabelFontFamily: 'Verdana, Arial',
            axisLabelPadding: 6
        }
    ],
    legend: { noColumns: 0, position: "nw" },
    grid: { backgroundColor: { colors: ["#ffffff", "#EDF5FF"] } }
};

function initData() {
    for (var i = 0; i < totalPoints; i++) {
        var temp = [now += updateInterval, 0];
        cpu.push(temp);
        cpuCore.push(temp);
        disk.push(temp);
    }
}

function GetData() {
    $.ajaxSetup({ cache: false });
    $.ajax({
        url: "http://www.jqueryflottutorial.com/AjaxUpdateChart.aspx",
        dataType: 'json',
        success: update,
        error: function () {
            setTimeout(GetData, updateInterval);
        }
    });
}

var temp;

function update(_data) {
    cpu.shift();
    cpuCore.shift();
    disk.shift();
    now += updateInterval;
    temp = [now, _data.cpu];
    cpu.push(temp);
    temp = [now, _data.core];
    cpuCore.push(temp);
    temp = [now, _data.disk];
    disk.push(temp);
    dataset = [
        { label: "CPU:" + _data.cpu + "%", data: cpu, lines: { fill: true, lineWidth: 1.2 }, color: "#00FF00" },
        { label: "Disk:" + _data.disk + "KB", data: disk, color: "#0044FF", bars: { show: true }, yaxis: 2 },
        { label: "CPU Core:" + _data.core + "%", data: cpuCore, lines: { lineWidth: 1.2 }, color: "#FF0000" }
    ];
    $.plot($("#flot-placeholder1"), dataset, options);
    setTimeout(GetData, updateInterval);
}

$(document).ready(function () {
    initData();
    dataset = [
        { label: "CPU", data: cpu, lines: { fill: true, lineWidth: 1.2 }, color: "#00FF00" },
        { label: "Disk:", data: disk, color: "#0044FF", bars: { show: true }, yaxis: 2 },
        { label: "CPU Core", data: cpuCore, lines: { lineWidth: 1.2 }, color: "#FF0000" }
    ];
    $.plot($("#flot-placeholder1"), dataset, options);
    setTimeout(GetData, updateInterval);
});

A working example can be seen here: http://www.jqueryflottutorial.com/tester-11.html But why is the time axis along the bottom back to front? And I have also noticed that it does not keep time accurately, and it gets further out of sync the longer it's left running. Can this be overcome?

A: Add now -= totalPoints * updateInterval; at the beginning of the initData() function so that the current time is at the right end of the x-axis instead of the left end. And change now += updateInterval to now = new Date().getTime(); in the update(_data) function so that new data points always have the current time. I would also recommend changing the tickFormatter function to

tickFormatter: function (v, axis) {
    var date = new Date(v);
    if (date.getSeconds() == 0) {
        var hours = date.getHours() < 10 ? "0" + date.getHours() : date.getHours();
        var minutes = date.getMinutes() < 10 ? "0" + date.getMinutes() : date.getMinutes();
        return hours + ":" + minutes;
    } else {
        return "";
    }
},

and directly call GetData(); at the end of the document.ready() function (instead of the setTimeout(GetData, updateInterval);).
Q: How can I maintain a consistent DB schema across 18 databases (sql server)? We have 18 databases that should have identical schemas, but don't. In certain scenarios, a table was added to one, but not the rest. Or, certain stored procedures were required in a handful of databases, but not the others. Or, our DBA forgot to run a script to add views on all of the databases. What is the best way to keep database schemas in sync?

A: SQL Compare by Red Gate is a great tool for this.

A: For legacy fixes/cleanup, there are tools, like SQLCompare, that can generate scripts to sync databases. For .NET shops running SQL Server, there is also the Visual Studio Database Edition, which can create change scripts for schema changes that can be checked into source control, and automatically built using your CI/build process.

A: SQLCompare is the best tool that I have used for finding differences between databases and getting them synced. To keep the databases synced up, you need to have several things in place:

1) You need policies about who can make changes to production. Generally this should only be the DBA (DBA team for larger orgs) and 1 or 2 backups. The backups should only make changes when the DBA is out, or in an emergency. The backups should NOT be deploying on a regular basis. Set database rights according to this policy.

2) A process and tools to manage deployment requests. Ideally you will have a development environment, a test environment, and a production environment. Developers should do initial development in the dev environment, and have changes pushed to test and production as appropriate. You will need some way of letting the DBA know when to push changes. I would NOT recommend a process where you holler to the next cube. Large orgs may have a change control committee and changes only get made once a month. Smaller companies may just have the developer request testing, and after testing is passed, a request for deployment to production. One smaller company I worked for used Problem Tracker for these requests. Use whatever works in your situation and budget; just have a process, and have tools that work for that process.

3) You said that sometimes objects only need to go to a handful of databases. With only 18 databases, probably on one server, I would recommend making each database match objects exactly. Only 5 DBs need usp_DoSomething? So what? Put it in every database. This will be much easier to manage. We did it this way on a 6 server system with around 250-300 DBs. There were exceptions, but they were grouped. Databases on server C got this extra set of objects. Databases on server L got this other set.

4) You said that sometimes the DBA forgets to deploy change scripts to all the DBs. This tells me that s/he needs tools for deploying changes. S/He is probably taking a SQL script, opening it in Query Analyzer or Management Studio (or whatever you use) and manually going to each database and executing the SQL. This is not a good long term (or short term) solution. Red Gate (makers of SQLCompare above) have many great tools. MultiScript looks like it may work for deployment purposes. I worked with a DBA who wrote his own tool in SQL Server 2000 using osql. It would take an SQL file and execute it on each database on the server. He had to execute it on each server, but it beat executing on each DB. I also helped write a VB.net tool that would do the same thing, except it would also go through a list of servers, so it only had to be executed once.

5) Source Control. My current team doesn't use source control, and I don't have enough time to tell you how many problems this causes. If you don't have some kind of source control system, get one.
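A minimal sketch of the kind of "run one script against every database" helper described in point 4. Java is used here just for illustration, and everything in it (the class, the batch format, the JDBC snippet in the comments) is an assumption, not a description of any of the Red Gate tools mentioned above. The pure part — prefixing the script with a USE statement per database — is the core of the deployment loop; the actual execution side is only indicated in comments because it needs a live server.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical multi-database deployment helper: given a change script and a
// list of database names on one server, build one "USE [db]; <script>" batch
// per database.
public class MultiDbDeploy {

    // Build the per-database batches to run against the server.
    public static List<String> buildBatches(List<String> databases, String script) {
        List<String> batches = new ArrayList<>();
        for (String db : databases) {
            batches.add("USE [" + db + "];\n" + script);
        }
        return batches;
    }

    // Execution side (not run here): for each batch, open a JDBC connection
    // to the server and execute it, e.g.
    //   try (Connection c = DriverManager.getConnection(url, user, pass);
    //        Statement s = c.createStatement()) {
    //       s.execute(batch);
    //   }
}
```

Looping over a list of servers, as the VB.net tool in the answer did, is then just an outer loop around the same idea.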
Q: How can I move a Label with a KeyEvent? (JavaFX) I just learned how to make a GUI application with JavaFX with an FXML file. There is one thing that I don't understand though. When I try to add a KeyListener to a Label or the layout in my FXML file, the code doesn't get executed. It is a simple task like System.out.println("worked");, nothing complicated (eventually I want to move the Label with the key listener, but for now I just wanted something simple where I could easily see if it worked). I read somewhere that you need to add the listener at the Frame level, but I don't know how. I would really appreciate it if someone could help me.

Main.java:

package sample;

import javafx.application.Application;
import javafx.event.EventHandler;
import javafx.fxml.FXMLLoader;
import javafx.scene.Parent;
import javafx.scene.Scene;
import javafx.scene.input.KeyEvent;
import javafx.stage.Stage;

public class Main extends Application {

    @Override
    public void start(Stage primaryStage) throws Exception {
        Parent root = FXMLLoader.load(getClass().getResource("sample.fxml"));
        primaryStage.setTitle("Hello World");
        Scene scene = new Scene(root, 600, 600);
        scene.addEventHandler(KeyEvent.KEY_PRESSED, new EventHandler<KeyEvent>() {
            @Override
            public void handle(KeyEvent event) {
                Controller controller = new Controller();
                controller.moveLabel(event);
            }
        });
        primaryStage.setScene(scene);
        primaryStage.show();
    }

    public static void main(String[] args) {
        launch(args);
    }
}

My controller class:

package sample;

import javafx.fxml.FXML;
import javafx.scene.control.Label;
import javafx.scene.input.KeyEvent;

public class Controller {

    @FXML
    private Label label;

    @FXML
    public void moveLabel(KeyEvent e) {
        switch (e.getCode()) {
            case RIGHT:
                label.setTranslateX(3);
                break;
            default:
                System.out.println("not possible");
        }
    }
}

My FXML file:

<?import javafx.scene.layout.GridPane?>
<?import javafx.scene.control.Label?>

<GridPane fx:controller="sample.Controller" xmlns:fx="http://javafx.com/fxml" alignment="center" hgap="10" vgap="10">
    <Label text="Text" fx:id="label"></Label>
</GridPane>

Error message:

Exception in thread "JavaFX Application Thread" java.lang.NullPointerException
    at sample.Controller.moveLabel(Controller.java:15)
    at sample.Main$1.handle(Main.java:22)
    at sample.Main$1.handle(Main.java:18)
    at com.sun.javafx.event.CompositeEventHandler$NormalEventHandlerRecord.handleBubblingEvent(CompositeEventHandler.java:218)
    at com.sun.javafx.event.CompositeEventHandler.dispatchBubblingEvent(CompositeEventHandler.java:80)
    at com.sun.javafx.event.EventHandlerManager.dispatchBubblingEvent(EventHandlerManager.java:238)
    at com.sun.javafx.event.EventHandlerManager.dispatchBubblingEvent(EventHandlerManager.java:191)
    at com.sun.javafx.event.CompositeEventDispatcher.dispatchBubblingEvent(CompositeEventDispatcher.java:59)
    at com.sun.javafx.event.BasicEventDispatcher.dispatchEvent(BasicEventDispatcher.java:58)
    at com.sun.javafx.event.EventDispatchChainImpl.dispatchEvent(EventDispatchChainImpl.java:114)
    at com.sun.javafx.event.BasicEventDispatcher.dispatchEvent(BasicEventDispatcher.java:56)
    at com.sun.javafx.event.EventDispatchChainImpl.dispatchEvent(EventDispatchChainImpl.java:114)
    at com.sun.javafx.event.EventUtil.fireEventImpl(EventUtil.java:74)
    at com.sun.javafx.event.EventUtil.fireEvent(EventUtil.java:54)
    at javafx.event.Event.fireEvent(Event.java:198)
    at javafx.scene.Scene$KeyHandler.process(Scene.java:3964)
    at javafx.scene.Scene$KeyHandler.access$1800(Scene.java:3910)
    at javafx.scene.Scene.impl_processKeyEvent(Scene.java:2040)
    at javafx.scene.Scene$ScenePeerListener.keyEvent(Scene.java:2501)
    at com.sun.javafx.tk.quantum.GlassViewEventHandler$KeyEventNotification.run(GlassViewEventHandler.java:217)
    at com.sun.javafx.tk.quantum.GlassViewEventHandler$KeyEventNotification.run(GlassViewEventHandler.java:149)
    at java.security.AccessController.doPrivileged(Native Method)
    at com.sun.javafx.tk.quantum.GlassViewEventHandler.lambda$handleKeyEvent$352(GlassViewEventHandler.java:248)
    at com.sun.javafx.tk.quantum.QuantumToolkit.runWithoutRenderLock(QuantumToolkit.java:389)
    at com.sun.javafx.tk.quantum.GlassViewEventHandler.handleKeyEvent(GlassViewEventHandler.java:247)
    at com.sun.glass.ui.View.handleKeyEvent(View.java:546)
    at com.sun.glass.ui.View.notifyKey(View.java:966)
    at com.sun.glass.ui.win.WinApplication._runLoop(Native Method)
    at com.sun.glass.ui.win.WinApplication.lambda$null$147(WinApplication.java:177)
    at java.lang.Thread.run(Thread.java:748)

A: Add a method to your controller to move the label, e.g.

public class Controller {

    private final double moveDelta = 10;

    @FXML
    private Label labelTest;

    public void moveLabel(int deltaX, int deltaY) {
        labelTest.setTranslateX(labelTest.getTranslateX() + moveDelta * deltaX);
        labelTest.setTranslateY(labelTest.getTranslateY() + moveDelta * deltaY);
    }
}

Then get a reference to the controller in the start() method (you need to use the non-static load() method from FXMLLoader to do this), and call the method from the key handler:

@Override
public void start(Stage primaryStage) throws Exception {
    FXMLLoader loader = new FXMLLoader(getClass().getResource("sample.fxml"));
    Parent root = loader.load();
    primaryStage.setTitle("Hello World");
    Scene scene = new Scene(root, 600, 600);
    Controller controller = loader.getController();
    scene.addEventHandler(KeyEvent.KEY_PRESSED, new EventHandler<KeyEvent>() {
        @Override
        public void handle(KeyEvent event) {
            switch (event.getCode()) {
                case RIGHT:
                    controller.moveLabel(1, 0);
                    break;
                default:
                    System.out.println("not possible");
            }
        }
    });
    primaryStage.setScene(scene);
    primaryStage.show();
}
Q: Paradox regarding phase transitions in relativistic systems The main question I would like to ask is whether quantities such as density are dependent on the frame of reference. I have searched several forums and the answer is somewhat controversial. Some answers use the concept of relativistic mass to justify that it is invariant. Some of the answers say that relativistic mass is not a correct concept (given in Classical Mechanics by John R. Taylor pg 633) and that mass is invariant and hence the density must be an observer dependent quantity. This is a little odd because of the following thought experiment: Imagine that a container filled with liquid is made to travel at relativistic speeds. In the frame of the container, the density is d at temperature T and pressure P. To a person in the ground frame the volume of the liquid will decrease because of length contraction. At the critical density there must be a phase transition from liquid to solid. In the moving frame, the container has a liquid but in the rest frame, the container is filled with a solid. So based on the above I have the following questions: Is there anything wrong in the above argument and why is it incorrect? (eg. Phase diagram changes depending on the velocity of the object) If the density is observer-dependent, could it mean that thermodynamics is observer-dependent? A: Every time someone comes up with an apparent "paradox" in Special Relativity, it is always related to some application of the "Lorentz contraction" or "time dilation" concepts in some questionable contexts. Over the years of many similar discussions, I've worked out the recipe for dissolution of such "paradoxes": just reduce your "paradox" down to the Lorentz transformations. They are fundamental to the Special Relativity -- both time dilation and the Lorentz contraction are derived from the Lorentz transformations. So, whatever you get from Lorentz transformation is what Special Relativity actually predicts. 
Any contradiction with the result you get from using Lorentz transformation is a mistake somewhere in your reasoning. In order to reduce your "paradox" to these fundamental equations of Special Relativity - you first have to formulate your problem in terms of space-time events and world lines. When dealing with thermodynamic systems you'll have to get down to individual particles making up your bodies. Extract their worldlines and Lorentz-transform them to get your trajectories in moving reference frames. To illustrate this, I've randomly generated a 1000 particles bouncing in a 3x1 box with Maxwell-distributed velocities: I've saved the worldlines of all the particles and applied the Lorentz transform with $\gamma=2$ to each of the particles. Plotting the resulting coordinates, I've got this: Above I've kept the scale size of both scales and made sure that the center of the scale is at the center of the resulting particle cloud. You can see that the density has, indeed, increased. Notice that the velocities of the particles along the x-axis are changed in respect to the bounding volume -- it is not a Maxwell distribution anymore. In the moving frame, the container has a liquid but in the rest frame, the container is filled with a solid. As for your thought experiment, you can apply this reduction to Lorentz transformations to realize that there is no way you can get dynamic motions of individual particles of a liquid and then Lorentz-transform them to get static (relative to each other) positions of the same particles in a solid body. A: Is there anything wrong in the above argument and why is it incorrect? You have implicitly assumed that the pressure remains constant, and therefore that since density is increasing, at some point as the speed of the medium increases you will hit a "critical point" and undergo a phase transition. This does not follow, since the pressure of the matter will increase along with the density. 
Specifically, the mass/energy density, pressure, and momentum density of a bulk solid all form part of the stress-energy tensor of the material, where $$ T_{00} = \text{mass/energy density} \\ T_{0i} = \text{momentum density vector} \\ T_{ij} = \text{stress tensor} $$ (working in units where $c = 1$.) In particular, let's suppose that we have an isotropic fluid at rest, so that $T_{00} = \rho$, $T_{ij} = P \delta_{ij}$, and all other components of the stress-energy tensor vanish. When we transform into another frame moving at speed $\beta$ in the $x$-direction relative to the rest frame, the tensor picks up a Lorentz boost on each index, $T'_{\mu\nu} = \Lambda_\mu{}^\alpha \Lambda_\nu{}^\beta T_{\alpha\beta}$; since $T_{0i} = 0$ in the rest frame, the cross terms drop out and we find that $$ T'_{00} = \gamma^2 (T_{00} + \beta^2 T_{11}) = \gamma^2 (\rho + \beta^2 P) \\ T'_{11} = \gamma^2 (T_{11} + \beta^2 T_{00}) = \gamma^2 (P + \beta^2 \rho) $$ The key realization here is that while the observed density of the medium ($T_{00}$) increases as the speed of the medium increases, the pressure ($T_{11}$) will also increase. The argument that "since the density increases, you'll eventually get a phase transition" is therefore not necessarily correct; the density at which a phase transition occurs depends on the pressure, which is also changing.

A: A phase transition should occur when the attracting forces between particles become large relative to their relative motion (broadly speaking). Thought experiment: We have two objects with a certain opposite electrical charge. Now we move with respect to the objects, so that the distance between them becomes smaller (and the charge won't). Does this change the attractive forces between them? The laws of nature don't change for a moving observer (the fundamental assumption in relativity), so I would say yes. With a factor $\gamma^2$, according to Coulomb's law, the attractive force would increase. However, would it be in disagreement with what an observer that doesn't move (w.r.t. the objects) would see?
Let's say the objects have equal mass and the acceleration between the objects in the 'standing still' frame is $g/r^2$ (relative acceleration), and they start with a velocity such that they will make a circular motion with radius $R$. For the moving observer this should Lorentz-contract to an ellipse. When the objects align along your line of motion, the attraction would be bigger, but the velocity of the objects changes with a factor $\gamma$ too (addition of perpendicular velocities), so the centripetal force should change with a factor $\gamma^2$, which we have seen it does, so there it tends to keep a circular motion. When the objects align perpendicular to your line of motion, the force doesn't change, but the velocity decreases, so there they would fall a bit toward each other. If someone is very skeptical about whether their movement will be elliptic, you can check it explicitly by finding an expression for their attraction as a function of their angle and solving the differential equation with the right starting conditions. But this broadly explains why there would be no issue with the attracting forces being bigger. Now a liquid in a container is the same problem with more bodies. So generalizing this result, the particles should move in a way that gives the same liquid when Lorentz-transformed. So if my reasoning is correct, for a moving observer it is possible for a liquid to have a higher density than would be possible in a resting frame and still be a liquid, i.e.: the phase transition shifts.
Q: Tough Moment of Inertia Problem About a Super Thin Spherical Shell Using Spherical Coordinates I need to compute the moment of inertia of a spherical shell with radius $R$, constant density $\rho$, and total mass $M$ about some (any) axis through the origin. The question specifies that this should be a double integral, since it's a super thin shell. So that means I'd need to use $dA = R^2\sin{\phi}\,d{\phi}\,d{\theta}$. From here, though, I'm stuck, and even unsure where to start. Any help would be much appreciated.

A: The mass of the $dA$ area is $dm=\rho\, dA=\frac{M}{4\pi R^2}dA$. I assumed here that $\rho$ is a surface density. You can calculate the moment of inertia with respect to any axis; they are all equal. Then for simplicity, use the axis where $\phi=0$. The distance from this axis is $r=R\sin\phi$, so $$I=\iint r^2\,dm=\int_0^{2\pi} d\theta\int_0^\pi\frac{M}{4\pi R^2}R^4\sin^3\phi\,d\phi$$
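Carrying the answer's last integral through, using the standard result $\int_0^{\pi}\sin^3\phi\,d\phi = 4/3$:

```latex
I = 2\pi \cdot \frac{M R^2}{4\pi} \int_0^{\pi} \sin^3\phi \, d\phi
  = \frac{M R^2}{2} \cdot \frac{4}{3}
  = \frac{2}{3} M R^2 ,
```

which is the familiar moment of inertia of a thin spherical shell about an axis through its center.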
Q: Are some motherboard brands, per se, of lower quality? I have heard on different occasions that some motherboard brands (such as PC Chips and ASRock) are cheap, in both senses: less expensive and of lower quality, because the manufacturers use components that don't reach certain standards and/or have been discarded from assembly lines with higher standards (like ASRock, which would be a cheap line from Asus). Are these claims true or complete nonsense?

A: Per se lower quality? No. There are lines which produce very cheap motherboards. I suspect (-with no hard evidence to back that up!-) that most use cheaper components. However, things get quite muddled at the cheaper end. E.g. consider the Cyrix 200+ CPU. It sold very cheap compared to the Pentium 1, and was often found in desktops with the cheapest possible power supplies, the cheapest possible motherboards, the cheapest possible graphics cards... and often in an unstable system. Unsurprisingly it had a very bad name, yet one ran without problems for years in an old Asus board with good hardware. The point of that anecdote is that a piece of cheap hardware can be of worse quality, but it does not have to be, and that word-of-mouth experience in this case is often wrong. Having said that: your motherboard is the backplane of all other components, whether on add-in cards or integrated. Getting a good motherboard is worth it. So do read reviews of whichever board you are considering, but do so with a critical eye.
Q: script/plugin install for rails 4 (daemon generator) I am new to rails and am trying to get up and running with the daemons gem by following railscasts http://railscasts.com/episodes/129-custom-daemon. I am using linux Mint 17 and my Gemfile looks like so:

source 'https://rubygems.org'

gem 'rails', '4.1.1'
gem 'pg'
gem 'sass-rails', '~> 4.0.3'
gem 'uglifier', '>= 1.3.0'
gem 'coffee-rails', '~> 4.0.0'
gem 'jquery-rails'
gem 'turbolinks'
gem 'jbuilder', '~> 2.0'
gem 'sdoc', '~> 0.4.0', group: :doc
gem 'spring', group: :development
gem 'feedjira'

However when I run

script/plugin install git://github.com/dougal/daemon_generator.git

I get the following error:

bash: script/plugin: No such file or directory

When I try

rails plugin install git://github.com/dougal/daemon_generator.git

I just get the usage manual. Any help is appreciated.

A: Rails::Plugin is deprecated and has been removed since Rails 4.0. Instead of adding plugins to vendor/plugins, use gems or bundler with path or git dependencies. Also, if your goal is to run logic in the background, there are much newer options to implement, like Resque https://github.com/resque/resque or Sidekiq https://github.com/mperham/sidekiq
Q: $L(M) = L$ where $M$ is a $TM$ that moves only to the right side so $L$ is regular Suppose that $L(M) = L$ where $M$ is a $TM$ that moves only to the right side. I need to show that $L$ is regular. I'd really like some help. I tried to think of a way to prove it but I didn't reach any smart conclusion. What is it about the right-only moves that forces regularity?

A: Hint -- You need to show that your TM has the same power as a finite-state automaton (as the commenter Dave Clarke said), that is, given such a TM, construct a FSA that accepts the same language. But since the TM has no memory but the tape, ask yourself what a right-only TM can do with its tape. It should be relatively straightforward to actually construct the parts of the FSA you are looking for. Just go through them -- the states, the input alphabet and most crucially the move function (usually $\delta$) -- and define them in terms of the parts of the TM you started with. Then you have to show that the two accept the same language, in terms of the definition of "accept" for each, being sure to mention potential looping behavior in the TM. BTW, this sort of problem is amenable to a pretty convincing "hand-waving" proof that would actually be of a type that is quite acceptable in a research paper. Your course, however, may be expecting a precise proof, using $\delta$ and the other components of the tuples that constitute the TM and FSA.

A: The only difference between a finite automaton and a Turing machine is the tape. The tape provides memory. If you can go only to one side, then you simply can't read what you have written. To formally prove this you build an automaton from your machine $M$. Suppose $A$ is the automaton in $M$. Then your automaton $A'$ will be the same as $A$ with a one-cell memory, because you can still read the cell you are on in $M$. (Number of states of $A'$ = number of states of $M$ $\times$ size of the alphabet of the tape.)

Note that this is still true if you can only read the cells that are at a bounded distance from the rightmost visited cell, because you still have only bounded memory.
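To make the construction in the second answer concrete, here is a small sketch in Java. The parity machine encoded below is an invented example (not from the question), and for simplicity the sketch ignores the blank tail the TM would read after the input — handling that only requires a precomputed closure over blank-reading moves. The point it illustrates: since the head only moves right, a written symbol is never read again, so the TM's move function collapses to an ordinary DFA transition.

```java
import java.util.*;

// Sketch: a right-only TM over input alphabet {a, b} is a DFA in disguise.
// This hypothetical machine tracks the parity of 'a's seen so far.
public class RightOnlyTm {

    // TM transition: (state, symbol) -> next state.  The write component of
    // the TM's delta is omitted: the head never revisits a cell, so what it
    // writes can never influence a later step.
    static final Map<String, Integer> delta = new HashMap<>();
    static {
        delta.put("0a", 1);  // even number of a's, read 'a' -> odd
        delta.put("0b", 0);
        delta.put("1a", 0);  // odd number of a's, read 'a' -> even
        delta.put("1b", 1);
    }
    static final Set<Integer> accepting = Set.of(0);

    // The induced DFA run: identical to simulating the TM left to right.
    // (Symbols outside {a, b} are not handled in this sketch.)
    public static boolean accepts(String input) {
        int state = 0;
        for (char c : input.toCharArray()) {
            state = delta.get("" + state + c);
        }
        return accepting.contains(state);
    }
}
```

The second answer's extra factor — number of states of $M$ times the tape alphabet size — only appears when the machine may re-read the single cell under the head before moving on; in this simplest one-read-per-cell case the DFA has exactly the TM's states.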
Q: File cannot be accessed because it is being used by another program I am trying to remove the space at the end of each line and then write that line to another file. But when the program reaches the FileWriter it gives me the following error:

Process can't be accessed because it is being used by another process.

The code is as below.

private void FrmCounter_Load(object sender, EventArgs e)
{
    string[] filePaths = Directory.GetFiles(@"D:\abc", "*.txt", SearchOption.AllDirectories);
    string activeDir = @"D:\dest";
    System.IO.StreamWriter fw;
    string result;
    foreach (string file in filePaths)
    {
        result = Path.GetFileName(file);
        System.IO.StreamReader f = new StreamReader(file);
        string newFileName = result;
        // Combine the new file name with the path
        string newPath = System.IO.Path.Combine(activeDir, newFileName);
        File.Create(newPath);
        fw = new StreamWriter(newPath);
        int counter = 0;
        int spaceAtEnd = 0;
        string line;
        // Read the file and display it line by line.
        while ((line = f.ReadLine()) != null)
        {
            if (line.EndsWith(" "))
            {
                spaceAtEnd++;
                line = line.Substring(0, line.Length - 1);
            }
            fw.WriteLine(line);
            fw.Flush();
            counter++;
        }
        MessageBox.Show("File Name : " + result);
        MessageBox.Show("Total Space at end : " + spaceAtEnd.ToString());
        f.Close();
        fw.Close();
    }
}

A: File.Create itself returns a stream; use that stream to write the file. The reason you are receiving this error is that the stream returned by File.Create is still open, and you are trying to open that same file again for writing. Either close the stream returned by File.Create, or better, use that stream for the file write:

Stream newFile = File.Create(newPath);
fw = new StreamWriter(newFile);
Q: Gradle couldn't execute npm command I'm trying to run an npm command inside of a gradle task but I'm getting a strange error:

Caused by: net.rubygrapefruit.platform.NativeException: Could not start 'npm'
    at net.rubygrapefruit.platform.internal.DefaultProcessLauncher.start(DefaultProcessLauncher.java:27)
    at net.rubygrapefruit.platform.internal.WrapperProcessLauncher.start(WrapperProcessLauncher.java:36)
    at org.gradle.process.internal.ExecHandleRunner.run(ExecHandleRunner.java:65)
    ... 2 more
Caused by: java.io.IOException: Cannot run program "npm" (in directory "/Users/psilva/Documents/projects/registrolivre"): error=2, No such file or directory
    at net.rubygrapefruit.platform.internal.DefaultProcessLauncher.start(DefaultProcessLauncher.java:25)
    ... 4 more
Caused by: java.io.IOException: error=2, No such file or directory

And this is my task:

task npmInstall(type: Exec) {
    commandLine "npm", "install"
}

Could someone help?

A: This answer worked for me with a different npm-related task. The recommendation there is to use an executable and args rather than commandLine.

executable 'npm'
args ['install']

Depending on your directory structure, you may also need to add the workingDir property and set it to the directory where your package.json lives. As an alternative, the Gradle Node Plugin is also really handy for managing the most common Node tasks in a Gradle build. I use this plugin as the basis for my Node tasks and then create other custom tasks as needed.

A: If you are on Windows try this:

task npmInstall(type: Exec) {
    commandLine "npm.cmd", "install"
}

instead of this:

task npmInstall(type: Exec) {
    commandLine "npm", "install"
}

A: If you are using Windows OS, you have to use 'npm.cmd' instead of 'npm'. Better to detect whether the OS is Windows or not and build your npm command accordingly. Please see the code snippet below:

import org.apache.tools.ant.taskdefs.condition.Os

task npmInstall(type: Exec) {
    String npm = 'npm';
    if (Os.isFamily(Os.FAMILY_WINDOWS)) {
        npm = 'npm.cmd'
    }
    workingDir 'src/main/webapp'
    commandLine npm, 'install'
}
Q: Singleton Overuse I was considering using a Singleton pattern in a winforms application that I am working on, but a lot of people seem to think that singletons are evil. I was planning on making a "Main Menu" form which is a singleton. I can't think of any reason that I would want multiple instances of my Main Menu, and it is always going to be the first form that comes up, so I am not concerned with wasting resources if it gets instantiated unnecessarily. Also, I could see issues arising if there are multiple instances of the Main Menu. For example, if another form has a "Main Menu" button and there are multiple instances of the Main Menu, then the decision of which instance to show seems ambiguous. Also, if I have another winform which has to look at the state of the program to determine whether there is already an instance of the main menu, then I feel like I am breaking modularity, though I might be wrong. Should I be avoiding the use of a singleton in this case or would it be better to make the Main Menu static? I just started using c# a couple days ago, and I haven't really done much with OOP in the last few years, so I apologize if this is a stupid question. Thanks A: Is MainMenu shown all the time? If not, it would make sense to release the old instance when it's closed and create a new one every time you need to open it. This way other modules won't need to know of its instances, they'll just create one when they need to open it.
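The create-on-demand pattern the answer suggests — hold one instance while the menu is open, release it on close, and make a fresh one next time — can be sketched as below. The question is about C# WinForms; Java is used here only to keep this document's examples in one language, and the class and its placeholder "menu" object are invented for illustration.

```java
// Illustration of the answer's alternative to a singleton: cache the menu
// instance only while it is open, release it when closed.
public class MenuHolder {
    private Object menu;  // stands in for the Main Menu form

    public Object show() {
        if (menu == null) {
            menu = new Object();  // create a fresh menu on demand
        }
        return menu;              // reuse the one already open
    }

    public void close() {
        menu = null;              // release, so the next show() makes a new one
    }

    // Small self-check: the open menu is reused, and a new one is created
    // after close().
    public static boolean demo() {
        MenuHolder h = new MenuHolder();
        Object first = h.show();
        boolean reused = (h.show() == first);
        h.close();
        boolean recreated = (h.show() != first);
        return reused && recreated;
    }
}
```

This gives the "only one Main Menu at a time" behavior without a process-global singleton: other forms ask the holder rather than inspecting program state themselves.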
Q: How to load a partial view in a new window? I have an MVC partial view. I try to load it using window.open by calling the controller name and the action name. I get the related partial view back from the action, but there are no links to stylesheets and scripts because this is a new window. How can I fix this issue please? This is the javascript code I'm using:

function OpenNewWindowForDetail(model) {
    $.ajax({
        url: '@Url.Action("ActionName", "ControllerName")',
        dataType: 'html',
        data: { userid: model.UserId },
        type: 'POST',
        success: function (data) {
            var win = window.open('about:blank');
            with (win.document) {
                open();
                write(data);
                close();
            }
        }
    });
}

All links to stylesheet files and javascript files are in the layout.cshtml, by the way. Thank you in advance

A: You should modify your action to return a regular view with a layout, instead of a partial view. There is no reason to use a partial view in this situation. There is also no reason to use AJAX; a simple anchor with target='_blank' would be enough. userId can be passed as a GET parameter.
Q: Set attributes in a single JS block Is there any way to do this? I'm repeating a lot of code with setAttribute on a single element, so I've been looking for a more organized and optimized way to do it, but I haven't found one yet. Below is an EXAMPLE of what I'm doing and an EXAMPLE of what I'd like to do, if possible:

A link I'll use in the example:

<a class="teste"></a>

The JS code I'm using to set the attributes:

document.querySelector('.teste').setAttribute('href', 'https://www.teste.com')
document.querySelector('.teste').setAttribute('title', 'Site de Teste')
document.querySelector('.teste').setAttribute('id', 'idteste')

What I'd like, if possible, or something similar:

document.querySelector('.teste').setAttribute(['href', 'title', 'id'], ['https://www.teste.com', 'Site de Teste', 'idteste'])

Is there anything like this already native to JavaScript?

A: You could do it with a function, passing the element plus the attributes and values as arrays, and looping over either array to set the attributes:

function setAttr(el, atts, vals){
   for(var x=0; x<atts.length; x++){
      el.setAttribute(atts[x], vals[x]);
   }
   console.log(el);
}

setAttr(document.querySelector('.teste'), ['href', 'title', 'id'], ['https://www.teste.com', 'Site de Teste', 'idteste']);

<a class="teste">link</a>

Another (even better) way is to pass two parameters to the function — the element and an object with attribute:value pairs — and use a for...in loop:

function setAttr(el, atts){
   for(var x in atts){
      el.setAttribute(x, atts[x]);
   }
   console.log(el);
}

setAttr(document.querySelector('.teste'), {'href': 'https://www.teste.com', 'title': 'Site de Teste', 'id': 'idteste'});

<a class="teste">link</a>

To be quite honest, I would rather just repeat it the way you are doing. Using a function to apply attributes is, in my opinion, trading six of one for half a dozen of the other. I don't see much advantage and I think it even reads worse. In that case, I would simply assign the element to a variable so I don't repeat it several times:

var el = document.querySelector('.teste')
el.setAttribute('href', 'https://www.teste.com')
el.setAttribute('title', 'Site de Teste')
el.setAttribute('id', 'idteste')
Q: Navigate to contact's profile I want to open the default windows phone contact's profile from my own app. I use the following code to receive all contacts:

Contacts contacts = new Contacts();
contacts.SearchCompleted += HandleContactsSearchCompleted;
contacts.SearchAsync(string.Empty, FilterKind.None, null);

The HandleContactsSearchCompleted method iterates over all contacts, performs some filtering and displays them. On clicking a contact (an instance of Contact from the list of contacts received through the SearchCompleted event) I want to open the standard profile which I also see when I click on a contact in my people hub. Is there a special Uri to use with the NavigationService.Navigate method, or a particular task for that? Thanks in advance!

A: The current version of the Windows Phone SDK does not provide a method for showing the contact card of an existing contact in your address book.
Q: Creating form with fieldset legend is generating errors I am experiencing some issues when trying to generate a form using 'inputs' <?php echo $this->Form->create('Post'); echo $this->Form->inputs(array( 'legend' => 'Personal information', 'name', 'nickname', 'age', 'email')); echo $this->Form->inputs(array( 'legend' => 'Employment information', 'company', 'started_work', 'description')); ?> Output: Warning (2): array_keys() expects parameter 1 to be array, null given [CORE\Cake\View\Helper\FormHelper.php, line 848] When I remove $this->Form->create('Post'); it generates successfully, but it does not have the output... A: Have you double-triple-checked that: a) your Post.php model exists and is named correctly b) your posts table exists in your database c) your app is connecting to the database OK? The error is coming when the FormHelper calls the '_introspectModel' method, and returns null. The _introspectModel method is supposed to return info about the fields in the model and so on. But, if it can't retrieve the information it needs from the model, then it'll return null rather than an array. So that's what's causing the error you're getting. Double check everything related to your Post model, and if you still can't fix the error, update your question and paste the code from your Post model.
Q: Sieve of Eratosthenes using Java I have started learning Java recently and was looking into some easy algorithms. I found the Sieve of Eratosthenes algorithm here I am trying to get better at writing good code for my solutions. Please give me your suggestions. import java.util.Scanner; public class SieveofErastothenes { public static void main(String[] args) { Scanner keyboard = new Scanner(System.in); System.out.println("Enter the prime number ceiling"); int ceiling = keyboard.nextInt(); keyboard.close(); prime(ceiling); } private static void prime(int n) { // create an array with 0 and 1 where 1= Prime and 0= Non Prime int[] isPrime = new int[n + 1]; // set all values to 1 for (int i = 2; i <= n; i++) { isPrime[i] = 1; } int upperbound = (int) Math.sqrt(n); for (int i = 2; i <= upperbound; i++) { if (isPrime[i] == 1) { for (int j = 2; j * i <= n; j++) { isPrime[j * i] = 0; } } } printprime(isPrime, n); } private static void printprime(int[] isPrime, int n) { for (int i = 0; i <= n; i++) { if (isPrime[i] == 1) { System.out.println(i); } } } } A few online tutorials use a boolean array to set all the values as true or false whereas I am using an integer array and setting the values to 0 and 1 initially. Does this make any difference in the performance? A: It does indeed make a difference whether you use bool[] or int[] - an int uses four times as much memory and hence the array requires four times as many cache line transfers as bool does, and it will exceed the CPU's level-1 cache capacity - usually around 32 KiByte - for smaller values of n than a boolean array. And, as Dave said, testing the truthiness of a bool cell looks nicer in languages where integers aren't truthy and thus need to be compared to 0, and it reflects the program logic better. 
If you invert the logic of your sieve from 'is prime' to 'is composite' then you can skip the initialisation of the array to all true (or 1) because the array will already have been initialised to the bit pattern 0, and semantically it would be more accurate anyway. Besides, I don't think that the old Greek ever said anything about marking all numbers as 'prime' before the start of the sieving - that must have been invented by tutors in the modern age. Also, the Sieve of Eratosthenes has no need for multiplying indices - it strides additively, by adding the current prime to the current position during each step: for (int i = 2, sqrt_n = (int)Math.sqrt(n); i <= sqrt_n; ++i) if (!is_composite[i]) for (int j = i * i; j <= n; j += i) is_composite[j] = true; I've also adjusted the starting point for the crossing-off to the square of the current prime, since all smaller multiples will already have been crossed off during the cycles for smaller primes. The outer loop still contains a small inefficiency: there is only one even prime, yet the loop tests all even numbers up to sqrt(n) for compositeness. Quite a few performance reserves can be unlocked by removing the number 2 from the picture entirely and sieving only the odd numbers. The biggest gain comes from removing half of all 'hops' (crossings-off) in the inner loop and from the halved memory pressure. Last but not least, the performance will drop significantly when the sieve array gets appreciably bigger than the L1 cache size of the CPU (typically 32 KiByte), and it will drop even further when the L2 cache size is exceeded. If you need decent performance in ranges beyond the L1 cache size then you might want to consider segmented sieving, i.e. working the sieve range in cache-sized blocks. This is quite simple to implement, and definitely simpler than windowed sieving; you can find an example in my answer to Find prime positioned prime number. 
There you can also see that odds-only sieving is just as simple as plain sieving; it just requires a bit of care when dealing with indexes (i.e. clarity whether a given value is supposed to be a number or a bit index). Many coding challenges that deal with primes require the sieving of millions of numbers instead of small handfuls, and so an investment in studying suitable 'prime technology' can pay off handsomely. Moreover, this study can tell you a lot about the efficiency of basic mechanisms in your language - be it bool[] vs BitSet (bool[] wins hands-down in Java, C# and C++, if you have an eye on cache sizes), or the performance cost of syntactic sugar. A: The short answer to your question is yes. It does make a difference to performance. However, we're only talking about increased RAM usage (the integer datatype is bigger than boolean) - and we're talking about a lot less than 1kb. So on a modern computer (even a Raspberry Pi), you won't have a problem. Now, general advice & tips... Variable names should be appropriate to their use. When I see isPrime I assume that it's a boolean value (because it's either a prime or it isn't). Regardless of performance impact, boolean makes more sense because you only ever set 0 or 1. Loops - is there a specific requirement why you appear to be writing 1.3 compliant code? For loops could be done with the for( TYPE var : collection) form. Makes it more readable IMO - I don't need to worry about the control variable being tampered with during loop execution. Method names - should be descriptive, and use camel case. prime() and printprime are examples of how not to do it. Math.sqrt() takes a double and returns a double - why are you casting directly to int? If you really want an int datatype, you should be rounding as appropriate (up vs down). Get out of the habit of just casting primitives before you move on to their wrappers - eg, you cannot cast Double directly to int. Comments - opinions vary. 
Use sparingly and only when deliberately against accepted convention, or to explain why you have to do something a certain way even though you know it should be done differently. JavaDoc comments, again, should be used sparingly. Get the class & method names right, and JavaDoc shouldn't be needed. There could be constraints that would push you to use JavaDoc, and if so, that's fine. Unit test. Unit test. Unit test. You don't appear to have any. Write the Unit Tests before you write your main code. Write as little main code as possible to make your unit tests pass (make sure your Unit Tests are appropriate for requirements first). A: A few online tutorials use a boolean array to set all the values as true or false whereas I am using an integer array and setting the values to 0 and 1 initially. Does this make any difference in the performance? You would need to profile it (it will probably depend on the virtual machine used). But regarding readability, boolean makes a lot more sense. A number is either prime, or it is not (there is no "maybe prime" here). true has a real meaning here (is prime? true), while 1 does not; it is only understandable via comments or convention. Using boolean also makes your ifs nicer. if (isPrime[j]) is quite nice to read and easy to understand, while isPrime[i] == 1 is not. Misc You can use Arrays.fill to initialize your array. Your function should not print the primes itself, but return them instead (just an array of prime numbers, not the whole isPrime array). That way, it's easily testable and reusable. Then your function could be called getPrimes, which makes it more obvious what the function does.
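Collecting the answers' suggestions into one sketch (not the only way to structure it): a boolean array with default-false "is composite" logic, crossing off from i * i, and a method that returns the primes instead of printing them:

```java
import java.util.Arrays;

public class Primes {

    // Returns all primes up to n. The array holds "is composite" flags, so the
    // default false values need no initialization; crossing off starts at i * i
    // because smaller multiples were already handled by smaller primes.
    public static int[] getPrimes(int n) {
        boolean[] isComposite = new boolean[Math.max(n + 1, 2)];
        for (int i = 2; (long) i * i <= n; i++) {
            if (!isComposite[i]) {
                for (int j = i * i; j <= n; j += i) {
                    isComposite[j] = true;
                }
            }
        }
        // Count the primes, then copy them into a result array.
        int count = 0;
        for (int i = 2; i <= n; i++) {
            if (!isComposite[i]) {
                count++;
            }
        }
        int[] primes = new int[count];
        int next = 0;
        for (int i = 2; i <= n; i++) {
            if (!isComposite[i]) {
                primes[next++] = i;
            }
        }
        return primes;
    }

    public static void main(String[] args) {
        // prints [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
        System.out.println(Arrays.toString(getPrimes(30)));
    }
}
```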
Q: Problems with sprockets when deploying Rails 3.1.rc4 I'm sure that I am just overlooking something simple here but this has been driving me crazy all night! When trying to deploy a Rails 3.1.rc4 application to the Cedar stack on Heroku (I did this successfully a month ago with a similar Gemfile) I am receiving this error: Could not find sprockets-2.0.0.beta.10 in any of the sources My Gemfile looks like this: source 'http://rubygems.org' # Core gem 'rails', '3.1.0.rc4' # Asset template engines gem 'sass-rails', "~> 3.1.0.rc" gem 'coffee-script' gem 'uglifier' # Misc gem 'devise' gem 'jquery-rails' gem 'omniauth' gem 'fb_graph' gem 'compass', git: 'https://github.com/chriseppstein/compass.git', branch: 'rails31' gem 'haml' gem 'cancan' gem 'kaminari' gem 'friendly_id', '~> 3.3.0', git: 'https://github.com/norman/friendly_id.git' gem 'recaptcha', :require => 'recaptcha/rails' gem 'aws-ses', '~> 0.4.3', :require => 'aws/ses' # Local Environment group :test do # Pretty printed test output gem 'turn', :require => false gem 'sqlite3' end # Heroku Environment group :production do gem 'pg' gem 'execjs' gem 'therubyracer' end After searching around and finding this article on Google Groups, I determined that this must be fixable by adding this line gem 'sprockets', '2.0.0.beta10' to my Gemfile and then running bundle update sprockets This failed with Could not find gem 'sprockets (= 2.0.0.beta10, runtime)' in any of the gem sources listed in your Gemfile. and at this point I don't know what to do or how to handle this. Is it possible that I need to upgrade to Rails 3.1.rc5 and if so how may I do that without starting over from scratch? Thank you for any help that you can provide! -Robert A: Just bump your rails version up to rc5 gem 'rails', '3.1.0.rc5' then: bundle update
Q: Why can't SBT's thread context classloader load JDK classfiles as resources? lihaoyi test$ tree . └── Foo.scala 0 directories, 1 file lihaoyi test$ cat Foo.scala object Main{ def main(args: Array[String]): Unit = { println(getClass.getClassLoader.getResourceAsStream("java/lang/String.class")) println(getClass.getClassLoader.getClass) println(Thread.currentThread().getContextClassLoader.getResourceAsStream("java/lang/String.class")) println(Thread.currentThread().getContextClassLoader.getClass) } } lihaoyi test$ sbt run [info] Loading global plugins from /Users/lihaoyi/.sbt/0.13/plugins [info] Set current project to test (in build file:/Users/lihaoyi/Dropbox/Workspace/test/) [info] Updating {file:/Users/lihaoyi/Dropbox/Workspace/test/}test... [info] Resolving org.fusesource.jansi#jansi;1.4 ... [info] Done updating. [info] Compiling 1 Scala source to /Users/lihaoyi/Dropbox/Workspace/test/target/scala-2.10/classes... [info] Running Main sun.net.www.protocol.jar.JarURLConnection$JarURLInputStream@18e38ff2 class sbt.classpath.ClasspathUtilities$$anon$1 null class sbt.classpath.ClasspathFilter [success] Total time: 2 s, completed 29 May, 2017 4:14:11 PM lihaoyi test$ Here, we can see that the getClass.getClassLoader and the Thread.currentThread.getContextClassLoader are returning different values. What's more, the Thread.currentThread.getContextClassLoader seems to be refusing to load java/lang/String.class, while the other can. 
Notably, when I run the jar file using an external tool like scalac/scala, or java, both classloaders are able to load the classfile as a resource lihaoyi test$ scalac Foo.scala lihaoyi test$ scala Main sun.net.www.protocol.jar.JarURLConnection$JarURLInputStream@1b28cdfa class scala.reflect.internal.util.ScalaClassLoader$URLClassLoader sun.net.www.protocol.jar.JarURLConnection$JarURLInputStream@7229724f class scala.reflect.internal.util.ScalaClassLoader$URLClassLoader I would expect SBT to behave similarly: to have the Main.getClass.getClassLoader and the Thread.currentThread().getContextClassLoader both be able to load java/lang/String.class as a resource. What gives? A: Some hints are provided by Jason Zaugg (retronym)'s sbt launcher notes. sbt/launcher is a small Scala application that bootstraps an arbitrary Scala program (typically, SBT) described in a config file and sourced via Ivy dependency resolution. This creates a child classloader containing Scala 2.10.6. A child of this contains SBT itself and xsbti/interface-0.13.11.jar. SBT needs to use non-standard classloader delegation to selectively hide classes when creating child classloaders for plugin code, for the Scala compiler, or for user code. 
Some more hints in the sbt 0.13 sources: https://github.com/sbt/sbt-zero-thirteen/blob/v0.13.15/run/src/main/scala/sbt/Run.scala#L57-L62 https://github.com/sbt/sbt-zero-thirteen/blob/v0.13.15/util/classpath/src/main/scala/sbt/classpath/ClasspathUtilities.scala#L71-L72 def makeLoader(classpath: Seq[File], instance: ScalaInstance, nativeTemp: File): ClassLoader = filterByClasspath(classpath, makeLoader(classpath, instance.loader, instance, nativeTemp)) def makeLoader(classpath: Seq[File], parent: ClassLoader, instance: ScalaInstance, nativeTemp: File): ClassLoader = toLoader(classpath, parent, createClasspathResources(classpath, instance), nativeTemp) Basically sbt is a kitchen sink of a Java application that has an arbitrary Scala versions and your code, and your test libraries along with the Oracle/OpenJDK's Java library. To construct a classpath that makes sense without loading them over and over again, it's creating a hierarchy of classloaders each filtered by some criteria. (I think)
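The selective hiding can be illustrated with a toy filtering classloader. This is not sbt's actual ClasspathFilter (names and the prefix rule here are made up for illustration), but it produces the same shape of behaviour as the sbt run above: the parent can serve java/lang/String.class while the child reports null.

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.URL;

// A toy classloader that delegates to its parent but hides every resource
// outside an allowed prefix, loosely mimicking how a filtering loader can
// make java/lang/String.class invisible to getResourceAsStream.
class FilteringLoader extends ClassLoader {
    private final String allowedPrefix;

    FilteringLoader(ClassLoader parent, String allowedPrefix) {
        super(parent);
        this.allowedPrefix = allowedPrefix;
    }

    @Override
    public URL getResource(String name) {
        // Refuse anything outside the allowed prefix, even if the parent has it.
        return name.startsWith(allowedPrefix) ? super.getResource(name) : null;
    }

    @Override
    public InputStream getResourceAsStream(String name) {
        URL url = getResource(name);
        try {
            return url == null ? null : url.openStream();
        } catch (IOException e) {
            return null;
        }
    }
}

public class FilterDemo {
    public static void main(String[] args) {
        FilteringLoader fl =
            new FilteringLoader(ClassLoader.getSystemClassLoader(), "com/example/");
        // prints "filtered: null" while the parent still finds the resource
        System.out.println("filtered: "
            + fl.getResourceAsStream("java/lang/String.class"));
        System.out.println("parent ok: "
            + (ClassLoader.getSystemClassLoader()
                   .getResource("java/lang/String.class") != null));
    }
}
```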
Q: Why does the android app freeze while copying files? I am making a simple file explorer for Android. The problem is that when I copy files or folders to any place in the file system, the app just freezes until the file/folder is copied. I should note that the app doesn't crash and doesn't send errors to the Logcat. The app simply doesn't respond to any actions while files are being copied. As far as I can see, in other file explorers this problem doesn't happen and you can do any actions in the app while files are being copied. I start copying files in a Fragment. I tried using both my own methods of copying files and FileUtils library methods but the result is the same. Basic things: import android.support.v7.widget.Toolbar; ... private Toolbar mToolbar; onCreateView method: .... //Start copying files mToolbar.setOnMenuItemClickListener(new Toolbar.OnMenuItemClickListener() { @Override public boolean onMenuItemClick(MenuItem menuItem) { if(menuItem.getItemId() == R.id.paste_button ){ if(fileActionMode.equals("copy")){ copy(); } } if(menuItem.getItemId() == R.id.cancel_button){ mToolbar.getMenu().removeItem(R.id.paste_button); mToolbar.getMenu().removeItem(R.id.cancel_button); } updateUI(); return true; } }); ... 
Copy method: private void copy(){ mToolbar.getMenu().removeItem(R.id.paste_button); mToolbar.getMenu().removeItem(R.id.cancel_button); File file = new File(initFilePath); if(file.isDirectory()){ try { FileUtils.copyDirectoryToDirectory(file, new File(FileFoldersLab.get(getActivity()).getCurPath())); } catch (IOException e) { e.printStackTrace(); }finally { updateUI(); } }else if(file.isFile()){ try { FileUtils.copyFileToDirectory(file, new File(FileFoldersLab.get(getActivity()).getCurPath())); } catch (IOException e) { e.printStackTrace(); }finally { updateUI(); } } } My method that I used instead of FileUtils...: public void copyFile(File src) throws IOException{ createFile(src.getName()); try (InputStream in = new FileInputStream(src)) { try (OutputStream out = new FileOutputStream(mCurPath+File.separator+src.getName())) { byte[] buf = new byte[1024]; int len; while ((len = in.read(buf)) > 0) { out.write(buf, 0, len); } } } } A: The App freezes because your copy() method is a time-consuming process. It should also be remembered that you are performing this time-consuming operation on the main thread, which is also responsible for maintaining the UI. Since the UI thread is busy completing your long operation, it avoids refreshing and performing UI operations. As a result the App finally hangs and freezes. Now there are two ways you can handle this situation. You can make the User of the App wait until your copy() method is completed by showing a spinning loader just before your copy() method starts and hiding the loader once the method is completed. This loader technique ensures that the User doesn't perform any UI-related event (like a button click) until the copy() method is completed. Here is the code for showing the loader; sorry, the code is in Kotlin. class DialogUtil private constructor() { init { throw AssertionError() } companion object { private var progressDialog: ProgressDialog? = null fun showAlertDialog(context: Context, message: String?) 
{ AlertDialog.Builder(context).setMessage(message) .setCancelable(false).setPositiveButton("OK") { dialogInterface, _ -> dialogInterface.dismiss() }.show() } fun showProgressDialog(context: Context) { progressDialog = ProgressDialog(context) progressDialog!!.setProgressStyle(ProgressDialog.STYLE_SPINNER) progressDialog!!.requestWindowFeature(Window.FEATURE_NO_TITLE) progressDialog!!.setMessage("Please wait...") progressDialog!!.setCancelable(false) progressDialog!!.isIndeterminate = true progressDialog!!.show() } fun hideProgressDialog() { if (progressDialog != null) { progressDialog!!.dismiss() } } } } Make a helper class like the one above for the loader, and then just above the line where your copy() method starts, call the loader like this: DialogUtil.showProgressDialog(this) Above, this refers to a context, so if the copy() method is in a fragment you need to pass the activity (getActivity() in Java) in its place. Once your copy() method completes, you can hide the spinning loader like this: DialogUtil.hideProgressDialog() Write this line just below your copy() method. The second way is to start the copy() method in some other thread, so that your UI thread (main thread) need not care about the copy() method and your App is saved from freezing. One advantage of this technique is that even though your copy() method is still in progress, the user can still perform UI-related events without experiencing any jitter or lag. I suggest you use AsyncTask for performing your copy() method, since it can perform long tasks in a worker thread and then gives you the opportunity to notify the user about task completion on the main (UI) thread. 
The code goes like this: private class MyTask extends AsyncTask<X, Y, Z> { protected void onPreExecute(){ // any specific setup before you start the copy() method; runs on the UI thread } protected Z doInBackground(X... x){ // your copy() method itself; runs on a worker thread other than the main UI thread; don't perform any UI-related activities from here, since it is the worker thread } protected void onProgressUpdate(Y y){ // any event you want to perform while the task is in progress; runs on the main UI thread } protected void onPostExecute(Z z){ // the event you want to perform once your copy() method is complete; runs on the main UI thread } } After defining the class, you can start this AsyncTask like this: MyTask myTask = new MyTask(); myTask.execute(x); Wherever I have mentioned the UI thread, it means you can perform any task or operation related to the UI from there. You should remember that UI events can only be performed from the UI thread; if you try to perform UI-related operations from a thread other than the UI thread (main thread), your App can unexpectedly close. I suggest making this AsyncTask an inner class in the fragment itself, instead of making a new class entirely.
Q: Why is a .bak so much smaller than the database it's a backup of? I just took a backup of a SQL Server database. The MDF and LDF files together total around 29 GB, but the .bak file was only 23 GB, about 20% smaller. My first guess when one version of a set of data is smaller than another version containing the same data would be data compression, but compression usually yields a much better compression ratio than 20%, especially for highly-ordered data (such as database tables.) Also, compressed data can't easily be compressed further, but I know that .bak files can be compressed. So if the data isn't being compressed, and nothing's being discarded, (because the whole point of making a backup is to be able to restore it to an identical state afterwards,) then what's that 20% that's unaccounted for? A: The space was allocated to the database files, but not used. You can create a new database, make it 10 GB in size, and see the files allocate that amount of space on disk. However, until you put data in the database, the file is essentially empty, and your backup file size will be minimal. HTH A: For a full backup, the LDF can usually be ignored. The MDF contains the actual data. The .bak file contains only data pages that are in use inside the MDF. Some space won't be used; this space is overhead used for index rebuilds, for example. It's quite typical to have a 100 GB backup for a DB that may have a 250 GB MDF. If my MDF were the same size as my backup it would be a red flag for an unexpected DB shrink or lack of disk space, etc. A: When a DB is created, you can specify (for performance) how much space you want to allocate to the data and log files. This space is then reserved even if no data is stored in the tables. Only the extents that have data written to them are backed up. In your case, your MDF/LDF total could have even been 100 GB but your backup would still be around 23 GB for the backup that you did. 
If around 1 GB of data was added, your MDF/LDF total would still be 100 GB, but your backup would now be around 24 GB. A full backup contains all the extents that have data in them and a bit of the log file. The full backup contains all the data from the time the backup task ended, and not just from the time the backup task started; this is why a bit of the log file is also required.
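If you want to see this reserved-but-unused space on your own server, sp_spaceused reports it directly (a sketch; the database name here is taken from the question and may differ on your system):

```sql
USE mydatabaseName;   -- hypothetical name, substitute your own database
EXEC sp_spaceused;
-- database_size      = total space reserved by the MDF/LDF files on disk
-- unallocated space  = reserved space that holds no data pages yet;
--                      roughly the part a full backup gets to skip
```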
Q: Remove 0-byte (UTF-8) characters in String I am currently programming a multi-player game, and I am working on the networking side of it all right now. I have a packet system set up, and the way it works (with Strings at least) is that it takes a number of characters to a maximum of "X" characters. The characters are converted to bytes for sending to the server. If there are less than X characters, then the remaining bytes are set to 0. The issue is that when processing this information on the server and converting it to a string, the 0-byte characters are a '□' in my console, and invisible in my JTextPane. How can I remove all of these 0-byte characters from the String in a clean way? I'd prefer not to have another loop and more variables just to remove the 0-bytes before converting to a String. No one likes dirty-looking code. :p Packet Data: 03100101118000000000971001091051100000000 03 = Packet ID (irrelevant) 100101118000000000 = Username ("dev") 971001091051100000000 = Password ("admin") Resulting String: usernameString = "dev□□□□□□□□□" passwordString = "admin□□□□□□□" What I've tried: usernameString.replaceAll(new String(new byte[] {0}, "UTF-8"), ""); passwordString.replaceAll(new String(new byte[] {0}, "UTF-8"), ""); However, this did not change the String at all. A: As all of your zeros appear at the end of the string, you can solve this even without regular expressions: static String trimZeros(String str) { int pos = str.indexOf(0); return pos == -1 ? str : str.substring(0, pos); } usernameString = trimZeros(usernameString); passwordString = trimZeros(passwordString); A: usernameString = // replace 1 or more \0 at the end of the string usernameString.replaceAll("\0+$", ""); \0 is an escape for the Unicode character with a numerical value 0 (sometimes called NUL). In other words: System.out.println((int) "\0".charAt(0)); // prints 0 What you tried seems to work for me too. (See Ideone example.) Not sure why it doesn't for you. There may be another problem. 
Make sure you are reassigning the result. ; )
Q: Error: In this configuration Angular requires Zone.js I'm trying to run a Grails 2.3.9 application that uses Angular 5 inside. For this I created a new Grails project and a new Angular project with angular-cli (ng new app). After this I took these steps. Build the Angular project to get the resource bundles: ng build --prod Copy the files from the already created dist folder and paste them into the Grails project in web-app/js/lib In Grails, create a new Controller and view to serve the index for Angular In the index.gsp I put the content of the index.html that was created by the Angular build and replaced the src of the scripts with Grails createLink statements pointing to js/lib/... so that Grails could serve those files correctly When I run my Grails app and go to the specified address for Angular to run, I get a blank page and this error in the console: Error: In this configuration Angular requires Zone.js I tried copying the dist folder to an Apache server and everything works fine, so I don't think that the problem is related to the files generated by ng build --prod. A: OK, I figured it out; the problem was the way I was referencing the Angular JS resources in the index.gsp: Wrong way index.gsp <html> <head></head> <body> <app-root></app-root> <script src="${createLink(uri:'js/lib/inline.bundle.js')}"></script> <script src="${createLink(uri:'js/lib/polyfills.bundle.js')}"></script> <script src="${createLink(uri:'js/lib/main.bundle.js')}"></script> </body> </html> Correct way ApplicationResources.groovy modules = { ... angular{ resource url:"js/lib/styles.bundle.css", nominify:true, disposition: 'head' resource url:"js/lib/inline.bundle.js", nominify:true resource url:"js/lib/polyfills.bundle.js", nominify:true resource url:"js/lib/main.bundle.js", nominify:true } } index.gsp <html> <head> ... 
<r:require modules="angular"/> <r:layoutResources/> </head> <body> <app-root></app-root> <r:layoutResources/> </body> </html> The correct way to add resources in grails 2 was found here
Q: How to check Facebook login status in an Android application? As I checked in previous answers, they suggest using Session to check the Facebook login status, but that class got removed in newer versions. Can anyone help me resolve this? How can i find my current login status - facebook API android A: You can check the AccessToken: AccessToken token; token = AccessToken.getCurrentAccessToken(); if (token == null) { // Means user is not logged in }
Q: How should/must Catholics handle, transport, and use holy water? Some shrines, like the Shrine of Padre Pio Chapel in Quezon City, distribute holy water in bottles for Catholics who want some. So far the only guideline I saw in getting holy water from a basin is that to get one, you use a clean pitcher to scoop the water and pour it into the bottle. You do not sink the bottle into the basin of holy water until it gets full. But that is all. How should/must observant Catholics handle, transport, and use holy water such that it won't get desecrated? Thanks! A: Your question really deals with that of all blessed objects. Holy water is a sacramental of the Catholic Church. Holy water should be treated like any other blessed sacramental object that is at our disposal! As Catholics, we are accustomed to having religious objects “blessed,” which signifies the permanent sanctification and dedication of an object for some sacred purpose. I think every weekend someone asks me as well as the other priests to bless a rosary, a statue, or some other religious object. Once a religious object is blessed and dedicated for divine worship or veneration, it must be treated with reverence and must not be used in either an improper or profane way (cf. Code of Canon Law, #1171). What happens when the rosary or statue breaks and is irreparable? Or, when the palm dries out, and the following Palm Sunday provides us with new palm? The basic rule for the disposition of these items is to burn or to bury them. - Fr. Joseph Edattu, VC Personally, I have some holy water in my home and in my car at all times. I am sure that many priests do this also, as it is never known as to when one might have a need for it. I even know of one priest that carries holy water on his person in a holy water sprinkler the size of a pen!
Q: How do I get associated models to run __construct? A little history; I hate the fact that I can't use enums in CakePHP, but I get it. However, another pet peeve I have is that my Booleans return 0 or 1 and there is no way to universally turn them into yes's and no's. So I thought I would create a little function in the afterFind method of the AppModel to do this for me. The first step I wanted to take was to identify which columns were boolean (since some columns will return zeros and ones that do not need to be converted). I devised this little piece of code: function __construct($id = false, $table = null, $ds = null) { parent::__construct($id, $table, $ds); foreach($this->_schema as $col => $colDetails){ if($colDetails['type'] == 'boolean') $this->_booleans[] = $col; } } However a quick debug($this) in the model shows that only the current model's boolean columns are captured. When I hit those columns directly the $this->_booleans show up but again, not those of associated models. I've looked through the manual and the API... I see no mention as to how to approach a solution. What am I doing wrong? A: Enums are not supported by CakePHP in order to make an application database type independent. Enums are not supported by many database engines. The simplest solution for your task is: echo $model['boolField'] ? 'Yes' : 'No';
Q: VB.net connection string to SQL database After using the following code, no errors are shown, but my database is not updated once I have made a change using my management system application. Any recommendations? Dim constring As String = Application.StartupPath.ToString() + "\mydatabaseName.mdf" Public c As String = "Data Source=.\SQLEXPRESS;AttachDbFilename=" + constring + ";Integrated Security=True;User Instance=True" Sub openConnection() conn.ConnectionString = c conn.Open() End Sub A: Use the following code: Dim constring As String = Application.StartupPath.ToString() + "\mydatabaseName.mdf" Public c As String = "Data Source=.\SQLEXPRESS;AttachDbFilename=" + constring + ";Integrated Security=True;User Instance=True" Dim conn As New SqlConnection(c) Dim comm As New SqlCommand Dim strQuery As String = "SQL Update Query" Try comm.CommandText = strQuery comm.Connection = conn conn.Open() comm.ExecuteNonQuery() Catch ex As Exception End Try Change your connection string to: "Data Source=.\SQLEXPRESS;AttachDbFilename=" + constring + ";Initial Catalog=mydatabaseName;Integrated Security=True;User Instance=True" Using InitialCatalog helps when there is more than one database.
Q: What code will close the command line window when 'Cls' does not? I have a batch file that launches standalone Hangouts, but the cmd console window does not close, not even when 'CLS' or 'ECHO OFF' is the last line. What code would close the command line window? I have the answer - I just need to precede the existing code with: Start "" A: What you need is the Start command. Try the following example to see what I mean: @Calc.exe Now try this one: @Start Calc.exe All you need to do is replace Calc.exe with the name of your actual executable. BTW, if your program or its path contains spaces, you should use this format, where "" is a blank title (it can optionally be filled). Start "" "C:\Path To\My Program.exe" I hope this helped
Q: Regex to match "a string of length less than X resides between two ">" symbols" I have text in this form: >xxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxx >xxxxxxxxxxxxxx xxxxxxxxxxx > I need a regex to match all >xxx... if there are fewer than, say, 100 x's between the > symbols. How can I do this? The actual problem is: "smalt.c:334 ERROR: sequence too short to be hashed" when trying to index a FASTA file with reference sequences of multiple viruses. It worked before, when only longer sequences were present in the file. I haven't found a solution for the smalt error (and even if I did, I would prefer to run it first with default settings), so I need to remove all the shorter reference sequences from the reference file. A: You can use grep to give you only the parts of the file where there are 100 or more characters between > markers and write the results into a new file (which then should work with FASTA): grep -Pzo '>[^>]{100,}' fasta.txt > fasta_wo_short_genes.txt explanation: -P tells grep to accept Perl regular expressions (for some reason, I could not get it to work with the normal grep regular expressions) z tells grep to see the whole file as one big line o tells grep to output only the matching parts (otherwise, because of the z flag it would always output the whole file if it finds any match at all) the regular expression: > the character separating your virus gene sequences [^>] matches any character except > {100,} matches 100 or more repetitions of the previous expression (in this case [^>])
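As a quick sanity check (the file and record names here are made up), you can build a toy FASTA file and confirm that only the long record survives the filter:

```shell
# Build a toy FASTA file: one 120-character record and one 10-character record.
long_seq=$(printf 'A%.0s' $(seq 1 120))
printf '>virus_long\n%s\n>virus_short\nACGTACGTAC\n' "$long_seq" > fasta.txt

# Keep only records with 100 or more characters between '>' markers.
grep -Pzo '>[^>]{100,}' fasta.txt > fasta_wo_short_genes.txt

grep -q 'virus_long' fasta_wo_short_genes.txt && echo "long record kept"
grep -q 'virus_short' fasta_wo_short_genes.txt || echo "short record removed"
```

Note that because of -z the output is NUL-terminated, so some tools will treat fasta_wo_short_genes.txt as binary; grep itself still searches it fine.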