Q: jQuery append method creating x blank values I'm running into a problem. Here's my code:

this.setDataInDropDown = function(members) {
    /*
     * @todo check if some values are already available in JSON
     */
    var divs = [".xy-layer-longitude", ".xy-layer-latitude"],
        len = divs.length;
    for (var i = 0; i < len; i++) {
        $(divs[i]).empty();
        if (!members || !members.length) {
            alert("<option value='!none'>None<option>"); // triggered two times like it should!
            $(divs[i]).append("<option value='!none'>None<option>");
        } else {
            var memLen = members.length;
            for (var j = 0; j < memLen; j++) {
                $(divs[i]).append("<option value=" + members[j] + ">" + members[j] + "<option>");
            }
        }
    }
}

If the members array contains, for example, 4 values, the append method creates <option> elements like this:

<option value=member[0]>member[0]</option>
<option value=""></option>
<option value=member[1]>member[1]</option>
<option value=""></option>
...

Indeed, it appends a blank <option> after each new member <option>, no matter the length of the members array. Why? I've checked whether $(divs[i]) contains one or more objects, but the result is the same either way. Note: I can only test in IE11 because I'm working in a specific environment that does not accept other browsers. The jQuery version is 1.10.

A: You don't have a closing tag for <option>; rather, you have two opening tags each time. This is what causes the invalid markup. Change both closing tags to </option>. In other words:

<option><option>

should be

<option></option>
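For reference, a minimal sketch of the corrected loop, assuming the same selectors and members array as in the question. Building each option with $("<option>") and .val()/.text() also sidesteps quoting problems when a member value contains spaces or special characters:

var divs = [".xy-layer-longitude", ".xy-layer-latitude"];
for (var i = 0; i < divs.length; i++) {
    var $select = $(divs[i]);
    $select.empty();
    if (!members || !members.length) {
        // properly closed tag, so no extra empty option is created
        $select.append("<option value='!none'>None</option>");
    } else {
        for (var j = 0; j < members.length; j++) {
            // let jQuery build the element and set the value/text safely
            $select.append($("<option>").val(members[j]).text(members[j]));
        }
    }
}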
Q: SteamGuard requires an email I no longer have access to I tried to login online to Steam but it says that I need to go to my email to activate SteamGuard. The problem is that the email it is set to was on an old outlook account on a computer I no longer have. I have no idea how to view the inbox of my old account (what website could i use?) I have not deleted the email account, just simply have no idea how to access it. :( A: I quote from the Steam Support page: https://support.steampowered.com/kb_article.php?ref=4020-ALZM-5519&l=english#email Please contact us if you have not received the verification email after 3 hours. So, the only way you can reclaim your account is to ask them to change the email associated with Steam Guard (in particular, your Steam Account). You can submit a support ticket here: https://support.steampowered.com/newticket.php?category=273 Just remember to include proof of ownership: Your past passwords Your email associated with Steam Guard Any old Steam Guard codes Any payment information (just not the full details) Just a warning: Be prepared to wait around 3-5 days for a reply. Steam support (for some reason) takes a long time to process tickets.
Q: how does foreach works in react-redux I am making a nodejs app which can fetch flights from the KIWI api, it returns a json list, and since you parameters like from and to, it should return a list of flights. I can successfully get that, but when I want to display everything I dont know how to do it. This is my render: render(){ return ( <table className="table table-hover"> <thead> <tr> <th>Flight #</th> <th>From</th> <th>To</th> </tr> </thead> <tbody> {this.props.flights.map(this.renderFlights)} </tbody> </table> ); } and renderFlights(flightData){ // add key to make them unique return( <tr> <td>{flightData.data[0].id}</td> <td>{flightData.data[0].mapIdfrom}</td> <td>{flightData.data[0].mapIdto}</td> </tr> ); } {this.props.flights.map(this.renderFlights)} just maps the first one in the array, I know that I have to use foreach, but I dont know how I can use this to print everything in the list, flight id plus the from and to, so when u fetch the flights u get around 15, and I want to be able to display all the 15 flights can someone help me out here? this returns undefined for forEach: <tbody> { this.props.flights.array.forEach(element => { this.renderFlights }) } </tbody> A: I found your API and test interface here: https://skypickerbookingapi1.docs.apiary.io/#reference/check-flights/checkflights/check_flights?console=1 So it seems that you are getting this object for your response: { "server_time": 1516568910, "flights_checked": false, "extra_fee": 0, // blah blah, "flights": [ // first flight { "bags_recheck_required": false, "dtime_unix": 1524646800, "extra": "", "atime_unix": 1524651600, "priority_boarding": { "currency": null, "price": null, "is_possible": false }, "price": 313, "currency": "NOK", "price_new": 313, // blah blah }, // second flight { "bags_recheck_required": true, "dtime_unix": 1524683400, "extra": "", "atime_unix": 1524691800, "priority_boarding": { "currency": null, "price": null, "is_possible": false }, "price": 1560, "currency": "NOK", "price_new": 1560, // blah blah }, // more and more flights So I'm guessing that your "this.props.flights" is referring to the "flights" property of the above object. Now, you need to use "map", not "foreach": this.props.flights.map(this.renderFlights) //this is correct And your callback function should be: renderFlights(one_single_flight){ // add key to make them unique return( <tr> <td>{one_single_flight.id}</td> <td>{one_single_flight.src_country}</td> <td>{one_single_flight.dst_country}</td> </tr> ); } }
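To make the difference concrete, here is a sketch of the render path. .forEach() always returns undefined, so putting it inside JSX renders nothing; .map() returns the array of <tr> elements that React can display. The field names (id, src_country, dst_country) are placeholders taken from the answer above; use whatever your API response actually contains. Each row also gets a key prop so React can track it:

renderFlights(flight) {
    // one <tr> per flight object, no [0] indexing needed
    return (
        <tr key={flight.id}>
            <td>{flight.id}</td>
            <td>{flight.src_country}</td>
            <td>{flight.dst_country}</td>
        </tr>
    );
}

render() {
    return (
        <tbody>
            {this.props.flights.map(flight => this.renderFlights(flight))}
        </tbody>
    );
}

Using the arrow-function form above also avoids the usual this-binding problem that can appear when this.renderFlights is passed as a bare callback.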
Q: Could a starship's impulse engine reactor produce enough energy to power the warp drive? Basically, I'm wondering if there anything in Trek canon that indicates that a starship (Federation, Klingon, etc) could hypothetically use its impulse engines as an emergency source of power for the warp drive, even for a short period of time? The information I've been able to find over the years has been conflicting. I've read sources that suggest warp cores produce millions of times the amount of power a ship's impulse drive generates (making the possibility seem highly unlikely), while others point to early Federation starships using fusion power (i.e. impulse drive technology) for their warp drive. There is also some evidence to suggest impulse power might be a form of FTL itself, such as the Romulan Warbird in "Balance of Terror." Scotty states that the Romulan ship used simple impulse power as its propulsion, yet it was able to cross light years of space to attack Federation outposts, and head for home (albeit at great fuel cost). Obviously, Sci-fi franchises are often wildly inconsistent on science and technology in their universe, but is there any current Trek canon that might give a clear answer here? A: I didn't complete my first answer so here is a second: This is a long post. Eventually it will come to the part where there is a problem that can be solved several ways. One of the possible solutions is using the impulse reactor to power the warp drive. Anyone who rejects the other solutions is forced to accept that the impulse reactor was used to power the warp drive at least once. The data about interstellar travel from the earliest episodes of Star Trek is rather hard to interpret. Part 1 Time Warps. In "The Cage"/"Menagerie" Captain Pike and Doctor Boyce discuss: BOYCE: Sometimes a man'll tell his bartender things he'll never tell his doctor. What's been on your mind, Chris, the fight on Rigel Seven? PIKE: Shouldn't it be? My only yeoman and two others dead, seven injured. Later Pike relives the fight on Rigel Seven: PIKE: It's starting just as it happened two weeks ago. Except for you. In the first scene Pike decides to continue to the Vega Colony: SPOCK: We aren't going to go, to be certain? PIKE: Not without any indication of survivors, no. Continue to the Vega Colony and take care of our own sick and injured first. You have the helm. Maintain present course. These bits of dialog indicate that the Enterprise was travelling from the star Rigel (Beta Orionis), then believed to be about 500 to 1,000 light years from Earth, to the star Vega (Alpha Lyrae) which is about 25 light years From Earth, and the trip already lasted about two weeks. If "two weeks" equals 11 to 17 days, and the total distance is about 500 to 1,000 light years or 182,625 to 365,250 light days, then speeds of about 10,742.6 to 33,204.5 times the speed of light would be needed. But since the trip was not yet completed after 11 to 17 days, the average speed of the Enterprise during the voyage must have been less than 10,742.6 to 33,204.5 times the speed of light. Pike does decide to make a detour to Talos IV 18 light years away to search for survivors of the Columbia. With speeds an unknown amount less than 10,742.6 to 33,204.5 times the speed of light, the trip to Talos IV will take an unknown about of time more than 0.198 to 0.612 days. At Talos IV Pike Tells the Talosians: PIKE: Can you hear me? 
My name is Christopher Pike, commander of the space vehicle Enterprise from a stellar group at the other end of this galaxy. Our intentions are peaceful. Can you understand me? If they have traveled to the "other end of the galaxy", a distance of at least 20,000 light years (or 7,305,000 light days) and possibly several times as far, in less than one week, the average speed in the voyage to Talos IV must be at least 1,043,571.4 times the speed of light. But Pike must be lying to the Talosians, since a star group 18 light years from the line from Rigel to Vega must be less than 1,000 light years from Earth. Pike orders the 18 light year trip to Talos IV to be made at time warp factor 7, which may be faster or slower than the speed for the rest of the time from Rigel to Vega. And Pike mentions "time warp" in both "The cage" and "The Menagerie Part 1": PIKE: Address intercraft. TYLER: System open. PIKE: This is the captain. Our destination is the Talos star group. [Hearing room] PIKE [on screen]: Our time warp, factor seven. The "time warp" of Pike's Enterprise is totally canonical in TOS. In "The Cage" and also "The Menagerie Part 1" they talk to survivors on Talos IV: SURVIVOR: Is Earth all right? PIKE: The same old Earth, and you'll see it very soon. TYLER: And you won't believe how fast you can get back. Well the time barrier's been broken. Our new ships can This shows that the "time barrier" - whatever that is - has been broken comparatively recently and new ships, presumably including Pike's Enterprise, can make interstellar voyages in much less time than before. This may mean that the new ships actually travel much faster, or that they use some sort of "time warp" to make time pass much slower aboard them and thus make the voyages seem shorter, or something else I can't think of. But use of time warps in Pike's era is clearly established by the first pilot episode in dialog that was quoted in "The Menagerie Part 1" and aired back in 1966. "The Alternative Factor" has some interesting dialog: BARSTOW [on viewscreen]: You may not be aware of its scope. It occurred in every quadrant of the galaxy and far beyond. Complete disruption of normal magnetic and gravimetric fields, timewarp distortion, possible radiation variations. And all of them centring on the general area which you are now patrolling. So what were the "timewarp distortions"? Were they weird phenomena that distorted and warped time, or were they weird phenomena that distorted the effects of Federation time warp technology and made it warp time in unintended ways instead of desired ways? In "The Naked Time" Spock invents a new formula to rapidly start the warp engines of the Enterprise with an "implosion" to escape a peril. SULU: Captain, my velocity gauge is off the scale. SPOCK: Engine power went off the scale as well. We're now travelling faster than is possible for normal space. KIRK: Checked elapsed time, Mister Sulu. SULU: My chronometer's running backwards, sir. KIRK: Time warp. We're going backward in time. Helm, begin reversing power. Slowly. Obviously that particular time warp that sent the Enterprise back in time was something new. But if the warp drive normally warps time in some way that would make its ability to travel back in time in unusual circumstances much more plausible. It certainly seems possible that the Federation uses time warps in Kirk's era. 
In the era of TNG, Worf's son Alexander, Molly O'Brien, and Naomi Wildman seemed to grow older super fast in their early years and then grow older at more normal rates later. And anyone trying to make TNG era programs seem as plausible as they can must be very irritated by what seems to be examples of SORAS (Soap Opera Rapid Aging Syndrome) in their science fiction shows. Warning! TV Tropes link: http://tvtropes.org/pmwiki/pmwiki.php/Main/SoapOperaRapidAgingSyndrome?from=Main.SORAS1 But the aging of those characters doesn't have to be examples of SORAS. Soap Opera Rapid Aging Syndrome is never commented on by the characters or explained in any way, as the writers like to pretend nothing happened. But in some stories kids rapidly age with some sort of magical or scientific explanation or are actually seen being aged by some process. This is called Plot-Relevant Age-Up. Warning!: TV Tropes link: http://tvtropes.org/pmwiki/pmwiki.php/Main/PlotRelevantAgeUp2 Fans of TNG era shows should prefer Plot-Relevant Age-Up to Soap Opera Rapid Aging Syndrome. Bu there is no magic in Star Trek. If only TNG-era Star Trek programs happened in a society with technology advanced centuries beyond our own to make Plot-Relevant Age-Up seem plausible. Wait, they do happen in a society with science and technology centuries beyond our own. Star Trek fans are thus free to come up with technological explanations that make Alexander, Molly, and Naomi examples of Plot-Relevant Age-Up instead of Soap Opera Rapid Aging Syndrome. My theory is that many Federation citizens and especially Starfleet members take time off from their jobs when their children are born. They stay inside special quarters with accelerated time rates for months or years of time inside until their babies grow into toddlers that don't need so much parental attention. Then they turn off the time warps and return to their jobs after perhaps a few days have passed outside. That might also explain why Amanda seemed a bit young to have a son as old as Spock. This has led me to imagine 23rd century tabloid headlines saying: "Alien Ambassador Elopes With Teenage Earth Girl". Perhaps Spock (and Amanda) aged super fast in a time warp to grow him from a baby to a toddler, and after Spock later left home for Starfleet Academy Amanda used time warps to freeze time for herself every other day, so that she would age not much faster than Sarek instead of dying of a old age when he was still young. So IMHO Rick Sternbach and Michael Okuda should have included time warps among Federation technologies in the Star Trek: The Next Generation Technical Manual in 1991. And if they didn't want to use time warps as part of the warp drive in the TNG era, fine. But they still had to explain how time warps were part of the warp drive in the era of Pike's Enterprise and explain why time warps were no longer part of the TNG era warp drive. Part 2: "Where No Man Has Gone Before". The second pilot episode begins with: Captain's log, Star date 1312.4. The impossible has happened. From directly ahead, we're picking up a recorded distress signal, the call letters of a vessel which has been missing for over two centuries. Did another Earth ship once probe out of the galaxy as we intend to do? What happened to it out there? Is this some warning they've left behind? And a few minutes later on the bridge: KELSO: Screen on, sir. Approaching galaxy edge, sir. KIRK: Neutralise warp, Mister Mitchell. Hold this position. MITCHELL: Neutralise warp, sir. KIRK: Address intercraft. 
MITCHELL: Intercraft open. KIRK: This is the Captain speaking. The object we encountered is a ship's disaster recorder, apparently ejected from the S.S. Valiant two hundred years ago. SPOCK: The tapes are burnt out. Trying the memory banks. KIRK: We hope to learn from the recorder what the Valiant was doing here and what destroyed the vessel. We'll move out into our probe as soon as we have those answers. All decks, stand by. So the Enterprise has reached the edge of the galaxy and found that the S.S. Valiant was there 200 years earlier. The outermost part of our galaxy is a very thin spherical halo about 200,000 light years in diameter and the edge of the halo should be about 75,000 to 125,000 light years from Earth. but the main part of the galaxy is the galactic disc about 100,000 to 120,000 light years in diameter and quite thin. If the edge of the galactic disc is meant then I will arbitrarily say the Enterprise and the Valiant should have reached it at a spot between about 500 and 85,000 lightyears from Earth. On the bridge the department heads are introduced: PIPER: Life sciences ready, sir. This is Doctor Dehner, who joined the ship at the Aldebaran colony. DEHNER: Psychiatry, Captain. My assignment is to study crew reaction in emergency conditions. Aldebaran is about 65 light years from Earth, or just about next door when traveling to the edge of the galaxy. Apparently Kirk was never introduced to a beautiful female officer on the entire trip from Aldebaran to the edge of the galaxy. If the Enterprise traveled 500 to 85,000 light years, or 182,625 to 31,046,250 light days, in a time so short that Kirk never met Dr. Dehner, then it must have been going very fast. If we arbitrarily assume it took 0.5 to 7.0 days to go from Aldebaran to the edge of the galaxy, 182,625 to 31,046,250 light days away, the average speed of the Enterprise should have been between 26,089.285 and 62,092,500 times the speed of light. What about the "time warp" in The Cage"? Suppose that the Enterprise used time warps to slow down the passage of time aboard ten times, so that 5 to 70 days passed in the outside universe while 0.5 to 7.0 days passed on the Enterprise in the voyage from Aldebaran to the edge of the galaxy. Then the average speed on the voyage would be between 2,608.9285 and 6,209,250 times the speed of light. Suppose they used time warps to slow down time 100 times, so that 50 to 700 days passed in the outside universe while 0.5 to 7.0 days passed on Enterprise in the voyage from Aldebaran to the edge of the galaxy. Then the average speed on the voyage would be between 260.89285 and 620,9250 times the speed of light. Thus, if they travel to a spot on the edge of the galaxy very close to Earth, it is possible for the speeds of the Enterprise in "The Cage" and "Where No Man Has Gone Before" to be consistent. Soon after Dehner is introduced: SPOCK: Decoding memory banks. I'll try to interpolate. The Valiant had encountered a magnetic space storm and was being swept in this direction. KIRK: The old impulse engines weren't strong enough. Kirk's statement has several possible explanations: 1) Early Earth ships used impulse drive to travel faster than light. The warp drive was invented after the time of the Valiant. 2) Early Earth ships used impulse engines to travel slower than light and used the power from the impulse engines to power the warp drive when going faster than light. 3) An early form of warp drive was called impulse warp and Kirk was referring to impulse warp engines. 
4) Kirk believed the warp engines would go offline in such a situation and the Valiant had to use the impulse engines. 5) Kirk meant to say "warp engines" but said "impulse engines" by mistake. Soon afterwards the Enterprise encounters the force field at the edge of the galaxy and is damaged. SPOCK: Main engines are out, sir. We're on emergency power cells. Casualties, nine dead. and: Captain's log, Star date 1312.9. Ship's condition, heading back on impulse power only. Main engines burned out. The ship's space warp ability gone. Earth bases which were only days away are now years in the distance. Our overriding question now is what destroyed the Valiant? They lived through the barrier, just as we have. What happened to them after that? Let us study this statement in detail. 1) Ship's condition, heading back on impulse power only. This can mean that the Enterprise is only using its impulse engines, and thus is travelling slower than light, unless impulse engines can be used to travel faster than light. But since Kirk said "impulse power" this could also mean that energy from the impulse reactor is being used to power the warp engines - or what is left working of them - instead of energy from the warp drive reactor that would normally be used but is off line. There is another mention of "impulse power", in "Balance of Terror". KIRK: Yes, well gentlemen, the question still remains. Can we engage them with a reasonable possibility of victory? SCOTT: No question. Their power is simple impulse. KIRK: Meaning we can outrun them? There seems to be three possible interpretations of This dialog: a) That the Romulan ship only has impulse engines and travels slower than light and the Romulans can't have a large interstellar empire. b) That the Romulan ship only has impulse engines and can travel many times faster than light, thus enabling the Romulans to have a large interstellar empire. c) That the Romulan ship has both impulse engines and warp drive engines, but uses an impulse reactor to power both sets of engines, since they haven't invented warp drive reactors yet. So the Romulans ships travel many times the speed of light and they have a large interstellar empire but their ships are much slower than Federation ships. Interpretation c) would be consistent with the Enterprise being able to power its warp engines with the impulse reactor. 2) Main engines burned out. If the main engines are the warp drive engines, and are burned out, it seems like they can't be used at all, even if impulse power is diverted to them. But possibly the warp engines include the main space warp engines and the lesser time warp engines and only the "main [space warp] engines" are burnt out and the lesser time warp engines can still be used with power from the impulse engines. Or maybe it means that the power supply for the main engines is burned out, not the main engines themselves. 3) The ship's space warp ability gone. Since "time warp" is mentioned in "The Cage" and "The Menagerie Part 1", the "space warp" in this episode could be another function of the warp drive. Thus the space warp capability of the warp engines could be out but they might still be able to generation time warps to make the real or perceived duration of the voyage shorter. 4) Earth bases which were only days away are now years in the distance. Clearly Enterprise is much slower now. 
If "days" are 1.0 days to 7.0 days (up to one week) and "years" are 1.0 to 10.0 years (up to one decade) or 365.25 to 3,652.5 days, the normal speed of the Enterprise should be about 52.178 to 3,652.5 times faster than its speed after the accident. If the Enterprise after the accident can travel at one tenth the speed of light, the normal speed of the Enterprise should be 5.2178 to 365.25 times the speed of light. If the Enterprise after the accident can travel at exactly the speed of light, the normal speed of the Enterprise should be 52.178 to 3,652.5 times the speed of light. If the Enterprise after the accident can travel ten times speed of light, the normal speed of the Enterprise should be 521.78 to 36,525 times the speed of light. Kelso visits Mitchell in sickbay: MITCHELL: So, er, so, how go the repairs? KELSO: Well, the main engines are gone, unless we can find some way to re-energise them. MITCHELL: You'd better check the starboard impulse packs. Those points have about decayed to lead. KELSO: Oh, yeah, sure, Mitch. MITCHELL: I'm not joking, Lee! You activate those packs, and you'll blow the whole impulse deck. Later, in the briefing room: KELSO: Well, it didn't make any sense that he'd know, but naturally, I checked out the circuit anyway. I don't know how, but he was right. This point is burned out exactly the way he described it. That leaves two possibilities: a) The ship has been using the impulse engines, but the impulse packs are used only rarely when using the impulse engines, and so had not been used since the accident at the galactic barrier. b) The impulse packs are activated whenever the impulse engines are used, but the ship had not used the impulse engines since the accident at the galactic barrier. Thus the ship should be using "impulse power" but not the impulse engines, and seems to be using the impulse reactor to power the warp drive engines. Spock offers a course of action: SPOCK: Recommendation one. There's a planet a few light days away from here. Delta Vega. It has a lithium cracking station. We may be able to adapt some of its power packs to our engines. KIRK: And if we can't? We'll be trapped in orbit there. We haven't enough power to blast back out. This implies that the problem with the warp engines is not how they work but lack of energy, and thus that the warp reactor is offline. Thus the ship might be be using the impulse reactor to power the warp drive engines. When they approach Delta Vega, Kirk's log entry says: ...Kelso's task, transport down with a repair party, try to regenerate the main engines, save the ship... The fuel bins in the lithium cracking station apparently contain a lot of energy: KIRK: Can you do it, Lee? KELSO: Maybe, if we can bypass the fuel bins without blowing ourselves up. KIRK: The fuel bins, Lee. Could they be detonated from here? KELSO: A destruct switch? I guess I could wire one up right there. KIRK: Do it. KELSO: Direct to the power bins. From here you could blow up this whole valley. Since the fuel bins are not emptied, the "power packs" taken from the station to the ship may be reactors or generators that use fuel to generate power. Consider the stardate in "Where No Man Has Gone Before". In the opening teaser: Star date 1312.4. After encountering the Galactic barrier: Star date 1312.9. Hours or days later, in the briefing room, Kirk and Spock discuss options and Kirk decides to head for Delta Vega. The next scene is another log entry: Star date 1313.1. We're now approaching Delta Vega. Course set for a standard orbit. 
Soon after they put Mitchell in a cell in the station on Delta Vega. The next stardate is when they are about to leave Delta Vega. Star date 1313.3. In the next line Dehner says: DEHNER: He's been like that for hours now. If 1.0 to 24.0 (up to one day) hours have passed in 0.15 to 0.25 stardate units between 1313.1 and 1313.3, there should be about 4 to 159.99 hours per stardate unit. Right after that Kirk is Knocked unconscious. He is revived by Dr. Piper after minutes or hours. Kirk gives Piper an order before chasing Mitchell and Dehner: KIRK: If you have not received a signal from me within twelve hours, you'll proceed at maximum warp to the nearest Earth base with my recommendation that this entire planet be subjected to a lethal concentration of neutron radiation. No protest on this, Mark. That's an order. When Kirk confronts Michell, Mitchell creates a tombstone indicating that Kirk is about to die on stardate 1313.7. After the confrontation, Kirk calls the Enterprise which is still in orbit. A few minutes or hours after being beamed up, Kirk makes the final log entry: KIRK: Captain's log, Star date 1313.8. Add to official losses, Doctor Elizabeth Dehner. Be it noted she gave her life in performance of her duty. Lieutenant Commander Gary Mitchell, same notation. I want his service record to end that way. He didn't ask for what happened to him. I guess that about 1.00 to 24.00 hours pass between stardates 1313.3 and 1313.8, or in about 0.45 to 0.55 stardate units. Thus a stardate unit should be about 1.81 to 53.33 hours long. If there are 4.0 to 159.99, and also 1.81 to 53.33, hours in a stardate unit, a "Where No Man Has Gone Before" era stardate unit should be 4.0 to 53.33 hours long. Stardates can either measure time in the outside universe which may pass at a different rate than on a starship, or they can measure time aboard a starship which can pass at a different rate than time in the outside universe. Sometime after stardate 1312.9 Kirk decides to go to Delta Vega. On Stardate 1313.1 they are approaching Delta Vega. Thus the trip to Delta Vega takes sometime less than the entire 0.15 to 0.25 stardate units between stardate 1312.9 and stardate 1313.1. If a stardate unit is 4.0 to 53.33 hours long, the trip to Delta Vega takes less than 0.6 to 13.325 hours. Spock says that Delta Vega is: a few light days away from here. A light day is the distance light travels in a day. If Delta Vega was more than 7 light days away Spock would probably call it a light week away. Thus The trip to Delta Vega should have been about 1.0 to 7.0 light days, or about 24.0 to 168.0 light hours. If stardates measure the passage of time in the outside universe, traveling a distance of 24.0 to 168.0 light hours in 0.6 to 13.325 hours would require speeds of 1.801 to 2,800 times the speed of light. And if the normal speed of the Enterprise is about 52.178 to 3,652.5 times faster than that, the normal speed of the Enterprise should be about 93.97 to 10,227,000 times the speed of light. Of course the trip to Delta Vega lasted an unknown amount of time less than 0.6 to 13.225 hours, and thus the speed of the Enterprise should be an unknown amount faster than calculated. So if the Enterprise traveled faster than light to reach delta Vega, there seem like two possible methods. 1) The Enterprise used its impulse engines which could make travel faster than light. 
Probably when the Enterprise was repaired and updated between "where No Man Has Gone before" and the first season of TOS it got upgrade warp engines and the impulse engines were downgraded to merely slower than light speeds. 2) The Enterprise used its impulse reactor to power the warp engines which retained full or partial functionality but couldn't travel as fast as before due to the lower energy levels provided. But what if stardates measure time aboard starships, and not the outside universe? If so, there are three possibilities: 1) Time aboard the Enterprise passed at the same rate as in the outside universe. Thus the Enterprise traveled faster than light on the voyage to Delta Vega and the only possible methods are the same as for stardates measuring time in the outside universe. 2) Time aboard the Enterprise passed much faster than in the outside universe. Thus the Enterprise traveled faster than light on the voyage to Delta Vega and the only possible methods are the same as for stardates measuring time in the outside universe. 3) Time aboard the Enterprise passed much slower than in the outside universe. Thus the voyage to Delta Vega took much more time in the outside universe than aboard the Enterprise. That means that the Enterprise traveled much slower than it would if stardates measure time in the outside universe. This gives two possibilities: 3a) Time traveled slower on the Enterprise, the voyage lasted longer, and the Enterprise traveled slower, but still much faster than light. And the only possible methods are the same as for stardates measuring time in the outside universe. 3b) Time traveled slower on the Enterprise, the voyage lasted longer, and the Enterprise traveled slower, at or slower than the speed of light. And presumably the Enterprise used the impulse engines to travel at or slower than the speed of light. If the Enterprise had an average speed of one percent of the speed of light on the Voyage to Delta Vega it should take about 2,400.0 to 16,800.0 hours - 100 to 700 days. If 0.6 to 13.325 hours passed on the Enterprise the time warps would slow down time 180.112 to 28,000 times. If the Enterprise had an average speed of ten percent of the speed of light on the Voyage to Delta Vega it should take about 240.0 to 1,680.0 hours - 10 to 70 days. If 0.6 to 13.325 hours passed on the Enterprise the time warps would slow down time 18.011 to 2,800 times. As far as I know I am the original creator of the theory that the Enterprise has any time warp capability. Naturally I am in favor the theory that the Enterprise could have and might have traveled to Delta Vega slower than light and using time warps to slow down time aboard the ship. Other Star Trek fans might not like it that much. What other way would be there be for the Enterprise to travel to Delta Vega slower than light with time slowed down? If the Enterprise traveled almost as fast as light time dilation could slow down time aboard the Enterprise enough. I don't know if impulse engines can reach speeds fast enough for significant time dilation. In Star Trek: The Motion Picture the possibly highly upgraded impulse engines of the the Enterprise reached warp 0.5: KIRK: Impulse power, Mister Sulu. Ahead, warp point five. ...Departure angle on viewer. Warp 0.5 should be 0.125 times the speed of light, too slow for significant time dilation. 
If impulse engines can reach speeds fast enough for significant time dilation, then leaving orbit around any plausible habitable planet in any plausible solar system should be incredibly easy for them. When Spock recommends repairing the engines with power packs from the station on Delta Vega, Kirk says: KIRK: And if we can't? We'll be trapped in orbit there. We haven't enough power to blast back out. How could the impulse engines have enough power to accelerated to relativistic speeds with time dilation and then decelerate and yet not have enough power to reach escape velocity from Delta Vega and its sun(s)? Maybe Delta Vega and its sun(s) orbit close to the event horizon of a black hole. But the gravity of ultra dense objects like black holes falls away rapidly with distance. I find it hard to picture Delta Vega and its suns(s) orbiting far enough from the event horizon to avoid tidal disruption and yet having such a strong pull from the black hole's gravity that the escape velocity is too high to achieve with impulse engines capable of relativistic speeds. So the possibilities for the voyage to Delta Vega include: 1) Faster than light impulse engines. 2) Impulse reactor powering warp engines. 3) Slower than light impulse engines with time warps slowing time. 4) Slower than light impulse travel at relativistic velocities and significant time dilation. Different persons will prefer different solutions. If 1), 3), and 4) seem impossible to someone, 2) will be the only solution left. Thus some Star Trek fans will believe that the most likely method of traveling to Delta Vega would be to use the impulse reactor to power the warp engines.
Q: AJAX onreadystatechange: navigate away and save changes at same time When a user clicks a link, I would like to send an AJAX request to save the contents of the current page and navigate away at the same time. Typically, when the window starts navigating away, all pending AJAX requests are aborted, but that may or may not mean that the server has processed the request. If the AJAX is aborted too soon, the changes will not be saved.

The valid readyStates according to W3Schools:

1: server connection established
2: request received
3: processing request
4: request finished and response is ready

Should I wait for number 2 or number 3 to ensure the request goes through on major browsers before navigating away? I acknowledge that by not confirming a successful save in number 4, I risk not letting the user know about a failure in saving changes. But the code is very stable, so once the server receives the request, I am almost 100% sure that if the changes are not saved, the user will have no recourse anyway (post deleted or locked or something like that, and the changes are not that important anyway). The only problem is, if there is an internet connection failure, I need to at least know about that failure in major browsers. Do I have to wait for number 4 to know about that? Assuming I don't even care about connection failures, which one should I wait for?

A: Yes, wait for 4 and check the response. You could pass something back from your server in the POST/GET response to indicate success, then change window.location. Be sure to call preventDefault if you're clicking a link to trigger your AJAX.
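A minimal sketch of the accepted approach, with the endpoint (/save), the link selector and the field being saved as placeholders: block the default navigation, send the save request, and only change location once readyState 4 tells you how the request ended.

$("a.save-and-go").on("click", function (e) {
    e.preventDefault();                       // keep the browser from navigating immediately
    var destination = this.href;
    var xhr = new XMLHttpRequest();
    xhr.open("POST", "/save");
    xhr.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
    xhr.onreadystatechange = function () {
        if (xhr.readyState !== 4) return;     // states 1-3 only mean the request is in flight
        if (xhr.status < 200 || xhr.status >= 300) {
            alert("Your changes could not be saved (connection failure or server error).");
        }
        window.location.href = destination;   // navigate once the outcome is known
    };
    xhr.send("content=" + encodeURIComponent($("#editor").val()));
});

Waiting for readyState 2 or 3 is not a reliable signal across browsers, and a dropped connection typically only surfaces as readyState 4 with status 0, which is another reason to wait for 4.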
Q: Querying the "Annoy" index for all the points within radius r Can I use Spotify's "Annoy" package to query points within radius r? https://github.com/tjrileywisc/annoy I couldn't find any relevant function call in the implementation on their GitHub page. I have used a k-d tree with query_ball_point for this kind of problem, but since Annoy is faster and I have to query billions of points, I am wondering if it's possible using this package.

A: Can I use Spotify's "Annoy" package to query points within radius r? No, the library doesn't provide such a method.
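A common workaround, sketched below, is to over-fetch nearest neighbours and keep only those within r; the index construction is illustrative (dimensionality, metric and vectors are placeholders):

from annoy import AnnoyIndex

f = 40                                  # vector dimensionality
index = AnnoyIndex(f, 'euclidean')
# index.add_item(i, vector) for every point, then:
index.build(10)                         # 10 trees

def query_radius(index, vector, r, k=1000):
    # ask for the k nearest neighbours with distances, then filter by the radius
    ids, dists = index.get_nns_by_vector(vector, k, include_distances=True)
    return [i for i, d in zip(ids, dists) if d <= r]

If all k results fall inside r, the true radius neighbourhood may be larger than k, so retry with a bigger k (or accept the truncation). For billions of points the result stays approximate, which is the trade-off Annoy makes anyway.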
Q: In jQuery, what's the proper way to "move" an element from its parent to another element? Using jQuery 1.4 and jQueryUI 1.8 Specifically, I'm using draggables/droppables, and when dropped, I would like to move the draggable (it's children, events, etc) from belonging to its parent element to be appended/added as a child of the drop target. I know that in the droppable drop option, I can supply the following callback: function(event, ui) { // stuff } where $(this).target will be the drop target, and ui.draggable will be the child element I would like to move - but I'm not sure the proper way to actually perform the move, preserving events, etc. A: append() will remove the element and place it where you want. $(this).target.append(ui.draggable); // or, if $(this).target is not a jQuery object var target = $(this).target; $(target).append(ui.draggable);
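A sketch of the drop callback (the .drop-zone selector is a placeholder): inside a droppable's drop handler, this is the drop target element itself, so appending ui.draggable to $(this) moves the existing DOM node, along with its jQuery data and bound events, instead of cloning it.

$(".drop-zone").droppable({
    drop: function (event, ui) {
        ui.draggable
            .css({ top: 0, left: 0 })   // clear the inline offsets draggable applied while dragging
            .appendTo($(this));         // reparent the node under the drop target
    }
});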
Q: HTML input element password pattern with alphabet and number range I know this may be very simple, but I couldn't find anything relevant. I am using an HTML input element for accepting the password from the user. There are two conditions for the password to be accepted:

It should only contain letters between a and h (0 times or more)
It should only contain numbers between 1 and 8 (0 times or more)

Following the above two conditions, the user can come up with any password combination. For example: abc123, 6ad27, hefb, etc. should be accepted by the input element, but it should not accept patterns like z00911, ksoql234, etc. What should be the value of the pattern attribute in the following code snippet for checking the above two conditions?

<input type="password" pattern="WHAT-SHOULD-I-PUT-HERE">

I hope someone might help. Thank you

A:

<form action="/blablabla">
  <input type="password" pattern="[a-h1-8]+" required="required" title="Wrong password">
  <input type="submit">
</form>

In a regular expression [a-h] means a range of characters, and you can define multiple ranges inside the square brackets: [a-h1-8]. If you want to allow repetitions of the pattern you add * (0 or more repetitions) or + (1 or more repetitions) after it. Your pattern for a single character is [a-h1-8], so for a password containing at least one character the full pattern is [a-h1-8]+. You can read more here. I have also added the required attribute to enforce filling the password field; without that attribute the user could simply leave the password blank.
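A quick way to sanity-check the character class outside the browser is to run the same expression in JavaScript; the pattern attribute is matched against the entire value, so /^[a-h1-8]+$/ mirrors pattern="[a-h1-8]+":

var re = /^[a-h1-8]+$/;
console.log(re.test("abc123"));   // true
console.log(re.test("6ad27"));    // true
console.log(re.test("hefb"));     // true
console.log(re.test("z00911"));   // false (z, 0 and 9 are outside the ranges)
console.log(re.test("ksoql234")); // false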
Q: how to write from linkedlist to a text file using java I wrote this code to write the nodes of this linked list to a text file, but it won't work with FileWriter whenever I try it with System.out.println("n.ModelName"); public void modName() throws IOException{ PrintWriter outputStream = null; outputStream = new PrintWriter(new FileWriter("C:\\Users\\OsaMa\\Desktop\\Toyota.txt")); node n=head; while (n != null){ if(n.Company.equalsIgnoreCase("Toyota")){ outputStream.println(n.ModelName); n=n.next; } else{ n=n.next; } } } A: Try this public void modName() throws IOException{ PrintWriter outputStream = null; outputStream = new PrintWriter("C:\\Users\\OsaMa\\Desktop\\Toyota.txt","UTF-8"); node n=head; while (n != null){ if(n.Company.equalsIgnoreCase("Toyota")){ outputStream.println(n.ModelName); n=n.next; } else{ n=n.next; } } outputStream.close(); } You need to close the stream once writing is done. See Also How to create a file and write to a file in Java?
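A slightly more defensive sketch of the same fix: try-with-resources (Java 7+) closes, and therefore flushes, the writer even if an exception is thrown, which is exactly what the original code was missing (PrintWriter buffers its output, so without close() the file can stay empty). It assumes the same node class and head field as in the question.

public void modName() throws IOException {
    try (PrintWriter outputStream = new PrintWriter(
            new FileWriter("C:\\Users\\OsaMa\\Desktop\\Toyota.txt"))) {
        node n = head;
        while (n != null) {
            if (n.Company.equalsIgnoreCase("Toyota")) {
                outputStream.println(n.ModelName);
            }
            n = n.next;   // advance in both branches, so only one statement is needed
        }
    }   // outputStream is closed (and flushed) automatically here
}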
Q: c# design question - standalone GUI application It's a pleasure to see how much knowledge people have on here, it's a treasure of a place. I've seen myself writing code for DataGridView events - and using DataSource to a backend prepared DataTable object. Sometimes the user can remove rows, update them etc. and the underlying data will need validation checks again. Let's assume we have a person class class Person { public string FirstName { get; set; } } Let's say some other part of the code deals with creating an array of Person. class Processor { public static Person[] Create() { .... .... return person[]; } } And this information would appear on a DataGridView for user viewing. I've tried something like this: public static DataTable ToTable(List<Person> list) { ... } And had this method in the Person class .. which I would think it'd belong to. Then I would bind the DataGridView to that DataTable and the user will then see that data and do their tasks. But I've thought of using BindingList<> which I'm not so educated on yet.. would I still have the same capability of sorting the DataGridView like it does with DataTable as a DataSource? Would BindingList be implemented by a container class like "PersonCollection" or would the Person class implement itself? I would like to fire some events to be able to modify the collection in a clean way without having to reset datasources, etc. Where the user experience could really be affected. I understand that modifying the DataSource DataTable is the good way. But sometimes I need to fire methods in the corresponding class that that specific row refers to, and had an ugly extra hidden column which would hold a reference to the existing object somewhere else (the Person reference). If you guys know a better design solution, I would be more than happy to hear it. Thanks in advance, PS. After reading "The Pragmatic Programmer", I just can't stop thinking critically about code! Leo B. A: Create a business object class. Implement INotifyPropertyChanged. 
Look at the code below: public class Employee:INotifyPropertyChanged { public Employee(string Name_, string Designation_, DateTime BirthDate_) { this.Name = Name_; this.Designation = Designation_; this.BirthDate = BirthDate_; } #region INotifyPropertyChanged Members public event PropertyChangedEventHandler PropertyChanged; #endregion private void NotifyPropertyChanged(String info) { if (PropertyChanged != null) { PropertyChanged(this, new PropertyChangedEventArgs(info)); } } [DisplayName("Employee Name")] public string Name { get { return this._Name; } set { if (value != this._Name) { this._Name = value; NotifyPropertyChanged("Name"); } } } private string _Name = string.Empty; [DisplayName("Employee Designation")] public string Designation { get { return this._Designation; } set { if (value != this._Designation) { this._Designation = value; NotifyPropertyChanged("Designation"); } } } private string _Designation = string.Empty; public DateTime BirthDate { get { return this._BirthDate; } set { if (value != this._BirthDate) { this._BirthDate = value; NotifyPropertyChanged("BirthDate"); } } } private DateTime _BirthDate = DateTime.Today; [DisplayName("Age")] public int Age { get { return DateTime.Today.Year - this.BirthDate.Year; } } } Create your custom collection: public class EmployeeCollection:BindingList<Employee> { public new void Add(Employee emp) { base.Add(emp); } public void SaveToDB() { //code to save to db } } Set the data source: _employeeStore = new EmployeeCollection(); this.dataGridView1.DataBindings.Add("DataSource", this, "EmployeeStore"); Now if you want to add an employee to your datagridview, Employee employee = new Employee(textBoxName.Text, textBoxDesignation.Text, dateTimePicker1.Value); _employeeStore.Add(employee); This is very clean. You just play with business object and don't touch the UI.
Q: For the given data I need to find the count of "a" in the column Key

     Key
----------
0      a
1      a
2      b
3      b
4      a
5      c

So far I tried this:

df.groupby(["key1"],).count()

However, it also shows the counts of b and c; I want only the count of a.

A: Create a boolean mask and count it with sum:

df["Key"].eq('a').sum()
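A minimal, self-contained reproduction of the accepted approach (the column name is assumed to be Key, as in the question):

import pandas as pd

df = pd.DataFrame({"Key": ["a", "a", "b", "b", "a", "c"]})

count_a = df["Key"].eq("a").sum()        # boolean mask, then count the True values
print(count_a)                           # 3

# equivalent spellings
count_a = (df["Key"] == "a").sum()
count_a = df["Key"].value_counts().get("a", 0)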
Q: Codeigniter select box click get product details Am creating a Billing System using Codeigniter. Here i want to select product using droupdown that product details show without refresh the page like ajax. My code is following: View <?php foreach($productlists as $product) { ?> <tr> <td>1</td> <td> <select class="form-control" name="product_id"> <option>--Select Payment--</option> <option value="<?php echo $product['sno']; ?>"><?php echo $product['product_name']; ?></option> </select> </td> <td><input type="number" name="product_order_qty" class="form-control" value="1"></td> <td>0.8</td> <td>0.8</td> <td><input type="number" name="product_price" class="form-control" value="100"></td> <td><input type="number" name="product_total" class="form-control" value="101.60"></td> <td><a class="btn btn-primary waves-effect waves-light btn-xs"><i class="fa fa-plus"></i> Add</a></td> </tr> <?php } ?> <script src="//ajax.googleapis.com/ajax/libs/jquery/1.9.1/jquery.min.js"></script> <script type="text/javascript"> //Ajax code here </script> And Pass the product id to controller controller function productdetails() { $id = $_GET['product_id']; $data['product'] = $this->orders->productdetails($id); $data['title'] = "View Customer Order"; $this->load->view('customer_order_view',$data); } Please help me to pass the product_id using Ajax. Thanks in advance A: <select class="form-control" name="product_id" onchange="getProduct(this.value)"> <option>--Select Payment--</option> <option value="<?php echo $product['sno']; ?>"><?php echo $product['product_name']; ?></option> </select> //your jquery function function getProduct(product_id){ alert(product_id); $.ajax({ url: "your_url", type: "POST", data: { id:product_id, } }).done(function( data ) { // alert(data); }); }
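For completeness, a hypothetical controller method for the "your_url" endpoint used in the answer; the method name is a placeholder, and the point is simply to read the posted id and echo JSON so the done() callback has something to render:

public function product_details()
{
    $id = $this->input->post('id');                  // matches data: { id: product_id } above
    $product = $this->orders->productdetails($id);   // same model call as in the question
    echo json_encode($product);
}

On the JavaScript side, either pass dataType: 'json' to $.ajax or call JSON.parse(data) inside done() before filling the row's inputs.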
Q: Dynamic content and crawlers Will search engine crawlers index my dynamically loaded content using JavaScript and API calls? Or must I load this content through server-side programming (PHP, ASP, etc.)?

A: This has been documented in the webmasters section of the developer pages on Google. Short answer: no, but there are workarounds.
Q: How to do calculation based on previous row results in dplyr I performing some calculations where the result of a row is the input to the next. I'm using a for loop which is quite slow, is there a way I can use dplyr for these types of calculations? example below df <- data.frame(beginning_on_hand = c(10,0,0,0,0,0,0,0,0,0,0,0), sales = c(10,9,4,7,3,7,2,6,1,5,7,1), ship = c(10,9,4,7,3,7,2,6,1,5,7,1)) dfb <- df %>% mutate(receipts = 0) %>% mutate(ending_on_hand = 0) %>% mutate(receipts = lag(ship, 2)) %>% mutate(receipts = if_else(is.na(receipts), 0, receipts)) > dfb beginning_on_hand sales ship receipts ending_on_hand 10 10 10 0 0 0 9 9 0 0 0 4 4 10 0 0 7 7 9 0 0 3 3 4 0 0 7 7 7 0 0 2 2 3 0 0 6 6 7 0 0 1 1 2 0 0 5 5 6 0 0 7 7 1 0 0 1 1 5 0 for(i in 1:(nrow(dfb)- 2)) { dfb$ending_on_hand[i] <- dfb$beginning_on_hand[i] + dfb$receipts[i] - dfb$sales[i] dfb$beginning_on_hand[i+1] <- dfb$ending_on_hand[i] } > dfb beginning_on_hand sales ship receipts ending_on_hand 1 10 10 10 0 0 2 0 9 9 0 -9 3 -9 4 4 10 -3 4 -3 7 7 9 -1 5 -1 3 3 4 0 6 0 7 7 7 0 7 0 2 2 3 1 8 1 6 6 7 2 9 2 1 1 2 3 10 3 5 5 6 4 11 4 7 7 1 0 12 0 1 1 5 0 A: I don't have a dplyr solution for this, but here is a data.table solution for this. df <- data.frame(beginning_on_hand = c(10,0,0,0,0,0,0,0,0,0,0,0), sales = c(10,9,4,7,3,7,2,6,1,5,7,1), ship = c(10,9,4,7,3,7,2,6,1,5,7,1)) dfb <- df %>% mutate(ending_on_hand = 0) %>% mutate(receipts = lag(ship, 2)) %>% mutate(receipts = if_else(is.na(receipts), 0, receipts)) dfb<-data.table(dfb) df.end <- dfb[, ending_on_hand := cumsum(beginning_on_hand + receipts - sales)][, beginning_on_hand := beginning_on_hand + lag(ending_on_hand, default = 0)] >df.end beginning_on_hand sales ship ending_on_hand receipts 1: 10 10 10 0 0 2: 0 9 9 -9 0 3: -9 4 4 -3 10 4: -3 7 7 -1 9 5: -1 3 3 0 4 6: 0 7 7 0 7 7: 0 2 2 1 3 8: 1 6 6 2 7 9: 2 1 1 3 2 10: 3 5 5 4 6 11: 4 7 7 -2 1 12: -2 1 1 2 5 To explain, data.table uses basically lists to comprise the data and displays it in typically a flat-file manner. It uses SQL type instructions to organize and process data. The functions of note used here are cumsum and lag. cumsum calculates all values prior to a particular index, and lag looks for a value above or prior to a given index.
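For the dplyr-only version that was asked about, the same idea can be written with cumsum(), since ending_on_hand is just a running total of beginning_on_hand + receipts - sales and the next row's beginning_on_hand is the previous ending_on_hand. This is a sketch built from the data.table logic above, using the df defined in the question:

library(dplyr)

dfb <- df %>%
  mutate(receipts = lag(ship, 2, default = 0)) %>%
  mutate(ending_on_hand = cumsum(beginning_on_hand + receipts - sales)) %>%
  mutate(beginning_on_hand = beginning_on_hand + lag(ending_on_hand, default = 0))

Like the data.table answer, this fills every row, whereas the original for loop stops two rows short of the end; drop or overwrite the last two rows if that difference matters.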
Q: How to build QT with -reduce-relocations I am using Ubuntu 16.04, cmake 3.10.1, Qt 5.6.2. I used to develop applications on Windows, so I am not sure how to troubleshoot on the Linux platform. When I compile my code, I get this error:

In file included from /usr/local/Qt/5.6.2/5.6/gcc_64/include/QtCore/qcoreapplication.h:37:0,
                 from /usr/local/Qt/5.6.2/5.6/gcc_64/include/QtWidgets/qapplication.h:37,
                 from /usr/local/Qt/5.6.2/5.6/gcc_64/include/QtWidgets/QApplication:1,
                 from /home/sulfred/Documents/SoftwareDev/github/SulfredLee/PcapReplayer/BackEnd/main.cpp:3:
/usr/local/Qt/5.6.2/5.6/gcc_64/include/QtCore/qglobal.h:1087:4: error: #error "You must build your code with position independent code if Qt was built with -reduce-relocations. " "Compile your code with -fPIC (-fPIE is not enough)."
 # error "You must build your code with position independent code if Qt was built with -reduce-relocations. "\
 ^

Q1. How can I verify that my Qt was built with -reduce-relocations?

A: Obviously Qt is already compiled with -reduce-relocations; the error message points out that you must build your own code using the appropriate flags. Related: Error while compiling QT project in cmake https://github.com/wkhtmltopdf/qtbase/commit/36d6eb721e7d5997ade75e289d4088dc48678d0d So just add the -fPIC flag to your compiler flags (the error message itself notes that -fPIE alone is not enough).
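Since the build uses CMake, here is a minimal sketch of one way to get -fPIC onto the compile line (project and target names are placeholders; newer Qt5 CMake packages can propagate the flag through the imported Qt5:: targets on their own, but adding it explicitly is harmless):

cmake_minimum_required(VERSION 3.1)
project(myapp)

find_package(Qt5 COMPONENTS Core Widgets REQUIRED)

add_executable(myapp main.cpp)
target_compile_options(myapp PRIVATE -fPIC)           # what the error message asks for
target_link_libraries(myapp Qt5::Core Qt5::Widgets)

# CMake's generic switch; note that for executables it may emit only -fPIE,
# which this particular Qt build rejects:
# set(CMAKE_POSITION_INDEPENDENT_CODE ON)

As for Q1: the #error in qglobal.h is only compiled when QT_REDUCE_RELOCATIONS is defined, so hitting this error already confirms that the prebuilt Qt you installed was configured with -reduce-relocations; grepping the installed qconfig.h for QT_REDUCE_RELOCATIONS should show the define as well.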
Q: Pakistan Twitter Stream Latitude and Longitude box I want to filter twitter streams by location and language. But facing error. I have used location parameter mentioned in link Passing Longitude and Latitude in Twitter Streaming API of Pakistan Error : 7924 [Twitter Stream consumer-1[Establishing connection]] WARN twitter4j.TwitterStreamImpl - Parameter not accepted with the role. 406:Returned by the Search API when an invalid format is specified in the request. Returned by the Streaming API when one or more of the parameters are not suitable for the resource. The track parameter, for example, would throw this error if: The track keyword is too long or too short. The bounding box specified is invalid. No predicates defined for filtered resource, for example, neither track nor follow parameter defined. Follow userid cannot be read. No filter parameters found. Expect at least one parameter: follow track locations LinkedBlockingQueue<Status> queue = new LinkedBlockingQueue<Status>(1000); SpoutOutputCollector _collector = collector; StatusListener listener = new StatusListener() { @Override public void onStatus(Status status) { System.out.println(status.getLang()); System.out.println(status.getPlace()); System.out.print(status.getText()); } @Override public void onDeletionNotice(StatusDeletionNotice sdn) { } @Override public void onTrackLimitationNotice(int i) { } @Override public void onScrubGeo(long l, long l1) { } @Override public void onException(Exception e) { } @Override public void onStallWarning(StallWarning arg0) { // TODO Auto-generated method stub } }; TwitterStreamFactory fact = new TwitterStreamFactory(new ConfigurationBuilder().setUser(_username).setPassword(_pwd).build()); TwitterStream _twitterStream = fact.getInstance(); _twitterStream.addListener(listener); ArrayList<Long> follow = new ArrayList<Long>(); ArrayList<String> track = new ArrayList<String>(); long[] followArray = new long[follow.size()]; String[] trackArray = track.toArray(new String[track.size()]); /** * Upper/northern latitude that marks the * upper bounds of the geographical area * for which tweets need to be analysed. */ double northLatitude = 35.2; /** * Lower/southern latitude. Marks the lower bound. */ double southLatitude = 25.2; /** * Eastern/left longitude. Marks the left-side bounds. */ double eastLongitude = 62.9; /** * Western/right longitude. Marks the right-side bounds. */ double westLongitude = 73.3; double bb[][] = {{eastLongitude, southLatitude} ,{westLongitude, northLatitude}}; _twitterStream.filter(new FilterQuery(0, followArray,trackArray,bb,new String[]{"en-US"})); . Please help me where am i wrong? A: Try reading Passing Longitude and Latitude in Twitter Streaming API of Pakistan again. It clearly says In order to give the box co-ordinates correctly you need to have them in the format bottom-left-longitude, bottom-left-latitude, top-right-longitude, top-right-latitude. That means west, south, east north. You have: double locationsPakistan[][] = {{eastLongitude, southLatitude} ,{westLongitude, northLatitude}}; Try: double locationsPakistan[][] = {{westLongitude, southLatitude} ,{eastLongitude, northLatitude}}; Update As per your comment after your code edits you now have: double northLatitude = 35.2; double southLatitude = 25.2; double eastLongitude = 62.9; double westLongitude = 73.3; double bb[][] = {{eastLongitude, southLatitude},{westLongitude, northLatitude}}; You are mixing up East/West Left/Right. Your comments show this as you have "Eastern/left longitude. 
Marks the left-side bounds." East is the right side bound. Similarly West is the left side bound. West should be < East. So do this: double northLatitude = 35.2; double southLatitude = 25.2; double westLongitude = 62.9; double eastLongitude = 73.3; double bb[][] = {{westLongitude, southLatitude},{eastLongitude, northLatitude}};
Q: jQuery Ajax file upload:org.springframework.web.bind.MissingServletRequestParameterException: Required String parameter 'upload' is not present I am trying to upload the file, it is working on my local system, but not working in the server. <form class="form-group row" style="height:100px;" id="uploading" method="post" enctype="multipart/form-data"> <div class="col-md-10" align="center"> <div class="form-group row" align="center"> <label class="col-md-2 form-control-label"> File to upload:</label> <div class="col-md-10" > <div class="input-group"> <input type="file" class="filestyle" data-buttonName="btn-primary" name="upload" id="upload" accept="*"/> </div> </div> </div> <div class="form-group row" id="buttonzone"> <div class="col-sm-14"> <div class="input-group"> <button type="submit" class="btn btn-success" id="upload" style="margin-left: 96px;"> <i class="fa fa-cloud-upload"></i> Upload</button> <button type="button" class="btn btn-danger" id="cancel" ><i class="fa fa-ban"></i> Cancel</button> </div> </div> </div> </div> </form> $("form#uploading").submit(function(){ var formData = new FormData($(this)[0]); $.ajax({ url : '/uploadController/upload', type: 'POST', data: formData, async: false, beforeSend: beforeSendHandler, success: function (data){ var msg=data.msg; var obj=data.obj; if(data.success == true){ $('#successmsg').html(msg); $('.alert-success').show(); $('.alert-danger').hide(); setTimeout(function(){ $(".alert-success").alert('close'); }, 10000); }else{ $('#errmsg').html(msg); $('.alert-danger').show(); $('.alert-success').hide(); setTimeout(function(){ $(".alert-danger").alert('close'); }, 10000); } }, cache: false, contentType: false, processData: false }); return false; }); Java code: @RequestMapping(value = "/uploadController/upload",headers=("content-type=multipart/*"), method = RequestMethod.POST) public @ResponseBody StatusResponse totxnsUpload(@RequestParam("upload") MultipartFile upload, HttpServletRequest request, HttpServletResponse response) throws IOException, NoSuchFieldException, SecurityException{ logger.debug(" file upload controller"); //my logic here } I am getting this in browser console: { "timestamp":1495781126083, "status":400, "error":"Bad Request", "exception":"org.springframework.web.bind.MissingServletRequestParameterException", "message":"Required MultipartFile parameter 'upload' is not present", "path":"/uploadController/upload" } But it is working on out of server, I don't what is the problem. A: the parameter "upload" as seen in @RequestParam("upload") MultipartFile upload is a required parameter. If it is working in some systems it means that it is getting a parameter named "upload". In your case it fails because it is not present in the request. You do have an input named upload in your form though. But I can see you are trying to send form data using ajax. Can you see the request in browser dev tools network tab? Also place a breakpoint in your totxnsUpload method and see if you are getting two form submit requests (one standard and one with ajax) for debugging purposes you can set upload parameter to optional in your Java code with this replacement @RequestParam(value = "upload", required = false) MultipartFile upload With that being said. If the exact same code is working on your machine but not working on the server, you might need to configure your context. Take a look at this How to use HttpServletRequest#getParts() in a servlet filter running on Tomcat?
{ "pile_set_name": "StackExchange" }
Q: Analyzer warning about incorrect decrement of reference count I just installed Xcode 4 and opened an earlier version of my app. The analyzer is reporting for this line: [self.myViewControllerObject release]; incorrect decrement of the reference count of an object that is not owned at this point by the caller I didn't enable ARC for my project. When I analyze v2.0 of my app in Xcode 3.2.5, it doesn't show any potential error. Header: @class MyViewController; MyViewController *myViewControllerObject; @property ( nonatomic , retain ) MyViewController *myViewControllerObject; Implementation: #import "MyViewController.h" @synthesize myViewControllerObject; When a button is clicked I have: TRY 1: self.myViewControllerObject = [[MyViewController alloc]initWithNibName:@"MyViewController" bundle:nil]; [self.navigationController pushViewController:self.myViewControllerObject animated:YES]; [self.myViewControllerObject release]; TRY 2: MyViewController *temp = [[MyViewController alloc]initWithNibName:@"MyViewController" bundle:nil]; self.myViewControllerObject = temp; [temp release]; [self.navigationController pushViewController:self.myViewControllerObject animated:YES]; [self.myViewControllerObject release]; TRY 3: self.myViewControllerObject = [[MyViewController alloc]initWithNibName:@"MyViewController" bundle:nil]; [self.navigationController pushViewController:self.myViewControllerObject animated:YES]; In the dealloc method, I release it: [self.myViewControllerObject release]; A: The warning comes from you calling release on a property through the accessor: when you do [self.myViewControllerObject release] you are actually calling the accessor method myViewControllerObject and then release on the return value. Since the name of the method does not begin with new, copy, or mutableCopy, you do not own the object it returns, hence you are not “allowed” to release it. The solution is to never call release on the return value of that accessor, so basically your try #2 was fine: MyViewController *temp = [[MyViewController alloc] initWithNibName:@"MyViewController" bundle:nil]; self.myViewControllerObject = temp; [self.navigationController pushViewController:temp animated:YES]; [temp release]; But in dealloc do not use the accessor, rather: [myViewControllerObject release]; If you need to release myViewController other than in dealloc, assign nil through the setter: self.myViewControllerObject = nil; Edit: For more on the subject, see Apple's Advanced Memory Management Guide. A: As I used the XCode 4 Beta version, the problem occurred. But when I tried it in the XCode 4 version, the Analyzer warning didn't occur to me. I Thank you to all whoever participated to help me.Thank you for your time.
{ "pile_set_name": "StackExchange" }
Q: Reviewers and incorrect article references - Do they adjust them? Let's say that an academic paper has been submitted and that it contains some references which are not cited correctly by the authors, e.g. volume, pages, issue are wrong. Do the peer reviewers adjust them or they get back to the authors saying that they are wrong and need to be checked? Thanks A: Peer reviewers never make changes to the paper they review. It is not their task to do so, and they do not have the means anyway. If they do notice mistakes, they will ask the author(s) to fix them. For mistakes such as minor errors in references, I consider it unlikely that the reviewers will notice them at all. If I want to check out a reference, I will either go by doi or the provided link, or simply google author + title -- thus, I wouldn't notice a wrong volume number. In some journals, the editing staff might check and adjust references. They would then just make a note of that in the page proofs, such that the author(s) can check that the changes are indeed correct.
{ "pile_set_name": "StackExchange" }
Q: SQL Query syntax check Can someone write this requirement out for me. I can't seem to load the null values alongside all other conditions. -- (1) None of the artists were 80 or over -- and none were 50 or younger when they -- died. -- (2) None of the artists were aged 54, 56, -- 71 or 76 when they died. -- (3) Some of the artists are still alive. -- (4) Artists who are both German and -- specialise in Photographic art ('Photo') -- are excluded. -- (5) None of the artists have the letter 'o' -- anywhere in their last name. -- (6) None of the artists have the first name -- 'Hannah', 'Julia' or 'Frasier'. select artist_id, first_name, last_name, died - born AS age, speciality, nationality from simon_antiques where Died - born Not in (54, 56, 71, 76) AND first_name NOT IN ('Hannah', 'Julia', 'Frasier') AND last_name not like '%o%' AND (nationality <> 'German' or speciality <> 'Photo') AND (died - born between 50 and 80 or died is null) Order by last_name This loads all the results but not the artists who are still alive. If i bring the condition out of a bracket it will bring back all people who are alive regardless of other conditions. A: Your first clause in your WHERE is only going to match rows where died is not null. given that, you can reorder the clauses something like: WHERE first_name NOT IN ('Hannah', 'Julia', 'Frasier') AND last_name NOT LIKE '%o%' AND (nationality <> 'German' OR speciality <> 'Photo') AND ((died - born BETWEEN 50 AND 80 AND Died - born NOT IN (54, 56, 71, 76)) OR died IS NULL) This should bring back those that are currently alive.
{ "pile_set_name": "StackExchange" }
Q: Android start new activity when clicking widget I am new to coding Java and XML for Android applications and I wanted to know how I start/open a new activity when clicking on something. In this case I am using a relative layout to keep an image and text together as one object. What I want to do is: when you click on it, it will start/open the new activity. How do I do this? Could someone tell me step by step, since I am quite new to this. A: First of all, if you want your layout (RelativeLayout) to act like a button (and not handle onClick on the layout's child components), first set the attribute android:clickable="true" on the RelativeLayout in your XML layout file. Or you can do this directly in your code (in the onCreate method): relativeLayout.setClickable(true); Then you need to set an OnClickListener for your layout. You can do this simply by creating an anonymous class: relativeLayout.setOnClickListener(new OnClickListener() { @Override public void onClick(View v) { Intent startActivityIntent = new Intent(getApplicationContext(), YourDesiredActivity.class); startActivity(startActivityIntent); } }); UPDATE: The layout is defined in an XML file; of course, in Android you can also build it in code, but it is better to use an XML file. In your IDE you have the folder res->layout; this is where you should place your layout files. For example, a layout named relative_root_layout.xml: <?xml version="1.0" encoding="utf-8"?> <RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android" android:id="@+id/relative_layout" android:layout_height="wrap_content" android:layout_width="wrap_content"> <ImageView android:id="@+id/image_view" android:layout_width="wrap_content" android:src="@drawable/icon" android:layout_height="wrap_content" android:layout_alignParentTop="true" /> <TextView android:id="@+id/text_view" android:layout_width="wrap_content" android:layout_height="wrap_content" android:textSize="20sp" android:layout_toRightOf="@+id/image_view" android:text="Relative layout" /> </RelativeLayout> But in case you have only text and an image, it is better to use <Button android:id="@+id/button" android:layout_width="wrap_content" android:layout_height="wrap_content" android:drawableLeft="@android:drawable/btn_image" android:text="Button with Image" android:gravity="left" android:drawablePadding="10dp" /> How can you access your widgets? This is a very basic thing you have to know if you are developing for Android; it is an essential part. Please read the documentation, read books, watch tutorials, or whatever. In short, you need to inflate the layout in the activity's onCreate() method: RelativeLayout mRelativeLayout; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.relative_root_layout); mRelativeLayout = (RelativeLayout) findViewById(R.id.relative_layout); mRelativeLayout.setOnClickListener(.....); } But again, these are very basic things you must know.
{ "pile_set_name": "StackExchange" }
Q: Hibernate hbm2ddl.auto default value What is the default value of hibernate.hbm2ddl.auto in hibernate cfg file mapping is it possible to remove <property name="hibernate.hbm2ddl.auto">update</property> this mapping from config file if i remove this property whether it affect my DB ??? A: That is really the answer: no validation, no update, no creation and no dropping takes place when omitting the setting from your configuration. The hibernate source code is the best documentation on Hibernate: // from org.hibernate.cfg.SettingsFactory line 332 (hibernate-core-3.6.7) String autoSchemaExport = properties.getProperty(Environment.HBM2DDL_AUTO); if ( "validate".equals(autoSchemaExport) ) settings.setAutoValidateSchema(true); if ( "update".equals(autoSchemaExport) ) settings.setAutoUpdateSchema(true); if ( "create".equals(autoSchemaExport) ) settings.setAutoCreateSchema(true); if ( "create-drop".equals(autoSchemaExport) ) { settings.setAutoCreateSchema(true); settings.setAutoDropSchema(true); } A: Just omitting hibernate.hbm2ddl.auto defaults to Hibernate not doing anything. Already asked in SO . link A: Automatically validates or exports schema DDL to the database when the SessionFactory is created. With create-drop, the database schema will be dropped when the SessionFactory is closed explicitly. validate | update | create | create-drop validate- existing schema update- only update your schema once created create- create schema every time
{ "pile_set_name": "StackExchange" }
Q: Java - Adding 2 objects in an ArrayList I'm pretty new to programming so I need help. I wanna add the SubjectGrades to the studentList ArrayList. But I think I'm doing the wrong way. What should I do for me to add the SubjectGrades to the ArrayList? Thanks Here's my partial Main class. import java.util.Scanner; import java.util.ArrayList; public class Main { private static Scanner in; public static void main(String[] args) { ArrayList<Student> studentList = new ArrayList<Student>(); //ArrayList<SubjectGrades> Grades = new ArrayList<SubjectGrades>(); in = new Scanner(System.in); String search, inSwitch1, inSwitch2; int inp; do { SubjectGrades sGrade = new SubjectGrades(); Student student = new Student(); System.out.println("--------------------------------------"); System.out.println("What do you want to do?"); System.out.println("[1]Add Student"); System.out.println("[2]Find Student"); System.out.println("[3]Exit Program"); System.out.println("--------------------------------------"); inSwitch1 = in.next(); switch (inSwitch1) { case "1": System.out.println("Input student's Last Name:"); student.setLastName(in.next()); System.out.println("Input student's First Name:"); student.setFirstName(in.next()); System.out.println("Input student's course:"); student.setCourse(in.next()); System.out.println("Input student's birthday(mm/dd/yyyy)"); student.setBirthday(in.next()); System.out.println("Input Math grade:"); student.subjectGrade.setMathGrade(in.nextDouble()); System.out.println("Input English grade:"); student.subjectGrade.setEnglishGrade(in.nextDouble()); System.out.println("Input Filipino grade:"); student.subjectGrade.setFilipinoGrade(in.nextDouble()); System.out.println("Input Java grade:"); student.subjectGrade.setJavaGrade(in.nextDouble()); System.out.println("Input SoftEng grade:"); student.subjectGrade.setSoftEngGrade(in.nextDouble()); studentList.add(student); studentList.add(student.setSubjectGrade(sGrade)); //Here it is that I want to add break; //end case 1 Here is my Student Class. 
package santiago; public class Student { private String lastName; private String firstName; private String course; private String birthday; SubjectGrades subjectGrade = new SubjectGrades(); public SubjectGrades getSubjectGrade() { return subjectGrade; } public void setSubjectGrade(SubjectGrades subjectGrade) { this.subjectGrade = subjectGrade; } public String getLastName() { return lastName; } public void setLastName(String lastName) { this.lastName = lastName; } public String getFirstName() { return firstName; } public void setFirstName(String firstName) { this.firstName = firstName; } public String getCourse() { return course; } public void setCourse(String course) { this.course = course; } public String getBirthday() { return birthday; } public void setBirthday(String birthday) { this.birthday = birthday; } } And my SubjectGrades class package santiago; public class SubjectGrades{ Double mathGrade, englishGrade, filipinoGrade, javaGrade, softEngGrade, weightedAverage; public Double getMathGrade() { return mathGrade; } public void setMathGrade(Double mathGrade) { this.mathGrade = mathGrade; } public Double getEnglishGrade() { return englishGrade; } public void setEnglishGrade(Double englishGrade) { this.englishGrade = englishGrade; } public Double getFilipinoGrade() { return filipinoGrade; } public void setFilipinoGrade(Double filipinoGrade) { this.filipinoGrade = filipinoGrade; } public Double getJavaGrade() { return javaGrade; } public void setJavaGrade(Double javaGrade) { this.javaGrade = javaGrade; } public Double getSoftEngGrade() { return softEngGrade; } public void setSoftEngGrade(Double softEngGrade) { this.softEngGrade = softEngGrade; } public Double getWeightedAverage(){ weightedAverage = ((mathGrade + englishGrade + filipinoGrade + javaGrade + softEngGrade)*3) / 15; return weightedAverage; } public String getScholarStatus(){ String status = ""; if(weightedAverage <= 1.5) { status = "full-scholar"; } else if (weightedAverage <= 1.75){ status = "half-scholar" ; } else { status = "not a scholar"; } return status; } } A: Your mistake: studentList.add(student); studentList.add(student.setSubjectGrade(sGrade)); You are adding the student, then trying to add a void. The return value of setSubjectGrade is void, so nothing will be added: Just do: student.setSubjectGrade(sGrade); studentList.add(student); Where sGrade is an Object of type SubjectGrades, which was populated in the same way student.subjectGrade.setSoftEngGrade(in.nextDouble()); was populated.
{ "pile_set_name": "StackExchange" }
Q: OpenStack AutoPilot failing at 98% I'm trying to deploy OpenStack using Landscape's Autopilot. I get no errors till the final steps but after reaching 98% of the install two tasks fail to completed. The two tasks that never reach completeion are: "Add ubuntu-12.04-server-cloudimg-amd64-disk1 to Glance" "Add ubuntu-14.04-server-cloudimg-amd64-disk1 to Glance" A: Those images are downloaded from the Internet to make Ubuntu available in the Horizon dashboard to launch cloud instances with. Autopilot is trying to put them in place before your first login. Can your newfangled OpenStack setup connect to the Internet?
{ "pile_set_name": "StackExchange" }
Q: Reading from a file that keeps changing There is a text file that is constantly being modified at some interval. Roughly every ~5-15 seconds a new record appears in it; it should immediately be read by the program and, after checking some conditions, the record should appear in a window of the graphical interface. My attempts with loops were futile, or the program worked but not quite the way it should. How can such a task be solved? Thanks in advance. A: The behaviour may depend on the platform, but you can simply keep trying to read further from the file after EOF periodically (or, in the worst case, remember the last position and call file.seek(last_position) on a reopened file), assuming that new lines are only appended at the end of the file (as in a log file)—there are no other changes. For example, to show in the GUI the last line of the file that matches a given regular expression (an analogue of tail -f file | grep -Pe regex): #!/usr/bin/env python3 """Usage: grep-tail <regex> <file>""" import collections import functools import re import sys import tkinter.messagebox def filter_lastline(file, predicate): """Find the last line in *file* that satisfies *predicate*.""" lines = collections.deque(filter(predicate, file), maxlen=1) try: return lines.pop().rstrip('\n') except IndexError: return '' # not found def update_label(root, label, last_line): current = label['text'] new = last_line() if new and current != new: label['text'] = new # update label root.after(1000, update_label, root, label, last_line) # poll in a second def main(): root = tkinter.Tk() root.withdraw() # hide the main window try: # handle command-line arguments regex_string, path = sys.argv[1:] found = re.compile(regex_string).search file = open(path) except Exception as e: tkinter.messagebox.showerror('wrong command-line arguments', 'error: %s\n%s' % (e, __doc__), parent=root) sys.exit(__doc__) last_line = functools.partial(filter_lastline, file, found) label = tkinter.Label(root, text=last_line() or '<nothing matched %r>' % regex_string) label.pack() update_label(root, label, last_line) # start polling # center window root.eval('tk::PlaceWindow %s center' % root.winfo_pathname(root.winfo_id())) root.mainloop() main() Example: $ ./grep-tail 'python[23]' /var/log/syslog Notes on the implementation: a file is an iterator over its lines in Python, so filter(predicate, file) generates the lines of the file that satisfy the predicate(line) criterion (a regular expression in this case). deque(it, maxlen=1) consumes the iterator, keeping at most the last element. On a repeated call of filter_lastline(file, predicate), file is read from the last position (EOF—the previous end of the file). Whether new lines can be read this way without reopening the file may depend on the platform. root.after(1000, f, *args) calls f(*args) in a second, so: def f(*args): # do something # continue loop root.after(1000, f, *args) creates a loop without blocking the GUI. You cannot write: def loop(): while True: f(*args) time.sleep(1) because loop() would block the GUI and the call would have to be moved to a separate thread/process. root.after() allows f(*args) to be called in the GUI thread and to modify label without problems. If the file changes rarely, then for efficiency you can use the watchdog module, so that update_label() is called only when the file has actually changed (in the on_modified() callback).
In this case (updates every 5-15 seconds), using watchdog would be an unnecessary complication (a third-party dependency + integration with the event loop): #!/usr/bin/env python3 """Usage: grep-tail <regex> <file>""" import collections import functools import os import re import sys import tkinter.messagebox from watchdog.observers import Observer # $ pip install watchdog from watchdog.events import FileSystemEventHandler def filter_lastline(file, predicate): """Find the last line in *file* that satisfies *predicate*.""" lines = collections.deque(filter(predicate, file), maxlen=1) try: return lines.pop().rstrip('\n') except IndexError: return '' # not found def update_label(root, label, last_line): current = label['text'] new = last_line() if new and current != new: label['text'] = new # update label def main(): root = tkinter.Tk() root.withdraw() # hide the main window try: # handle command-line arguments regex_string, path = sys.argv[1:] found = re.compile(regex_string).search file = open(path) except Exception as e: tkinter.messagebox.showerror('wrong command-line arguments', 'error: %s\n%s' % (e, __doc__), parent=root) sys.exit(__doc__) last_line = functools.partial(filter_lastline, file, found) label = tkinter.Label(root, text=last_line() or '<nothing matched %r>' % regex_string) label.pack() class EventHandler(FileSystemEventHandler): def on_modified(self, event): if event.src_path == path: update_label(root, label, last_line) observer = Observer() observer.schedule(EventHandler(), os.path.dirname(path)) observer.start() # center window root.eval('tk::PlaceWindow %s center' % root.winfo_pathname(root.winfo_id())) root.mainloop() observer.stop() observer.join() main() Unlike the previous version, update_label() is called only if the input file has been modified: there is no root.after() call. It is assumed that complete lines are written—reasonable for log files and for lines smaller than the buffer size; otherwise update_label() should be adjusted to accumulate the data on each call until a new line is encountered.
{ "pile_set_name": "StackExchange" }
Q: csh \ which $SHELL still gives /bin/bash I need to switch to c-shell and after installing it via software-center it looks like I am ready to go. Nevertheless, when I type 'csh' the line changes to %_ I am still not in a c-shell. When typing which $SHELL I get /bin/bash Also my program is recognizing I am wrong and gives me error messages. I guess there is a simple solution? thanks in advance A: The SHELL environment variable does not indicate what shell you are currently using. It is simply set, when you log in, to the value of the login shell field of /etc/passwd, which in your case is /bin/bash. If you want to change your login shell, run chsh (change shell). The login shell set in /etc/passwd controls, among other things, what shell is run when you open a terminal emulator, such as gnome-terminal. To see what shell you are currently using, try ps -p $$
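For example, a quick way to see the difference (paths and the PID below are illustrative):

$ echo $SHELL
/bin/bash
$ csh
% ps -p $$
  PID TTY          TIME CMD
 4242 pts/0    00:00:00 csh
% echo $SHELL
/bin/bash

Here you really are inside csh (ps shows it), even though SHELL still says /bin/bash. To make csh your login shell, run chsh -s /bin/csh (use whatever path which csh reports on your system) and log out and back in.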
{ "pile_set_name": "StackExchange" }
Q: Changing ListBoxItem colour based on Property on ViewModel I have a listbox like this: <ListBox ItemsSource="{Binding Users}" SelectedItem="{Binding CurrentSelectedUser}" DisplayMemberPath="Username"/> Users is a Observable collection of User which is a class with 2 properties which is Username and Password. I then have a property called CurrentUser on my view model. What I want to do is change the colour of the listboxs item if the Text on it is equal to CurrentUser.Username. Here is what I have tried so far: <ListBox ItemsSource="{Binding Users}" SelectedItem="{Binding CurrentSelectedUser}" DisplayMemberPath="Username"> <ListBox.ItemContainerStyle> <Style BasedOn="{StaticResource {x:Type ListBoxItem}}" TargetType="{x:Type ListBoxItem}"> <Style.Triggers> <DataTrigger Binding="{Binding Content.Username}" Value="{Binding CurrentUser.Username}"> <Setter Property="Background" Value="Green"></Setter> </DataTrigger> </Style.Triggers> </Style> </ListBox.ItemContainerStyle> </ListBox> This doesn't work. Is there any way to do this? I know that Value is not a dependency property. But I want to do something like this. A: It's not compiling because value is not a dependency property, said that you cannot use binding in a non dependency property. You can use IMultiValueConverter to return the color according with the parameter received, here's an example. Converter: public class Converter : IMultiValueConverter { public Converter() { } public object Convert(object[] values, Type targetType, object parameter, CultureInfo culture) { var currentPersonName = values[0].ToString(); var listItemPersonName = values[1].ToString(); return currentPersonName == listItemPersonName ? Brushes.Red : Brushes.Black; } public object[] ConvertBack(object value, Type[] targetTypes, object parameter, CultureInfo culture) { throw new NotImplementedException(); } } Here you will receive the two names by parameter, so you can compare and return the color you want. You pass these two values by Multibinding, here's the XAML. XAML: <Window.Resources> <local:Converter x:Key="converter"/> <Style x:Key="style" TargetType="ListBoxItem"> <Setter Property="Foreground"> <Setter.Value> <MultiBinding Converter="{StaticResource converter}"> <MultiBinding.Bindings> <Binding Path="DataContext.CurrentPerson.UserName" RelativeSource="{RelativeSource AncestorType={x:Type Window}}"/> <Binding Path="UserName"/> </MultiBinding.Bindings> </MultiBinding> </Setter.Value> </Setter> </Style> </Window.Resources> <ListBox ItemsSource="{Binding Persons}" DisplayMemberPath="{Binding UserName}" ItemContainerStyle="{StaticResource style}" SelectedItem="{Binding SelectedPerson}"> </ListBox> I did a style just like you did, but instead use DataTrigger I used a Multibinding to pass the values to be compared to the converter. In the first binding I retrieve the userName of the current person in my viewModel, to do this I need specify where is the object, this is the reason of relativeSource. In the second binding, I just get the Property UserName directly of the ListItemBox DataContext, which has an object of type Person bind to it. And that is it, it works like expected.
{ "pile_set_name": "StackExchange" }
Q: How to skip a website that gives an HTTP 403 error code in Python 3? I have a list of URLs that I am trying to check using urllib. It's working just fine until it encounters a website that blocks the request. In that case I just want to skip it and continue to the next URL from the list. Any idea how to do it? Here is the full error: Traceback (most recent call last): File "C:/Users/Goris/Desktop/ssser/link.py", line 51, in <module> x = urllib.request.urlopen(req) File "C:\Users\Goris\AppData\Local\Programs\Python\Python36-32\lib\urllib\request.py", line 223, in urlopen return opener.open(url, data, timeout) File "C:\Users\Goris\AppData\Local\Programs\Python\Python36-32\lib\urllib\request.py", line 532, in open response = meth(req, response) File "C:\Users\Goris\AppData\Local\Programs\Python\Python36-32\lib\urllib\request.py", line 642, in http_response 'http', request, response, code, msg, hdrs) File "C:\Users\Goris\AppData\Local\Programs\Python\Python36-32\lib\urllib\request.py", line 570, in error return self._call_chain(*args) File "C:\Users\Goris\AppData\Local\Programs\Python\Python36-32\lib\urllib\request.py", line 504, in _call_chain result = func(*args) File "C:\Users\Goris\AppData\Local\Programs\Python\Python36-32\lib\urllib\request.py", line 650, in http_error_default raise HTTPError(req.full_url, code, msg, hdrs, fp) urllib.error.HTTPError: HTTP Error 403: Forbidden A: The error you're seeing simply indicates that the server has marked the requested resource - that is, the URL you're trying to access - as forbidden to you. It doesn't give any indication of why the resource is forbidden, although the most common reason for such an error is that you would need to log in first. But anyway, it doesn't really matter. The way to skip this page and move on to the next one is to catch the raised error and ignore it. If your URL-accessing code is within a loop, like this: while <condition>: x = urllib.request.urlopen(req) <more code> or for req in <list>: x = urllib.request.urlopen(req) <more code> then probably the easiest way to catch and ignore the error is this: while <condition>: try: x = urllib.request.urlopen(req) except urllib.error.HTTPError as e: if e.code in (..., 403, ...): continue <more code> where continue jumps immediately to the next iteration of the loop. Or you could move the processing code to a function: def process_url(x): <more code> while <condition>: try: x = urllib.request.urlopen(req) except urllib.error.HTTPError as e: if e.code in (..., 403, ...): continue else: process_url(x) else: process_url(x) On the other hand, if your URL accessing code is already in a function, you can just return. def access_url(req) try: x = urllib.request.urlopen(req) except urllib.error.HTTPError as e: if e.code in (..., 403, ...): return <more code> I strongly advise you to learn about the HTTP status codes, and be aware of the errors that urllib.request can generate.
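Putting it together, a minimal self-contained version of such a loop might look like this (the URL list is just an example):

import urllib.request
import urllib.error

urls = ["https://example.com/a", "https://example.com/b"]  # your list of URLs

for url in urls:
    try:
        with urllib.request.urlopen(url) as response:
            html = response.read()
    except urllib.error.HTTPError as e:
        if e.code == 403:
            continue  # skip forbidden pages and move on to the next URL
        raise         # re-raise anything we did not expect
    # ... process html here ...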
{ "pile_set_name": "StackExchange" }
Q: Adding coordinates to an array in Python 3 So I have image data which I am iterating through in order to find the pixel which have useful data in them, I then need to find these coordinates subject to a conditional statement and then put these into an array or DataFrame. The code I have so far is: pix_coor = np.empty((0,2)) for (x,y), value in np.ndenumerate(data_int): if value >= sigma3: pix_coor.append([x,y]) where data is just an image array (129,129). All the pixels that have a value larger than sigma3 are useful and the other ones I dont need. Creating an empty array works fine but when I append this it doesn't seem to work, I need to end up with an array which has two columns of x and y values for the useful pixels. Any ideas? A: You could simply use np.argwhere for a vectorized solution - pix_coor = np.argwhere(data_int >= sigma3)
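A tiny sanity check with made-up values shows the shape of the result — one (row, column) pair per matching pixel:

import numpy as np

data_int = np.array([[1, 7],
                     [9, 2]])
sigma3 = 5

pix_coor = np.argwhere(data_int >= sigma3)
print(pix_coor)        # [[0 1]
                       #  [1 0]]
print(pix_coor.shape)  # (2, 2): two useful pixels, two coordinates each

For your 129x129 image the result will simply be an (N, 2) array, where N is the number of pixels at or above sigma3.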
{ "pile_set_name": "StackExchange" }
Q: Are two similar matrices A and B unique? Are two similar matrices A and B unique? As in if A is similar to B, is it similar to B and itself only? A: Not even almost. Most matrices are even similar to infinitely many other matrices. An example is a matrix with the numbers $1,2,3,...,n$ on the diagonal, and zeros everywhere else. This matrix is similar to all of the other matrices with any permutation of $1,2,3,...,n$ on their diagonal. The only matrices which are similar to only one matrix are matrices which are scalar multiples of the identity. They are similar only to themselves.
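A concrete $2\times2$ illustration: take $A=\begin{pmatrix}1&0\\0&2\end{pmatrix}$ and the permutation matrix $P=\begin{pmatrix}0&1\\1&0\end{pmatrix}$ (so $P^{-1}=P$). Then $P^{-1}AP=\begin{pmatrix}2&0\\0&1\end{pmatrix}$, a different matrix that is similar to $A$. By contrast, if $A=cI$ then $P^{-1}AP=cP^{-1}P=cI$ for every invertible $P$, which is why only scalar multiples of the identity are similar to themselves alone.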
{ "pile_set_name": "StackExchange" }
Q: Access to fetch at has been blocked by CORS policy while accessing from another subdomain I have been getting below errors while trying to access graphql url from https://subdomain-b.abc.com/ service. POST https://subdomain-a.abc.com/graphql 504 Access to fetch at 'https://subdomain-a.abc.com/graphql' from origin 'https://subdomain-b.abc.com/' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource. If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled. Both services are running on same aws account on Fargate with ELB sitting in front of them. Route 53 is redirecting requests to ELB. Most of the answers I googled are related to S3 bucket which is not the case with my setup. Let me know if I can provide some more details. A: Your GraphQL server need to add the Access-Control-Allow-Origin HTTP header to its responses. Here is a good and comprehensive article about what CORS does and why you need it https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS
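Concretely, the responses from subdomain-a — including the response to the OPTIONS preflight the browser sends before the POST — need headers along these lines (the origin value is taken from your error message, the rest is a typical minimum):

Access-Control-Allow-Origin: https://subdomain-b.abc.com
Access-Control-Allow-Methods: POST, OPTIONS
Access-Control-Allow-Headers: Content-Type

If the GraphQL service happens to be Node/Express (an assumption — adapt this to whatever server you actually run), the cors middleware adds them for you:

const cors = require('cors');
app.use(cors({
  origin: 'https://subdomain-b.abc.com'  // the origin allowed to call this API
}));

Also note the 504 in your log: a gateway-timeout response from the load balancer will not carry these headers either, so the upstream timeout is worth investigating separately.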
{ "pile_set_name": "StackExchange" }
Q: Javascript/jQuery : Creating a 'Powered By' slideout for footer logo on websites I found a jsfiddle that did something similar to what I wanted to do. The modified version is located here: http://jsfiddle.net/7m7uK/479/ and it works on the jsfiddle. I copied the code to my site, changed the id's and now, it doesn't appear to be working. Below is the code located on my website. I am using jQuery 1.9.1 and jQuery UI 1.10.3 on my site. Any suggestions as to why this isn't working? Javascript <script type="text/javascript"> $( document ).ready(function() { $('#footer_logo').hover(function(){ if ($('#powered_by').is(':hidden')) { $('#powered_by').show('slide',{direction:'right'},1000); } else { $('#powered_by').hide('slide',{direction:'right'},1000); } }); }); </script> HTML <img src="img.png" width="63" height="25" id="footer_logo"/> <div id="powered_by" width="100px"/>Powered By: </div> CSS #footer_logo { color: #000; cursor:pointer; display:block; position: absolute; bottom: 0; right: 0; z-index: 100; } #powered_by { width: 200px; height: 20px; display: none; position: absolute; right: 0; bottom: 0; background: #ff0000; z-index: 99; } A: I tried your code on jsfiddle and its working well. If the issue still persist check this SO post: JQuery UI show/slide not working correctly, maybe their solution can help. You want to show #powered_by when you hover-in, then hide it when the you hover-out, right? I looked into your code and it's not how you properly want it to behave. For example if you hover-in, the element slides, but when you hover-out then hover-in again without letting it finish hiding, the hovering execution will be reversed. It will be more efficient if you do it this way: $(document).ready(function() { $('#footer_logo').hover(function(){ //hover-in $('#powered_by').show('slide',{direction:'right'},1000); },function(){ //hover-out $('#powered_by').hide('slide',{direction:'right'},1000); }); }); See this jsfiddle. Or with pure jQuery: jsfiddle
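One extra detail worth adding: if the pointer moves in and out quickly, the slide animations queue up and keep playing after the mouse has left. Clearing the queue with .stop() before starting the next animation keeps it responsive — same code as above, just with the extra call:

$('#footer_logo').hover(function(){
    $('#powered_by').stop(true, true).show('slide', {direction: 'right'}, 1000);   // hover-in
}, function(){
    $('#powered_by').stop(true, true).hide('slide', {direction: 'right'}, 1000);   // hover-out
});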
{ "pile_set_name": "StackExchange" }
Q: ORA-24247: Access denied by ACL from within PL/SQL function but NOT from SQL I've successfully set up the ACL for my user and URL. I confirm this by running: select utl_http.request(*my URL*) from dual; which returns the corresponding HTML code. However, when I place this code inside a PL/SQL function, as follows: create or replace function temp_func (p_url varchar2) return varchar2 is v_output varchar2(1000); begin select utl_http.request(p_url) into v_output from dual; return v_output; end; and run this code from an anonymous PL/SQL block: declare v_result varchar2(1000); begin v_result := temp_func(*my URL*); dbms_output.put_line(v_result); end; I get the following error stack: Error report - ORA-29273: HTTP request failed ORA-06512: at "SYS.UTL_HTTP", line 1722 ORA-24247: network access denied by access control list (ACL) ORA-06512: at line 1 ORA-06512: at "SIEF.TEMP_FUNC", line 7 ORA-06512: at line 4 29273. 00000 - "HTTP request failed" *Cause: The UTL_HTTP package failed to execute the HTTP request. *Action: Use get_detailed_sqlerrm to check the detailed error message. Fix the error and retry the HTTP request. Is there any way to fix this? I was reading https://support.oracle.com/knowledge/Oracle%20Database%20Products/1074843_1.html and the closest thing I find is: '4. Granting the ACL via roles does not work when the service is requested through from a PLSQL procedure', however, I did not use roles while setting up the ACL. Thank you! My database version: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production PL/SQL Release 11.2.0.3.0 - Production "CORE 11.2.0.3.0 Production" TNS for Linux: Version 11.2.0.3.0 - Production NLSRTL Version 11.2.0.3.0 - Production My ACL setup: -- Creating ACL begin dbms_network_acl_admin.create_acl( acl => 'WS_test_acl.xml', description => 'ACL file for testing purposes.', principal => *my user*, is_grant => TRUE, privilege => 'connect'); end; -- Adding URL to ACL begin dbms_network_acl_admin.assign_acl( acl => 'WS_test_acl.xml', host => *my URL*); end; A: When things work in anonymous blocks but not in stored procedures it's usually because of definer's rights versus invoker's rights. Anonymous blocks and invoker's rights procedures can use privileges granted through roles, but definer's rights procedures cannot. Try changing return varchar2 is to return varchar2 authid current_user is.
{ "pile_set_name": "StackExchange" }
Q: What's the apostle Paul's view on the bodily resurrection? In 1st Corinthians 15, the apostle Paul records the eyewitness accounts of the Lord Jesus Christ's resurrection: For I delivered to you as of first importance what I also received: that Christ died for our sins in accordance with the Scriptures, that he was buried, that he was raised on the third day in accordance with the Scriptures, and that he appeared to Cephas, then to the twelve. Then he appeared to more than five hundred brothers at one time, most of whom are still alive, though some have fallen asleep. Then he appeared to James, then to all the apostles. Last of all, as to one untimely born, he appeared also to me. For I am the least of the apostles, unworthy to be called an apostle, because I persecuted the church of God.—1st Corinthians 15:3-9 (ESV) I had a friend read that passage and ask if the apostle Paul leaves room for a non-physical (perhaps visionary) resurrection. What did the apostle Paul think? A: Abstract Paul can't be read to support a non-physical resurrection, in this passage or any other, unless you take his words out of context. N. T. Wright is certainly the person to ask on the topic and he neatly summarizes the argument in an article addressing four reviews of his The Resurrection of the Son of God: [Michael] Goulder, by contrast, proposes that the Jerusalem leadership held the view that Jesus’ resurrection was a matter of ‘spiritual’ transformation, rather than the ‘bodily resurrection’ which he ascribes to Paul. This is remarkable in itself; Goulder, never one to shirk controversial proposals, has stood on its head a more usual position, which is that Paul held a ‘spiritual’ view of the resurrection (based on the common misreading of the soma pneumatikon in 1 Corinthians 15) while some other, less Hellenized and more Jewish, early Christians stuck to a view of bodily resurrection. What Wright calls "the common misreading", comes from 1st Corinthians 15:42-49 (ESV): So is it with the resurrection of the dead. What is sown is perishable; what is raised is imperishable. It is sown in dishonor; it is raised in glory. It is sown in weakness; it is raised in power. It is sown a natural body; it is raised a spiritual body. If there is a natural body, there is also a spiritual body. Thus it is written, “The first man Adam became a living being”; the last Adam became a life-giving spirit. But it is not the spiritual that is first but the natural, and then the spiritual. The first man was from the earth, a man of dust; the second man is from heaven. As was the man of dust, so also are those who are of the dust, and as is the man of heaven, so also are those who are of heaven. Just as we have borne the image of the man of dust, we shall also bear the image of the man of heaven. The word spiritual throughout the passage is pneumatikos <4152>, which can mean: 1) relating to the human spirit, or rational soul, as part of the man which is akin to God and serves as his instrument or organ 1a) that which possesses the nature of the rational soul 2) belonging to a spirit, or a being higher than man but inferior to God 3) belonging to the Divine Spirit 3a) of God the Holy Spirit 3b) one who is filled with and governed by the Spirit of God 4) pertaining to the wind or breath; windy, exposed to the wind, blowing While the idea of "spiritual renewal" seems possible to modern readers, in the context of resurrection, which meant a bodily coming back to life, it doesn't work. 
Paul uses the word "spiritual" because he is struggling to describe the sort of body that will be raised. In verses 35-49, he compares the process to the process of burying a seed. What is planted in the ground does not look like what eventually grows up, but both are of the same kind. In the Resurrection, we don't get an identical copy of our bodies, but something better. "Spiritual", in this case, is of the second definition: "belonging to a spirit, or a being higher than man but inferior to God". (It also might include the third meaning: "belonging to the Divine Spirit".) Wright's book goes into great detail about what might and what might not be meant by resurrection in the New Testament. Here's a summary from another of his articles: The first point to make here is vital. I have argued that the early Christians looked forward to a resurrection which was not a mere resuscitation, nor yet the abandonment of the body and the liberation of the soul, but a transformation, a new type of body living within a new type of world. This belief is embroidered with biblical motifs, articulated in rich theology. Yet in the gospel narratives we find a story, told from different angles of course, without such embroidering and theology—told indeed in restrained, largely unadorned prose. Yet the story is precisely of a single body neither abandoned, nor merely resuscitated, but transformed; and this, though itself totally unexpected, could give rise to exactly that developed view of which I have spoken. The Easter narratives, in other words, appear to offer an answer to why the early Christian hope and life took the form and shape they did. A: A non-physical resurrection was unheard of in Jewish thinking. To them, a person wasn't just a body, nor was it just a soul/spirit. Just a body would have been an animal. Just a spirit would have been like an angel. A complete person in Jewish thought was a unification of spirit and body--neither an animal nor an angel. (A spiritual resurrection is impossible to disprove. Even if you have a body, you can still say "But Jesus was raised spiritually.") In 1 Cor. 15, Paul presupposes an empty tomb, and an empty tomb means the body is raised. (3) For I delivered to you as of first importance what I also received, that Christ died for our sins according to the Scriptures, (4) and that He was buried, and that He was raised on the third day according to the Scriptures, (5) and that He appeared to Cephas, then to the twelve. [NASB] (The phrase "I delivered to you... what I also received" is a rabbinic phrase meaning "the tradition I pass on to you is exactly as I received it.") Paul has a 4 part formula here. Christ died for our sins He was buried He was raised on the third day He appeared to Cephas, then to the twelve (William Lane Craig on his website and debates often points out this formula when challenged with the question of a spiritual resurrection). Even though Paul doesn't say "the tomb was empty," that wouldn't fit in the formula. The formula involves Christ's actions and "by the way, that tomb was empty" doesn't fit. However, each of these lines matches up to the events in the Gospels. Christ died for our sins (Matt 27:50; Mark 15:37; Luke 24:36; John 19:30) He was buried (Matt 27:60; Mark 15:46; Luke 23:53; John 19:40) He was raised on the third day (Matt 28:6; Mark 16:6; Luke 24:3; John 20:2) He appeared to Cephas, then to the twelve (Matt 28:16-17; Mark 16:7; Luke 24:36; John 20:19) (Each of those references are often the starting places of full accounts.) 
Luke is very clear in his account (Luke 24:36ff) that Jesus is not merely spiritual. Jesus even says, "I have flesh and bones, which a spirit does not." He eats, which a spirit cannot do. This is related to the question because Luke was the companion of Paul, and it is unlikely that Luke would believe in a physical resurrection when his teacher in the faith did not. I hope this helps.
{ "pile_set_name": "StackExchange" }
Q: How to do this regex command? This would best be explained with examples. Here is a line before I do anything: Monohydrogen_Phosphate HPO%4^2- Here is what I've done so far: Monohydrogen Phosphate | HPO%4^2- Here is what it should be when finished: Monohydrogen Phosphate | HPO42- The % will put the first number (if any) and + or - signs (if any) in a <sub> tag, and the ^ will put the first number and +/- in a <sup> tag. I am using Javascript's RegEx replace, but I don't mind switching to PHP. A: var txt = "HPO%4^2-"; txt = txt.replace(/%(\d*[+-]?)/, "<sub>$1</sub>"); txt = txt.replace(/\^(\d*[+-]?)/, "<sup>$1</sup>"); txt //HPO<sub>4</sub><sup>2-</sup> Here you go. For more information, see MDN replace - Example: Switching words in a string. A: I would use str.replace(/([^\|])\s+([a-z]+)(%(\d+[+-]?))?(\^(\d+[+-]?))?/gi,'$1 | $2<sub>$4</sub><sup>$6</sup>'); Here's an example: <script type="text/javascript"> var str = 'Monohydrogen_Phosphate HPO%4^2-'; var str1 = str.replace(/([^\|])\s+([a-z]+)(%(\d+[+-]?))?(\^(\d+[+-]?))?/gi,'$1 | $2<sub>$4</sub><sup>$6</sup>'); console.log('str:\t' + str + '\nstr1:\t' + str1); </script> which will output: str: Monohydrogen_Phosphate HPO%4^2- str1: Monohydrogen_Phosphate | HPO<sub>4</sub><sup>2-</sup> which, in HTML will be parsed like… Monohydrogen_Phosphate | HPO42- EDIT: I was thinking about compound(?) elements such as Li2CO3 so I came up with a longer but better solution. function formatStr(str) { return str.replace(/([a-z]+)(%(\d+[+-]?))?(\^(\d+[+-]?))?/gi,function() { // this part allows stuck elements to be parsed right // such as Li%2CO%3 or H%2SO%4 if (!arguments[3] && !arguments[5]) return arguments[0]; var _str = arguments[1]; _str += arguments[3] ? '<sub>' + arguments[3] + '</sub>' : ''; _str += arguments[5] ? '<sup>' + arguments[5] + '</sup>' : ''; return _str; }).replace(/(.+)\s+([a-zA-Z]+<su[bp]>.+<\/su[bp]>)$/,'$1 | $2'); // and this part adds ' | ' to the beginning of the changed element // if there's any content before it. otherwise it's left as is }
{ "pile_set_name": "StackExchange" }
Q: How to get and use the number of days since the last comment? How would I achieve this condition ... if latest comment is < 7 days old echo 'New Comment'; else '' A: To get the latest comment use get_comments(). get_comment_date returns the date of a comment in any format for PHP's date(). Now it is easy. Let's put the logic into a function to keep the global namespace clean: /** * Returns the number of days since the latest comment. * * @return int */ function get_days_since_last_comment( $post_id = 0 ) { $args = array ( 'number' => 1, 'status' => 'approve' ); 0 !== $post_id and $args['post_id'] = (int) $post_id; // Array of comment objects. $latest_comment = get_comments( $args ); // No comments found. if ( ! $latest_comment ) { return -1; } $comment_unix = get_comment_date( 'U', $latest_comment[0]->comment_ID ); return round( ( time() - $comment_unix ) / 86400 ); } Add the function to your plugin or to your theme's functions.php. To display a special message: if ( get_days_since_last_comment() > 7 ) { print 'Looks like everything has been said.'; } To get the days for a specific post (here ID 123): if ( get_days_since_last_comment( 123 ) > 7 ) { print 'Looks like everything has been said.'; }
{ "pile_set_name": "StackExchange" }
Q: Inbuffer emacs calculation Is it possible to do in-buffer calculation in Emacs? For example, if my file has the following numbers 10 11 12 (A) I would like to convert these numbers to hex (either in place or paste it next to that), 10 A 11 B 12 C (B) I would like to sum those numbers. 10 11 12 33 (C) I would like to increment the count (sth like an index) 10 11 12 13 14 A: You can use the inbuilt calculator and/or the fact that \, in the replacement string for commands like replace-regexp will evaluate an arbitrary elisp expression. More or less off the top of my head, you can do: A. Mark the region containing the numbers. Execute M-x replace-regexp. For the matching regexp, use \([[:digit:]]+\). For the replacement, use \,(format "%X" (string-to-number \1)). B. Mark the region containing the numbers. Type C-x * g. Type V R +. Type y to insert the sum, or C-u y to replace. C. Same as for A, but mark just the last number, and use a replacement function of \,(format "%s\n%d" \1 (1+ (string-to-number \1))). You can put these in macros or functions which take care of moving point around to the right place.
{ "pile_set_name": "StackExchange" }
Q: Mediawiki mass user delete/merge/block I have 500 or so spambots and about 5 actual registered users on my wiki. I have used nuke to delete their pages but they just keep reposting. I have spambot registration under control using reCaptcha. Now, I just need a way to delete/block/merge about 500 users at once. A: You could just delete the accounts from the user table manually, or at least disable their authentication info with a query such as: UPDATE /*_*/user SET user_password = '', user_newpassword = '', user_email = '', user_token = '' WHERE /* condition to select the users you want to nuke */ (Replace /*_*/ with your $wgDBprefix, if any. Oh, and do make a backup first.) Wiping out the user_password and user_newpassword fields prevents the user from logging in. Also wiping out user_email prevents them from requesting a new password via email, and wiping out user_token drops any active sessions they may have. Update: Since I first posted this, I've had further experience of cleaning up large numbers of spam users and content from a MediaWiki installation. I've documented the method I used (which basically involves first deleting the users from the database, then wiping out up all the now-orphaned revisions, and finally running rebuildall.php to fix the link tables) in this answer on Webmasters Stack Exchange. Alternatively, you might also find Extension:RegexBlock useful: "RegexBlock is an extension that adds special page with the interface for blocking, viewing and unblocking user names and IP addresses using regular expressions." A: There are risks involved in applying the solution in the accepted answer. The approach may damage your database! It incompletely removes users, doing nothing to preserve referential integrity, and will almost certainly cause display errors. Here a much better solution is presented (a prerequisite is that you have installed the User merge extension): I have a little awkward way to accomplish the bulk merge through a work-around. Hope someone would find it useful! (Must have a little string concatenation skills in spreadsheets; or one may use a python or similar script; or use a text editor with bulk replacement features) Prepare a list of all SPAMuserIDs, store them in a spreadsheet or textfile. The list may be prepared from the user creation logs. If you do have the dB access, the Wiki_user table can be imported into a local list. The post method used for submitting the Merge & Delete User form (by clicking the button) should be converted to a get method. This will get us a long URL. See the second comment (by Matthew Simoneau) dated 13/Jan/2009) at http://www.mathworks.com/matlabcentral/newsreader/view_thread/242300 for the method. The resulting URL string should be something like below: http: //(Your Wiki domain)/Special:UserMerge?olduser=(OldUserNameHere)&newuser=(NewUserNameHere)&deleteuser=1&token=0d30d8b4033a9a523b9574ccf73abad8%2B\ Now, divide this URL into four sections: A: http: //(Your Wiki domain)/Special:UserMerge?olduser= B: (OldUserNameHere) C: &newuser=(NewUserNameHere)&deleteuser=1 D: &token=0d30d8b4033a9a523b9574ccf73abad8%2B\ Now using a text editor or spreadsheet, prefix each spam userIDs with part A and Suffix each with Part C and D. Part C will include the NewUser(which is a specially created single dummy userID). The Part D, the Token string is a session-dependent token that will be changed per user per session. So you will need to get a new token every time a new session/batch of work is required. 
With the above step, you should get a long list of URLs, each good to do a Merge&Delete operation for one user. We can now create a simple HTML file, view it and use a batch downloader like DownThemAll in Firefox. Turn each URL line into an HTML link by adding <a href=" at the beginning and ">Linktext</a> at the end. Also add <html><body> at the top and </body></html> at the bottom, and save the file as (for eg:) userlist.html. Open the file in Firefox, use the DownThemAll add-on and download all the files! Effectively, you are visiting the Merge&Delete page for each user and clicking the button! Although this might look a lengthy and tricky job at first, once you follow this method, you can remove tens of thousands of users without much manual effort. You can verify if the operation is going well by opening some of the downloaded html files (or by looking through the recent changes in another window). One advantage is that it does not directly edit the MySQL tables, nor does it require direct database access. I did a bit of rewriting to the quoted text, since the original text contains some flaws.
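For illustration, one line of the generated userlist.html would end up looking like this (the domain and user names are made up, and the token is whatever you copied for the current session):

<html><body>
<a href="http://wiki.example.org/Special:UserMerge?olduser=SpamUser123&newuser=SpamDump&deleteuser=1&token=...">SpamUser123</a>
<a href="http://wiki.example.org/Special:UserMerge?olduser=SpamUser124&newuser=SpamDump&deleteuser=1&token=...">SpamUser124</a>
</body></html>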
{ "pile_set_name": "StackExchange" }
Q: SQL querying multiple select statements I have a dataframe that has a lot of entires similar to the table to the left shown below. I was to query it using SQL to get a result similar to the table to the right shown below. So that I will be able to plot a stacked bar chart with the data with each bar representing a state and Severity count S03, S04 will add up. +--+-----+--------+ |ID|State|Severity| +--+-----+--------+ |01| NY | 3 | +-----+---+---+ |02| CA | 4 | |State|S03|S04| |03| NY | 4 | => +-----+---+---+ |04| CA | 3 | | CA | 1 | 3 | |05| CA | 4 | | NY | 1 | 1 | |06| CA | 4 | I tried the following SQL query but it is giving the same result for every entry in S03 and same for S04. city_accidents = spark.sql("\ SELECT State, \ (SELECT COUNT(ID) AS Count FROM us_accidents WHERE Severity = 3 ) AS S03, \ (SELECT COUNT(ID) AS Count FROM us_accidents WHERE Severity = 4 ) AS S04 \ FROM accidents \ GROUP BY State \ ORDER BY State DESC LIMIT 10") city_accidents.show() +-----+---+---+ |State|S03|S04| +-----+---+---+ | NY | 1 | 3 | | CA | 1 | 3 | That is probably because I haven't entered any filter for the inner select statement from which state to select from. Is there a way I can access those inner variables in the select query? What I meant is if I could change inner select statements to (SELECT COUNT(ID) AS Count FROM us_accidents WHERE Severity = 3 AND State = this.State ) AS S03.. A: You can try below way - city_accidents = spark.sql("\ SELECT State, \ COUNT(case when Severity = 3 then ID end) AS S03, \ COUNT(case when Severity = 4 then ID end) AS S04 \ FROM accidents \ GROUP BY State \ ORDER BY State DESC LIMIT 10") city_accidents.show()
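The same conditional counting can also be written with the DataFrame API instead of a SQL string, if you prefer (this assumes the data is already in a DataFrame called df with State and Severity columns — adjust the name to your setup):

from pyspark.sql import functions as F

city_accidents = (df.groupBy("State")
                    .agg(F.count(F.when(F.col("Severity") == 3, True)).alias("S03"),
                         F.count(F.when(F.col("Severity") == 4, True)).alias("S04"))
                    .orderBy(F.col("State").desc())
                    .limit(10))
city_accidents.show()

count() only counts non-null values, and when() without otherwise() yields null when the condition is false, so each column counts exactly the matching severities.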
{ "pile_set_name": "StackExchange" }
Q: Referenced Assembly Not Found - How to get all DLLs included in solution I'm running a WCF application CoreApplication whose VS project has a reference to AncillaryProject. CoreApplication uses a class Provider from AncillaryProject; however, it is never explicitly referenced - it's invoked via Reflection. My problem is that sometimes CoreApplication fails to find Provider because AncillaryProject does not come up in the call to GetAssemblies(). Sometimes it works fine, but sometimes (I'm guessing it may be after a JIT) it fails. Here's my original code: var providers = from d in AppDomain.CurrentDomain.GetAssemblies() from c in d.GetTypes() where typeof(BaseProvider).IsAssignableFrom(c) select c; After looking at this question, I tried using GetReferencedAssemblies(): var allAssemblies = AppDomain.CurrentDomain.GetAssemblies(); foreach (var a in AppDomain.CurrentDomain.GetAssemblies()) { allAssemblies = allAssemblies.Union( a.GetReferencedAssemblies() .Select(b => System.Reflection.Assembly.Load(b))); } var providers = from d in allAssemblies from c in d.GetTypes() where typeof(BaseProvider).IsAssignableFrom(c) select c; I realize that the question I referenced solves the problem through dynamically loading all dll files in the bin directory, but that doesn't sound particularly good to me. Is there a better way to do this, or is .NET simply not loading the other Assemblies in at all? How does this work under the hood, and is there anything I can do about it? A: According to Microsoft documentation AppDomain.CurrentDomain.GetAssemblies() gets the assemblies that have been loaded into the execution context of this application domain. About AppDomain.CurrentDomain.GetAssemblies() It seems that you need to change strategy of loading the assemblies you need from using the appdomain to looking for dlls in your applications folder. I found a discussion on a similar problem here A: You can handle the AssemblyResolve event and load AncillaryProject.dll in that event handler http://msdn.microsoft.com/en-us/library/ff527268.aspx A: You should download the .NET Development SDK and start up FuslogVw.exe (fusion log viewer). It will report on CLR Application trying to resolve .NET dependencies. It will show you were it is looking and how it evaluates the candidates located at those places.
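A rough sketch of the AssemblyResolve approach from the second answer (the "bin" sub-folder is an assumption for a web application — adjust the path, and consider caching the result):

using System;
using System.IO;
using System.Reflection;

AppDomain.CurrentDomain.AssemblyResolve += (sender, args) =>
{
    var shortName = new AssemblyName(args.Name).Name;
    var candidate = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "bin", shortName + ".dll");
    // Return the assembly if we can find its DLL on disk, otherwise let resolution fail.
    return File.Exists(candidate) ? Assembly.LoadFrom(candidate) : null;
};

Keep in mind this handler only fires when a load actually fails, so it complements, rather than replaces, explicitly loading AncillaryProject before scanning GetAssemblies().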
{ "pile_set_name": "StackExchange" }
Q: HTML, CSS: Hovering creates doubled images but only on Smartphone Google Chrome I am building a webpage for my startup, but I am having trouble using the mouse over / hover event. It regards this page: www.innomotion-media.com As you can see, hovering with your mouse over one of the three images will change the image to a colored one. This is working fine, however: when opening said link on a smartphone (at least on my Samsung S8 with Google Chrome) the images are shown twice. The black and white one and the colored one. So there are 6 images instead of only the wanted two. I tried opening the page on my smartphone with Firefox and this showed correctly. Also I tried with Internet Explorer on my computer and it also worked. This is the HTML that I used: <div class="container"> <div align="center"> <a href="./page_construction.html"> <div class="card"> <img src="./img/index_left_bw.png" alt="Card Back"> <img src="./img/index_left.png" class="img-top" alt="Card Front"> </div> </a> </div> <div align="center"> <a href="./page_appDev.html"> <div class="card"> <img src="./img/index_center_bw.png" alt="Card Back"> <img src="./img/index_center.png" class="img-top" alt="Card Front"> </div> </a> </div> <div align="center"> <a href="./page_recording.html"> <div class="card"> <img src="./img/index_right_bw.png" alt="Card Back"> <img src="./img/index_right.png" class="img-top" alt="Card Front"> </div> </a> </div> </div> And this is the corresponding CSS:; /*font face*/ @font-face { font-family: "Baiti"; src: url("./fonts/baiti.ttf"); } body { font-family: "Baiti", serif } .container{ width:900px; margin:auto; } .card { position: relative; display: inline-block; } .card .img-top { display: none; position: absolute; top: 0; left: 0; z-index: 99; } .card:hover .img-top { display: inline; } Is there maybe a better way of doing this, so that it will look the same on all browsers, smartphone or not? I hope you can help me, this is my first time asking anything :) Thank you! A: Instead of using overlaid images, you could use a grayscale filter in CSS on a colour image which is removed on hover. Remove the second image from the HTML: <div align="center"> <a href="./page_construction.html"> <div class="card"> <img src="./img/index_left.png" class="img-top" alt="Card Front"> </div> </a> </div> Remove the exsting CSS for .img-top and replace with this: .card .img-top { filter: grayscale(100%); } .card .img-top:hover { filter: none; } The colour image will be shown but with a grayscale filter on it, so that the image is black and white. When you hover over it, the filter is removed and the image is shown in normal colour. This removes the need to have two images with one positioned absolutely.
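Since the doubled images only show up on a touch device, it may also help to apply the hover rule only where real hovering exists, using the hover media feature (current Chrome supports it; treat it as a progressive enhancement):

.card .img-top {
  filter: grayscale(100%);
}

@media (hover: hover) {
  .card .img-top:hover {
    filter: none;
  }
}

On touch screens the image then simply stays in its grayscale state instead of reacting to emulated hover events.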
Q: Labeled Likelihood Contour Plot In R I have a csv file with the following values. x,y 50.0,0.0 50.0,0.0 51.0,0.0 53.0,0.0 54.0,0.0 54.0,0.0 54.0,0.0 55.0,0.0 55.0,0.0 56.0,0.0 56.0,0.0 57.0,0.0 57.0,0.0 57.0,1.0 57.0,1.0 58.0,0.0 59.0,0.0 60.0,0.0 60.0,1.0 61.0,0.0 61.0,0.0 61.0,1.0 61.0,1.0 62.0,1.0 62.0,1.0 62.0,0.0 62.0,1.0 63.0,0.0 63.0,0.0 63.0,1.0 64.0,0.0 64.0,1.0 65.0,0.0 67.0,1.0 67.0,1.0 68.0,0.0 68.0,1.0 69.0,0.0 70.0,1.0 71.0,0.0 I can make a nice contour plot in R using the contour() function with the below code, but I would like to make the same thing using ggplot. Could someone show how this can be done? I also attached an image at the bottom showing what the figure looks like with the current code. Likelihood Contour Image #Read in the file `xy` x<- xy$x y<- xy$y #Center age x0 <- x-mean(x) #fit glm xglm <- glm(y~x0,family=binomial) # 2d likelihood b<- summary(xglm)$coef #intercept estimate and se b0hat<-xglm$coef[1]; se0<- b[1,2] #slope estimate and se b1hat<-xglm$coef[2]; se1<- b[2,2] #Compute the log-likelihood fun1 <- function(bo,b1){ sum(y*(bo+b1*x0)- log(1+exp(bo+b1*x0))) } lik<- NULL #get range of values within +- 3 se for intercept bbo<- seq(b0hat-3*se0, b0hat+3*se0 ,len=20) #get range of values within +- 3 se for slope bb1 <- seq(b1hat-3*se1, b1hat+3*se1,len=20) for (bo in bbo) { for (b1 in bb1){ lik <- c(lik,fun1(bo,b1)) } } #get max likelihood maxlik <- max(lik) #get difference lik <- lik-maxlik #take the exponential of the likelihood lik<- exp(lik) contour(bbo,bb1,matrix(lik,20,byrow=T),level=seq(.1,1,by=.2), xlab=expression(beta[0]), ylab=expression(beta[1])) A: Something like the following? library(ggplot2) df.lik <- setNames(expand.grid(bbo, bb1), c('x', 'y')) vfun1 <- Vectorize(fun1, SIMPLIFY = TRUE) df.lik$z <- vfun1(df.lik$x,df.lik$y) p <- ggplot(df.lik, aes(x, y, z=z)) + stat_contour(aes(colour = ..level..)) data<- ggplot_build(p)$data[[1]] indices <- setdiff(1:nrow(data), which(duplicated(data$level))) # distinct levels p + geom_text(aes(label=seq(0,1,by=.1), z=NULL), data=data[indices,]) + xlab(expression(beta[0])) + ylab(expression(beta[1]))
Q: Replacing old doorbell transformer - wiring This morning I opened up my main breaker box to replace the old doorbell transformer. I thought this would be a straightforward job but didn't account for confusing wiring of the old transformer setup. The new transformer has the typical black, white and green (ground) wires. The old transformer has two blacks and a green (photos attached). Even more confusing is the way those old wires are connected. One of the black wires and the green are connected together directly to the main box. The other black wire is connected to other black wires via a wire nut. How can I know where to plug in my black and white wires? Should I just cap the new ground wire with a wire nut or attach it somewhere in the box? Thanks for your help! A: The old transformer has identical wire colors for hot and neutral. The new one differentiates them. To connect the new transformer, connect the black wire from the new one to the same black wire where your old transformer was connected with the wire nut. Connect both the green and white wires to the same spot where the black and green are connected together from your old transformer. In your case, this is a main panel, which is why it is acceptable to connect the neutrals and grounds together (though it is not the best way to do it, but you might as well stay consistent with how your panel is wired). This question/answer should give you some more background on this. Also note that if your new transformer looks similar to the old one, you should put the threaded part where the 120V wires come through through the knockout in the side of the panel, and use a nut to attach it there. The way the old transformer is attached is not how it should be, those wires should not be passing along the side of the panel and through an unprotected knockout like that.
Q: How to decode bytes to first forward slash? Hey everyone I'm having a slight issue with some python AES Decryption code I wrote. I'm trying to decrypt two different emails (of different lengths) using PyCryptoDome and AES-256-CBC encryption. My code is below: import base64 from Crypto.Cipher import AES import json from Crypto.Util.Padding import pad, unpad def decrypt(enc): # Get key key = base64.b64decode("mybase64key") # Load dictionary of Base64 values of the payload to decrypt dataDict = json.loads(base64.b64decode(myEncryptedData)) # Create decrypter with our IV decrypter = AES.new(key, AES.MODE_CBC, base64.b64decode(dataDict['iv'])) # Pad and decode data data = decrypter.decrypt(pad(base64.b64decode(dataDict['value']), 16)) # EDIT: PRINTING DATA HERE print(data) # Works for shorter password print(data[:-24].decode()) # Works for longer password print(data.decode()) It seems like just a padding issue, but I'm not sure how to go about getting the correct padding size. Both passwords/IV's have the same exact encrypted length, so print(len(dataDict['value'])) prints 44 for both emails, and padding it prints 48 for both emails which stops me from getting the padding length since it's the same in all cases. Using print(len(data)) returns the same length value for both emails as well. However, when I just print data, I can see the two emails like so: b'[email protected]\x06\x06\x06\x06\x06\x06\x0f\xef\xe2\xa3\xdd\xH9\x7f\xj4\xwf\x14\x88\xd8(x\x90N' b'[email protected]\x08\x08\x08\x08\x08\x08\x08\x08y\xg3?\xa0\x1e\xaa`\xc2\x67\xf1i]3\xe1\xa0F' How can I go about just getting the string I can see within the bytes array without knowing the length of the original text? Is there a workaround? The two example byte arrays I provided have the same exact length/format of the emails, just not sure how to deal with this issue. A: I really don't know what the \x06 or \x08 mean or signify, but they both happen to not be printable characters (which is why they display in hexadecimal like that). In addition, the values you show for the two email isn't valid Python syntax. Ignoring that issue, something along these lines might work: import string # Leaving the invalid '\xH9\x7f\xj4\xwf\x14\x88\xd8(x\x90N' part off. email1 = b'[email protected]\x06\x06\x06\x06\x06\x06\x0f\xef\xe2\xa3\xdd' for i, value in enumerate(email1): if chr(value) not in string.printable: print(i, '\\x{:02x}'.format(value)) print(email1[:i]) # Show everything up to that point. break else: print('all values were printable') Output: 26 \x06 b'[email protected]'
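A note on those trailing bytes: runs like \x06\x06... or \x08\x08\x08... at the end of AES-CBC output look like PKCS#7 padding (each padding byte equals the number of bytes added), though that is an assumption about how the data was encrypted. If it holds, the unpad helper the question already imports can strip them without knowing the original length. A minimal sketch, where key, iv and ct stand for the raw (base64-decoded) key, IV and ciphertext bytes:

from Crypto.Cipher import AES
from Crypto.Util.Padding import unpad

def decrypt_email(key, iv, ct):
    cipher = AES.new(key, AES.MODE_CBC, iv)
    padded = cipher.decrypt(ct)                      # plaintext still carrying the padding bytes
    return unpad(padded, AES.block_size).decode()    # strip PKCS#7 padding, then decode

Note also that the question's code calls pad() on the ciphertext before decrypting; padding belongs on the plaintext side (pad before encrypting, unpad after decrypting), and that extra padded block is likely where the trailing garbage after the \x06/\x08 run comes from.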
Q: Best practise for Progress Bar in Python's PyGTK I would like to get feedback on others' opinions of best practice for how to implement a progress bar in Python's PyGTK. The work that the progress bar was to represent was very significant computationally. Therefore, I wanted the work to be done in a separate process (thus giving the operating system the possibility to run it on a different core). I wanted to be able to start the work, and then continue to use the GUI for other tasks while waiting for the results. I have seen many people asking this question indirectly, but I have not seen any concrete expert advice. I hope that by asking this question we will see a community's combined expertise. A: I realise now that I do not have enough reputation to make this a community wiki, so I hope someone else can change this to wiki-status. Thanks. I am by no means an expert Python programmer, however I have spent some time trying to find an acceptable solution. I hope that the following code may act as a starting point to this discussion. import gobject import pygtk pygtk.require('2.0') import gtk import multiprocessing import threading import time gtk.gdk.threads_init() class Listener(gobject.GObject): __gsignals__ = { 'updated' : (gobject.SIGNAL_RUN_LAST, gobject.TYPE_NONE, (gobject.TYPE_FLOAT, gobject.TYPE_STRING)), 'finished': (gobject.SIGNAL_RUN_LAST, gobject.TYPE_NONE, ()) } def __init__(self, queue): gobject.GObject.__init__(self) self.queue = queue def go(self): print "Listener has started" while True: # Listen for results on the queue and process them accordingly data = self.queue.get() # Check if finished if data[1]=="finished": print "Listener is finishing." self.emit("finished") return else: self.emit('updated', data[0], data[1]) gobject.type_register(Listener) class Worker(): def __init__(self, queue): self.queue = queue def go(self): print "The worker has started doing some work (counting from 0 to 9)" for i in range(10): proportion = (float(i+1))/10 self.queue.put((proportion, "working...")) time.sleep(0.5) self.queue.put((1.0, "finished")) print "The worker has finished." 
class Interface: def __init__(self): self.process = None self.progress = gtk.ProgressBar() button = gtk.Button("Go!") button.connect("clicked", self.go) vbox = gtk.VBox(spacing=5) vbox.pack_start(self.progress) vbox.pack_start(button) vbox.show_all() self.frame = vbox def main(self): window = gtk.Window(gtk.WINDOW_TOPLEVEL) window.set_border_width(10) window.add(self.frame) window.show() window.connect("destroy", self.destroy) gtk.main() def destroy(self, widget, data=None): gtk.main_quit() def callbackDisplay(self, obj, fraction, text, data=None): self.progress.set_fraction(fraction) self.progress.set_text(text) def callbackFinished(self, obj, data=None): if self.process==None: raise RuntimeError("No worker process started") print "all done; joining worker process" self.process.join() self.process = None self.progress.set_fraction(1.0) self.progress.set_text("done") def go(self, widget, data=None): if self.process!=None: return print "Creating shared Queue" queue = multiprocessing.Queue() print "Creating Worker" worker = Worker(queue) print "Creating Listener" listener = Listener(queue) listener.connect("updated",self.callbackDisplay) listener.connect("finished",self.callbackFinished) print "Starting Worker" self.process = multiprocessing.Process(target=worker.go, args=()) self.process.start() print "Starting Listener" thread = threading.Thread(target=listener.go, args=()) thread.start() if __name__ == '__main__': gui = Interface() gui.main() Some of the references I found useful were: A Progress Bar using Threads Sub-classing GObject in Python Signals and Threads Always always always always make sure you have called gtk.gdk.threads_init
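Another way to get the worker's results into the GUI, without emitting GObject signals from a listener thread, is to hand every update to the GTK main loop with gobject.idle_add so that all widget access happens on the main thread. A rough sketch of what the listening loop could look like under that approach — the queue is the same multiprocessing.Queue as above, and the on_progress/on_finished method names are assumptions, not part of the original code:

def listen(queue, gui):
    while True:
        proportion, text = queue.get()
        if text == "finished":
            # Schedule the final UI update on the GTK main loop and stop listening
            gobject.idle_add(gui.on_finished)
            return
        # Never touch widgets from this thread; let the main loop do it
        gobject.idle_add(gui.on_progress, proportion, text)

Either way — custom signals plus gtk.gdk.threads_init as above, or idle_add — the goal is the same: widget updates coming from other threads or processes get funnelled back to the GTK main loop in a controlled way.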
Q: How to generate msi from windows application I have a Windows Application class where I have defined my Windows Service, and I need to generate a .msi (installer) from it. What I have done so far for this is: create a new project in Visual Studio Enterprise 2017 - the project is of type Setup Project for Wix v3 (from Wix Toolset); inside this project I have by default References and Product.wxs. From Add References, Projects, I added the Service project. One of the sources that I found says all that's needed is to add <File Source="$(var.MyApplication.TargetPath)" /> as seen here: http://wixtoolset.org/documentation/manual/v3/votive/authoring_first_votive_project.html
...but this doesn't work for me because of:
undefined preprocessor variable $(var.MyApplication.TargetPath)
I don't know where to define this variable and what is the meaning of this path. Excerpt here:
<Fragment>
  <ComponentGroup Id="ProductComponents" Directory="INSTALLFOLDER">
    <!-- TODO: Remove the comments around this Component element and the ComponentRef below in order to add resources to this installer. -->
    <Component Id="ProductComponent">
      <!-- TODO: Insert files, registry keys, and other resources here. -->
      <File Source = "$(var.MyApplication.TargetPath)"/>
    </Component>
  </ComponentGroup>
Any ideas? Thanks. This is all autogenerated code except for the File Source line. I don't know what I should add for INSTALLFOLDER either and what the syntax should be. The purpose is to generate the .msi from my windows service.
A: The Wix documentation for this step is broken as of at least version 3.11. Instead of creating two separate solutions (app and Wix) you need to add the Wix setup as a second project in your windows forms solution.
In the app Solution Explorer pane right-click on the solution then choose Add > New Project. Choose a name like WixSetup.
Next, click on the WixSetup project > References and choose Add New Reference. The projects list should show your app since they are in the same solution.
Next, add the File entry to the Fragment in Product.wxs, but the documentation is incorrect there too: you need to wrap it in a Component tag. (Replace MY-APPLICATION-NAME with the name of your windows forms app project.)
<Component Id="ProductComponent">
  <File Source="$(var.MY-APPLICATION-NAME.TargetPath)" />
</Component>
You also need to edit line 3 of the .wxs to include a non-empty company name or to remove that attribute:
<Product Id="*" Name="WixSetup" Language="1033" Version="1.0.0.0" Manufacturer="MY-COMPANY"
Finally, you must have a release build in your main application before building the Wix MSI.
Q: Solidity 0.5.0: TypeError Initial value for constant variable has to be compile-time constant Why can't I declare a constant this way in Solidity 0.5.0? With recent versions everything went fine: uint256 public constant INITIAL_SUPPLY = 10000 * (10 ** uint256(decimals())); /** * @return the number of decimals of the token. */ function decimals() public view returns (uint8) { return _decimals; } A: In Solidity, constants aren't stored in storage anywhere; they're substituted in the bytecode. Roughly, something like this: constant uint256 FOO = 42; function blah() { return FOO; } Turns into this: function blah() { return 42; } The compiler can only do this substitution if the value of the constant is known at compile time. In your example, if _decimals is a constant, it's theoretically possible for a compiler to figure out that decimals() returns a constant and what that value is, but the Solidity compiler is nowhere near that smart.
Q: Convert Date String from YYYYMMDD to DD Mon YYYY I have a date as a string "20180619" How can I convert this to 19 Jun 2018 I started by trying var date = new Date(parseInt("20180619")); var d = date.getDate(); var m = date.getMonth(); var y = date.getFullYear(); console.log(d + ' ' + m + ' ' + y) But get 1 0 1970 Edit: So there are actually 2 issues here, first is the date is the wrong format, and second get the month name. The second part is answer by the other linked question. So it just boils down to splitting the date down to it's components using one of a couple of different methods in the answers here. A: You will need to parse your string into something that Date can understand. How about: var dateString = "20180619"; var parsedDate = dateString.replace(/(\d{4})(\d{2})(\d{2})/, '$1/$2/$3'); var date = new Date(parsedDate); var d = date.getDate(); var months = ['Jan', 'Feb', 'Mar','Apr','May','Jun','Jul','Aug','Sep','Oct','Nov','Dec']; var m = months[date.getMonth()]; var y = date.getFullYear(); console.log(d + ' ' + m + ' ' + y) The result is 19 Jun 2018
Q: How to hide html div I'm developing a small application in Ruby-On-Rails. I want to hide a div in an html.erb file until a link is clicked. What is the simplest way to do this?
A: In your html file:
<a href="#" id="show_whatever">Show Whatever</a>
<div id="whatever" class="hidden">...</div>
In your CSS file:
div.hidden { display: none; }
In an included javascript file, or inside of <script> tags:
$(function() {
  $('a#show_whatever').click(function(event){
    event.preventDefault();
    $('div#whatever').toggle();
  });
});
The hidden class is hiding the div. The jQuery function is listening for a click on the link, then preventing the link from being followed (event.preventDefault() keeps it from browsing to #), and lastly toggling the visibility of the div. See the jQuery API for click() and toggle().
Q: Python string format width I am having problems replicating a desired output format; screenshots in the post showed the target layout and what my results currently look like. I'm currently using:
print '{}.{:<20} {}.'.format(i,'sum so far:',sum)
I have tried left, right, and center alignment, but I just can't get the format that I want.
A: You can try first aligning the item number string:
tot = 0
for i in xrange(1, 11):
    tot += i
    print '{:<20}{} {}.'.format(str(i) + '.', 'sum so far:', tot)
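If Python 3 is available, the same trick reads a little more directly with f-strings — this is just a sketch of the answer's approach in newer syntax, with the 20-character column width kept as-is:

tot = 0
for i in range(1, 11):
    tot += i
    # Build the "1." label first, then left-align the whole label in a 20-char field
    print(f'{f"{i}.":<20}sum so far: {tot}.')

The key point is the same in both versions: the width/alignment has to be applied to the already-joined "number plus dot" string, not to the number alone, otherwise the dot ends up outside the padded field.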
Q: Ubuntu 14.04 not booting even after grub reinstall and Boot-repair After installing fresh Windows 10 on my Laptop, my external hard drive doesn't boot previously installed Ubuntu. Even though the grub menu appears at the beginning, after selecting ubuntu, the screen becomes black with a blinking console cursor. I reinstalled grub with live CD with no luck. After attempting Boot-Repair from live cd, here is the report in provides: http://paste.ubuntu.com/13297431/ It still not working. How can I get back my Ubuntu exactly same as before? A: Remove the FlexNet crap from the boot sector we found during our discussion in chat. GRUB refused to install to the mbr complaining about a sector being in use by FlexNet. Afterwards reinstall the GRUB boot loader to your Ubuntu installation in legacy mode. Boot from your ubuntu installation media and choose Try Ubuntu without installing. When the Ubuntu desktop appears - open a terminal and execute : sudo dd if=/dev/zero of=/dev/sda bs=512 count=62 seek=1 sudo mount /dev/sdc2 /mnt sudo grub-install --boot-directory=/mnt/boot /dev/sda Note : sda = disk | sdc2 = Ubuntu system partition In case this solution does not work, open GParted and shrink the Windows partition on sda. Start Install Ubuntu - choose Something else and install Ubuntu to the unallocated space. If you want to clone your old system to the new partition use clonezilla for this. Download Clonezilla Live CD and then create a bootable media to boot from it. Backup the Ubuntu partition from the external disk to another disk or partition. Restore it back to the partition where you performed the new install of Ubuntu. Download and more information -> Clonezilla Now boot from Ubuntu install media again ... Identify the partition UUID's - open a terminal and execute : sudo blkid Mount the system partition and edit the fstab file - open another terminal and execute : sudo mount /dev/sda* /mnt sudo gedit /mnt/etc/fstab Replace the UUID entries with the ones from the output given from the blkid command. In case Ubuntu will not boot - reinstall GRUB ... boot from Ubuntu install media again ... Open a terminal and execute : sudo mount /dev/sda* /mnt sudo grub-install --boot-directory=/mnt/boot /dev/sda Replace * with the partition number you have Ubuntu installed. If all this is too complex or complicated - you as well can keep the working Ubuntu configuration on sda and copy your personal data from your old Ubuntu on sdc to your new installation on sda.
Q: Retrieve elements in array with the same value and group them to manipulate data in the group I have an array like below $crops = array( 0 => array( 'crop_name' => 'Maize', 'crop_variety_name' => 'Longe 10H', 'weeks' => array( 9 => 11.200, 10 => 14.700, 11 => 12.300, 12 => 4.300, 14 => 8.500, 16 => 18.800, 17 => 10.600, 20 => 10.000, 30 => 7.000 ) ), 1 => array( 'crop_name' => 'Maize', 'crop_variety_name' => 'Longe 5', 'weeks' => array( 15 => 15.400, 16 => 4.700, 19 => 11.000, 20 => 3.000, 21 => 5.000, 29 => 2.000 ) ), 2 => array( 'crop_name' => 'Maize', 'crop_variety_name' => 'VP Max', 'weeks' => array( 9 => 6.800, 10 => 8.000, 14 => 3.000, 15 => 6.800, 17 => 4.300, 18 => 7.400, 20 => 5.900, 21 => 2.400, 22 => 2.800, 23 => 5.400, 24 => 3.900 ) ), 3 => array( 'crop_name' => 'Rice', 'crop_variety_name' => 'Superica 2', 'weeks' => array( 18 => 6.600, 19 => 11.500, 20 => 8.300, 21 => 10.100, 24 => 2.800 ) ), 4 => array( 'crop_name' => 'Soya', 'crop_variety_name' => 'Soya N1', 'weeks' => array( 20 => 3.000 ) ), 5 => array( 'crop_name' => 'Soya', 'crop_variety_name' => 'Soya N3', 'weeks' => array( 10 => 5.9, 11 => 12.800, 12 => 5.100, 15 => 4.000, 19 => 4.000, 31 => 3.100 ) ) ); Different crops crop_name have one or more varieties crop_variety_name. I want to retrieve the crops for example `'crop_name'=>'Maize' regardless of their crop varieties and then retrieve the weeks array and add all the values in the week array for each for the crop varieties such that I have an array like this array( 'Maize' => 195.2, 'Rice' => 39.3 'Soya' => 37.9) Where the key is the crop_name and the value is the total of the values in the the weeks array for the crop_varieties of each crop. The first array can contain an arbitrary number of crops, crop_varieties and the weeks array can contain an arbitrary number of values. How can i go about this. For the crop names I tried this $crop_names = array(); for($i = 0; $i < count($crops); $i++ ){ array_push($crop_names, $crops[$i]['crop_name']); } $crop_name = array_values(array_unique($crop_names)); This works for for crop_names but array_unique causes loss of data. A: Your code does not match your requirement. I want to retrieve the crops for example `'crop_name'=>'Maize' regardless of their crop varieties and then retrieve the weeks array and add all the values in the week array for each for the crop varieties such that I have an array like this array( 'Maize' => 195.2, 'Rice' => 39.3 'Soya' => 37.9) This is how the above requirement is translated into code: $amounts = array(); foreach ($crops as $crop) { $name = $crop['crop_name']; if (! isset($amounts[$name])) { // This is the first time when this crop type is processed $amounts[$name] = 0; } // Add all the values in the week array regardless of varieties $amounts[$name] += array_sum($crop['weeks']); } // If you need the names in a separate list you can get it with array_keys() $crop_names = array_keys($amounts); Take a look at the array functions section of the PHP manual for more ideas.
Q: Serializing an ArrayList of Objects with BufferedImages I'm trying to save a game state by serializing a game object that contains an ArrayList of enemies. Each of the items on the ArrayList is an object, and each of those objects contains a BufferedImage to represent the enemy. Java throws an error that the BufferedImages aren't serializable. All the solutions I've found say to just create a new object and fill it with all the data except the images, but I'm not sure how possible that is with the List set up. public void saveGame(){ GamePanel game = new GamePanel(); game.enemyList = enemyList; game.player = new Player(this.getWidth(), this.getHeight()); game.player.setScore(player.getScore()); game.player.setPos(player.getX(), player.getY()); try{ FileOutputStream fileOut = new FileOutputStream("savegame.txt"); ObjectOutputStream out = new ObjectOutputStream(fileOut); out.writeObject(game); out.close(); fileOut.close(); }catch(IOException ex){ ex.printStackTrace(); } } Thanks! A: This was answered in the comments so I'll repost it here to close the question. Mark the attributes you do not want to serialize as transient. V.g., private transient BufferedImage alienImage;. – SJuan76
Q: Cant display speedometer on MainWindow (kivy) I am trying to display the speedometer in my MainWindow screen. Right now when i run the code, the speedometer works but it is not displayed on the MainWindow screen which i want rather it is just appearing on a normal screen. It is possible to combine class Gauge(Widget): and class MainWindow(Screen): together so that the speedometer will actually be displayed on the MainWindow? .py file import kivy kivy.require('1.6.0') from kivy.app import App from kivy.clock import Clock from kivy.properties import NumericProperty from kivy.properties import StringProperty from kivy.properties import BoundedNumericProperty from kivy.uix.boxlayout import BoxLayout from kivy.uix.widget import Widget from kivy.uix.scatter import Scatter from kivy.uix.image import Image from kivy.uix.label import Label from kivy.uix.progressbar import ProgressBar from os.path import join, dirname, abspath from kivy.uix.screenmanager import Screen, ScreenManager class WindowManager(ScreenManager): pass class MainWindow(Screen): pass class Gauge(Widget): unit = NumericProperty(1.8) value = BoundedNumericProperty(0, min=0, max=100, errorvalue=0) path = dirname(abspath(__file__)) file_gauge = StringProperty(join(path, "cadran.png")) file_needle = StringProperty(join(path, "needle.png")) size_gauge = BoundedNumericProperty(128, min=128, max=256, errorvalue=128) size_text = NumericProperty(10) def __init__(self, **kwargs): super(Gauge, self).__init__(**kwargs) self._gauge = Scatter( size=(1350,600), do_rotation=False, do_scale=False, do_translation=False ) _img_gauge = Image( source=self.file_gauge, size=(1350,600) ) self._needle = Scatter( size=(self.size_gauge, self.size_gauge), do_rotation=False, do_scale=False, do_translation=False ) _img_needle = Image( source=self.file_needle, size=(self.size_gauge, self.size_gauge) ) self._glab = Label(font_size=self.size_text, markup=True) self._progress = ProgressBar(max=100, height=20, value=self.value , size=(500,400)) self._gauge.add_widget(_img_gauge) self._needle.add_widget(_img_needle) self.add_widget(self._gauge) self.add_widget(self._needle) self.add_widget(self._glab) self.add_widget(self._progress) self.bind(pos=self._update) self.bind(size=self._update) self.bind(value=self._turn) def _update(self, *args): ''' Update gauge and needle positions after sizing or positioning. ''' self._gauge.pos = self.pos self._needle.pos = (self.x, self.y) self._needle.center = self._gauge.center self._glab.center_x = self._gauge.center_x self._glab.center_y = self._gauge.center_y + (self.size_gauge / 4) self._progress.x = self._gauge.x + (self.size_gauge/0.468 ) self._progress.y = self._gauge.y + (self.size_gauge/4 ) self._progress.width = self.size_gauge def _turn(self, *args): ''' Turn needle, 1 degree = 1 unit, 0 degree point start on 50 value. 
''' self._needle.center_x = self._gauge.center_x self._needle.center_y = self._gauge.center_y self._needle.rotation = (50 * self.unit) - (self.value * self.unit) self._glab.text = "[b]{0:.0f}[/b]".format(self.value) self._progress.value = self.value class GaugeApp(App): increasing = NumericProperty(1) begin = NumericProperty(50) step = NumericProperty(1) def build(self): box = BoxLayout(orientation='horizontal', padding=5) self.gauge = Gauge(value=50, size_gauge=256, size_text=25) box.add_widget(self.gauge) Clock.schedule_interval(lambda *t: self.gauge_increment(), 0.05) return box def gauge_increment(self): begin = self.begin begin += self.step * self.increasing if begin > 0 and begin < 100: self.gauge.value = begin else: self.increasing *= -1 self.begin = begin if __name__ == '__main__': GaugeApp().run() A: You need to create an instance of ScreenManager and an instance of Screen. Next you add_widget the Screen to ScreenManager and to the Screen you add box sm = ScreenManager() s1 = Screen() s1.add_widget(box) sm.add_widget(s1) This is the complete code which uses your new classes from kivy.app import App from kivy.clock import Clock from kivy.properties import NumericProperty from kivy.properties import StringProperty from kivy.properties import BoundedNumericProperty from kivy.uix.boxlayout import BoxLayout from kivy.uix.widget import Widget from kivy.uix.scatter import Scatter from kivy.uix.image import Image from kivy.uix.label import Label from kivy.uix.progressbar import ProgressBar from os.path import join, dirname, abspath from kivy.uix.screenmanager import Screen, ScreenManager class WindowManager(ScreenManager): pass class MainWindow(Screen): pass class Gauge(Widget): unit = NumericProperty(1.8) value = BoundedNumericProperty(0, min=0, max=100, errorvalue=0) path = dirname(abspath(__file__)) file_gauge = StringProperty(join(path, "cadran.png")) file_needle = StringProperty(join(path, "needle.png")) size_gauge = BoundedNumericProperty(128, min=128, max=256, errorvalue=128) size_text = NumericProperty(10) def __init__(self, **kwargs): super(Gauge, self).__init__(**kwargs) self._gauge = Scatter( size=(1350,600), do_rotation=False, do_scale=False, do_translation=False ) _img_gauge = Image( source=self.file_gauge, size=(1350,600) ) self._needle = Scatter( size=(self.size_gauge, self.size_gauge), do_rotation=False, do_scale=False, do_translation=False ) _img_needle = Image( source=self.file_needle, size=(self.size_gauge, self.size_gauge) ) self._glab = Label(font_size=self.size_text, markup=True) self._progress = ProgressBar(max=100, height=20, value=self.value , size=(500,400)) self._gauge.add_widget(_img_gauge) self._needle.add_widget(_img_needle) self.add_widget(self._gauge) self.add_widget(self._needle) self.add_widget(self._glab) self.add_widget(self._progress) self.bind(pos=self._update) self.bind(size=self._update) self.bind(value=self._turn) def _update(self, *args): ''' Update gauge and needle positions after sizing or positioning. ''' self._gauge.pos = self.pos self._needle.pos = (self.x, self.y) self._needle.center = self._gauge.center self._glab.center_x = self._gauge.center_x self._glab.center_y = self._gauge.center_y + (self.size_gauge / 4) self._progress.x = self._gauge.x + (self.size_gauge/0.468 ) self._progress.y = self._gauge.y + (self.size_gauge/4 ) self._progress.width = self.size_gauge def _turn(self, *args): ''' Turn needle, 1 degree = 1 unit, 0 degree point start on 50 value. 
''' self._needle.center_x = self._gauge.center_x self._needle.center_y = self._gauge.center_y self._needle.rotation = (50 * self.unit) - (self.value * self.unit) self._glab.text = "[b]{0:.0f}[/b]".format(self.value) self._progress.value = self.value class GaugeApp(App): increasing = NumericProperty(1) begin = NumericProperty(50) step = NumericProperty(1) def build(self): box = BoxLayout(orientation='horizontal', padding=5) self.gauge = Gauge(value=50, size_gauge=256, size_text=25) box.add_widget(self.gauge) Clock.schedule_interval(lambda *t: self.gauge_increment(), 0.05) sm = WindowManager() s1 = MainWindow() s1.add_widget(box) sm.add_widget(s1) return sm def gauge_increment(self): begin = self.begin begin += self.step * self.increasing if begin > 0 and begin < 100: self.gauge.value = begin else: self.increasing *= -1 self.begin = begin if __name__ == '__main__': GaugeApp().run()
Q: Service Fabric - How to enable BackupRestoreService on my local dev cluster I would like to get the backup and restore related functionality working inside the service fabric explorer for my local dev cluster. Any action I take related to backup/restore in the cluster manager ui throws a service not found exception currently, I believe due to the backup and restore service not running on the cluster. I can't find any documentation pertaining to configuring the local dev cluster. The standalone cluster steps don't seem to apply. I have attempted to use sfctl to get the cluster configuration with sfctl sa-cluster config but the operations times out against my local dev cluster. I've tried the analogous Get-ServiceFabricClusterConfiguration from powershell module and get a timeout there as well. For the time being I have built a code based backup and restore, but I really like the service and would like to see what I can do with it locally. A: I tested this with cluster version 7.0.470.9590 Verify BackupAndRestore service is available in your installation. C:\Program Files\Microsoft Service Fabric\bin\Fabric\Fabric.Code\__FabricSystem_App{random-number}\BRS.Code.Current folder should exist with the correct binaries. Change your local cluster config. Your clusterconfig is located under: C:\Program Files\Microsoft SDKs\Service Fabric\ClusterSetup So if your dev cluster is single node unsecure, you can change: C:\Program Files\Microsoft SDKs\Service Fabric\ClusterSetup\NonSecure\OneNode\ClusterManifestTemplate.json In the "addOnFeatures" tag you can add "BackupRestoreService" example: "addOnFeatures": [ "DnsService", "EventStoreService", "BackupRestoreService" ] Under "fabricSettings" you then add the configuration for the backup and restore service: { "name": "BackupRestoreService", "parameters": [ { "name": "SecretEncryptionCertThumbprint", "value": "......YOURTHUMBPRINT....." } ] } After these steps you can reset your dev cluster from the system tray. (Right click the service fabric icon => Reset Local Cluster) When your cluster is restarted you can verify if the service is running by opening the cluster dashboard and open the system services. You can use this approach to configure other system services as well. Note: updating your SDK may result in losing the changes made to your cluster config.
Q: Making same jquery work for drop downs in dynamically rendered partial views In my main view, I have an add button. Clicking the add button brings in a partial view. Main.cshtml <div id="NameCats"> @Html.Partial("_Temp") </div> <div class="row"> <button id="addInfo" class="btn"><img src="~/Content/Images/Add.PNG" /></button> Add Info </div> In my Temp partial view, I have multiple bootstrap input drop down groups. _Temp.cshtml <div id="info"> <div class="well"> <div class="row"> <div class="col-md-6"> <div class="input-group"> <input id="name" type="text" class="form-control dropdown-text" /> <div class="input-group-btn"> <button type="button" class="btn btn-default dropdown-toggle" data-toggle="dropdown"> <span class="caret"></span> </button> <ul id="nameList" class="dropdown-menu pull-right"> <li> <a tabindex="-1" href="#"> Test1 </a> </li> <li> <a tabindex="-1" href="#"> Test2 </a> </li> </ul> </div> </div> </div> <div class="col-md-6"> <div class="input-group"> <input id="cat" type="text" class="form-control dropdown-text" /> <div class="input-group-btn"> <button type="button" class="btn btn-default dropdown-toggle" data-toggle="dropdown"> <span class="caret"></span> </button> <ul id="catList" class="dropdown-menu pull-right"></ul> </div> </div> </div> </div> </div> </div> Selecting an item from first drop down should fill in the input for that drop down with the selected value. Here is my javascript to make it work. The problem is when I select item from the drop down that is rendered initially, the click (li a) function works but when I click Add button and then try to repeat the selection for the drop down in this new info section, the click (li a) function is not hit. What am I doing wrong? $(function () { $('.dropdown-menu').each(function () { $(this).on('click', 'li a', function () { $(this).parents(".input-group").find('.dropdown-text').val($(this).text()); }); }); }); $(function () { $('#addInfo').click(function (e) { e.preventDefault(); $('#NameCats').append($("#info").html()); }); }); A: The issue is your that your newly added elements do not have a click handler associated with them, since the function to add the handler is only run on page load. You have the concepts of event bubbling in place, but on the wrong parent elements and that essentially only keeps an eye on new li a elements and not entire dropdown-menu being added. Change your first function to the following, which will keep a "watch" on any dropdown-menu added in the #NameCats element. $(function () { $('#NameCats').on('click', '.dropdown-menu li a', function () { $(this).parents(".input-group").find('.dropdown-text').val($(this).text()); }); }); Here is a jsFiddle working example.
Q: Python Implementation of Bartlett Periodogram I am trying to implement Periodogram in Python based on the description from Bartlett's method, and compared the result with those from Scipy, by setting overlap=0, use window='boxcar' (rectangle window). However, my result is off by some scale factor. Can someone points out what was wrong with my code? Thanks import numpy as np import matplotlib.pyplot as plt from scipy import signal def my_bartlett_periodogram(x, fs, nperseg, nfft): nsegments = len(x) // nperseg psd = np.zeros(nfft) for segment in x.reshape(nsegments, nperseg): psd += np.abs(np.fft.fft(segment))**2 / nfft psd[0] = 0 # important!! psd /= nsegments psd = psd[0 : nfft//2] freq = np.linspace(0, fs/2, nfft//2) return freq, psd def plot_output(t, x, f1, psd1, f2, psd2): fig, axs = plt.subplots(3,1, figsize=(12,15)) axs[0].plot(t[:300], x[:300]) axs[1].plot(freq1, psd1) axs[2].plot(freq2, psd2) axs[0].set_title('Input (len=8192, fs=512)') axs[1].set_title('Bartlett Periodogram (nfft=512, zero-overlap, no-window)') axs[2].set_title('Scipy Periodogram (nfft=512, zero-overlap, no-window)') axs[0].set_xticks([]) axs[2].set_xlabel('Freq (Hz)') plt.show() # Run fs = nfft = nperseg = 512 t = np.arange(8192) / fs x = np.sin(2*np.pi*50*t) + np.sin(2*np.pi*100*t) + np.sin(2*np.pi*150*t) freq1, psd1 = my_bartlett_periodogram(x, fs, nperseg, nfft) freq2, psd2 = signal.welch(x, fs, nperseg=nperseg, nfft=nfft, window='boxcar', noverlap=0) plot_output(t, x, freq1, psd1, freq2, psd2) A: TL;DR: Nothing wrong with the code. But welch returns the power spectral density, which is the power spectrum times fs and it compensates for cutting away half the spectrum by multiplying with 2. To compensate, psd2 * fs / 2 should be very similar to psd. According to Wikipedia the calculation of psd seems correct: The original N point data segment is split up into K (non-overlapping) data segments, each of length M For each segment, compute the periodogram by computing the discrete Fourier transform (DFT version which does not divide by M), then computing the squared magnitude of the result and dividing this by M. Average the result of the periodograms above for the K data segments. So whom shall we trust more, Wikipedia or scipy? I would tend towards the latter, but we can find out for ourselves. According to Parseval's theorem the integral over the squared signal should be the same as the integral over the sqared FFT magnitude. Since the Periodogram is obtained from the squared FFT the theorem should hold approximately. print(np.mean(y**2)) # 1.499727698431174 print(np.mean(psd)) # (1.4999999999999991+0j) print(np.mean(psd2)) # 0.0058365758754863788 That's close enough for psd, so let's assume it's correct. But I refuse to believe that scipy should be so blatantly wrong! Let's take a closer look at the documentation and see what they have to say about the scaling argument (emphasis mine): Selects between computing the power spectral density (‘density’) where Pxx has units of V**2/Hz and computing the power spectrum (‘spectrum’) where Pxx has units of V**2, if x is measured in V and fs is measured in Hz. Defaults to ‘density’ Uh-huh! welch's result is the power spectral density, which means it has units of Power per Hz. However, we compared it against the signal power. If we multiply psd2 with the sampling rate to get rid of the 1/Hz units it's the same as psd. Well, except for a factor 2. This factor is meant to compensate for cutting away half the spectrum. 
If we set return_onesided=False to get the full spectrum that factor is gone.
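The relationship in the TL;DR can be checked directly on the arrays the question already computes (two small details: the Parseval snippet above writes y for the signal the question calls x, and welch's one-sided output carries one extra bin for the Nyquist frequency, hence the [:-1]):

# density -> power spectrum: multiply by fs, undo the one-sided factor of 2
print(np.allclose(psd1, psd2[:-1] * fs / 2))   # True

The quoted documentation also mentions scaling='spectrum', which makes welch return a power spectrum instead of a density, so the fs factor disappears at the source; the remaining difference from the question's |FFT|**2 / nfft convention is only the window normalisation and the one-sided factor of 2.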
Q: Location permission dialog is shown and immediately disappears The iOS dialog prompts and disappears after half a second: let locationManager = CLLocationManager() switch CLLocationManager.authorizationStatus() { case .authorizedWhenInUse: print("In Use \(locationManager.location?.description)") case .denied, .restricted: print("denied") case .notDetermined: locationManager.requestWhenInUseAuthorization() case .authorizedAlways: print("always \(locationManager.location)") } I don't know if this is relevant, but I'm using SWReavealViewController. Xcode9, compiled for iOS 8.0, both simulator and real device A: Your locationManager variable won't live beyond the scope of its definition (the function where that snippet of code lives), so it is deallocated before the user can respond to the dialog. If you move let locationManager = CLLocationManager() up to a class variable, it should stick around.
Q: Is there a substitute for quote marks " and ' in Javascript document.write function I was wondering if there is a way to write paragraphs containing both the double quotes and single quotes that can't pass on [ document.write(""); document.write('');] whichever I use. Is there some features that I'm not aware of? Please enlighten me Thanks A: You can use backticks (template literals): `` document.write`<p class="my-class">'Hi', she said.</p>`;
Q: Exporting Python List into csv So I am trying to create a truth table that I can export out into either a csv or excel format. I am a newbie to Python, so please bear with me if my code is horrible. I started with this as my code for the truth table after some research: import itertools table = list(itertools.product([False, True], repeat=6)) print(table) Then I got to this code: import csv import sys with open('C:\\blahblah5.csv','w') as fout: writer = csv.writer(fout, delimiter = ',') writer.writerows(table) This gets me almost to where I need to be with the truth table in a csv format. However, when I open up the file in excel, there are blank rows inserted between my records. I tried a tip I found online where I need to change the input type from w to wb, but I get this error when I do: Traceback (most recent call last): File "<pyshell#238>", line 3, in <module> writer3.writerows(table) TypeError: 'str' does not support the buffer interface I am not sure where to go from here because I feel like I am so close to getting this into the format I want. A: I suspect you're using Python 3. The way you open files for writing csvs changed a little: if you write with open("C:\\blahblah5.csv", "w", newline="") as fout: it should work, producing a file which looks like False,False,False,False,False,False False,False,False,False,False,True False,False,False,False,True,False [etc.]
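Putting the two snippets together into one Python 3 script — the header row is only an illustration, not something the question asked for:

import csv
import itertools

table = list(itertools.product([False, True], repeat=6))

with open('C:\\blahblah5.csv', 'w', newline='') as fout:
    writer = csv.writer(fout)                               # comma is already the default delimiter
    writer.writerow(['b1', 'b2', 'b3', 'b4', 'b5', 'b6'])   # optional header row
    writer.writerows(table)

One more thing worth knowing: csv.writer stringifies each value, so the file literally contains the words False/True (as in the sample output above). If 0/1 is wanted instead, convert the rows first, e.g. writer.writerows([[int(v) for v in row] for row in table]).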
Q: How to safely wipe a USB flash drive I want to wipe all residual data left behind even after a format on a regular 64GB fash drive, the ones someone can scan and recover data. What's the most efficient but quickest way to do this? Any test software I can scan for those residual files before and after the wipe? A: To quote the ISM (Australia's military standards for cyber security). Security Control: 0359; In flash memory media, a technique known as wear levelling ensures that writes are distributed evenly across each memory block. This feature necessitates flash memory being overwritten with a random pattern twice as this helps ensure that all memory blocks are overwritten. This means that if you select a secure delete function such as DoD 5220.22M, you will need to run it twice (note that this method only writes randomly through one pass). If you do this it will mean that your data should be safe from the average attacker, however if your USB contains the Colonel's Secret Recipe, or evidence of you committing a serious crime, refer again to the ISM: Security Control: 0360; Following sanitisation, highly classified non-volatile flash memory media retains its classification In other words Destroy it with fire and spread its ashes to the 4 corners of the globe. If you don't have anything as valuable as I stated above don't listen to the paranoid people on here about how you can never sanitse it, as frankly no one will have the resources to retrieve the data, unless it is a Nation State or a Global Conglomerate. You may also be interested in NIST SP800-88 Which is American Guidelines for Sanitisation, although I like the ISM as it is much more succinct. A: Next time you're about to put sensitive data on a flash drive, consider encrypting it first! Strongly encrypted data is useless without the key, and if you securely erase the drive first, all that will be left is an occasional sector of such encrypted data surviving due to wear leveling. If you're still unsatisfied by this technique because there's a small probability that (a) a meaningful chunk of data survives and (b) the adversary will be able to read it out and (c) decrypt it, consider that physical destruction may not destroy the data definitely: there will be a chance that one night you will sleepwalk to a potential adversary and sleeptalk the data to them. Edit addressing some of the comments: consumer-grade flash storage does have over-provisioning, e.g. SanDisk microSD Product Manual tells it's an intrinsic function in their products. And this over-provisioning is much more significant that the difference between 1GB and 1GiB, in fact, the ability to use low-grade flash wafers is why the flash storage is so cheap. On such wafers, 5% to 10% of the cells are stillborn, and a few others will only last a few write cycles, while a decent flash card or thumb drive is typically specced to survive 100-500 complete overwrites. Furthermore, the chance of a random sector to survive N full overwrites (assuming 15% over-provisioning) is not 0.15^N. Wear leveling is nowhere near uniform write distribution, in fact, if a file stays on the flash drive for a long time while other content is written/removed/overwritten, sectors allocated to that file will have significantly less writes done to them, so they may be overwritten every single time during subsequent full-disk overwrites. Additionally, wear leveling is not based exclusively on write count, but also on the number of correctable errors in a sector. 
If a sector containing sensitive data exceeds such correctable error threshold, it will never be written to again, so the data in it will be there no matter how many times you overwrite the disk. A: A quick check at amazon.com shows 64GB USB drives in non-designer cases go for about $20. Less if you buy in bulk. Since you want "quick and efficient" lets factor in the time needed to overwrite the drive at least twice, and maybe running a drive scanner to verify the erasure. And then remembering to do it each time. A quick check of homedepot.com shows a propane torch goes for $20, and that's the fancy model with the built-in igniter. Replacement tanks of propane are $4, and will melt quite a few usb drives. So, take the drive and open it with either pliers or a hammer. A door jamb also works. Pull out the circuit board, go out to the parking lot and incinerate it. meowcat mentioned this along with the military classification bit - he wasn't making a funny. From a security perspective, nothing ever gets recovered from a melted blob of plastic (semiconductors fail completely at far lower temperatures than a propane torch can provide). From an economic perspective, buying a new one is cheaper than your time to wipe and verify the old one. Same with SSD in retired laptops and spinning drives - physical destruction is quicker, cheaper and more reliable than software solutions. 30 years ago drives were much more expensive, and a lot smaller. A 7 times overwrite to recycle the hardware made much more sense back then - not any more.
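For completeness, the two-pass random overwrite quoted from the ISM in the first answer is simple enough to script; this is only a Linux sketch, the device path is a placeholder, it must be run as root, and (as the answers above stress) it still cannot reach blocks the controller has already retired:

import os

DEVICE = "/dev/sdX"            # placeholder: the flash drive itself, not a system disk
CHUNK = 4 * 1024 * 1024

def random_overwrite(path, passes=2):
    for _ in range(passes):
        with open(path, "r+b") as dev:
            size = dev.seek(0, os.SEEK_END)   # block devices report their size this way on Linux
            dev.seek(0)
            written = 0
            while written < size:
                block = os.urandom(min(CHUNK, size - written))
                dev.write(block)
                written += len(block)
            dev.flush()
            os.fsync(dev.fileno())

random_overwrite(DEVICE)

Existing tools (shred, or the DoD-style wipe functions mentioned above) do the same job with more guard rails, so treat this purely as an illustration of what "overwrite with a random pattern twice" amounts to.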
Q: What is this な doing here? あれ?参加者は20人のはずなのに、21人いる。 I'd translate it as follows: "Huh? I'm sure it should be 20 participants, but 21 are here." I interpreted "should" because のに to my knowledge indicates a surprise here. Using plain indicative wouldn't reflect that in my opinion. However, the real "problem" is the な in はずなのに. First, I don't know into what category はず falls (a formative morpheme like a suffix, or something else like a noun, verb, etc.). Therefore it is hard for me to try to use other grammatical rules to deduce the meaning of な, like な after nouns in -んです/-のです constructions. And even if I knew that, I still wouldn't know what function it bears here.
A: 筈{はず} is a noun that roughly means that something is expected to hold true, and the な is actually the 連体形/attributive form of the particle だ (断定). It's not making はず into an na-adjective. Basically, there is an expectation "that there should be twenty participants", and it's being followed by the nominalizing particle の, turning it into a noun-phrase. The に that follows then turns that into "despite (noun phrase)..." Your translation is correct :)
Q: How to Properly Resolve M:M Relationships in an Entity Relationship Model To demonstrate my question, I will create a simple relationship between two entities: Person and Cooperation. A Person must work for one or more Cooperations while a cooperation may not have any associated persons working in it. Diagram of this relationship: https://imgur.com/a/wfhOS When resolving this M:M relationship by creating an intersecting entity, how would we properly implement this optionality? Would the following be correct: https://imgur.com/a/TSmWQ If this solution is not correct, please specify how to properly resolve this M:M relationship. Specifically, I am wondering whether the way I have set the optionality of the relationships is correct when adding an associative table.
A: It looks like you are using Oracle Data Modeler. When you click on "Engineer to Relational Model" and then "Engineer", Oracle will do the job for you.
https://i.stack.imgur.com/sFrLb.jpg
Q: Collision detection in STL's hash_map I have to create a lookup table for C function names (as keys) to function pointers (as values). I am thinking of using STL's hash_map container, as the access time of a hash table is O(1). Is there any good hash function for this? Currently I am using (31*H + c) as my hash function. Also, does STL's hash_map takes care of collisions, or I have to take care of them in my code? Please give some examples if possible. Sample Code I am currently working upon #include <iostream> #include <ext/hash_map>; using namespace std; using namespace __gnu_cxx; namespace __gnu_cxx { #ifndef __HASH_STRING__ #define __HASH_STRING__ template <> struct hash<string> { size_t operator() (const std::string& s) const { size_t h = 0; std::string::const_iterator p, p_end; for(p = s.begin(), p_end = s.end(); p != p_end; ++p) { h = 31 * h + (*p); } return h; } }; #endif }; int main() { hash_map<string, int> months; months["january"] = 1; months["february"] = 2; months["march"] = 3; months["april"] = 4; months["may"] = 5; months["june"] = 6; months["july"] = 7; months["august"] = 8; months["september"] = 9; months["october"] = 10; months["november"] = 11; months["december"] = 12; return 0; } A: Assuming you've got the full STL, it actually includes a hash function, hash<T>, which in its included form is suitable for a few different key types including char* (C strings). I don't know details of its performance, but the STL is generally engineered to have acceptable performance for most applications. As for collisions, that's for hash_map to deal with, you needn't worry about it.
Q: Strange javascript code I have found this code snippet:
; 100% function($) {
    // WTF?
    var _true_ = true;
    // WTF?
    var _false_ = false;
    // WTF?
    var go = function(location, date) {
        location || (location = {});
        var result = _false_;
        if (date && date.day) {
            result = geoService.go(location, date);
        }
        return !!result;
    }
    var process = function(func) {
        var args = [].prototype.slice.call(arguments, 1);
        return function() {
            return func.apply(this, args);
        }
    }
    // ...
}(jQuery, undefined);
In here: http://www.dofactory.com/products/javascript-jquery-design-pattern-framework (sorry, no id-s have been found on the page) I don't understand what these parts are doing:
the "100%" in the second line
the var _true_ = true; and var _false_ = false; assignments in the 3-4 lines
I'm curious, what is the purpose of these.
A: the "100%" in the second line
It's the number 100 followed by a modulus operator. It's not used for anything (since the result isn't captured) other than to force the right hand side to be treated as a function expression instead of a function declaration. It's a very uncommon and unintuitive approach that I've never seen before. It is more usual to wrap the function expression in parens or precede it with a not operator.
the var _true_ = true; and var _false_ = false; assignments in the 3-4 lines
The author appears to be trying to draw attention to the uses of true and false by copying them to variables that include non-alphanumeric characters in the name instead of using literals throughout. Again, this is very odd and not something I've ever seen before.
A: It looks like a collection of wrongly used "best practices" which did not lead to exceptions but are definitely odd and obscure.
Look at the second and last lines. There is a well-known best practice that the snippet uses exactly backwards; it is normally written as:
(function ($, undefined){
    // do the stuff
})(jQuery);
undefined here will be the real undefined because when the function is called there is no second argument. But what on Earth can be the reason to pass the "undefined" argument to the function and not use it? It looks like a prank.
The same thing is on line 5: it looks (and actually acts) like "default argument" assignment, but done in a strange manner (traditionally and more obviously it is used as location = location || {};). I believe that the only reasons to write it the way it is done can be obfuscation, a joke or a misunderstanding.
The same thing is with 100%. You can use any operators to indicate a function expression. The most common way is to use parentheses. But often you will also see:
!function(){ }();
or:
+function(){ }();
but you can also write
42 * function(){ }();
It all acts the same way; parentheses are just the most obvious and common.
Q: Broadleaf Commerce Tomcat installation I have successfully installed the Broadleaf demo site with the help of Eclipse. Now I want to install it on a standalone Tomcat, so can anybody tell me the steps? Any link will be helpful. Thanks
A: Build the war file of the project using Eclipse, then deploy that war file in the Tomcat webapps folder. Now start the Tomcat service.
Q: DDD, Event store, UI I have a project which is designed or at least should be according to the well known DDD principles. Back - DDD + CQRS + Event Store UI - ngrx/store I have a lot of questions to ask about it but for now I will stick to these two: How should the UI store be updated after a single Command/Action is executed ? a) subscribe to response.ok b) listen to domain events c) trigger a generic event holding the created/updated/removed object ? Is it a good idea to transfer the whole aggregate root dto with all its entities in each command / event or it is better to have more granular commands / events for ex.: with only a single property ? A: How should the UI store be updated after a single Command/Action is executed ? The command methods from my Aggregates return void (respecting CQS); thus, the REST endpoints that receive the command requests respond only with something like OK, command is accepted. Then, it depends on how the command is processed inside the backend server: if the command is processed synchronously then a simple OK, command is accepted is sufficient as the UI will refresh itself and the new data will be there; if the command is processed asynchronously then things get more complicated and some kind of command ID should be returned, so a response like OK, command is accepted and it has the ID 1234-abcd-5678-efgh; please check later at this URI for command completion status At the same time, you could listen to the domain events. I do this using Server sent events that are send from the backend to the UI; this is useful if the UI is web based as there could be more than one browser windows open and the data will be updated in the background for pages; that's nice, client is pleased. About including some data from the read side in the command response: this is something that depends on your specific case; I avoid it because it implies reading when writing and this means I can't separate the write from the read on a higher level; I like to be able to scale independently the write from the read part. So, a response.ok is the cleanest solution. Also, it implies that the command/write endpoint makes some query assumptions about the caller; why should a command handler/command endpoint assume what data the caller needs? But there could be exceptions, for example if you want to reduce the number of request or if you use an API gateway that do also a READ after the command is send to the backend server. Is it a good idea to transfer the whole aggregate root dto with all its entities in each command / event or it is better to have more granular commands / events for ex.: with only a single property ? I never send the whole Aggregate when using CQRS; you have the read-models so each Aggregate has a different representation on each read-model. So, you should create a read-model for each UI component, in this way you keep&send only the data that is displayed on the UI and not some god-like object that contains anything that anybody would need to display anywhere. A: Commands basically fall into one of two categories : creation commands and the rest. Creation commands With creation commands, you often want to get back a handle to the thing you just created, otherwise you're left in the dark with no place to go to further manipulate it. I believe that creation commands in CQS and CQRS can return an identifier or location of some sort : see my answer here. The identifier will probably be known by the command handler which can return it in its response. 
This maps well to 201 Created + Location header in REST. You can also have the client generate the ID. In that case, see below.
All other commands: The client obviously has the address of the object. It can simply requery its location after it got an OK from the HTTP part. Optionally, you could poll the location until something indicates that the command was successful. It could be a resource version id, a status as Constantin pointed out, an Atom feed etc. Also note that it might be simpler for the Command Handler to return the success status of the operation, it's debatable whether that really violates CQS or not (again, see answer above).
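To make the asynchronous flavour of this concrete, here is a minimal sketch of a command endpoint that only acknowledges acceptance and hands back a completion-status URI. The question does not name a backend stack, so Python/Flask, the route paths and the field names here are illustrative assumptions rather than anything from the original answers:

from flask import Flask, jsonify, request
import uuid

app = Flask(__name__)

# In-memory command status store; a real system would persist this and let
# the asynchronous handler update it once processing finishes.
command_status = {}

@app.route("/commands/place-order", methods=["POST"])
def place_order():
    payload = request.get_json(force=True)    # the command body from the UI
    command_id = str(uuid.uuid4())            # server-generated; could also come from the client
    command_status[command_id] = "accepted"   # the write side records acceptance only
    # ... enqueue (command_id, payload) for the write model / aggregate here ...
    return (
        jsonify({"commandId": command_id, "status": "accepted"}),
        202,                                  # "OK, command is accepted"
        {"Location": "/commands/" + command_id},
    )

@app.route("/commands/<command_id>", methods=["GET"])
def command_completion(command_id):
    # The UI (or an API gateway) can poll this, or you can push the outcome
    # over server-sent events instead, as described above.
    return jsonify({"commandId": command_id,
                    "status": command_status.get(command_id, "unknown")})

The point is only that the write side returns no read-model data: just an acknowledgement plus somewhere to look later, which keeps the write and read parts independently scalable.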
{ "pile_set_name": "StackExchange" }
Q: Dynamically updating field that references other table? Let's say I have tables A and B:
A
+----+------+------+--------+
| ID | NAME | B_ID | B_NAME |
+----+------+------+--------+
| 1  | Joe  | 1    | Sue    |
+----+------+------+--------+
B
+----+------+
| ID | NAME |
+----+------+
| 1  | Sue  |
+----+------+
where A.B_ID references B.ID as a foreign key. Is there any way to declare that A.B_NAME := B.NAME such that A.B_NAME is updated when I update B.NAME, or can this only be achieved by a trigger that fires on updates on A? A: The truth is: your data model is wrong, it isn't normalized. Remove column B_NAME from table A, it shouldn't exist there. Maintain the name in table B. Whenever you need to reference it, do so by joining A.B_ID = B.ID.
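To see the recommended, normalized layout in action, here is a small self-contained sketch using Python's sqlite3 (the table and column names follow the question; the in-memory database is only for the demo):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE B (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE A (id INTEGER PRIMARY KEY,
                    name TEXT,
                    b_id INTEGER REFERENCES B(id));   -- note: no B_NAME column
    INSERT INTO B VALUES (1, 'Sue');
    INSERT INTO A VALUES (1, 'Joe', 1);
""")

# "B_NAME" is obtained by joining, so it can never go stale.
query = "SELECT A.name, B.name FROM A JOIN B ON A.b_id = B.id"
print(conn.execute(query).fetchall())    # [('Joe', 'Sue')]

# Updating B.name is automatically reflected the next time you join;
# no trigger is needed.
conn.execute("UPDATE B SET name = 'Susan' WHERE id = 1")
print(conn.execute(query).fetchall())    # [('Joe', 'Susan')]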
{ "pile_set_name": "StackExchange" }
Q: Wordpress.com blog flexibility I'm pretty new to web design (but a very experienced programmer) and I'm creating a pretty simple wordpress blog for a friend. Take a look if you want: http://beachief.com/. Right now he has a domain mapped to a wordpress.com account, so I have no access to plugins or custom themes. This is what he wants me to add:
- A like/dislike function for all posts
- One section with 2 blog columns side by side (not supported by the theme - or any others that I could find)
- The ability to let users log in and post their own content
My question is: is this stuff possible to do with a wordpress.com account? Or will he need to switch to an independently hosted wordpress.org site? Let me know what you think, thanks A: He will need to switch to Wordpress.org and be independently hosted. Wordpress.com does not let you use plugins or custom themes, unless you pay for a VIP plan.
{ "pile_set_name": "StackExchange" }
Q: Two argument V() with ground As part of a project, I am writing a program which generates ngspice scripts. For my purposes, it would be very convenient if I could always use the two argument v() instead of using v(node) when I want to compare against ground and v(node1,node2) when I want to use two arbitrary nodes. When I execute "listing", it shows a ".global gnd", but v(node,gnd) returns Error(parse.c--checkvalid): gnd: no such vector. Is there a way to get a vector of zeros, or must I hardcode both cases independently? A: I cannot find why you can't refer a voltage with respect to gnd, like v(node1,gnd). However, as a workaround, you could make a node called psuedoground (or whatever you want to name it) and connect it to a voltage source of 0V by adding a line like this to the netlist:
V999 0 psuedoground 0
This way, a voltage source V999 with value 0V is connected between the ground node named '0' and the node named psuedoground, making psuedoground effectively equal to gnd. Now you can always use 2 arguments with v(): v(node1, node2) or v(node1, psuedoground). Note that 0 and gnd are the same. From the manual: The ground node must be named ‘0’ (zero). For compatibility reason gnd is accepted as ground node, and will internally be treated as a global node and be converted to ‘0’. If you plot v(node1,psuedoground) you'll see that very name v(node1,psuedoground) in the plot as well. In ngspice, you could also name the node 00 instead of psuedoground, as per the manual: Note the difference in ngspice where the nodes are treated as character strings and not evaluated as numbers, thus ‘0’ and 00 are distinct nodes in ngspice but not in SPICE2.
{ "pile_set_name": "StackExchange" }
Q: Re-assigning methods in Go Suppose I have the following: package main import "fmt" type I1 interface { m1() } func f1() { fmt.Println("dosomething") } func main() { var obj I1 obj.m1 = f1 obj.m1() } This generates the error ./empty.go:16: cannot assign to obj.m1 Why can't I assign to 'method fields'? In C, I can just pass around function pointers. What is the equivalent in Go? A: You can't assign a function to an interface, you can do it for a struct, for example: type S1 struct { m1 func() } func f1() { fmt.Println("dosomething") } func main() { var obj S1 obj.m1 = f1 obj.m1() } // another example type I1 interface { m1() } type S1 struct {} func (S1) m1() { fmt.Println("dosomething") } type S2 struct { S1 } func (s S2) m1() { fmt.Println("dosomething-2") //s.S1.m1() //uncomment to call the original m1. } func doI1(i I1) { i.m1() } func main() { doI1(S1{}) doI1(S2{S1{}}) } play
{ "pile_set_name": "StackExchange" }
Q: invoke js from controller - simple search form I have a search form in a modal as below: <%= form_tag "/search", :remote => true, :method => :get do %> <input type="text" name="search_name"> <button type="submit">Search</button> <% end %> <div id="channels"></div> the above form invoke below method in my controller: #my_controller.rb: def search parsed_json = JSON.parse(@json_string) # fetch some json data render do |index| ndex.html {} index.js {} end end so I expect the above search method to render below index.js.erb from app/view/my_controller to update my div : $("#channels").html("<%= render :partial => "channels" %>") after clicking the search button the view can not be updated because index.js.erb can not be invoked, any idea? P.S: I am using rails 3.2 A: You are missing the 'escape_javascript' tag in your index.js.erb Replace: $("#channels").html("<%= render :partial => "channels" %>") With: $("#channels").html("<%= escape_javascript render :partial => "channels" %>") Read more on this using: Why escape_javascript before rendering a partial? Also, I may be wrong but this seems a bit odd: render do |index| index.html {} index.js {} end Shouldn't it be: respond_to do |format| format.html {} format.js { render 'index.js.erb' } # ^ If you dont write this, it will look for search.js.erb in your views dir end
{ "pile_set_name": "StackExchange" }
Q: Does $\frac{1}{n}\sum_{i=1}^n|x_i|\to L<\infty$ imply $\frac{1}{n}\max_{1\leq i\leq n}|x_i|=0$? To simplify notation, let us assume that $\{x_n\}_{n\geq 1}$ is a sequence of nonnegative real numbers. Does $$ \frac{1}{n}\sum_{i=1}^nx_i\to L $$ for some finite $L$ imply $$ \frac{1}{n}\max_{1\leq i\leq n}x_i\to 0? $$ I have been thinking about this question for the past few hours because of here but I can't come up with anything. If you can help me, please feel free to head there to resolve that post as well. A: Yes. Assume for contradiction that for some $\varepsilon > 0$ there are arbitrarily large $n$ such that $$\frac{1}{n} \max_{1 \leqslant i \leqslant n} x_i \geqslant \varepsilon,$$ i.e. $$\max_{1 \leqslant i \leqslant n} x_i \geqslant n \varepsilon.$$ From there it follows (it takes some proof though) that there are arbitrarily large $n$ such that $x_n \geqslant n \varepsilon$. Therefore $$ \frac{x_1 + \ldots + x_n}{n} \geqslant \frac{x_1 + \ldots + x_{n-1}}{n} + \varepsilon = \frac{n-1}{n} \cdot \frac{x_1 + \ldots + x_{n-1}}{n-1} + \varepsilon$$ so $L \geqslant 1 \cdot L + \varepsilon$ and we have a contradiction. A: Based on the proof of Adayah, I came up with the following direct proof. I post it as a community wiki post, because the original idea is due to Adayah: Let $\varepsilon > 0$. Take $n_0 \in \Bbb{N}$ such that $$ \left| \frac{1}{n}\sum_{i=1}^n x_i - L \right| < \varepsilon \text{ and } \left| L - \frac{n}{n+1} L \right| < \varepsilon. $$ This yields \begin{align*} \left|\frac{x_{n+1}}{n+1}\right| & =\left|\frac{1}{n+1}\sum_{i=1}^{n+1}x_{i}-\frac{1}{n+1}\sum_{i=1}^{n}x_{i}\right|\\ & \leq\left|\frac{1}{n+1}\sum_{i=1}^{n+1}x_{i}-L\right|+\left|L-\frac{n}{n+1}L\right|+\frac{n}{n+1}\left|L-\frac{1}{n}\sum_{i=1}^{n}x_{i}\right|\\ & <3\varepsilon \end{align*} for all $n \geq n_0$ and hence $$ \frac{1}{n} \max_{1 \leq i \leq n} x_i \leq \frac{1}{n} \max_{1 \leq i \leq n_0} x_i + \max_{n_0 + 1 \leq i \leq n} \frac{x_i}{n} \leq \frac{1}{n} \max_{1 \leq i \leq n_0} x_i + \max_{n_0 + 1\leq i \leq n} \frac{x_{i}}{i} < \varepsilon + \frac{1}{n} \max_{1 \leq i \leq n_0} x_i \to \varepsilon, $$ which establishes $\limsup_n \frac{1}{n} \max_{1 \leq i \leq n} x_i = 0$, because $\varepsilon > 0$ was arbitrary.
{ "pile_set_name": "StackExchange" }
Q: How to set images for multiple screen sizes - android I have a gallery as below and i have used hdpi - 752 x 752 mdpi -502 x 502 xhpi - 1002 x 1002 xxhdpi - 1502 x 1502 As in http://android-ui-utils.googlecode.com/hg/asset-studio/dist/nine-patches.html after i uploaded a 500px pic it gave me the pics for each dpi but when i use a small screen i get it one after the other and a larger screen i get in in a corner with the blocks. as i have used relative layout everything is connected. so how can i adjust all images for all multiple screens sizes? A: One way your layout will show up the same on different devices / screens is by making more layout folders. This way when a large screen is detected it will use the layout large folder. If it is in landscape, the device / app will use the layout land folder etc.. Layout port = layout portrait etc... You can get more info here. http://developer.android.com/guide/practices/screens_support.html Jump to the part that says Table 1. Configuration qualifiers that allow you to provide special resources for different screen configurations. Just copy your layout xml to your new layout folders... Then edit the xml to appear how you want for for each layout. Here is an example of changes made to the manifest. View here: http://developer.android.com/guide/topics/manifest/supports-screens-element.html <supports-screens android:resizeable=["true"| "false"] android:smallScreens=["true" | "false"] android:normalScreens=["true" | "false"] android:largeScreens=["true" | "false"] android:xlargeScreens=["true" | "false"] android:anyDensity=["true" | "false"] android:requiresSmallestWidthDp="integer" android:compatibleWidthLimitDp="integer" android:largestWidthLimitDp="integer"/> <manifest> If you are having trouble with 9patch files or other method, This is one way to do it.
{ "pile_set_name": "StackExchange" }
Q: Free memory while reading binary file i have a binary file where i save my struct: struct vec { string author; string name; int pages; string thread; vec *next; }; write to file function: void logic::WriteInfoToFile() { if (first != NULL) { pFile = fopen(way.c_str(), "wb"); if (pFile != NULL) { fseek(pFile, 0, SEEK_SET); temp = first; while (temp != NULL) { WriteString(temp->author,pFile); WriteString(temp->name,pFile); fwrite(&temp->pages,sizeof(int), 1, pFile); WriteString(temp->thread,pFile); temp = temp->next; } } fclose(pFile); } } write srtig function: void logic::WriteString(string s, FILE *pFile) { if (pFile != NULL) { char *str = new char[s.length() + 1]; strcpy(str, s.c_str()); int size = strlen(str); fwrite(&size, sizeof(int), 1, pFile); fwrite(str, size, 1, pFile); delete [] str; } } read file: void logic::ReadInfoFromFile() { pFile = fopen(way.c_str(), "rb"); if (pFile != NULL) { fseek(pFile, 0, SEEK_END); if (ftell(pFile) != 0) { fseek(pFile, 0, SEEK_SET); int check; while (check != EOF) //while (!feof(pFile)) { temp = new vec; temp->author = ReadString(pFile); temp->name = ReadString(pFile); fread(&temp->pages, sizeof(int), 1, pFile); temp->thread = ReadString(pFile); temp->next = NULL; if (first == NULL) { first = temp; first->next = NULL; } else { temp->next = first; first = temp; } recordsCounter++; check = fgetc(pFile); fseek(pFile, -1, SEEK_CUR); } } } fclose(pFile); } read string: string logic::ReadString(FILE *pFile) { string s; if (pFile != NULL) { int size = 0; fread(&size, sizeof(int), 1, pFile); char *str = new char[size]; fread(str, size, 1, pFile); str[size] = '\0'; s = str; //delete [] str; //WHY?????????!!!!! return s; } else return s = "error"; } trouble is in read string function, where i free memory. " delete [] str " i get crash of program on this line. but if i dont exempt memorry works good. Help me please! A: You're off by one allocating size chars but overwriting size+1 (with the terminal '\0'). The memory manager doesn't like that. char *str = new char[size]; fread(str, size, 1, pFile); str[size] = '\0'
{ "pile_set_name": "StackExchange" }
Q: Is there a way to turn down my PS4 controller's speaker volume? So in some games (Electronic Super Joy, God Eater 2), I've noticed that the game makes use of the controller's built-in speaker to say things to the player. It's a cool feature and I like having it on, but it's just way too loud. Is there a setting in-game (for any games) or on the system itself to change the volume of the controller's speaker output? A: Try this:
1. Hold the Playstation Button
2. Select Adjust Devices
3. Lower the volume
(Source) A: You can set the controller speaker volume in the console's system settings: Settings > Devices > Controllers > Volume Control (Speaker for Controller) From there, you can adjust the volume, which will apply to all games. Edit: Timmy Jim's answer provides a much quicker and simpler way of doing this.
{ "pile_set_name": "StackExchange" }
Q: strtolower strange behaviour in different environments with multibyte characters There are 5 machines. Mine is a win10 64bit with php 5.6, the production server is the latest debian 64bit with php 5.6. Both machines run the same script with the same results. The strange thing is the difference between running the script from the web and from the command line. The code:
$string = chr(194) . chr(160);
var_dump($string);
var_dump(bin2hex($string));
var_dump(bin2hex(strtolower($string)));
var_dump(bin2hex(mb_strtolower($string)));
The output from web: string(2) " " string(4) "c2a0" string(4) "c2a0" string(4) "c2a0" Strangely, both machines do the same on the command line: string(2) " " string(4) "c2a0" string(4) "e2a0" <-- Listen this! string(4) "c2a0" For some reason, strtolower has changed the first byte of the UTF8 char. My colleague has a raspberry 32 bit with PHP7, another server with 64bit CentOs with PHP7, and there is one more machine, CentOs 64bit with PHP 5.3.3. But these machines dump c2a0 everywhere. Of course, we use the UTF8 charset everywhere for everything. What can cause this? EDIT: On production: setlocale(LC_ALL,0); Command line: LC_CTYPE=en_US;LC_NUMERIC=C;LC_TIME=C;LC_COLLATE=C;LC_MONETARY=C;LC_MESSAGES=C;LC_PAPER=C;LC_NAME=C;LC_ADDRESS=C;LC_TELEPHONE=C;LC_MEASUREMENT=C;LC_IDENTIFICATION=C From web: string(1) "C" On my localhost machine: From web: string(1) "C" Command line: LC_COLLATE=C;LC_CTYPE=Hungarian_Hungary.1250;LC_MONETARY=C;LC_NUMERIC=C;LC_TIME=C A: You should use the setlocale function to set a UTF-8 aware locale before using single-byte string functions on multibyte strings. strtolower() works byte by byte and follows the current locale: under a single-byte locale such as Hungarian_Hungary.1250, the byte 0xC2 is the letter Â, which lowercases to â (0xE2), and that is exactly the e2a0 you see on the command line. The web and CLI environments simply run with different locales, as your setlocale() dumps show, hence the different results. For UTF-8 data it is safer to stick to mb_strtolower(), which your own test already shows behaves correctly.
{ "pile_set_name": "StackExchange" }
Q: Access a key in Firebase I am stuck trying to retrieve and modify the file below. I want to change the 'rating' from 0 to 5, however, I have no idea how to access the keyfile. A: To change the rating value you can use updateChildren(): DatabaseReference mDatabase = FirebaseDatabase.getInstance().getReference("all_uploaded_image"); Map<String, Object> childUpdates = new HashMap<>(); childUpdates.put("rating", 5); mDatabase.child("-Luk55E0wNUE9bXk_pZ0").updateChildren(childUpdates); To simultaneously write to specific children of a node without overwriting other child nodes, use the updateChildren() method. More info here: https://firebase.google.com/docs/database/android/read-and-write#update_specific_fields
{ "pile_set_name": "StackExchange" }
Q: WAN Optimization Resources I'm looking for resources on writing software to do WAN optimization. I've googled this and searched SO, but there doesn't seem to be much around. The only things I've found are high-level articles in tech magazines, and info for network admins on how to make use of existing WAN optimization products. I'm looking for something on the techniques etc. used to write WAN optimization software. It seems to be a dark art, and the people who know how to do it guard their secrets closely. Any suggestions? A: You can start with Traffic Squeezer - an open-source WAN optimizer (http://www.trafficsqueezer.org/)
{ "pile_set_name": "StackExchange" }
Q: Prove Polynomial is Reducible in Field of Prime Characteristic Let F be a field of characteristic h, where h is prime. Prove that $x^h+1$ is reducible in F[x]. Is the following sufficient? The Freshman's Dream is applicable in fields of prime characteristic. Thus, $(x+1)^h=(x^h+1^h)=(x^h+1)$. This implies every polynomial of the form $(x^h+1)$ is reducible by $(x+1)$. A: Yes, you've shown that you can express $x^h - 1$ as a product of two nonconstant polynomials (namely, $x - 1$ and $(x - 1)^{h-1}$). If this is for an assignment, you might want to make sure you're allowed to assume the freshman's dream statement, as one might consider that the result trivializes the problem. More generally, if $h > 1$, $x^h - 1$ is always reducible in any characteristic; by the formula for geometric series, we have $$ \frac{x^h - 1}{x - 1} = 1 + x + \dots + x^{h-1}, $$ so that $$ x^h - 1 = (x - 1)(1 + x + \dots + x^{h-1}) $$ is a factorization into two nonconstant polynomials. After the edit: Yes, that argument shows that $(x + 1)(x + 1)^{h-1}$ is a decomposition of $x^h + 1$ into two nonconstant polynomials. Just make sure you can justify using the freshman's dream statement, and that you state it properly: $(x + y)^p = x^p + y^p$ if $x,y\in A$, where $A$ is a ring of characteristic $p$ (which $F[x]$ is, if $F$ is a field of characteristic $p$).
{ "pile_set_name": "StackExchange" }
Q: VueJS Router: Determine first page load to disable/change page transitions I want to change page transitions in my App.vue based on if its an inital page load (eg the user types test.com/something into his browser) or if the user uses a router link (the user is already on the page and clicks a router link to /something), but can't get it to work. Is there a lifecycle-hook that is fired only once after inital page load? A: Found it. Actually its quite easy: $route.from.name is null on first page load, so you can determine the first page load like this in your App.vue: export default { data() { return { firstLoad: undefined, } }, watch: { $route(to, from) { this.firstLoad = from.name == null ? true : false }, }, }
{ "pile_set_name": "StackExchange" }
Q: Responsive navbar with angular 2 and bootstrap 4 I am trying to build a web app with a nav bar on the top with some options. However, when the website is viewed on a mobile device, the navbar shrinks and an icon appears for users to press to show the options. Exactly the behaviour shown on this website: Responsive website I am using angular 2 with bootstrap 4. I have tried bootstrap 4 examples but they don't seem to work too well with angular 2 (Dropdown does not work). This is when I found that the angular team has actually been working on their own framework to integrate with bootstrap, called ng-bootstrap. However, there is nothing about a responsive navbar in there. Is there a quick and easy way to build such a navbar without doing it manually by checking the screen size and changing things around? A: You can combine Bootstrap with Angular to do this. I'm using Angular 4, but this solution should work with 2 as well. I'm also using Bootstrap 4 (beta) and I know this was a little different if you were using the alpha version. The markup:
<nav class="navbar navbar-expand-sm navbar-light bg-light">
  <a class="navbar-brand" href="#">Brand/Logo</a>
  <button class="navbar-toggler" (click)="collapse=!collapse" type="button" aria-expanded="false" aria-label="Toggle navigation">
    <span class="navbar-toggler-icon"></span>
  </button>
  <div class="navbar-collapse" (click)="collapse=true" [hidden]="collapse">
    <ul class="navbar-nav mr-auto">
      <li class="nav-item">
        <a class="nav-link" href="#">Home <span class="sr-only">(current)</span></a>
      </li>
      <li class="nav-item">
        <a class="nav-link" href="#">About</a>
      </li>
    </ul>
  </div>
</nav>
The component:
import { Component } from '@angular/core';

@Component({
  selector: 'app-nav-bar',
  templateUrl: './nav-bar.component.html',
})
export class NavBarComponent {
  collapse: boolean = true;
  constructor() { }
}
What we're doing in this solution is getting rid of the Bootstrap collapse plugin and using a really simple version of it done in Angular. We let Bootstrap handle the show/hide of the menu on larger screens while we hide the collapsible menu on smaller screens until the user clicks the toggle button. We do this by using the [hidden] directive in Angular and tying it to a boolean variable called collapse. We toggle collapse when the button is clicked and set it back to true (hiding the menu again) when a menu item is selected. This answer is adapted from an earlier answer (that unfortunately I can't find) someone gave on this same topic a few years ago, but that answer was for Bootstrap 3.x and AngularJS (1.x).
{ "pile_set_name": "StackExchange" }
Q: Is there a way to prevent Google App Engine datastore viewer from cutting off text objects? I am running Java GAE SDK 1.7.2. This is the datastore viewer on localhost. Also, is it possible to not have returns (\n) removed from the text, so that I can see them in datastore viewer? Look at the second row input - that is where the text is cut off. A: Actually, Paul C's tip was not very good. Instead of creating your own interface, the easiest thing to do is to try and use Greasemonkey or something to alter the page.
{ "pile_set_name": "StackExchange" }
Q: VBA adding a list empty fields from a userform to a message box I have created a userform that requires the user to input several strings or integers. I am trying to get a message box come up if several of the mandatory boxes are not filled in. I want to list the empty fields but skip the fields that are filled in. I know how to do a for loop if the values were integers but most of the inputs are strings. I think I could do something with Dim C as control, and I know the general layout of my message box, but I am stumped beyond that. Please help so that I don’t have to write six separate conditional statements with six separate message boxes! The six form field names are: Proposal_Name, Date_of_Submission, cboContraact_type, Contract_Neg_Name, Contract_Neg_Number, and Validity_Period The general layout of the message box I had in mind is as follows: MsgBox "You have left the following mandatory fields empty:" & vbCrLf & vbCrLf & "Proposal_Name" & vbNewLine & "Date_of_Submission" & Chr(10) & "cboContraact_type" & Chr(10) & "Contract_Neg_Name" & Chr(10) & "Validity_Period" A: Since you are only concerned about 6 fields, I would not go down the path of looping through the form controls, determining the control type, checking for missing value, etc. Here is an OnClick for a Command Button that might work for you: Private Sub Command12_Click() Dim sMissingValues As String sMissingValues = "" If Nz(Me!Proposal_Name, "") = "" Then sMissingValues = sMissingValues + vbCrLf + "Proposal_Name" If Nz(Me!Date_of_Submission, "") = "" Then sMissingValues = sMissingValues + vbCrLf + "Date_of_Submission" If Nz(Me!cboContraact_type, "") = "" Then sMissingValues = sMissingValues + vbCrLf + "cboContraact_type" If Nz(Me!Contract_Neg_Name, "") = "" Then sMissingValues = sMissingValues + vbCrLf + "Contract_Neg_Name" If Nz(Me!Contract_Neg_Number, "") = "" Then sMissingValues = sMissingValues + vbCrLf + "Contract_Neg_Number" If Nz(Me!Validity_Period, "") = "" Then sMissingValues = sMissingValues + vbCrLf + "Validity_Period" If sMissingValues <> "" Then MsgBox "You have left the following mandatory fields empty:" & vbCrLf & sMissingValues End If End Sub
{ "pile_set_name": "StackExchange" }
Q: GridBagLayout: equally distributed cells Is it possible to completely emulate the behavior of a GridLayout with the GridBagLayout manager? Basically, I have a 8x8 grid in which each cell should have the same width and height. The GridLayout automatically did this. But I want to add another row and column to the grid which size is not the same as the other ones. That row/column should take up all the remaining space that might be left over (because the available size couldn't be equally distributed into 8 cells). Is that even possible, or do I – again – have to use a different layout manager? edit Here is a simple graphic of what I want to achieve, simplified to just 4 cells: The colored cells are the ones I added to the actual grid (gray) which has cells with the same height and width x. So the grid's height and width is 4*x. I now want the additional cells to have the necessary width/height (minimumSize) plus the rest of the available width/height from the full size. If the whole panel's size is changed, the gray grid cells should again take up as much as space as possible. A: set weightx and weighty of GridBagConstraints of the fixed cells to 0 and the fill to NONE. For the floating cells set fill to BOTH, for the floating cells that should expand only horizontally set weightx to 1 and for the vertically expanding ones set weighty to 1. The cells only expand if they have any content, so you need to fill it with something. I chose JLabels and set fixed dimensions for the labels in the fixed cells. On resize you need to recalculate the dimensions and call invalidate() to recalculate the layout. Here is an example for a w x h grid: import java.awt.Color; import java.awt.Component; import java.awt.Container; import java.awt.Dimension; import java.awt.GridBagConstraints; import java.awt.GridBagLayout; import java.awt.event.ComponentAdapter; import java.awt.event.ComponentEvent; import javax.swing.BorderFactory; import javax.swing.JFrame; import javax.swing.JLabel; public class GridBag { public static void main(String[] args) { final JFrame f = new JFrame("Gridbag Test"); final Container c = f.getContentPane(); c.setLayout(new GridBagLayout()); final Dimension dim = new Dimension(70, 70); final int w = 4; final int h = 4; final JLabel[] yfloating = new JLabel[w]; final JLabel[] xfloating = new JLabel[h]; final JLabel[][] fixed = new JLabel[w][h]; // adding the vertically floating cells final GridBagConstraints gc = new GridBagConstraints(); gc.fill = GridBagConstraints.BOTH; gc.weightx = 0.0; gc.weighty = 1.0; for(int i = 0; i < w; ++i) { yfloating[i] = new JLabel("floating " + i); yfloating[i].setBorder(BorderFactory.createLineBorder(Color.BLACK)); yfloating[i].setHorizontalTextPosition(JLabel.CENTER); yfloating[i].setVerticalTextPosition(JLabel.CENTER); gc.gridy = 0; gc.gridx = i+1; c.add(yfloating[i], gc); } // adding the horizontally floating cells gc.fill = GridBagConstraints.BOTH; gc.weightx = 1.0; gc.weighty = 0.0; for(int i = 0; i < w; ++i) { xfloating[i] = new JLabel("floating " + i); xfloating[i].setBorder(BorderFactory.createLineBorder(Color.BLACK)); xfloating[i].setHorizontalTextPosition(JLabel.CENTER); xfloating[i].setVerticalTextPosition(JLabel.CENTER); gc.gridy = i+1; gc.gridx = 0; c.add(xfloating[i], gc); } // adding the fixed cells gc.fill = GridBagConstraints.NONE; gc.weightx = 0.0; gc.weighty = 0.0; for(int i = 0; i < w; ++i) { for(int j = 0; j < h; ++j) { fixed[i][j] = new JLabel("fixed " + i); fixed[i][j].setBorder(BorderFactory.createLineBorder(Color.BLACK)); 
fixed[i][j].setMaximumSize(dim); fixed[i][j].setMinimumSize(dim); fixed[i][j].setPreferredSize(dim); gc.gridx = i+1; gc.gridy = j+1; c.add(fixed[i][j], gc); } } c.addComponentListener(new ComponentAdapter() { @Override public void componentResized(ComponentEvent e) { final Component comp = e.getComponent(); final int newSize = Math.min(comp.getHeight() / h, comp.getWidth() / w); final Dimension newDim = new Dimension(newSize, newSize); for(int i = 0; i < w; ++i) { for(int j = 0; j < h; ++j) { fixed[i][j].setMaximumSize(newDim); fixed[i][j].setMinimumSize(newDim); fixed[i][j].setPreferredSize(newDim); } } comp.invalidate(); } }); f.pack(); f.setVisible(true); } } A: After trying many different things with the built-in layout managers, I decided to create a custom layout manager for this problem as well. I didn't do it yet, as I didn't have the time to continue with this project, but when I have it done, I'll make sure to post the layout manager code here, so that anyone interested in a similar solution can use it. edit didxga reminded me in the comments that I wanted to post my solution. However, after digging out the project from back then and looking at it, I actually cannot post my solution because it turns out that I never got to creating it! It was a uni project that finished official mid September 2010. We actually wanted to continue working on it afterwards, which is probably why I said that I would post it (as that was one thing I wanted to improve), but we never really got around doing it – sadly. Instead I simply left out those extra column and row (which was meant as a label for the rows/columns btw). So yeah, I’m terribly sorry that I cannot post a layout that does what I initially wanted… :( Maybe if there are enough requesting such a layout, I would create it, but as of now, I’m not really willing to dive into Java layouting again ;)
{ "pile_set_name": "StackExchange" }
Q: spidev cannot control the chip select signal I use kernel 3.12.rc4 on an embedded linux device (olimex imx233 micro). My aim is to use /dev/spidev to be able to communicate with another spi device. I edit arch/arm/boot/dts/imx23-olinuxino.dts as: ssp1: ssp@80034000 { #address-cells = <1>; #size-cells = <0>; compatible = "fsl,imx23-spi"; pinctrl-names = "default"; pinctrl-0 = <&spi2_pins_a>; clock-frequency = <1000000>; status = "okay"; spidev: spidev@0 { compatible = "spidev"; spi-max-frequency = <1000000>; reg = <1>; }; }; arch/arm/boot/dts/imx23.dtsi: has this config spi2_pins_a: spi2@0 { reg = <0>; fsl,pinmux-ids = < 0x0182 /* MX23_PAD_GPMI_WRN__SSP2_SCK */ 0x0142 /* MX23_PAD_GPMI_RDY1__SSP2_CMD */ 0x0002 /* MX23_PAD_GPMI_D00__SSP2_DATA0 */ 0x0032 /* MX23_PAD_GPMI_D03__SSP2_DATA3 */ >; fsl,drive-strength = <1>; fsl,voltage = <1>; fsl,pull-up = <1>; }; Device binding looks correct. When I compile the kernel I get the /dev/spidev1.1. After that I use spidev_test.c and monitor the pins by an oscilloscope. The SCK and MOSI output signals correctly, however, the chipselect is set to the logic high even during the data transfer. Is there any way to determine why spidev cannot set to logic low during the transmission? It seems like either additional things needs to be passed on kernel or there is an issue on spidev that cannot control the chip select . I wonder if I need to change anything on the spidev.h or spidev.c on the driver/spi directory of the kernel? or how can I solve it? The reference manual for the processor A: I never used device tree, but I try to help you anyway. The kernel create the device /dev/spidev1.1, so spidev is connected to SPI bus 1, chip select 1. The chip select numeration start from 0, and you do not have any other device associated to SPI bus 1. As far as I know reg = <1> tell to the SPI core that spidev is connected to chip select 1., but maybe your device is connected to the chip select 0. So, reg = <0>
{ "pile_set_name": "StackExchange" }
Q: Simplest way to build a semantic analyzer I want to build a semantic analyzer, i.e., one that finds how similar the meanings of two sentences are. For example - English: Birdie is washing itself in the water basin. English Paraphrase: The bird is bathing in the sink. Similarity Score: 5 (The two sentences are completely equivalent, as they mean the same thing.) I have to find the similarity between the meanings of those sentences. Here is a github repo of what I want to implement: https://github.com/anantm95/Semantic-Textual-Similarity Is there any simpler approach? A: Is there any simpler approach? Very unlikely: semantic similarity is a very complex problem related to Natural Language Understanding (NLU). You could look at the techniques used for textual entailment, Question Answering and summarization. There are simple methods, like the baseline system proposed in the github link, but they don't really try to analyze the semantics.
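To illustrate what a shallow baseline looks like (and why it falls short of real semantic analysis), here is a minimal sketch that scores two sentences purely by word overlap. It uses only the Python standard library and is not the method from the linked repository; it is exactly the kind of simple approach the answer warns about:

def lexical_similarity(sentence_a, sentence_b):
    """Crude 0-5 similarity score based on word overlap (Jaccard index)."""
    words_a = set(sentence_a.lower().split())
    words_b = set(sentence_b.lower().split())
    if not words_a or not words_b:
        return 0.0
    jaccard = len(words_a & words_b) / len(words_a | words_b)
    return round(5 * jaccard, 2)

print(lexical_similarity("Birdie is washing itself in the water basin.",
                         "The bird is bathing in the sink."))
# Prints a low score even though a human would rate this pair 5: the sentences
# share almost no surface words, which is exactly the gap that genuine
# semantic similarity (NLU) systems try to close.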
{ "pile_set_name": "StackExchange" }
Q: Setting up spring app with spring data repositories and mongo db I am facing an issue when defining mongo repository in application-context.xml Following is the error i get in xml Error occured processing XML tried to access method org.springframework.context.annotation.AnnotationConfigUtils.processCommonDefinitionAnnotations (Lorg/springframework/beans/factory/annotation/AnnotatedBeanDefinition;)V from class org.springframework.data.repository.config.RepositoryComponentProvider'. See Error Log for more details servlet-context.xml /master/WebContent/WEB-INF/config line 24 Spring Beans Problem I am attaching a screenshot of env for reference. I am using eclipse Kepler version and pom properties File is like this <java-version>1.7</java-version> <org.springframework-version>4.0.1.RELEASE</org.springframework-version> <org.jackson-version>2.3.0</org.jackson-version> <spring-data-mongodb>1.4.0.RELEASE</spring-data-mongodb> Spring data commons version is 1.7 spring data mongo db version 1.4. I see the error in eclipse project when I open context xml. Interestingly I have another project that works well.Only difference is that it doesn't have spring MVC and jackson binaries otherwise its similar project. exception stack trace: !ENTRY org.springframework.ide.eclipse.beans.core 1 0 2014-03-01 00:04:11.839 !MESSAGE Error occured processing '/master/WebContent/WEB-INF/config/servlet-context.xml' !STACK 0 java.lang.IllegalAccessError: tried to access method org.springframework.context.annotation.AnnotationConfigUtils.processCommonDefinitionAnnotations(Lorg/springframework/beans/factory/annotation/AnnotatedBeanDefinition;)V from class org.springframework.data.repository.config.RepositoryComponentProvider at org.springframework.data.repository.config.RepositoryComponentProvider.findCandidateComponents(RepositoryComponentProvider.java:121) at org.springframework.data.repository.config.RepositoryConfigurationSourceSupport.getCandidates(RepositoryConfigurationSourceSupport.java:69) at org.springframework.data.repository.config.RepositoryConfigurationExtensionSupport.getRepositoryConfigurations(RepositoryConfigurationExtensionSupport.java:54) at org.springframework.data.repository.config.RepositoryConfigurationDelegate.registerRepositoriesIn(RepositoryConfigurationDelegate.java:88) at org.springframework.data.repository.config.RepositoryBeanDefinitionParser.parse(RepositoryBeanDefinitionParser.java:67) at org.springframework.beans.factory.xml.NamespaceHandlerSupport.parse(NamespaceHandlerSupport.java:74) at org.springframework.ide.eclipse.beans.core.internal.model.namespaces.DelegatingNamespaceHandlerResolver$ElementTrackingNamespaceHandler.parse(DelegatingNamespaceHandlerResolver.java:177) at org.springframework.beans.factory.xml.BeanDefinitionParserDelegate.parseCustomElement(BeanDefinitionParserDelegate.java:1427) at org.springframework.ide.eclipse.beans.core.internal.model.BeansConfig$ErrorSuppressingBeanDefinitionParserDelegate.parseCustomElement(BeansConfig.java:1400) at org.springframework.beans.factory.xml.BeanDefinitionParserDelegate.parseCustomElement(BeanDefinitionParserDelegate.java:1417) at org.springframework.beans.factory.xml.DefaultBeanDefinitionDocumentReader.parseBeanDefinitions(DefaultBeanDefinitionDocumentReader.java:187) at org.springframework.ide.eclipse.beans.core.internal.model.BeansConfig$ToolingFriendlyBeanDefinitionDocumentReader.doRegisterBeanDefinitions(BeansConfig.java:1330) at 
org.springframework.beans.factory.xml.DefaultBeanDefinitionDocumentReader.registerBeanDefinitions(DefaultBeanDefinitionDocumentReader.java:110) at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.registerBeanDefinitions(XmlBeanDefinitionReader.java:494) at org.springframework.ide.eclipse.beans.core.internal.model.BeansConfig$2.registerBeanDefinitions(BeansConfig.java:402) at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.doLoadBeanDefinitions(XmlBeanDefinitionReader.java:391) at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:335) at org.springframework.ide.eclipse.beans.core.internal.model.BeansConfig$2.loadBeanDefinitions(BeansConfig.java:388) at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:303) at servlet context.xml <beans xmlns="http://www.springframework.org/schema/beans" xmlns:mvc="http://www.springframework.org/schema/mvc" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:context="http://www.springframework.org/schema/context" xmlns:mongo="http://www.springframework.org/schema/data/mongo" xsi:schemaLocation="http://www.springframework.org/schema/mvc http://www.springframework.org/schema/mvc/spring-mvc.xsd http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd http://www.springframework.org/schema/data/mongo http://www.springframework.org/schema/data/mongo/spring-mongo.xsd http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd"> <!-- Enables the Spring MVC @Controller programming model --> <mvc:annotation-driven /> <context:component-scan base-package="com.xxxx.yyyyy" /> <!-- Mongo DB Configuration --> <mongo:mongo id="mongo" host="monopolyvm3" port="27017" /> <mongo:db-factory dbname="test" mongo-ref="mongo" /> <mongo:db-factory id="mongoDbFactory" dbname="cloud" mongo-ref="mongo" /> <mongo:repositories base-package="com.xxxx.yyyyy" /> <bean id="mappingContext" class="org.springframework.data.mongodb.core.mapping.MongoMappingContext" /> <bean id="defaultMongoTypeMapper" class="org.springframework.data.mongodb.core.convert.DefaultMongoTypeMapper"> <constructor-arg name="typeKey"><null/></constructor-arg> </bean> <bean id="mappingMongoConverter" class="org.springframework.data.mongodb.core.convert.MappingMongoConverter"> <constructor-arg name="mongoDbFactory" ref="mongoDbFactory" /> <constructor-arg name="mappingContext" ref="mappingContext" /> <property name="typeMapper" ref="defaultMongoTypeMapper" /> </bean> <bean id="mongoTemplate" class="org.springframework.data.mongodb.core.MongoTemplate"> <constructor-arg ref="mongoDbFactory" /> <constructor-arg name="mongoConverter" ref="mappingMongoConverter" /> <property name="writeConcern" value="SAFE" /> </bean> </beans> error is seen at the following line.. A: In my case it was spring data jpa version which was causing problem. I am not using spring data mongodb but spring data jpa instead. I started this projected today with latest versions (spring-framework 4.0.2.RELEASE, spring-data-jpa 1.5.0.RELEASE). I ensured that all transitive dependencies from maven (from spring side as well as from spring data side) are that of latest version but no lock. Following this thread, tried to downgrade spring version to 4.0.0.RELEASE but no luck. 
I even explored the org.springframework.context.annotation.AnnotationConfigUtils class from spring-context-4.0.2.RELEASE.jar (and also in spring-context-4.0.0.RELEASE.jar) from within my workspace and confirmed that indeed processCommonDefinitionAnnotations is a public method thus IllegalAccessError can not be an issue resulting from these jars. Finally I downgraded my spring-data-jpa from 1.5.0.RELEASE to 1.4.4.RELEASE and voila all problems are solved on maven update. I am using STS 3.4 if it helps anyone. Since this was the first post I found on googling this error, thought of posting it here so that others who are facing same problems can potentially solve it with this tip. I have opened bug report at https://jira.springsource.org/browse/DATAJPA-490 @Oliver, tried the dependency management suggested but no luck. I have also added dependency:list output to the bug report as requested. A: Finally I changed the spring jar version to 4.0.0 and then removed all spring jars from the maven repository and tried to (updated maven first)build again..It worked. I am pretty sure that it will work with 4.0.1 spring jars also.( I was having another project with the same configuration and it was working fine with 4.0.1 jars:)) I contribute this issue to maven and eclipse. Some issue that I don't have any clue at all.
{ "pile_set_name": "StackExchange" }
Q: How to split comma-separated key-value pairs with quoted commas I know there are a lot of other posts about parsing comma-separated values, but I couldn't find one that splits key-value pairs and handles quoted commas. I have strings like this: age=12,name=bob,hobbies="games,reading",phrase="I'm cool!" And I want to get this: { 'age': '12', 'name': 'bob', 'hobbies': 'games,reading', 'phrase': "I'm cool!", } I tried using shlex like this: lexer = shlex.shlex('''age=12,name=bob,hobbies="games,reading",phrase="I'm cool!"''') lexer.whitespace_split = True lexer.whitespace = ',' props = dict(pair.split('=', 1) for pair in lexer) The trouble is that shlex will split the hobbies entry into two tokens, i.e. hobbies="games and reading". Is there a way to make it take the double quotes into account? Or is there another module I can use? EDIT: Fixed typo for whitespace_split EDIT 2: I'm not tied to using shlex. Regex is fine too, but I didn't know how to handle the matching quotes. A: You just needed to use your shlex lexer in POSIX mode. Add posix=True when creating the lexer. (See the shlex parsing rules) lexer = shlex.shlex('''age=12,name=bob,hobbies="games,reading",phrase="I'm cool!"''', posix=True) lexer.whitespace_split = True lexer.whitespace = ',' props = dict(pair.split('=', 1) for pair in lexer) Outputs : {'age': '12', 'phrase': "I'm cool!", 'hobbies': 'games,reading', 'name': 'bob'} PS : Regular expressions won't be able to parse key-value pairs as long as the input can contain quoted = or , characters. Even preprocessing the string wouldn't be able to make the input be parsed by a regular expression, because that kind of input cannot be formally defined as a regular language. A: It's possible to do with a regular expression. In this case, it might actually be the best option, too. I think this will work with most input, even escaped quotes such as this one: phrase='I\'m cool' With the VERBOSE flag, it's possible to make complicated regular expressions quite readable. import re text = '''age=12,name=bob,hobbies="games,reading",phrase="I'm cool!"''' regex = re.compile( r''' (?P<key>\w+)= # Key consists of only alphanumerics (?P<quote>["']?) # Optional quote character. (?P<value>.*?) # Value is a non greedy match (?P=quote) # Closing quote equals the first. ($|,) # Entry ends with comma or end of string ''', re.VERBOSE ) d = {match.group('key'): match.group('value') for match in regex.finditer(text)} print(d) # {'name': 'bob', 'phrase': "I'm cool!", 'age': '12', 'hobbies': 'games,reading'} A: You could abuse Python tokenizer to parse the key-value list: #!/usr/bin/env python from tokenize import generate_tokens, NAME, NUMBER, OP, STRING, ENDMARKER def parse_key_value_list(text): key = value = None for type, string, _,_,_ in generate_tokens(lambda it=iter([text]): next(it)): if type == NAME and key is None: key = string elif type in {NAME, NUMBER, STRING}: value = { NAME: lambda x: x, NUMBER: int, STRING: lambda x: x[1:-1] }[type](string) elif ((type == OP and string == ',') or (type == ENDMARKER and key is not None)): yield key, value key = value = None text = '''age=12,name=bob,hobbies="games,reading",phrase="I'm cool!"''' print(dict(parse_key_value_list(text))) Output {'phrase': "I'm cool!", 'age': 12, 'name': 'bob', 'hobbies': 'games,reading'} You could use a finite-state machine (FSM) to implement a stricter parser. 
The parser uses only the current state and the next token to parse input: #!/usr/bin/env python from tokenize import generate_tokens, NAME, NUMBER, OP, STRING, ENDMARKER def parse_key_value_list(text): def check(condition): if not condition: raise ValueError((state, token)) KEY, EQ, VALUE, SEP = range(4) state = KEY for token in generate_tokens(lambda it=iter([text]): next(it)): type, string = token[:2] if state == KEY: check(type == NAME) key = string state = EQ elif state == EQ: check(type == OP and string == '=') state = VALUE elif state == VALUE: check(type in {NAME, NUMBER, STRING}) value = { NAME: lambda x: x, NUMBER: int, STRING: lambda x: x[1:-1] }[type](string) state = SEP elif state == SEP: check(type == OP and string == ',' or type == ENDMARKER) yield key, value state = KEY text = '''age=12,name=bob,hobbies="games,reading",phrase="I'm cool!"''' print(dict(parse_key_value_list(text)))
{ "pile_set_name": "StackExchange" }
Q: Is it discouraged to use "onlclick" with HTML element while using "addEventListener" method? Before I explain my question, this piece of code is going to be considered: HTML: <div> <button type="button" id="btn" onclick="disAl()">Click</button> </div> JS: function disAl(){ var x = document.getElementById("btn"); if(x.addEventListener){ x.addEventListener("click", altTxt); } else if (x.attachEvent){ x.attachEvent("onclick", altTxt); } } function altTxt(){ alert("Hello"); } Now, if I run the program and click the button first time, nothing happens. However, from the second click the alert pops up. Interestingly enough, when I remove onclick="disAl()" from button element, and also remove the function definition from the script, the problem gets fixed. Like the following: var x = document.getElementById("btn"); if (x.addEventListener) { x.addEventListener("click", altTxt); } else if (x.attachEvent) { x.attachEvent("onclick", altTxt); } function ... .... So does it mean onclick="disAl()" method is unnecessary in my case? A: Here is what is happening: First time: Because of this part onclick="disAl()", you are setting up button click handler to a function called disAl(). Due to this, you get inside the function disAl() when you click the button. When inside, you are again setting up click event handler to altTxt. This causes two handlers to be chained to click event. Then when you click second time, let's see what happens. Second time: Now when the click happens, first disAl() is called which again unnecessarily sets up altTxt as click event handler. Once this handler is over, altTxt is called and that is when you see the alert. Second case when you remove the function: In this case, you are setting up button click event handler when your page is loaded since it is not a function anymore. So when you click the button, you call altTxt and see the alert. So, yes disAl() is unnecessary in your case. Also, as a good practice, event handlers should not be set in the html but they should be set in the code by addEventListener. This allows you to remove event listener if you so desire by calling removeEventListener(). Hope this helps!
{ "pile_set_name": "StackExchange" }
Q: how to resolve circular dependency I have added another project "xml" to my project "synchronise". So program.cs (in the xml project) runs getDetails(), which runs FectchDetails() in the synchronise project and returns the result as an object to xml/getDetails(). If an error occurs in Synchronise/FecthDetails() I want to re-run xml/getDetails(). I've tried xml.getDetails, but it says it doesn't exist, because the project isn't referenced, so I try to add the xml project to the synchronise project, but it tells me I can't do this as it would cause a circular dependency. How can I resolve this? Thanks. A: Basically, you have a project X depending on project Y (X --> Y), and project Y depending on project X (Y --> X). In other words, something like: ( X <---> Y ) This situation means that the compiler does not know what to compile first, and therefore complains. To solve this, look for common things / pieces of logic that can be moved out from one or both of the projects, and create a third project that can be built before both of the others. Place all common stuff in this new project, and you should be fine; your dependency should then be as below, where it does not matter if X or Y is compiled first, as long as Z is compiled before both of them: ( X --> Z <-- Y ) A: If you encounter an error within Synchronise/FecthDetails(), you can probably throw an application exception and catch that in xml/getDetails. Then you can decide whether to retry or inform the user about it. I am sorry if I misunderstood your question. If possible, post some pseudo-code.
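The "extract a third project" advice in the first answer is language-agnostic. Purely as an illustration (the question is about Visual Studio projects, not Python, and these file names are invented), the same restructuring looks like this with three small modules, where x and y both depend on z and never on each other:

# z.py - the new shared project "Z": the code both sides need
def fetch_details(query):
    # placeholder for the logic the two projects were both trying to reach
    return {"query": query, "status": "ok"}

# x.py - the "xml" side, depending only on z (X --> Z)
import z

def get_details(query, retries=1):
    try:
        return z.fetch_details(query)
    except Exception:
        if retries > 0:            # retry once, as the second answer suggests
            return get_details(query, retries - 1)
        raise

# y.py - the "synchronise" side, also depending only on z (Y --> Z)
import z

def synchronise(query):
    return z.fetch_details(query)

Because neither x nor y imports the other, the build order is simply z first, then x and y in any order.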
{ "pile_set_name": "StackExchange" }
Q: logging file does not exist when running in celery first, I'm sorry about my low level english I create a website for study I create send SMS feature using django + redis + celery tasks/send_buy_sms.py from celery import Task from items.utils import SendSMS class SendBuyMessageTask(Task): def run(self, buyer_nickname, buyer_phone, saler_phone, selected_bookname): sms = SendSMS() sms.send_sms(buyer_nickname, buyer_phone, saler_phone, selected_bookname) items/utils.py import os import requests import json class SendSMS(): def send_sms(self, buyer_nickname, buyer_phone, saler_phone, selected_bookname): appid = [...] apikey = [...] sender = '...' receivers = [saler_phone, ] content = '...' url = os.environ.get("URL") params = { 'sender': sender, 'receivers': receivers, 'content': content, } headers = {...} r = '...' return params when I send sms using my code it has no problem [2017-09-12 17:20:43,532: WARNING/Worker-6] Task success and I want make log file and insert log "success send SMS" when user click "send sms button" wef/wef/decorators.py from django.utils import timezone import logging def log_decorator(func): logging.basicConfig(filename='../../sendsms.log', level=logging.INFO) def wrap_func(self, *args, **kwargs): time_stamp = timezone.localtime(timezone.now()).strftime('%Y-%m-%d %H:%M') logging.info('[{}] success send SMS'.format(time_stamp)) print(logging) return func(self, *args, **kwargs) return wrap_func but when I click 'send sms' button task is Ok , but log file doesn't created... So I want to know 'what is the problem?' I change my code create logfile -> print log wef/wef/decorators.py from django.utils import timezone def log_decorator(func): def wrap_func(self, *args, **kwargs): time_stamp = timezone.localtime(timezone.now()).strftime('%Y-%m-%d %H:%M') ## print log print('[{}] succes send sms'.format(timestamp)) ## print log return func(self, *args, **kwargs) return wrap_func when I click 'send sms button' the log print in celery I'm so confused because print() is working but create log file doesn't working... I think my code(create logging file) is no problem because when I practice create log file without django,celery,redis, log file was created successfully same code, same feature but not working in django and celery please give me some advise thank you A: I guess you have to add logger like - # import the logging library import logging # Get an instance of a logger logger = logging.getLogger(__name__) def my_view(request, arg1, arg): ... if bad_mojo: # Log an error message logger.error('Something went wrong!') Here I am assuming that you have configured your loggers, handlers, filters and formatters For more information visit URL
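As a concrete illustration of the answer, here is one way to give the decorator a dedicated, module-level logger with an explicit file handler, so the log file ends up in a known location no matter what the Celery worker's working directory is (the path, logger name and format are assumptions for the example, not values from the question):

import logging
from django.utils import timezone

# Configure once, at import time, not inside the decorator.
logger = logging.getLogger("sendsms")
logger.setLevel(logging.INFO)

handler = logging.FileHandler("/var/log/myapp/sendsms.log")   # absolute path: the worker's cwd no longer matters
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
logger.addHandler(handler)

def log_decorator(func):
    def wrap_func(self, *args, **kwargs):
        time_stamp = timezone.localtime(timezone.now()).strftime('%Y-%m-%d %H:%M')
        logger.info('[%s] success send SMS', time_stamp)
        return func(self, *args, **kwargs)
    return wrap_func

Two details worth noting: logging.basicConfig() only has an effect if the root logger has no handlers yet, so calling it inside the decorator in an already-configured Django/Celery process can silently do nothing, and a relative path such as '../../sendsms.log' is resolved against the worker process's current directory, which is why the file can appear to be missing even though the logging call "worked".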
{ "pile_set_name": "StackExchange" }
Q: pandas convert to wide table I want to convert my df to a single-row wide table. The input data is below: data = {"headings": [{'heading': 'item1', 'random_assignment_percent': 'item2', }, {'heading': 'item3', 'random_assignment_percent': 'item4', }]} Table that we have:
+---+---------+---------------------------+
|   | heading | random_assignment_percent |
+---+---------+---------------------------+
| 0 | item1   | item2                     |
| 1 | item3   | item4                     |
+---+---------+---------------------------+
And what I would like to have:
+---+-----------+-----------+-----------------------------+-----------------------------+
|   | heading_0 | heading_1 | random_assignment_percent_0 | random_assignment_percent_1 |
+---+-----------+-----------+-----------------------------+-----------------------------+
| 1 | item1     | item3     | item2                       | item4                       |
+---+-----------+-----------+-----------------------------+-----------------------------+
Could someone help me to get my df exactly as in the last table? A: One approach can be unstack + transpose:
out = df.unstack().to_frame().T
out.columns = out.columns.map('{0[0]}_{0[1]}'.format)
print(out)
  heading_0 heading_1 random_assignment_percent_0 random_assignment_percent_1
0     item1     item3                       item2                       item4
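For completeness, here is the full round trip from the dictionary in the question to the wide frame, assuming df is built directly from data["headings"] (which reproduces the "table that we have"):

import pandas as pd

data = {"headings": [{'heading': 'item1', 'random_assignment_percent': 'item2'},
                     {'heading': 'item3', 'random_assignment_percent': 'item4'}]}

df = pd.DataFrame(data["headings"])        # the "table that we have"

out = df.unstack().to_frame().T            # the approach from the answer
out.columns = out.columns.map('{0[0]}_{0[1]}'.format)
print(out)
#   heading_0 heading_1 random_assignment_percent_0 random_assignment_percent_1
# 0     item1     item3                       item2                       item4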
{ "pile_set_name": "StackExchange" }
Q: how to raise a alert dialog from broadcast receiver with listof events? when ever i get a notification from gcm i need to raise a alert dialog with list of events in that dialog.i do that by using the custom toast message.but i am unable to write the clicking event for the list in the alert dialog box. I call this method when ever i get new notification.alert dialog is appear but onclick event is not working for the list.. public void displayToast() { LayoutInflater mInflater = LayoutInflater.from(con); View myView = mInflater.inflate(R.layout.statusbar, null); Toast toast = new Toast(con.getApplicationContext()); toast.setGravity(Gravity.CENTER_VERTICAL, 0, 0); TextView tv = (TextView) myView.findViewById(R.id.notificationtype); ListView lv = (ListView) myView.findViewById(R.id.listView1); lv.setAdapter(new StatusAdapter(con, list)); tv.setText("MESSAGES"); lv.setOnItemClickListener(new OnItemClickListener() { public void onItemClick(AdapterView<?> arg0, View arg1, int arg2,long arg3) { // TODO Auto-generated method stub if (list.get(arg2).getType().equals("S Notification")) { Intent it = new Intent(con,ViewEventActivity.class); it.putExtra("eventid", list.get(arg2).getId()); it.putExtra("event", "team"); con.startActivity(it); //dialog.dismiss(); } if (list.get(arg2).getType().equals("S R Notification")) { Intent it = new Intent(con,GameDetailsActivity.class); it.putExtra("id", list.get(arg2).getId()); con.startActivity(it); //dialog.dismiss(); } if (list.get(arg2).getType().equals("A Notification")) { Intent intent = new Intent(con,ViewItemActivity.class); intent.putExtra("id", "" + list.get(arg2).getId()); con.startActivity(intent); //dialog.dismiss(); } if (list.get(arg2).getType().equals("D Notification")) { Intent it = new Intent(con,PersonalDetails.class); it.putExtra("personId", list.get(arg2).getId()); con.startActivity(it); //dialog.dismiss(); } if (list.get(arg2).getType().equals("M Notification")) { Intent it = new Intent(con,MessageContentActivity.class); it.putExtra("messageid", list.get(arg2).getId()); con.startActivity(it); // dialog.dismiss(); } } }); toast.setDuration(Toast.LENGTH_LONG); toast.setDuration(1800000); toast.setView(myView); toast.show(); } A: Toast Window has the property WindowManager.LayoutParams.FLAG_NOT_FOCUSABLE set by the framework, So any Views that you have cannot be sent events to it.
{ "pile_set_name": "StackExchange" }
Q: JavaScript: Form: Validating e-mail address and checking another field with Starts.With before sending I am a total JavaScript noob, but I have been searching for a solution for quite some time now and can't seem to find it. What I want to do is: Before sending a form, check if the entered e-mail address is valid and also check if the input of the phone field starts with http://. Validating the e-mail address is working like a charm. However, the second part isn't. The form is just sent without any alert or anything. Can someone enlighten me? Thank you! function validate(){ var email = document.getElementById('email').value; var phone = document.getElementById('phone').value; var filter = /^([a-zA-Z0-9_.-])+@(([a-zA-Z0-9-])+.)+([a-zA-Z0-9]{2,4})+$/; if (!filter.test(email)) { alert('Please enter a valid e-mail address.'); return false; } else if (phone.StartsWith("http://")) { alert('Please correct your phone number.'); return false; } } For anyone wondering: this is supposed to be a "simple" spam blocker. Maybe it would be even more interesting if the phone field was checked for any characters except numbers, plus, spaces, hyphens and dots... What do you think? A: First to answer your question: Edit: Nowadays JavaScript does have a startsWith function on the String object. JavaScript does not have a 'StartsWith' method on the String object. You will have to use regular expressions for this. Here's an example: function validate() { var email = document.getElementById('email').value; var phone = document.getElementById('phone').value; var emailFilter = /^([a-zA-Z0-9_.-])+@(([a-zA-Z0-9-])+.)+([a-zA-Z0-9]{2,4})+$/; var phoneFilter = /^http:\/\//; if (!emailFilter.test(email)) { alert('Please enter a valid e-mail address.'); return false; } if (!phoneFilter.test(phone)) { alert('Please correct your phone number.'); return false; } return true; } I created a jsfiddle to show you how it works: http://jsfiddle.net/cnVQe/1/ However: it's best to use a declarative instead of a programmatic approach for validating forms. There are many tools that do this. One of them is this jQuery library that does most of the heavy listing for you: http://jqueryvalidation.org/ It works by just adding classes to to the fields and have the plugin do the checking , error displaying and preventing of form submission. Just to add: you approach is not wrong, but if the form gets bigger and you want to check more fields you will inevitably run into complexity that has been solved over and over by other people. Even if it means that you pull in additional complexity with the extra plugin it will all be worth it as soon as you start to make the form bigger. Also: your regular expression for the e-mail address check will not accept the + sign, which is something many people use to create special gmail accounts like [email protected]. Using libraries such as the one I describes very often have robust checks already. So that's an additional benefit on using existing libraries.
{ "pile_set_name": "StackExchange" }
Q: Prove that the sum is not an integer Prove that if a/b and c/d are two irreducible rational numbers such that gcd(b, d) = 1, then the sum a/b + c/d is not an integer. I was thinking about a proof by contradiction, but I haven't found the correct answer yet...
A: Do note that we need at least one of $b, d$ to be $\neq 1$ (otherwise the sum is simply $a + c$, an integer).
Note that
$$\frac{a}{b}+\frac{c}{d}=\frac{ad+bc}{bd}.$$
For this to be an integer, $ad+bc$ must be divisible by $bd$, and in particular by each of $b$ and $d$. Since at least one of $b, d$ is $\neq 1$, by symmetry we may assume $b \neq 1$; it is then sufficient to show that $b$ does not divide $ad+bc$.
Suppose it does. Then $b\mid ad+bc\iff b\mid ad$. Since $\gcd(b,d)=1$, this is equivalent to $b\mid a$, a contradiction: $a/b$ is irreducible, so $\gcd(a,b)=1$, yet $b\neq 1$.
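As a concrete sanity check of the argument (an illustration of my own, not part of the original question), take $a/b = 1/2$ and $c/d = 1/3$, so that $\gcd(b, d) = \gcd(2, 3) = 1$:
$$\frac{1}{2} + \frac{1}{3} = \frac{1\cdot 3 + 2\cdot 1}{2\cdot 3} = \frac{5}{6}.$$
Indeed $b = 2$ does not divide $ad + bc = 5$, so $5/6$ cannot be an integer, exactly as the proof predicts.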
{ "pile_set_name": "StackExchange" }
Q: Tube cathode biasing with CCS I'm unable to understand how the biasing of the 12AX7 tube in the schematic below works. In particular, I can't understand how the LM334 can switch on: where does it find the required voltage to switch on?
A: The 12AX7 double triode is connected as a differential amplifier (long-tailed pair). In the Wikipedia article you see a similar circuit implemented with BJTs, but the principle is the same: consider the cathodes to be the emitters, etc.
For optimum performance the AC resistance on the common cathode should be high. This is achieved using an LM334 connected as a current source (in this case it acts as a current sink, actually). Due to the symmetry of the circuit when no signal is applied, the current sunk by the LM334 is shared in equal parts by the two triodes. The shared bias current is set by the 34Ω resistor, as explained in the datasheet:
Note that the set current is temperature dependent, and will increase linearly with temperature (the LM334 is also used as a temperature sensor). At 25°C, i.e. 298K, from the formulas above you get:
$$ I_{SET}=\frac{227\,\mu V/K}{34\,\Omega} \times 298\,K \approx 1.98\,mA $$
Therefore each triode will have a quiescent cathode current of about 1mA.
...where does it find the required voltage to switch on?
The LM334 contains a complete feedback amplifier connected as a shunt regulator. It doesn't need a "power supply": it takes the power it needs to function directly by sinking a current and regulating that current using negative feedback. Of course you have to design the circuit so that it sinks a minimum amount of current, which translates to a minimum voltage across the $V^{+}$ and $V^{-}$ terminals. For the range of currents we are interested in, the datasheet declares this minimum voltage to be about 1V (emphasis mine):
Figure 11 tells you more about the voltage you can expect to find across the current source (enhanced by me):
Interpolating the curve for $R_{SET} = 34 \Omega$ (in blue) and setting the current to about 2mA, you see that you'll have about 1V across that source, which is no problem, since the whole triode pair is powered by 240V. In other words, the circuits inside the LM334 automatically adjust the voltage across its terminals in order to maintain that 2mA shared current constant.
A: I'm sure I remember John Broskie covered this at tubecad.com in one of his articles, which is where this schematic came from. Over the years, I've noticed how hard it is to search for something on his site; Google is touch and go with the results.
To answer the question: the voltage source is the cathode. As for why the CCS doesn't explode when it sees the 240V supply at power-up: at start-up, while the tube is still cold, it is in a nearly non-conductive state and its internal resistance is very high, which protects the CCS from the massive current flow that would otherwise be there.
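Following up on the bias calculation in the first answer, here is the same set-current formula evaluated at two more temperatures, to show how far the operating point drifts; this is my own arithmetic, using only the 227 µV/K constant and the 34 Ω set resistor quoted above:
$$ I_{SET}(273\,K)=\frac{227\,\mu V/K}{34\,\Omega}\times 273\,K \approx 1.82\,mA \qquad I_{SET}(323\,K)=\frac{227\,\mu V/K}{34\,\Omega}\times 323\,K \approx 2.16\,mA $$
So over a 0 °C to 50 °C swing the tail current moves by roughly 8-9 % either way around its 25 °C value, and each triode idles somewhere between about 0.9 mA and 1.1 mA.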
{ "pile_set_name": "StackExchange" }
Q: Logstash Grok for Cisco Call Manager logs I am working on getting Call Manager logs into Logstash and I need some help with the grok parser for the logs. Can anyone help me come up with a grok pattern for the following log entry:
<190>136768: Dec 23 2019 10:48:59.476 UTC : %UC_AUDITLOG-6-AdministrativeEvent: %[UserID=administrator][ClientAddress=192.168.1.5][Severity=6][EventType=UserAccess][ResourceAccessed=CUCMServiceability][EventStatus=Success][CompulsoryEvent=No][AuditCategory=AdministrativeEvent][ComponentID=Cisco CCM Servicability][CorrelationID=][AuditDetails=Attempt to access data was successful.User is authorized to access alarmconfig][AppID=Cisco Tomcat][ClusterID=][NodeID=cm01.home.local]: Audit Event is generated by this application
I am trying to use the Grok Debugger, but I am not getting very far: https://grokdebug.herokuapp.com/
So far I have this:
<%{NUMBER:message_type_id}>%{NUMBER:internal_id}:%{SPACE}%{CISCOTIMESTAMP:cisco_timestamp}%{SPACE}%{DATA:gmt}:%{SPACE}%{PROG}:
A: Try this:
INPUT:
<190>136768: Dec 23 2019 10:48:59.476 UTC : %UC_AUDITLOG-6-AdministrativeEvent: %[UserID=administrator][ClientAddress=192.168.1.5][Severity=6][EventType=UserAccess][ResourceAccessed=CUCMServiceability][EventStatus=Success][CompulsoryEvent=No][AuditCategory=AdministrativeEvent][ComponentID=Cisco CCM Servicability][CorrelationID=][AuditDetails=Attempt to access data was successful.User is authorized to access alarmconfig][AppID=Cisco Tomcat][ClusterID=][NodeID=cm01.home.local]: Audit Event is generated by this application
GROK PATTERN:
<%{NUMBER:message_type_id}>%{NUMBER:internal_id}:%{SPACE}%{CISCOTIMESTAMP:cisco_timestamp}%{SPACE}%{DATA:gmt}%{SPACE}:%{SPACE}%{PROG}:%{SPACE}\%\[UserID=%{GREEDYDATA:UserID}\]\[ClientAddress=%{IP:ClientAddress}\]\[Severity=%{NUMBER:Severity}\]\[EventType=%{GREEDYDATA:EventType}\]\[ResourceAccessed=%{GREEDYDATA:ResourceAccessed}\]\[EventStatus=%{GREEDYDATA:EventStatus}\]\[CompulsoryEvent=%{GREEDYDATA:CompulsoryEvent}\]\[AuditCategory=%{GREEDYDATA:AuditCategory}\]\[ComponentID=%{GREEDYDATA:ComponentID}\]\[CorrelationID=%{GREEDYDATA:CorrelationID}\]\[AuditDetails=%{GREEDYDATA:AuditDetails}\]\[AppID=%{GREEDYDATA:AppID}\]\[ClusterID=%{GREEDYDATA:ClusterID}\]\[NodeID=%{GREEDYDATA:NodeID}\]:%{SPACE}%{GREEDYDATA:description}
OUTPUT:
{
  "message_type_id": [ [ "190" ] ],
  "BASE10NUM": [ [ "190", "136768", "6" ] ],
  "internal_id": [ [ "136768" ] ],
  "SPACE": [ [ " ", " ", " ", " ", " ", " " ] ],
  "cisco_timestamp": [ [ "Dec 23 2019 10:48:59.476" ] ],
  "MONTH": [ [ "Dec" ] ],
  "MONTHDAY": [ [ "23" ] ],
  "YEAR": [ [ "2019" ] ],
  "TIME": [ [ "10:48:59.476" ] ],
  "HOUR": [ [ "10" ] ],
  "MINUTE": [ [ "48" ] ],
  "SECOND": [ [ "59.476" ] ],
  "gmt": [ [ "UTC" ] ],
  "PROG": [ [ "%UC_AUDITLOG-6-AdministrativeEvent" ] ],
  "UserID": [ [ "administrator" ] ],
  "ClientAddress": [ [ "192.168.1.5" ] ],
  "IPV6": [ [ null ] ],
  "IPV4": [ [ "192.168.1.5" ] ],
  "Severity": [ [ "6" ] ],
  "EventType": [ [ "UserAccess" ] ],
  "ResourceAccessed": [ [ "CUCMServiceability" ] ],
  "EventStatus": [ [ "Success" ] ],
  "CompulsoryEvent": [ [ "No" ] ],
  "AuditCategory": [ [ "AdministrativeEvent" ] ],
  "ComponentID": [ [ "Cisco CCM Servicability" ] ],
  "CorrelationID": [ [ "" ] ],
  "AuditDetails": [ [ "Attempt to access data was successful.User is authorized to access alarmconfig" ] ],
  "AppID": [ [ "Cisco Tomcat" ] ],
  "ClusterID": [ [ "" ] ],
  "NodeID": [ [ "cm01.home.local" ] ],
  "description": [ [ "Audit Event is generated by this application " ] ]
}
{ "pile_set_name": "StackExchange" }
Q: How to add a mute button to my app? I have a few events in my app that make sounds. For example:
Background music which is looped
Background on click makes a sound
When two rectangles collide it makes a sound
What I want to have is a button which, when clicked, toggles between letting the sounds play and not letting them play. I don't want to have a load of if statements on each sound, so is there a way around this? Here is how I'm calling my sounds at the moment:
//HTML
<div id='mainRight'></div>
//JS
var mainRight = $('#mainRight');
$(mainRight).width(windowDim.width/2).height(windowDim.height);
$(mainRight).addClass('mainRight');
var sounds = {
    coinHit : new Audio('./sound/coinCollect.wav'),
    playerClick : new Audio('./sound/playerClick.wav'),
    gameOver : new Audio('./sound/gameOver.wav'),
    backgroundMusic : new Audio('./sound/backgroundMusic.wav')
}
sounds.backgroundMusic.addEventListener('ended', function() {
    this.currentTime = 0;
    this.play();
}, false);
sounds.backgroundMusic.play();
sounds.backgroundMusic.volume = 0.01;
$('#mainRight').click(function() {
    sounds.playerClick.load();
    sounds.playerClick.play();
});
A: Try this...
function toggleAudio() {
    for (var key in sounds) {
        sounds[key].muted = !sounds[key].muted;
    }
}
Just fire that every time you hit the mute button. It will toggle the muted state of each Audio object in the sounds object.
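For completeness, here is a minimal sketch of wiring toggleAudio() up to an actual button; the #muteButton element and its markup are my own assumption, not something from the question:
//HTML
<button id="muteButton">Mute</button>
//JS
$('#muteButton').click(function() {
    // Flip the muted flag on every Audio object in the sounds map...
    toggleAudio();
    // ...and update the label so the current state is visible to the player.
    $(this).text($(this).text() === 'Mute' ? 'Unmute' : 'Mute');
});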
{ "pile_set_name": "StackExchange" }
Q: Drop Shadow in Raphael I am wanting to create a drop shadow for an object (or anything for that matter) with Raphael. I was searching around the web and found some sources, but it was unclear to me how I would apply them to my code. From what I understand there is a blur() method in Raphael, but I didn't find it in their documentation. Anyway, I am new to Raphael, so if someone could provide some assistance I would appreciate it. Here is the code I have so far...
<html>
<head><title></title>
<script src="raphael-min.js"></script>
<script src="jquery-1.7.2.js"></script>
</head>
<body>
<div id="draw-here-raphael" style="height: 200px; width: 400px;">
</div>
<script type="text/javascript">
    //all your javascript goes here
    var r = new Raphael("draw-here-raphael"),
        // Store where the box is
        position = 'left',
        // Make our pink rectangle
        rect = r.rect(20, 20, 50, 50).attr({"fill": "#fbb"});

    var shadow = canvas.path(p);
    shadow.attr({stroke: "none", fill: "#555", translation: "4,4"});
    var shape = canvas.path(p);
</script>
</body>
</html>
A: You need to use glow. Here's an example.
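Since the linked example isn't reproduced here, this is a minimal sketch of Raphael's Element.glow() applied to the rectangle from the question; the width, offset and colour values are arbitrary choices, not anything mandated by the API:
// glow() builds a set of progressively wider, semi-transparent copies of the
// element's outline, inserted behind it, which reads as a soft drop shadow.
var shadow = rect.glow({
    width: 8,      // how far the shadow spreads
    fill: true,    // fill the glow rather than only stroking it
    opacity: 0.4,
    offsetx: 4,    // shift right/down for the classic drop-shadow look
    offsety: 4,
    color: "#555"
});

// The glow comes back as a separate set, so remove it together with the shape:
// shadow.remove();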
{ "pile_set_name": "StackExchange" }